Life in an operations team can be complex and sometimes even chaotic.

During our customer research, when we asked ops teams to describe their day-to-day, they often said, "No single day is the same." Of course, their main responsibility is to ensure everything on the ground is running smoothly. But in the real world, as we all know, a lot of things go haywire all the time!

Data-Driven or Not Really?

Today, marketing and growth teams have a plethora of tools (Google Analytics, Mixpanel, Heap, Clevertap and Amplitude, to name a few) at their disposal to get insights and analytics on their websites, web apps or mobile apps. These tools have made it simple for teams to create cohorts and run different experiments on different cohorts.

Unfortunately, though, our real world (where demand, supply and operations interact with each other) is far behind. There is almost no culture of experimentation in ops teams today, so much so that they don't even have the right metrics to get full visibility into what's happening on the ground.

Uber realized this fairly early in their journey and solved for it right from the start, not only giving their ops teams their own dashboards and visualizations but also letting them add the context and nuances of different cities. After all, no two cities behave in the same way.

Optimum Conditions to Run Experiments

Life as a city or operations manager is pretty hard. A lot of things go wrong, a lot of the time. And in the operations world, once an algorithm or a strategy is implemented, it often never changes.

A culture of rapid experimentation is essential to know what works best in different locations. For example: What happens when you change the drivers' incentives? How does conversion change when you change the SLAs or surge pricing?

What is the optimum workflow that enables and promotes this culture?

Condition 1: Having Visibility on Your Fingertips
The foundation of the workflow lies in having visibility into your operational metrics and KPIs. It helps you act tactically, solve immediate issues as they crop up, and provides some clarity in situations of chaos.
Example: Where have user cancellations been occurring the most?

Condition 2: Ability to Drill Down Data
The second criterion is having access to data in a way that lets us drill down across areas, time, users and categories. This helps us pinpoint underlying issues by finding historical patterns. Debugging this way can inform strategic decisions.
Example: User cancellations have been occurring in the eastern part of the city, especially during evenings.
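This kind of drill-down is easy to sketch in code. The snippet below is a minimal illustration using pandas; the table, its column names and the numbers are all made up for the example, not real operational data:

```python
import pandas as pd

# Hypothetical order-level data; columns and values are illustrative only
orders = pd.DataFrame({
    "order_id": [1, 2, 3, 4, 5, 6],
    "zone": ["east", "east", "west", "east", "west", "east"],
    "hour": [19, 20, 11, 18, 9, 19],
    "cancelled": [1, 1, 0, 1, 0, 0],
})

# Drill down: cancellation rate by zone and time-of-day bucket
orders["bucket"] = pd.cut(orders["hour"], bins=[0, 12, 17, 24],
                          labels=["morning", "afternoon", "evening"])
drill = (orders.groupby(["zone", "bucket"], observed=True)["cancelled"]
               .mean()
               .sort_values(ascending=False))
print(drill)
```

With real data, the same two lines of grouping logic surface exactly the pattern in the example above: the "east + evening" slice would show the highest cancellation rate.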

Condition 3: Ability to Collaborate Across Teams
Often, the operations teams by themselves don't have all the context on the problem. It then becomes necessary to collaborate with other teams so that the bottlenecks can be figured out together. Once the issues are known, a hypothesis needs to be formed.
Example: Cancellations are occurring because of high ETAs, and high ETAs might be a result of fewer delivery executives (DEs) being available in that area.

Condition 4: Forming a Hypothesis to Run Experiments
Once the hypothesis is formed, the experiment needs to be rolled out in a small area and its performance monitored.
Example: Let's provision some more DEs in the eastern part of the city and see whether delays reduce.

Condition 5: Tracking Results
Then comes the tracking and testing piece. I am often surprised by how many companies ignore this bit. How would you even know whether what you are doing is successful?
Example: Since our experiment is trying to minimize cancellations along with delays, we need to keep monitoring and tracking how both these metrics perform.
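As a rough illustration of what tracking results can mean in practice, the sketch below runs a two-proportion z-test comparing the cancellation rate in a control region against the test region. All counts are invented for the example; the point is only the shape of the check, not the numbers:

```python
import math

# Hypothetical experiment readout; all counts are illustrative only
control = {"orders": 2000, "cancels": 260}   # rest of the city
test    = {"orders": 1800, "cancels": 180}   # eastern zones with extra DEs

def cancel_rate(d):
    return d["cancels"] / d["orders"]

# Two-proportion z-test: is the drop in cancellation rate significant?
p1, p2 = cancel_rate(control), cancel_rate(test)
p_pool = (control["cancels"] + test["cancels"]) / (control["orders"] + test["orders"])
se = math.sqrt(p_pool * (1 - p_pool) * (1 / control["orders"] + 1 / test["orders"]))
z = (p1 - p2) / se

print(f"control={p1:.1%} test={p2:.1%} z={z:.2f}")
```

A z-score around 2 or above suggests the drop is unlikely to be noise; in a real rollout you would run the same check on the delay metric as well, since the hypothesis ties the two together.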

If you want to read more about this, and why heatmaps particularly suck for this culture of experimentation, check this out:

Making Location-Based Experimentation a part of our DNA
What can we learn about experiments on the ground from web-based experimentation?

The Current Landscape & its Challenges

The Current Workflow That Gets Followed
Let's talk about the top hurdles to achieving this workflow. What is hindering it today? What is the biggest pain point?

The process of getting insights today is completely broken. It takes four tools to get answers to very basic questions. Data is extracted from 3-4 sources (demand, supply and operations data are stored in disparate databases), transformed and joined using R, Python or SQL, and then visualized using Excel sheets, BI tools (like Tableau, Metabase), open-source tools (Kepler, QGIS) or internal dashboards.
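The transform-and-join step in that pipeline typically looks something like the following minimal pandas sketch. The schemas, column names and numbers are all assumptions for illustration; in reality each table would come out of a different system:

```python
import pandas as pd

# Illustrative extracts from three disparate sources (schemas are assumptions)
demand = pd.DataFrame({"zone": ["east", "west"], "app_opens": [500, 320]})
supply = pd.DataFrame({"zone": ["east", "west"], "active_des": [12, 25]})
ops    = pd.DataFrame({"zone": ["east", "west"], "completed_trips": [410, 300]})

# The manual join usually done in R/Python/SQL before visualization
merged = demand.merge(supply, on="zone").merge(ops, on="zone")
merged["opens_per_de"] = merged["app_opens"] / merged["active_des"]
print(merged)
```

Even this toy version hints at the problem: every new question means another round of extracting, joining and re-plotting by hand.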

What happens as a result of this?

  • As a business user who needs constant, data-driven, hyperlocal insights to inform your decision-making, quite a number of follow-up questions often remain unanswered.
  • The dashboards for the operations teams have a fixed set of metrics, and any new metric has to be added in an engineering sprint, which can take anywhere from 30-45 days!
  • Engineering teams have to spend significant time and resources creating dashboards when the backlog is already really long!
Locale.ai Dashboards

How is it possible to run quick experiments in response to an external event? What happens when we need to deploy different strategies for different areas?

Why does Locale make sense?

If you are a data scientist, business user or city team member struggling with high turnaround times and outdated dashboards, welcome to Locale. Locale is a no-code location analytics product built for analysts and business teams to give them operational visibility and intelligence.

How does Locale work?

The Workflow with Locale

Typically companies collect three kinds of location data:

  • Demand Data: Data about users and their app events. Collected in Mixpanel, Clevertap or Amplitude.
  • Supply Data: Data about vehicles or delivery partners. Collected from their apps or sensors and stored in Amazon S3, Cassandra, etc.
  • Operations Data: Data about the deliveries and trips. Collected in MongoDB or PostgreSQL.

Locale ingests all this location data from different sources, then cleans and aggregates it so that any insight you want about your operations is just three clicks away.

With Locale, our aim is to be at the forefront of ops-driven growth, which has become critical at a time when everyone is focused on optimization and efficiency. How, you may ask? Read on!

  • ETL Pipeline: With Locale, you get an in-built pipeline that combines location data across databases, formats and systems.
  • Visualizations: At Locale, our visualizations are pre-built and tailored to the type of use case and decision you want to make.
  • Data Catalog: Locale serves as a company-wide location data catalog, with all metrics, dashboards, insights and decisions across teams in one place. You can also invite other teams onto the platform.
  • Actionability: We have a workflows module that lets you take certain actions or send notifications every time a set of conditions is met.
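To make the actionability idea concrete, here is a minimal, hypothetical sketch of a condition-triggered rule. The rule format and the notify hook are purely illustrative and not Locale's actual API:

```python
# A minimal sketch of a condition-triggered workflow rule; the rule shape
# and notify() hook are illustrative assumptions, not Locale's real API.
def make_rule(metric, threshold, action):
    """Return a check that fires `action` when `metric` exceeds `threshold`."""
    def check(snapshot):
        if snapshot.get(metric, 0) > threshold:
            action(metric, snapshot[metric])
            return True
        return False
    return check

alerts = []

def notify(metric, value):
    alerts.append(f"ALERT: {metric}={value} exceeded threshold")

rule = make_rule("cancellation_rate", 0.15, notify)
rule({"cancellation_rate": 0.21, "zone": "east"})  # fires the alert
print(alerts)
```

The idea is that the ops team encodes the condition once ("cancellation rate above 15% in a zone") and the system watches it continuously, instead of someone rechecking a dashboard every evening.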

To know more, check out our 2-minute product video here: https://www.youtube.com/watch?v=Juzmg0OaclI


Similar Reads:

Comparing Locale.ai and Uber’s Kepler.gl on their Capabilities
A comparison of how Locale differs from the open-source Kepler.gl in capabilities