Introduction

When you are at a very early stage and have just started out with an idea, you are looking to validate your hypothesis first. The priority is to build and ship things quickly, get feedback, and iterate and improve as fast as you possibly can. This is where we were as well when we started Locale.

Picking a database for long-term operations is challenging. There are several parameters to evaluate a database on: its ability to work with different data models, the balance between read and write speeds, and the support system and community around it.

It is always easy to justify technology choices in hindsight but hard to do so when looking ahead. This article is about how we made those decisions in the ever-changing environment that Locale faced.

Since our product's scale and functionality were changing rapidly, we did not optimise for problems before we actually had them. We started by using PostgreSQL as the primary database for both traditional storage and analytics since it had all the features we wanted at the time (a rough sketch of such a table follows the list):

  • Relational Database features for CRUD.
  • JSONB for storing configurations and JSON in general.
  • PostGIS for the core GIS related functionality behind Locale.
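
For illustration, here is a minimal sketch of what such a table could look like; the table and column names are hypothetical, not our actual schema:

```sql
-- Hypothetical PostgreSQL table combining relational columns,
-- a JSONB configuration column and a PostGIS geometry column.
CREATE EXTENSION IF NOT EXISTS postgis;

CREATE TABLE vehicle_events (
    id          BIGSERIAL PRIMARY KEY,
    vehicle_id  TEXT NOT NULL,
    recorded_at TIMESTAMPTZ NOT NULL,
    config      JSONB,                 -- free-form, per-client configuration
    location    GEOMETRY(Point, 4326)  -- PostGIS point (longitude/latitude)
);

-- A typical GIS query: count events that fall inside a polygon
SELECT count(*)
FROM vehicle_events
WHERE ST_Within(
    location,
    ST_GeomFromText('POLYGON((77.5 12.9, 77.7 12.9, 77.7 13.1, 77.5 13.1, 77.5 12.9))', 4326)
);
```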

The Problem

One of the key advantages of Locale is that there is no fixed schema in which you need to feed data into our systems. This gives our clients a lot of flexibility and opens up the platform to do visualisations not only across time and geo-location but also by combining other business entities. As we onboarded more clients, the problems and scale we were trying to solve for also increased.

In the beginning, we were handling datasets of a few hundred thousand rows a month, which has since quickly scaled to more than 2 billion rows.

This introduced a host of new challenges that included:

  • Scaling our backend to handle writes
  • Along with writes, read speeds had to scale across different entities and metrics
  • Customised business problems like fleet time utilisation

OLAP queries don't perform well in PostgreSQL, not because it is built badly but because it is primarily an OLTP database. That brings us to another topic: row stores vs column stores.

Row Stores vs Column Stores

There are a lot of articles out there that explain the differences and tradeoffs of row stores vs column stores. This one does it in a very lucid way:

Deciding between Row- and Columnar-Stores | Why We Chose Both

To summarize:

  • OLAP databases use columnar storage instead of row-based storage behind the scenes
  • Columnar storage is more efficient in terms of I/O since most OLAP queries fetch only the columns they need rather than all of them (see the example query after this list)
  • Since the values of a column are stored together, the database can employ efficient compression algorithms, which further reduces I/O load
  • With columnar data, the optimisation can then be pushed down to the CPU level through SIMD instructions
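
To make the I/O point concrete, here is an illustrative analytical query (the table and columns are hypothetical): it touches only two columns of a potentially very wide table, so a column store reads just those two compressed columns off disk, while a row store has to scan entire rows:

```sql
-- Daily active vehicles over the last 30 days.
-- Only recorded_at and vehicle_id are read, no matter how wide the table is.
SELECT
    date_trunc('day', recorded_at) AS day,
    count(DISTINCT vehicle_id)     AS active_vehicles
FROM vehicle_events
WHERE recorded_at >= now() - INTERVAL '30 days'
GROUP BY day
ORDER BY day;
```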

In data-intensive applications, it is imperative that all the components of a computer, particularly the CPU, work well together.

The Infinite Space Between Words

Why ClickHouse?

At this point we were sure that we wanted an OLAP database but we were not sure which database to pick until we looked at ClickHouse.

Open Source

"Homo sapiens were so successful because of the fact that we were the only animals who are capable of large-scale cooperation." - Yuval Noah Harari (Sapiens– A Brief History of Humankind)

If it were not for open source, different companies across the world would all be solving the same problems over and over again. Open source enabled innovation in a way that became a win-win for everyone involved. When we pick tools for the cloud, we make sure they are a combination of Open Source + Cloud Native. This way, we are not vendor locked-in and are also able to leverage the power of the open source community. ClickHouse is licensed under the Apache License, which is probably the most open-source-friendly license there is.

Massively Parallel Processing

ClickHouse is not just an OLAP database; it also has MPP (Massively Parallel Processing) and SNA (Shared Nothing Architecture) features built in from the ground up to enable massive scale. For the scale and nature of data that Locale handles, the database should not just perform well but also scale with rapidly expanding data needs. MPP does just that.
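
As a rough sketch of what this shared-nothing scaling looks like in practice (the cluster, database and table names below are made up for illustration), data can be sharded across nodes via local tables and queried through a Distributed table, which fans queries out to the shards and merges the results:

```sql
-- Local table, created on every node of a hypothetical cluster
CREATE TABLE events_local ON CLUSTER locale_cluster
(
    vehicle_id  String,
    recorded_at DateTime,
    metric      Float64
)
ENGINE = MergeTree()
PARTITION BY toYYYYMM(recorded_at)
ORDER BY (vehicle_id, recorded_at);

-- Distributed table that routes reads and writes to the shards
CREATE TABLE events ON CLUSTER locale_cluster AS events_local
ENGINE = Distributed(locale_cluster, currentDatabase(), events_local, cityHash64(vehicle_id));
```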

Proven At Scale

Before we started to use ClickHouse inside Locale, we wanted to make sure that it was battle hardened. It turns out that it was already operating at scale at Yandex, where it was originally built to power Yandex.Metrica, the second largest web analytics platform in the world:

History | ClickHouse Documentation

Along with Yandex, several other companies were also using ClickHouse at scale.

Feature Set and PostgreSQL Compatibility

A lot of the features we were using in PostgreSQL were natively present in ClickHouse as well.

ClickHouse is also compatible with a lot of PostgreSQL queries, i.e. its SQL dialect is not very different, except where we want functionality specific to ClickHouse.
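
For example (illustrative queries, not from our codebase), a plain aggregation runs unchanged on both databases, while the dialects diverge once you reach for ClickHouse-specific functions:

```sql
-- Runs as-is in both PostgreSQL and ClickHouse
SELECT vehicle_id, count(*) AS trips, avg(metric) AS avg_metric
FROM events
GROUP BY vehicle_id
HAVING count(*) > 100
ORDER BY trips DESC
LIMIT 10;

-- ClickHouse-specific: approximate distinct counts and quantiles
SELECT uniq(vehicle_id) AS approx_vehicles,
       quantile(0.95)(metric) AS p95_metric
FROM events;
```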

Altinity

Open Source is built on the foundation of collaboration. It is different from enterprise software, and it doesn't make sense for every company to try to build this core competency in-house.

Here is where we want to give a shout out to Altinity, a consultancy that provides hosting, support and a lot more for ClickHouse at a reasonable price point. We've been using their services and it's been great so far.

After ClickHouse - Scaling 100x

Prior to using ClickHouse, we were using PostgreSQL as mentioned above. The following benchmark was done on a dataset of 50M rows and shows the performance improvement we got after moving to ClickHouse.

Performance Improvement of 10x to 100x can be easily seen

The impact of using ClickHouse for our complex queries has been monumental, to say the least. Queries that used to take 30-40 seconds in PostgreSQL take a fraction of a second in ClickHouse. Certain analytical queries (like polygon queries over 50M rows) took more than 30 minutes to execute in PostgreSQL, so it was impractical to run them there as they would time out in most cases. With ClickHouse, they now take just 2-3 seconds. This has enabled us to add polygons and get results in real time.
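
As an illustration of the kind of polygon query this refers to (the table and column names are hypothetical), ClickHouse can filter points against a polygon directly in SQL:

```sql
-- Count events inside a geofence; pointInPolygon checks a (lon, lat)
-- tuple against a list of polygon vertices.
SELECT count() AS events_inside
FROM vehicle_events
WHERE pointInPolygon(
    (longitude, latitude),
    [(77.55, 12.90), (77.70, 12.90), (77.70, 13.05), (77.55, 13.05)]
);
```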

The migration pattern we followed to move from PostgreSQL to ClickHouse was also gradual: we migrated one customer at a time, and one table at a time. The improvements we've observed are not just in speed but also in the usability of the app. Nobody likes to sit in front of a dashboard and wait for it to load.

There ain't no such thing as a free lunch, be it in life or in computer science specifically.

If ClickHouse offers so many performance advantages, then what's the catch? In the next section we review the often overlooked concept of writes in ClickHouse.

Write Challenges in ClickHouse

OLAP databases hate writes simply because they store data in columnar format. Each row write has to touch every column, making it costlier than in row-based systems. We should not use ClickHouse as a data store for frequent small writes. But what if we want to ingest data in bulk?

ClickHouse has something called MergeTrees, which are basically a modified form of LSM trees and enable fast writes to columnar data structures. Incoming data is first collected as small parts and then merged into larger ones by a background process. There are several variants of MergeTree, such as ReplacingMergeTree, which removes duplicates based on the sorting key, and AggregatingMergeTree, which aggregates rows using an aggregate function instead of replacing them. The full list of engines in the MergeTree family can be found in the documentation. This comes with its own caveat, i.e. eventual consistency. You can't have everything at the same time (the CAP theorem says hello).
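
Here is a rough sketch of what this looks like in practice (names and values are illustrative): a ReplacingMergeTree table that deduplicates rows sharing the same sorting key during background merges, fed with batched inserts:

```sql
CREATE TABLE vehicle_events
(
    vehicle_id  String,
    recorded_at DateTime,
    speed       Float32,
    version     UInt64
)
ENGINE = ReplacingMergeTree(version)   -- on merge, keeps the row with the highest version
PARTITION BY toYYYYMM(recorded_at)
ORDER BY (vehicle_id, recorded_at);    -- rows sharing this key get deduplicated

-- Insert in large batches (ideally thousands of rows per INSERT), not row by row
INSERT INTO vehicle_events VALUES
    ('v-101', '2021-06-01 10:00:00', 42.5, 1),
    ('v-101', '2021-06-01 10:00:00', 43.0, 2);

-- Until a merge runs, both versions may still be visible (eventual consistency);
-- FINAL forces deduplication at query time, at a cost
SELECT * FROM vehicle_events FINAL WHERE vehicle_id = 'v-101';
```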

Despite the write challenges associated with ClickHouse, it offers a meaningful way to scale writes that does not significantly slow down the database. On this note, here is a nice article by the folks at Altinity on how much ClickHouse can scale with respect to writes.

To recap:

Understand the Core of the Problem


Understand the core of the problem and the architecture. We cannot solve things by going around them.

Choose an Architecture That Makes Sense for the Problem

The problem always comes first. Solutioning comes later. Thinking from a solution perspective introduces bias and makes us try to force fit the problem to the solution rather than the other way around.

Open Source for the Win

Even in situations where proprietary systems may be the better choice, try to solve the problem using open source tools first. If this approach fails, it gives you solid ground to justify moving to proprietary platforms and tools. In fact, many proprietary systems are themselves built on top of an open source core.

Maturity of the Software


Has the software/library been battle tested? If not, does it make sense for your company to spend time and resources on that testing? For open source software, the maintainers often work on it in their spare time, so it is not fair for us as users to expect them to fix something according to a specific company's business need. It is always better to choose well-tested software with a large community so we do not run into such problems. This goes for commercial tools as well: it is unlikely that vendors will prioritise your issues when a tool is still in its early or beta stages.

Commercial Support

For most companies it makes a lot of sense to focus on their core competency, i.e. building the business, and commercial support can come in very handy here. Open Source + Commercial Support is a good combination since we are not vendor locked-in and can always choose a different service provider if we are not satisfied. We also retain the option of building up our in-house knowledge.

Best Practices

Let's look at a few best practices and lessons that we learnt using ClickHouse:

  • It is important to batch your writes properly and avoid frequent small inserts; every insert creates new parts that the MergeTree engine has to merge in the background.
  • Partition your tables in a meaningful manner.
  • Avoid doing too many joins. Denormalisation is the way to go for schema design, but it needs to be balanced against write costs.
  • Migrating from other databases can be simple or complex depending on how tightly your application is tailored to that database's dialect.
  • Get familiar with the internals. ClickHouse has enough support for monitoring within the database itself (see the queries after this list).
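
For the last point, here are a couple of queries against ClickHouse's standard system tables that we could use as a starting point for monitoring:

```sql
-- Parts per table: too many small parts usually means inserts are not batched well
SELECT table,
       count()                                 AS parts,
       formatReadableSize(sum(bytes_on_disk))  AS size_on_disk
FROM system.parts
WHERE active
GROUP BY table
ORDER BY parts DESC;

-- Slowest recently finished queries
SELECT query_duration_ms, query
FROM system.query_log
WHERE type = 'QueryFinish'
ORDER BY query_duration_ms DESC
LIMIT 10;
```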

Conclusion

We've touched upon a lot of topics, and hopefully this article helps you understand how we use ClickHouse at Locale.ai for analytics and also gives you an idea of how to choose a technology. I hope you're able to draw parallels from our experience for your own engineering team's journey.

Locale is not just yet another analytics tool. It's the world's first-ever customizable control tower for achieving operational efficiency in real-world business operations. We are building a product that is creating a new category. Our vision is to become the "God Mode" for every commercial asset that moves in the world. Try out the demo here.

ClickHouse is just one piece of the puzzle. Stay tuned for more blogs around Locale's architecture and how we do engineering. Liked the blog? Share it across your favourite social media and shoot any questions our way on the bird app.