
Distributed SQL, distributed cache

A dynamic duo: TiDB and Momento Cache

Pete Naylor

My father used to tell me “horses for courses”—usually when I was hurriedly trying to complete a mechanical chore using the wrong wrench (and making a frustrating mess of things). It’s another way of saying “use the right tool for the job”. I’ve worked a lot with AWS databases in recent years (particularly DynamoDB) and I’ve come to see that the AWS “purpose built” database strategy is an application of the same solid guidance. There is no free lunch in databases—it’s all about selecting for the right set of trade-offs (CAP and PACELC teach us this).

When people look to SQL for powerful query capability, they have expectations for consistency, transactional isolation, and atomicity across multiple records. Historically, databases that target these properties relied on vertical scaling – centralizing all the storage, compute, and connection handling on a single node. This brings scalability concerns, and carrying these properties through to a horizontally-scaled model is a tough distributed systems problem to solve. A decade ago, NewSQL databases started to build in a simpler approach to sharding these SQL databases, but in recent years there has been a shift toward true distributed SQL (where developers can make the same global assumptions around consistency as they would in a non-sharded database). We definitely find ourselves in something of a golden age for database evolution: there are companies and projects committed to tackling the global consistency challenge in the SQL databases you know and love!


Having your cake and eating it too.

Recently I’ve been learning about an exciting option in this distributed SQL space: TiDB (by PingCAP). There’s a lot to like: TiDB is open-source, MySQL-compatible (use your regular MySQL client/driver), and it scales horizontally across shards as required but retains global consistency across your data while supporting your transactional (OLTP) needs. And here is the part that I find particularly intriguing: TiDB stores your data in both row form (optimal for OLTP) and column form (helpful for a lot of heavy analytical queries – OLAP). Hybrid transactional/analytical processing (HTAP) has long been a possibility in OLTP SQL databases – but constrained by row-based storage. TiDB shifts the bounds and makes HTAP a much more flexible and reasonable approach for a broader range of use case requirements.
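To make the MySQL-compatibility point concrete: because TiDB speaks the MySQL wire protocol, an ordinary MySQL driver connects to it unchanged. Here's a minimal sketch in Java using the stock MySQL Connector/J JDBC driver. The host, user, and password are placeholders, and the port and TLS settings assume a typical TiDB Cloud endpoint – adjust for your own cluster.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class TidbHello {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details; TiDB Cloud typically requires TLS
            // and listens on port 4000.
            String url = "jdbc:mysql://<your-tidb-host>:4000/test?sslMode=VERIFY_IDENTITY";

            try (Connection conn = DriverManager.getConnection(url, "<user>", "<password>");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT VERSION()")) {
                if (rs.next()) {
                    // TiDB reports a MySQL-compatible version string.
                    System.out.println("Connected: " + rs.getString(1));
                }
            }
        }
    }

No TiDB-specific driver or dialect is needed – your existing MySQL tooling carries over as-is.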

Some horse trading: what can you get if you give a little elsewhere?

Remember I mentioned trade-offs? Here’s the thing – not all reads from a database need to be consistent or respect transactional isolation – and if you can do without those properties you can potentially get gains in performance and availability. Adjusting those reads to use a separate, eventually consistent store can also bring benefits in cost and scaling smoothness for the operations that do need the consistent transactional store. You may have seen this with some traditional SQL databases that use asynchronous replication to separate nodes as a means to improve scale for eventually consistent reads – in fact, this is also what’s happening within DynamoDB when you request an eventually consistent read. Another approach is to apply an in-memory store as a cache. Unfortunately, adding caching to your architecture has generally meant tackling more infrastructure management and operational complexity. That’s a particularly unattractive proposition if you want to simplify your architecture and reduce operational overhead by using components like PingCAP’s TiDB Cloud offerings. Good news: now you can have the simplicity and cost savings of a truly serverless cache, with Momento Cache!

Putting it all together.

TiDB Cloud and Momento Cache make a compelling combination. In fact, one of the clever folks on the TiDB team has been building out example AWS implementations: tidb-momento-example. I tried the Read-Aside Cache example first: a Java application packaged as a Lambda function. The application retrieves data from TiDB and stores it in the cache for future reads. Using X-Ray, it’s easy to see the latency improvement that can be gained for reads where eventual consistency is acceptable.
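The read-aside flow itself is short enough to sketch. The outline below is my own simplified version of the pattern, not the repo's code: check Momento Cache first, fall back to a TiDB query on a miss, then write the result into the cache for subsequent reads. The Momento Java SDK names used here (CacheClient, Configurations, GetResponse.Hit, and so on) reflect my understanding of the SDK at the time of writing – check the SDK docs and the example repo for the exact, current API.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.time.Duration;

    import momento.sdk.CacheClient;
    import momento.sdk.auth.CredentialProvider;
    import momento.sdk.config.Configurations;
    import momento.sdk.responses.cache.GetResponse;

    public class ReadAsideExample {
        private static final String CACHE_NAME = "tidb-read-aside"; // hypothetical cache name

        private final CacheClient cache;
        private final Connection tidb; // JDBC connection to TiDB (MySQL protocol)

        public ReadAsideExample(Connection tidb) {
            this.tidb = tidb;
            // API key read from an environment variable; 60-second default TTL for cached items.
            this.cache = CacheClient.builder(
                    CredentialProvider.fromEnvVar("MOMENTO_API_KEY"),
                    Configurations.Laptop.v1(),
                    Duration.ofSeconds(60))
                .build();
        }

        public String getItemName(long id) throws Exception {
            String key = "item:" + id;

            // 1. Try the cache first.
            GetResponse response = cache.get(CACHE_NAME, key).join();
            if (response instanceof GetResponse.Hit hit) {
                return hit.valueString(); // eventual consistency is acceptable for this read
            }

            // 2. Cache miss: read from TiDB over plain JDBC.
            String name;
            try (PreparedStatement ps =
                     tidb.prepareStatement("SELECT name FROM items WHERE id = ?")) {
                ps.setLong(1, id);
                try (ResultSet rs = ps.executeQuery()) {
                    name = rs.next() ? rs.getString(1) : null;
                }
            }

            // 3. Populate the cache so subsequent reads skip the database.
            if (name != null) {
                cache.set(CACHE_NAME, key, name).join();
            }
            return name;
        }
    }

The TTL is doing the consistency work here: cached values can lag behind TiDB by up to the TTL you choose, which is exactly the trade-off discussed above.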

Then I tried the Spring Boot example application from the same repo. It shows how easy it is to build in read-aside caching using annotations. Spring annotations let developers attach metadata to their classes for interpretation by the compiler or framework – a big time saver. To learn more about why this kind of capability is important in complex software environments, take a look at Edwin Rivera’s article: Fascinating facts about facades at CBS Sports. With the application running, there are two endpoints you can access on localhost. One always goes directly to TiDB with an update to a row, followed immediately by a read of the same item. The other attempts to retrieve the item from the cache first; if it’s not found, it performs the update/read as normal and writes the result into the cache. Exercising the endpoints is easy with a client such as curl or wget, and I found it informative to time my curl commands. The performance win from caching then became very clear.
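In Spring Boot terms, the annotation-driven version of the same idea can look something like the sketch below. It uses standard Spring caching annotations (@Cacheable and @CachePut) and assumes a Momento-backed CacheManager has already been configured and @EnableCaching applied elsewhere, as the example repo does. The entity, repository, and method names here are illustrative rather than the repo's actual code.

    import org.springframework.cache.annotation.CachePut;
    import org.springframework.cache.annotation.Cacheable;
    import org.springframework.stereotype.Service;

    // Minimal illustrative types; a real app would use a JPA entity and a repository backed by TiDB.
    class Item {
        private final long id;
        private final String name;
        Item(long id, String name) { this.id = id; this.name = name; }
        public long getId() { return id; }
        public String getName() { return name; }
    }

    interface ItemRepository {
        Item findById(long id);
        Item save(Item item);
    }

    @Service
    public class ItemService {

        private final ItemRepository repository;

        public ItemService(ItemRepository repository) {
            this.repository = repository;
        }

        // Read-aside: Spring checks the "items" cache first; on a miss it runs the
        // method body (a TiDB read) and stores the result under the given key.
        @Cacheable(cacheNames = "items", key = "#id")
        public Item findById(long id) {
            return repository.findById(id);
        }

        // Write path: update TiDB, then refresh the cached entry so later reads
        // see the new value without a round trip to the database.
        @CachePut(cacheNames = "items", key = "#item.id")
        public Item update(Item item) {
            return repository.save(item);
        }
    }

The appeal of this style is that the caching logic never appears in the service code at all – swap the CacheManager and the same annotations work against a different backing store.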

What did I learn? And what’s next?

These examples are well worth experimenting with in my opinion—not just because they show how easy it is to build with TiDB and Momento Cache, but because they meaningfully demonstrate why caching in a read-aside pattern can be the right fit for many use cases. I look forward to seeing more examples for other programming languages as this repo grows. Please contribute if you’re interested!

I’m also very interested to hear about other integration ideas you might have for Momento Cache. If you want to chat sometime, reach out on Twitter, LinkedIn, or via email.
