{"id":303430,"date":"2020-03-19T07:48:36","date_gmt":"2020-03-19T14:48:36","guid":{"rendered":"https:\/\/css-tricks.com\/?p=303430"},"modified":"2020-03-26T14:31:20","modified_gmt":"2020-03-26T21:31:20","slug":"consistent-backends-and-ux-what-are-the-barriers-to-adoption","status":"publish","type":"post","link":"https:\/\/css-tricks.com\/consistent-backends-and-ux-what-are-the-barriers-to-adoption\/","title":{"rendered":"Consistent Backends and UX: What are the Barriers to Adoption?"},"content":{"rendered":"\n
There are very few scenarios in which an eventually consistent database is preferable to a strongly consistent one. Further, in a multi-region application scenario where scaling is necessary, choosing either an undistributed database or an eventually consistent database is even more questionable. So what motivates engineers to ignore strongly consistent distributed databases? We have seen many reasons, but wrong assumptions drive most of them.
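Before going through those assumptions, it helps to make the stakes concrete. The sketch below uses a hypothetical `EventuallyConsistentStore` interface (the names are invented for illustration, not taken from any real driver) to show the anomaly an eventually consistent database allows: a read issued right after an acknowledged write may still return stale data.

```typescript
// Hypothetical client interface for an eventually consistent key-value store;
// the names are made up for illustration, not taken from a real driver.
interface EventuallyConsistentStore {
  write(key: string, value: string): Promise<void>; // acked by a single replica
  read(key: string): Promise<string | undefined>;   // may be served by a stale replica
}

async function renameUser(store: EventuallyConsistentStore): Promise<void> {
  await store.write("user:42:name", "Alice");

  // Reading the value straight back is not guaranteed to return "Alice":
  // the read may land on a replica that has not yet received the write.
  // Users experience this as a bug ("I saved my profile and it reverted").
  const name = await store.read("user:42:name");
  console.log(name); // "Alice" on a lucky read, the old value on an unlucky one
}
```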
As we explained in Part 1 of this series, the CAP theorem is widely accepted yet often misinterpreted. When many people misinterpret a well-known theorem, it leaves a mark. In this case, many engineers still believe that eventual consistency is a necessary evil.

“Building a strongly consistent distributed database is too hard/impossible”

It is slowly sinking in that consistency should not be sacrificed, yet many databases still put consistency second. Why is that? Some popular databases offer options that deliver higher consistency, but only at the cost of potentially very high latencies. Their sales messaging may even claim that delivering consistency at low latencies in a multi-region distributed database is incredibly hard or even impossible, and many developers have vivid memories of the very poor latencies of databases that were not built for consistency. Together, these factors reinforce the misconception that strong consistency in a distributed database with relatively low latencies is impossible.
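To illustrate what such an option looks like in practice, here is a minimal sketch using Apache Cassandra's tunable consistency through the DataStax Node.js driver. The contact point, data center name, and schema are made up; the point is that switching a write from ONE to EACH_QUORUM shrinks the window of inconsistency but makes every write wait for a quorum of replicas in each data center, that is, for cross-region round trips.

```typescript
import { Client, types } from "cassandra-driver";

// Cluster details and schema are illustrative.
const client = new Client({
  contactPoints: ["db.us-east.example.com"],
  localDataCenter: "us-east",
  keyspace: "shop",
});

const addItem = "UPDATE carts SET items = items + ? WHERE user_id = ?";

// Fast but weak: a single replica acknowledgement is enough.
async function addToCartFast(userId: string, item: string): Promise<void> {
  await client.execute(addItem, [[item], userId], {
    prepare: true,
    consistency: types.consistencies.one,
  });
}

// Stronger: a quorum of replicas in every data center must acknowledge,
// so every write now pays at least one cross-region round trip.
async function addToCartStrong(userId: string, item: string): Promise<void> {
  await client.execute(addItem, [[item], userId], {
    prepare: true,
    consistency: types.consistencies.eachQuorum,
  });
}
```

Even the stronger setting tunes only this one write; it is not the transactional, cross-document consistency this series argues for, and its latency cost grows with every region you add.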
“Premature optimization is the root of all evil”

Many engineers build according to the “premature optimization is the root of all evil” (Donald Knuth) principle, but that statement is only meant to apply to small inefficiencies. Building your startup on a strongly consistent, distributed, scalable database might seem like a premature optimization, because initially your application doesn't require scale and might not require distribution. However, we are not talking about small inefficiencies here. The requirement to scale or distribute might arise overnight when your application becomes popular. At that point, your users have a terrible experience, and you are looking at a substantial challenge to change your infrastructure and code.
“It’s hard to program against a distributed database”

This used to have some truth to it, since distributed databases were new and many came with severe limitations: they did not allow joins, only allowed key-value storage, or required you to query your data according to predefined sharding keys that you couldn't change anymore. Today, we have distributed databases with flexible models that provide the flexibility you are used to with traditional databases. This concern is closely related to the previous one, and it ignores that nowadays, starting to program against a strongly consistent distributed database is just as easy, and probably easier in the long run, as programming against a traditional database. If it's just as easy, then why not optimize from the start?
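As one illustration of that flexibility, consider a strongly consistent distributed SQL database such as CockroachDB: it speaks the PostgreSQL wire protocol, so an ordinary ad-hoc join written with the standard node-postgres client works unchanged, with no predefined sharding key and no key-value detour (the connection string and schema below are made up).

```typescript
import { Pool } from "pg";

// The cluster address and schema are illustrative.
const pool = new Pool({
  connectionString:
    "postgresql://app@my-cluster.example.com:26257/shop?sslmode=require",
});

// An ordinary ad-hoc join running against a distributed,
// strongly consistent database.
async function ordersForCustomer(email: string) {
  const { rows } = await pool.query(
    `SELECT o.id, o.total, c.name
       FROM orders o
       JOIN customers c ON c.id = o.customer_id
      WHERE c.email = $1
      ORDER BY o.created_at DESC`,
    [email],
  );
  return rows;
}
```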
Working with an eventually consistent database is like…