The ins and outs of caching

by | Nov 26, 2020 | Engineering

Whether it’s watching your favourite movie or a tasty recipe video, we don’t like content that buffers or loads slowly. With ever-decreasing attention spans and ever-increasing content diversity (people from all around the world are uploading interesting content), serving everything quickly is an extremely challenging problem. This article dives into the pros and cons of caching your content and how you can avoid costly traps along the way.

Why is Content Diversity a Problem?

For the sake of this article, suppose you have a video-sharing platform like YouTube, but smaller. Let’s call the platform smalltube. You’re using your savings to run the company, so you have four servers at the following geographical locations:

  1. Germany
  2. United States
  3. India
  4. Brazil

One of your customers in Germany is interested in exploring a new Indian dish because his favourite television show recommended it. He found a content creator on smalltube who makes delicious Indian curry. As you know, transferring data takes time and resources (i.e. money), and your customers want to see the video as fast as possible. So you and your company have to choose between two approaches:

  1. Fast Delivery – If you want to deliver the content fast, you can replicate it to nearby servers before it’s even requested. Fast delivery lets the user fetch content from a nearby server instead of from a server on another continent. But how do you decide which servers are nearby and should receive the replica? You can’t know in advance, so you have to replicate the data to all of the servers.
  2. Economical Delivery – In this approach you wait until the data is requested and then replicate it to the customer’s nearby server, i.e. the server in Germany. As more people watch the television show and decide to learn the recipe, you serve them the replicated content. But economical delivery is slow for the initial customer.
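The economical approach above is essentially a pull-through cache: replicate only after the first request. Here is a minimal sketch; the `REGIONAL_CACHE` structure and the `fetch_from_origin` and `get_video` names are hypothetical, not part of any real smalltube API.

```python
import time

# Hypothetical regional cache on the Germany server: video_id -> (content, cached_at)
REGIONAL_CACHE = {}

def fetch_from_origin(video_id):
    """Stand-in for a slow, cross-continent fetch from the origin server."""
    return f"video-bytes-for-{video_id}"

def get_video(video_id):
    """Economical delivery: replicate only after the first request."""
    if video_id in REGIONAL_CACHE:
        content, _ = REGIONAL_CACHE[video_id]
        return content, "cache-hit"           # fast: served from the nearby server
    content = fetch_from_origin(video_id)     # slow: the first viewer pays the cost
    REGIONAL_CACHE[video_id] = (content, time.time())
    return content, "cache-miss"
```

The first viewer in Germany triggers a miss and the slow origin fetch; everyone who follows gets a hit from the nearby copy.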

Why is Fast Delivery not a good option?

Fast delivery is heavy on cost. You shouldn’t replicate all the content to all the servers you own, because it’s expensive in storage and bandwidth, and that translates to money, i.e. your savings. If you were to scale smalltube to the size of YouTube, where roughly 500 hours of video are uploaded every minute and copied to every server, you’d face a few challenges. Firstly, you’d need storage providers near each of your data centres, and even then you might fail to keep up with your growing storage requirements over time. Secondly, the huge transfers would throttle your network even if your links were ten times faster than those of a good data centre today. Worst of all, you don’t even know whether customers near the other data centres will ever request the data, so the storage and bandwidth may be wasted entirely.

You can see that the economical approach is the better one, and in practice it is the approach that companies across the industry follow.

What is caching?

When I said we replicate data, that was not completely accurate. We cache data on a server. Caching is a way to store immutable data (data which doesn’t change quickly, like an uploaded video) on servers with a specific TTL (time to live) and a hash so that it can be accessed faster.

For simplicity, remember that every file results in a different hash. If you change the content even slightly, the hash changes completely. This acts as a safety net and helps detect forgery while sharing data. The TTL tells the server how long it should keep the file before discarding it.
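You can see this hash behaviour with a few lines of Python. This is an illustrative sketch using SHA-256; the `content_hash` helper and the sample byte strings are made up for the example.

```python
import hashlib

def content_hash(data: bytes) -> str:
    """SHA-256 digest of the file contents, as a 64-character hex string."""
    return hashlib.sha256(data).hexdigest()

original = b"Indian curry recipe video, take 1"
tampered = b"Indian curry recipe video, take 2"  # a single byte changed

# Even a tiny change produces a completely different hash, so a
# mismatch between hashes reveals that the file was altered.
```

Comparing `content_hash(original)` with `content_hash(tampered)` shows two completely different strings, even though only one byte of the input differs.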

Replication vs caching

Replication is costly because there is no TTL: the server has to keep fetching the file in the background, which is expensive in bandwidth since the whole file is transferred each time. And if the file is mistakenly deleted on the main server, or the server malfunctions, every replica on the other servers is deleted along with it.

With caching you don’t need to fetch the whole file every time: you can compare the hash, which is typically a short fixed-length string (64 hex characters for SHA-256), to decide whether you need to re-fetch the file before the TTL expires. After the TTL expires, you fetch a fresh copy of the file from the main server or from peer servers that hold a newer copy. And if your main data centre has issues, caching gives you a way to recover the file in full.
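The hash comparison above can be sketched as follows. The `CachedFile` class, `refresh_if_needed` function, and the 60-second TTL are illustrative assumptions, not a real cache-server API.

```python
import hashlib
import time

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class CachedFile:
    """Illustrative cache entry holding the data, its hash, and an expiry time."""
    def __init__(self, data: bytes, ttl_seconds: float):
        self.data = data
        self.hash = sha256_hex(data)
        self.expires_at = time.time() + ttl_seconds

def refresh_if_needed(entry, origin_hash, fetch_full_file):
    """Compare hashes first; only transfer the whole file when it changed."""
    if entry.hash == origin_hash:
        return entry, "hash-match-no-transfer"   # cheap: only hashes crossed the wire
    new_data = fetch_full_file()                 # expensive: full transfer on mismatch
    return CachedFile(new_data, ttl_seconds=60), "refetched"
```

The key saving is that a hash comparison costs a few dozen bytes of traffic, while re-fetching a video costs the full file size.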

How does a TTL expire?

A TTL expires with time: every cached file is valid for a time limit, after which the cached copy is invalidated. The other way is forced expiry, where the main server instructs the cache server to invalidate the cache. If the instruction says to update the file, the cache fetches a fresh copy from the main or a peer server; if the instruction says nothing more, the file must simply be discarded.
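Both expiry paths can be sketched in one small class. This is a toy model, assuming a single in-memory store; the `CacheServer` name and its method signatures are invented for the example.

```python
import time

class CacheServer:
    """Toy cache supporting both time-based and forced expiry."""
    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def put(self, key, value, ttl_seconds):
        self._store[key] = (value, time.time() + ttl_seconds)

    def get(self, key):
        if key not in self._store:
            return None
        value, expires_at = self._store[key]
        if time.time() >= expires_at:      # time-based expiry: TTL ran out
            del self._store[key]
            return None
        return value

    def invalidate(self, key, new_value=None, ttl_seconds=60):
        """Forced expiry: the main server tells us to update or discard."""
        if new_value is not None:
            self.put(key, new_value, ttl_seconds)  # instruction says update
        else:
            self._store.pop(key, None)             # instruction says discard
```

Real cache servers layer eviction policies and persistence on top of this, but the two expiry triggers work the same way.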

What Data is Cached?

Data that doesn’t change fast is cached, because fast-changing data needs to be updated instantaneously and caching doesn’t help there; think of a live stream or the comment section on a Twitch stream. Caching is done for data that is effectively immutable, like a video, the set of images on a webpage, or a complete search result page on Google. Yes, Google caches whole search result pages to save computational power.

To conclude

I hope you now understand how caching works and how important it is to the smooth functioning of the internet. Next time a website or video loads fast, remember how much is happening in the background. Below are a few references if you want to learn more about caching.

Reference Links

TTL – https://www.cloudflare.com/en-in/learning/cdn/glossary/time-to-live-ttl/

Caching – https://www.cloudflare.com/en-in/learning/cdn/what-is-caching/

Search cache – https://purevisibility.com/cached-pages-google-mean/
