Enter the CDN
For example, your sample product image can be delivered by a CDN POP (point of presence) close to your end customer, whatever city or country they are in, taking the strain off your web server and ensuring your assets are always delivered as efficiently as possible. Oh, and it is massively scalable, redundant and, best of all, automatic!
There are a plethora of CDN services out there, with Cloudflare, Azure and Amazon being the biggest. All are impressive at what they do, but Cloudflare stands out for its additional features and transport backbone network. One of the coolest features of Cloudflare's CDN is its ability to perform 'tiered caching'. The simplest way to describe tiered caching is that there is a master CDN POP, and the other POPs pull from it. When a user in a given city requests an image from your website, Cloudflare delivers it to the user and saves a copy both in the master POP and in the POP closest to that user. Subsequent users in that area pull the image from the nearby POP; users in other cities get the image delivered from the POP closest to them, but that POP pulls it from the master POP, not from your server. All of this happens in milliseconds, making your website lightning fast. As with any caching system, you specify the age or freshness of the file, anywhere from 1 hour to 1 year.
You can quickly start to see the benefits of using a globally available content delivery network.
Because these files rarely change over time, they make great candidates for long cache times, such as a year.
A great CDN will also handle all of the technical aspects, so you can set it and forget it (for the most part).
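Cache lifetimes like these are communicated to browsers and CDN POPs through the standard `Cache-Control` response header. Here is a minimal sketch in Python of how an origin server might pick that header per request; the `/static/` prefix and function name are illustrative assumptions, not anything prescribed by a particular CDN.

```python
# One year, expressed in seconds -- the longest common max-age for static assets.
ONE_YEAR = 60 * 60 * 24 * 365

def cache_control_for(path: str) -> str:
    """Pick a Cache-Control value based on the request path (illustrative)."""
    if path.startswith("/static/"):
        # Images, fonts and versioned CSS/JS rarely change,
        # so browsers and CDN POPs may cache them for a full year.
        return f"public, max-age={ONE_YEAR}, immutable"
    # HTML and API responses change often; force revalidation on every request.
    return "no-cache"

print(cache_control_for("/static/product.jpg"))
# public, max-age=31536000, immutable
```

A common refinement is to put a content hash in each asset's filename, so a changed file gets a new URL and the year-long cache never serves stale content.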
Tiered Cache uses the size of Cloudflare’s network to reduce requests to customer origins by dramatically increasing cache hit ratios. With data centers around the world, Cloudflare caches content very close to end users. However, if a piece of content is not in cache, the Cloudflare edge data centers must contact the origin server to receive the cacheable content.
Tiered Cache works by dividing Cloudflare’s data centers into a hierarchy of lower tiers and upper tiers. If content is not cached in a lower-tier data center (generally the one closest to a visitor), the lower tier asks an upper tier whether it has the content. If the upper tier does not have the content, only the upper tier may ask the origin for it. This practice improves bandwidth efficiency by limiting the number of data centers that can ask the origin for content, which reduces origin load and makes websites more cost-effective to operate.
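The hierarchy described above can be modeled in a few lines of Python. This is a toy sketch, not Cloudflare's actual implementation: several lower-tier POPs all miss locally, but because they funnel through a single upper tier, the origin is contacted exactly once.

```python
class Origin:
    """Stand-in for your web server; counts how often it is contacted."""
    def __init__(self):
        self.hits = 0

    def get(self, key):
        self.hits += 1  # every call here costs origin bandwidth
        return f"content-for-{key}"

class Cache:
    """A caching tier that asks its parent (upper tier or origin) on a miss."""
    def __init__(self, parent):
        self.parent = parent
        self.store = {}

    def get(self, key):
        if key not in self.store:           # cache miss: ask the tier above
            self.store[key] = self.parent.get(key)
        return self.store[key]

origin = Origin()
upper = Cache(origin)                        # only this tier may reach the origin
lower_tiers = [Cache(upper) for _ in range(5)]  # e.g. five regional POPs

for pop in lower_tiers:
    pop.get("product.jpg")                   # every lower tier misses locally...

print(origin.hits)  # ...but the origin is contacted only once: prints 1
```

Without the upper tier, each of the five POPs would have gone to the origin directly, five requests instead of one.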
Additionally, Tiered Cache concentrates connections to origin servers so they come from a small number of data centers rather than the full set of network locations. This results in fewer open connections using server resources.
Argo Smart Routing detects real-time network issues and routes traffic across the most efficient network path. These benefits are most apparent for users farthest from your origin server.
HTTP/3 is a major revision of the Web’s protocol designed to take advantage of QUIC, a new encrypted-by-default Internet transport protocol that provides a number of improvements designed to accelerate HTTP traffic as well as make it more secure.
Instead of using TCP as the transport layer for an HTTP or HTTPS session, HTTP/3 uses QUIC, which, among other things, introduces streams as first-class citizens at the transport layer. QUIC streams share the same QUIC connection, so no additional handshakes or slow starts are required to create new ones, and streams are delivered independently, so in most cases packet loss affecting one stream doesn't affect the others. This is possible because QUIC packets are encapsulated in UDP datagrams.
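That independence is exactly what eliminates TCP's "head-of-line blocking". The toy model below (a simplification, not the actual QUIC wire format) shows the difference: with one TCP byte stream, a single lost packet stalls everything behind it; with per-stream ordering, only the stream that lost a packet has to wait.

```python
def tcp_deliverable(packets, lost):
    """TCP model: one in-order byte stream; a loss stalls all later data."""
    out = []
    for i, (_, payload) in enumerate(packets):
        if i in lost:
            break  # everything after the lost packet waits for retransmission
        out.append(payload)
    return out

def quic_deliverable(packets, lost):
    """QUIC model: each stream is ordered independently."""
    out, blocked = {}, set()
    for i, (stream, payload) in enumerate(packets):
        if i in lost:
            blocked.add(stream)  # only this stream waits for retransmission
        elif stream not in blocked:
            out.setdefault(stream, []).append(payload)
    return out

# Three interleaved streams; packet 1 (on the "css" stream) is lost in transit.
packets = [("html", "h1"), ("css", "c1"), ("js", "j1"),
           ("html", "h2"), ("js", "j2")]
print(tcp_deliverable(packets, lost={1}))   # only 'h1'; the rest is blocked
print(quic_deliverable(packets, lost={1}))  # html and js arrive in full
```

In the TCP case the application sees nothing past the loss; in the QUIC case only the `css` stream is delayed while `html` and `js` complete normally.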
QUIC and HTTP/3 are very exciting standards, promising to address many of the shortcomings of HTTP/1.1 and HTTP/2 and ushering in a new era of performance on the web.