Colocation offers benefits that include increased flexibility, better redundancy, improved security, and cost savings. With the widespread adoption of cloud architectures, another colocation benefit is being realized. As organizations move applications to the Public Cloud, latency problems increase for certain workloads. Because colocation makes it possible to move workloads closer to end users, it can be an effective way to reduce location-based latency.

Latency is the Round Trip Time (RTT), measured in milliseconds, required for a data request to be serviced. Many workloads perform well enough without special accommodations: most website requests are serviced in less than a second, though studies indicate more than half of users will abandon a website that takes more than 3 seconds to load. Research by Google indicates that if a search request takes a half second (500 ms) to yield results, traffic drops by about 20%. Amazon estimates that every 100 ms of added latency results in a 1% reduction in sales.
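To make the Amazon rule of thumb above concrete, here is a minimal sketch (the function name and the linear scaling are our own illustration, not a published Amazon model) that turns added latency into an estimated sales impact:

```python
def estimated_sales_impact(added_latency_ms: float) -> float:
    """Estimate the fraction of sales lost using the cited rule of thumb:
    every 100 ms of added latency costs about 1% of sales.
    Assumes the relationship is linear, which is a simplification."""
    return added_latency_ms / 100.0 * 0.01

# 250 ms of added latency -> an estimated 2.5% reduction in sales
print(f"{estimated_sales_impact(250) * 100:.1f}%")  # prints 2.5%
```

Even small per-request delays compound into measurable business impact, which is why latency budgets are worth tracking.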

Though many applications experience latency below 100 ms, we should not conclude we are in the clear as far as latency is concerned. There are applications where very low latency is essential. Virtual Reality headsets can become disorienting if latency exceeds about 10 ms, with some users reporting something akin to sea sickness. Multiplayer games, autonomous vehicles, Internet of Things (IoT) devices, health care monitors, and factory robots all require very low latency, with some demanding latency below 10 ms. The trend is toward more and more applications that require low latency.

Latency is affected by a number of variables, including available bandwidth, network congestion, packet switching quality, network configuration, and the performance of network equipment. The route taken by a particular transmission is especially important. A typical internet connection passes through 10 to 12 gateway nodes as packets are processed and forwarded from one router to the next. The delays imposed by these “hops” are a major source of latency and are not mitigated by simply adding more bandwidth.
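The point about hops can be sketched with a simple model (the per-hop delay figures below are assumed for illustration): total RTT is roughly the sum of per-hop processing and queuing delays plus propagation time, and none of those terms shrink just because the link has more bandwidth.

```python
def total_latency_ms(per_hop_delays_ms, propagation_ms):
    """Rough RTT model: sum of per-hop processing/queuing delays
    plus end-to-end propagation time. Bandwidth does not appear here,
    which is why adding bandwidth alone does not remove hop latency."""
    return sum(per_hop_delays_ms) + propagation_ms

# 11 gateway hops at ~2 ms each (assumed) plus 15 ms of propagation
hops = [2.0] * 11
print(total_latency_ms(hops, propagation_ms=15.0))  # prints 37.0
```

Cutting the route from 11 hops to 3 in this model saves 16 ms before any other optimization, which is the effect a shorter, more local path delivers.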

Public Cloud (AWS, Azure, Google Cloud) users are coming to the realization that latency can pose a significant problem for certain cloud workloads. This is due to the number of hops traversed and the geographic locations of the various nodes in a transmission. Though packets travel at nearly the speed of light, the actual distances covered can total thousands of kilometers. Depending on the routing algorithms in use, packets can be sent across the country and back for delivery to a nearby destination.

If end users are clustered in a particular region, then a nearby colocation facility should be considered to address Public Cloud latency issues. By hosting servers near end users, you dramatically reduce the distance packets must traverse, and you will likely reduce the number of hops as well.

Connecticut has many organizations that may benefit from moving some workloads from the Public Cloud to a local colocation facility, because these organizations primarily serve data to local end users. The state is home to many universities whose students access information from systems close to where they live. Hospitals and other health care providers also serve mostly local clients and face increasing demands for low latency to support real-time monitors. Municipalities and state government agencies likewise have a local footprint and deploy IoT applications that require low latency. Manufacturers and many other Connecticut companies combine low latency requirements with a local focus and thus may be able to improve their data delivery performance.

Please contact CAPS if your organization serves a local end user population here in Connecticut and you are concerned about latency. We can help you determine if colocation and/or our local cloud services can provide better responsiveness to your clients by locating your workloads closer to your end users.