Colocation data centers provide tenants with high-availability, secure infrastructure hosting by leveraging investments in redundant power, environmental, networking, and security systems. Continuous monitoring of key parameters is critical to ensure these facilities function properly, and alerts are generated in real time whenever a threshold is exceeded. Though many conditions can be tracked, here are seven of the most important parameters to monitor.

Power

Power is the most important parameter to monitor in a data center. It is so vital that most facilities have several layers of redundancy to automatically back up the utility power that comes from the electric company. When primary power is interrupted, Uninterruptible Power Supply (UPS) systems take over, and generators back up the UPS systems for longer-term power until utility service is restored. Though failover from utility power to UPS to generator is automatic, alerts are issued to data center personnel as soon as a problem is detected so they can determine the cause of the outage.
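To make the failover sequence concrete, here is a minimal conceptual sketch of the transfer-and-alert logic. The source names and health flags are illustrative assumptions; a real facility reads these states from its transfer switches and building management system.

```python
# A conceptual sketch of the automatic failover chain with alerting.
# Source names and health flags are illustrative; a real facility reads
# these states from its transfer switches and building management system.

FAILOVER_ORDER = ["utility", "UPS", "generator"]

def select_power_source(healthy):
    """Return the first healthy source in failover order."""
    for source in FAILOVER_ORDER:
        if healthy.get(source):
            return source
    raise RuntimeError("no power source available")

def check_power(healthy, current):
    """Fail over if needed and alert staff on any source change."""
    selected = select_power_source(healthy)
    if selected != current:
        print(f"ALERT: power transferred from {current} to {selected}")
    return selected

# Example: utility power fails and the UPS carries the load.
state = check_power({"utility": False, "UPS": True, "generator": True},
                    current="utility")
```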

Heat

Temperatures are monitored throughout the data center to make sure the air conditioning systems are functioning properly. Elevated temperatures can damage IT systems and interrupt operations, so alerts are issued in real time whenever a temperature threshold is exceeded.

Humidity

Relative humidity is also measured and reported whenever a reading falls outside an established range. Too much or too little humidity can affect the performance of servers and other information technology systems.
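To illustrate how threshold-based environmental alerting works, here is a minimal sketch. The setpoints are illustrative assumptions; real facilities tune them to equipment specifications and guidance such as ASHRAE's.

```python
# A minimal sketch of threshold-based environmental alerting. The
# setpoints are illustrative; real facilities tune them to equipment
# specifications and guidance such as ASHRAE's.

THRESHOLDS = {
    "temperature_f": (64.0, 81.0),  # acceptable low/high inlet temperature
    "humidity_pct": (20.0, 80.0),   # acceptable relative humidity range
}

def check_reading(sensor, metric, value):
    """Return an alert string if the reading is out of range, else None."""
    low, high = THRESHOLDS[metric]
    if not low <= value <= high:
        return f"ALERT {sensor}: {metric}={value} outside {low}-{high}"
    return None

# Example: a sensor near row 3 reads 85 F and trips a real-time alert.
alert = check_reading("row3-rack12", "temperature_f", 85.0)
if alert:
    print(alert)
```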

Network

Colocation data centers provide a variety of internet bandwidth services. Since connectivity is essential, automatic failover to a redundant backup circuit is frequently employed. Network performance parameters such as packet loss and total loss of internet service are monitored and reported.

Colocation clients have the option of procuring their own internet connectivity, or they may procure these services from the data center. Clients who get internet services through the colocation data center benefit from automatic failover to a backup internet service provider circuit. Some colocation providers also monitor each client’s internet service availability, issuing an alert if service is unavailable for a specified period (typically two minutes). Clients may then be contacted and told that a firewall issue or some other problem may be impacting their internet access.
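A minimal sketch of this kind of availability check might look like the following, assuming Linux-style ping flags and an illustrative hostname; production monitors use dedicated tooling rather than a loop like this.

```python
# A minimal sketch of circuit-availability monitoring with a sustained-
# outage alert (the ~2-minute window mentioned above). The hostname,
# Linux-style ping flags, and alert transport are illustrative.
import subprocess
import time

OUTAGE_SECONDS = 120   # alert after ~2 minutes of failed checks
CHECK_INTERVAL = 10    # seconds between checks

def is_reachable(host):
    """Send one ICMP echo (Linux ping flags); True if it was answered."""
    result = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                            capture_output=True)
    return result.returncode == 0

def monitor(host):
    down_since = None
    while True:
        if is_reachable(host):
            down_since = None
        elif down_since is None:
            down_since = time.time()
        elif time.time() - down_since >= OUTAGE_SECONDS:
            print(f"ALERT: {host} unreachable for {OUTAGE_SECONDS}s")
            down_since = time.time()  # rearm instead of alerting every cycle
        time.sleep(CHECK_INTERVAL)

# monitor("client-gateway.example.net")  # runs until interrupted
```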

Fire

Fire suppression systems are installed at locations throughout the data center. These systems report alerts to the data center engineers and to the building supervisors whenever a fire suppression system is activated.

Water

Monitors are placed in the raised-floor areas of the data center to detect water and other fluids.

Data Center Access

People with authorized security cards may enter the facility at any time. However, all entry to the data center is logged, and activity inside is video recorded 24/7/365. Doors to the data center issue an alarm whenever they are opened by someone without proper credentials or left open for more than a few seconds. When access alerts are issued, staff can review the video to determine whether activity in the data center requires their attention.
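The two door alarms described above can be sketched as simple event handlers; the badge IDs and grace period are illustrative assumptions, not an actual access-control system.

```python
# A minimal sketch of the two access alarms described above: an invalid
# badge swipe and a door held open too long. Badge IDs and the grace
# period are illustrative assumptions.

AUTHORIZED_BADGES = {"badge-1001", "badge-1002"}
MAX_OPEN_SECONDS = 10  # alarm if a door stays open longer than this

def on_badge_swipe(door_id, badge_id):
    if badge_id not in AUTHORIZED_BADGES:
        print(f"ALARM: invalid badge {badge_id} at {door_id}")

def on_door_closed(door_id, open_seconds):
    if open_seconds > MAX_OPEN_SECONDS:
        print(f"ALARM: {door_id} held open {open_seconds:.0f}s")

on_badge_swipe("dc-main-entrance", "badge-9999")  # invalid-badge alarm
on_door_closed("dc-main-entrance", 14)            # held-open alarm
```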

Colocation data centers such as CAPS rely on proactive monitoring to assure that availability, performance, and security are maintained at the standards required by our clients. Much like the wild rabbits of Connecticut, whose very survival depends on staying alert, our sensors work continuously to make sure all systems are functioning properly.

Reliable offsite data backup and restoral is essential for responding quickly to potential disruptions. Whether online data is compromised by ransomware or something else, the ability to recover a current, clean copy is the key to minimizing costly business interruptions.

Backup as a Service (BaaS) is popular with organizations looking to ensure they can recover critical data quickly. Service providers employ software-based tools to automatically copy production data at predefined intervals. Data is typically encrypted as copies are sent to one or more remote storage sites, and tests are performed to verify that each backup copy has completed successfully. Then, if needed, the backup can be restored to resume normal operations.
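As a rough illustration of the copy-then-verify cycle (not any specific vendor's tool), here is a minimal sketch that copies a file to a backup location and confirms the copy with a checksum; in practice the offsite transfer would also be encrypted. The paths are hypothetical.

```python
# A rough sketch of a copy-then-verify backup cycle, not any specific
# vendor's tool. Paths are illustrative; in practice the transfer to the
# remote site would also be encrypted.
import hashlib
import shutil
from pathlib import Path

def sha256(path):
    """Checksum a file in 1 MB chunks."""
    h = hashlib.sha256()
    with Path(path).open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def backup_and_verify(source, dest_dir):
    """Copy one file to the backup location and verify the copy."""
    source, dest_dir = Path(source), Path(dest_dir)
    dest = dest_dir / source.name
    shutil.copy2(source, dest)           # stand-in for the offsite transfer
    ok = sha256(source) == sha256(dest)  # the verification step
    print(f"backup of {source.name}: {'verified' if ok else 'FAILED'}")
    return ok

# backup_and_verify("/data/orders.db", "/mnt/offsite-backups")
```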

There are many options for offsite data backup services in Connecticut. When evaluating different services, we recommend checking to see if the following three criteria are met.

  1. Data backup services should be full featured, easy to use, and cost-effective
  2. The service provider should have sufficient IT infrastructure to meet the client’s requirements
  3. Technical support should be available to address specific challenges

Powerful Backup Software Works Well and Is Affordable

CAPS has been delivering offsite data backup services for years using several different tools. For the past few years, we have been using Veeam’s Availability Suite software. Our engineers are impressed with the software’s versatility and ease of use. With Veeam we can configure a backup solution tailored to each client’s unique requirements. The software handles most environments and can restore data at a granular level, e.g., an individual virtual machine or file. Training is straightforward, so clients get up to speed quickly. The Veeam solution is also cost-effective.

Multiple Data Centers for Backup with Optimal Physical Separation

The IT infrastructure available from suppliers of BaaS in Connecticut varies from one provider to the next. Some have a single data center. Those with multiple data centers can architect a variety of offsite backup alternatives with primary and secondary backup sites.

In cases where a service provider has multiple data centers, the physical separation between data centers is important to consider. For example, one Connecticut provider has two data centers located about ten miles apart. This is a concern because a disaster that affects one data center will probably affect the other.

Another Connecticut service provider has a secondary data center more than 1,000 miles away. This great distance increases both network and travel costs. Greater distance between data centers also increases the time it can take to create remote backups.

CAPS believes the physical separation of its three data centers across the metropolitan New York region is ideal. The CAPS data center in Shelton, CT is about 67 miles from our data center in New York and about 120 miles from our data center in New Jersey.

Knowledgeable Support Willing to Address Unique Needs

Larger data backup service providers usually do not provide much individualized support to clients. Smaller service providers may not have the resources to tailor customized backup solutions. A mid-sized BaaS provider, such as CAPS, can deliver individualized data backup and recovery services.

There are many different providers of offsite data backup services in Connecticut. With a little research you should be able to find the one that is just right for your organization.

We often get questions at cocktail parties when people find out we work at a data center. They may ask us to explain colocation or to discuss the difference between a Private Cloud and the Public Cloud. Best practices for offsite data backup are another popular topic. Sometimes, however, the questions come from a darker place: “What really happens at your data center late at night?”

After the sun goes down and the workday is over, many of the IT systems hosted at CAPS’ Shelton data center are still quite active. Some client workloads run around the clock. For others there are peak periods and slack times. Regardless, most clients’ servers must be available 100% of the time.

“Does anybody work at the data center overnight?” is a typical question. Some clients come to the data center in the middle of the night to minimize the impact on their customers as they modify systems. In other cases, they work at night because that is when they can get a block of time free from the distractions that beset an IT infrastructure professional.

Access and Security

Another question has to do with how security is maintained in the middle of the night. Of course, nobody can gain access to the data center, at any time, without proper credentials, which must be established in advance. When clients arrive after hours, they first must use the security card issued to them by CAPS to open one of the main doors to the building, which are locked at night. All visitors are monitored by the guards at the central security desk and captured on the building’s video recording system.

Once inside the building, clients need their security card to pass through the data center’s main entrance. They then proceed through two more card-access doors before entering the data center itself. As clients move through the facility, they are recorded by the data center’s video recording system. These videos are kept for 60 days for subsequent review if necessary.

Once inside the data center, clients proceed to where their systems are located. They use the unique key issued to them by CAPS to unlock their cage or cabinet door and gain access to their systems. The CAPS support team keeps a copy of these keys in case one is lost.

Monitoring and Response

People also ask, “What happens if an environmental alert is issued at night?” The systems that monitor electrical power, temperature, humidity, and internet service issue alerts in real time to the system engineers responsible for maintaining the data center. These trained professionals all live within a half hour of the data center and are on call 24/7/365. The CAPS engineers are also supported by a Network Operations Center that is staffed around the clock at company headquarters in Pearl River, New York.

As the cocktail parties extend into the wee hours, we occasionally get more whimsical questions. “Do you keep the lights on in the data center at night? Aren’t servers afraid of the dark?” “No,” we answer patiently, “the servers are very accustomed to the dark.” To save energy, even during the day, we turn on the lights in the data center only when somebody is on site.

Pictured above is a full moon and the Heublein Tower in Simsbury, CT.

July is usually the hottest month of the year in Connecticut, so this is a good time to consider how data centers cope with elevated temperatures.

Heat is a byproduct of the power provided to servers and other IT systems at a data center. Effective heat management is essential because excessive heat can damage these systems and disrupt operations.

For example, record heat in Europe (104 degrees F) recently forced the temporary closure of data centers in the UK. Both the London-based Google and Oracle data centers were powered down to prevent significant damage to servers and other equipment that could have caused prolonged outages.

There are many things that can be done to manage heat in a data center. Here are some of the most important:

  1. Provide Enough Cooling Capacity for Your Data Center
  2. Maintain Air Conditioning Systems
  3. Design Your Data Center to Optimize Cooling
  4. Disperse High Heat Generating Cabinets
  5. Continuously Monitor Heat at Critical Locations Throughout the Data Center
  6. Respond Immediately to Heat Alerts
  7. Deploy Additional Localized Cooling To Address Heat Spikes

Let’s consider each of these recommended steps in detail.

First, the data center must have adequate cooling capacity in its air conditioning systems to handle the maximum power consumption and resulting heat generation that can be anticipated. Provisioning more cooling capacity than will ever be needed provides a safety factor and is good practice.

Having adequate cooling capacity is not enough. Maintaining Computer Room Air Conditioning (CRAC) units, condensers, and other air handling systems is an ongoing requirement to make sure these vital heat management systems function properly.

The floor layout of the data center can also be used to manage heat. Rows of cabinets are positioned so the rear sides of alternating rows face each other. In this Hot Aisle/Cold Aisle configuration, cooled air from the CRAC systems is delivered first to the front of the server cabinets. Heat from the powered equipment in each cabinet is transferred to the cooled air. The warmed air exits the back of the cabinets and returns to the CRAC systems, where it is cooled once again and sent back to continue the process. The Hot Aisle/Cold Aisle design is proven to be more efficient than a layout where cool air and hot air are intermixed in a single aisle.

Dispersion of higher heat generating cabinets is another way to minimize the impact of high temperatures in a data center. The amount of heat output by the systems in a cabinet can be highly variable. Some processor intensive servers consume a lot of power and thus generate a lot of heat. Other equipment may be much cooler. When possible, dispersing higher heat cabinets throughout the data center can minimize concentrations of heat.
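As a rough worked example of why concentration matters (illustrative numbers): the airflow needed to carry away a cabinet's heat follows from the relation between power, air mass flow, and temperature rise. A 5 kW cabinet with a 10 degree C front-to-back temperature rise needs roughly:

```latex
P = \dot{m}\,c_p\,\Delta T
\;\Rightarrow\;
\dot{m} = \frac{5000\ \text{W}}{1005\ \text{J/(kg K)} \times 10\ \text{K}}
\approx 0.50\ \text{kg/s} \approx 0.41\ \text{m}^3/\text{s} \approx 880\ \text{CFM}
```

Place two such cabinets side by side and the local airflow demand doubles, which is why dispersing them helps.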

Continuous temperature monitoring throughout the data center is essential. Colocation data centers establish a target temperature range where servers operate safely but energy is not wasted. The cooler the target, the more energy is required, so the key is to set a temperature that is cool enough to protect IT systems but not so cool that energy costs are excessive. When monitoring systems determine a temperature is above the established threshold, an alert message is issued, and data center personnel receive it immediately at any time of the day or night.

When an alert is issued, data center professionals respond as quickly as possible to determine the cause of the elevated temperature. Once the cause is identified, they work to rectify the problem so that equipment is returned to a safe operating temperature.

Sometimes specialized local cooling systems are employed to address a temperature spike. Data center engineers can quickly set up these systems to direct additional cooling to the cabinet or cabinets where elevated temperatures have been reported. This prevents a serious problem from developing while the source of the heat is determined.

Managing heat in a data center is an essential, ongoing responsibility. Proper air conditioning systems coupled with an experienced staff ensure operations continue without disruption. CAPS’ team is proud to have successfully managed the heat of over 20 Connecticut summers without a single unscheduled data center outage.

Colocation, Public Cloud, and Private Cloud are the three primary alternatives to hosting IT infrastructure on premises. Internet bandwidth availability and price are among the most crucial factors to consider when determining where to locate IT systems.

Bandwidth is the capacity of a communications circuit to transmit data. It is typically expressed in megabits per second (Mbps) or gigabits per second (Gbps). The bandwidth of a circuit is analogous to the maximum number of cars that can travel on a highway, something all too familiar to people in Connecticut. Our little state is home to some of the most congested roads in the U.S.

Broadband services (defined as bandwidth of at least 25 Mbps download and 3 Mbps upload) became affordable in the early 2000s. Prices of $20 per Mbps per month or more have declined to less than $5 per Mbps per month over the last twenty years. As prices came down, adoption of broadband services rose dramatically. By 2004, more than half of all U.S. internet users had replaced their dial-up modems (typically 56 Kbps) with broadband services.

High Bandwidth Circuits Enable Remote Data Centers

AWS introduced the first Public Cloud in 2006, more than 10 years after CAPS first opened its data center in Shelton, CT, and just as broadband services were becoming affordable. Higher-bandwidth circuits were required to make remote data centers viable: they kept network latency low enough that user response time remained acceptable even though data was transmitted over longer distances.

Leveraging virtualization technology and lower cost bandwidth, Public Cloud vendors built large data centers in locations where both power costs and taxes were low. Economies of scale made it possible for Public Cloud vendors to provide low introductory prices for data services. Infrastructure as a Service (IaaS) took the industry by storm by offering an inexpensive way to create internet-based businesses that required no capital expense. IaaS is popular because it is flexible, scalable, and low cost (at least initially).

Public Cloud and Colocation Billing Differs

There is a difference in the way Public Cloud and Colocation providers charge for internet bandwidth. Public Cloud vendors typically bill for monthly total data transfer whereas colocation providers charge for the bandwidth rate provided.

Public Cloud providers monitor the amount of data transferred during a month (typically in gigabytes). Both inbound and outbound data transfer is counted. Most Public Cloud providers charge nothing for inbound data transfer. They usually allow a certain level of outbound data transfer but then charge for every outbound byte after that. The problem with this approach is that egress fees can ramp up quickly. Another problem is that data transfer fees vary from one month to the next and can be difficult to predict.
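A quick worked example shows how fast egress fees can ramp. At an illustrative list rate of $0.09 per GB of outbound transfer:

```latex
10\ \text{TB} \times 1024\ \tfrac{\text{GB}}{\text{TB}} \times \$0.09/\text{GB} \approx \$920\ \text{per month}
```

Double the outbound traffic and the bill doubles with it.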

No Surprises With Colocation

Colocation providers offer fixed monthly internet bandwidth pricing for a specific guaranteed data rate. Clients can order the amount of bandwidth they expect to need, and if they decide to change it, they can typically increase or reduce their rate within a day. The benefit is that bandwidth costs are pre-established and there are no surprises when the monthly colocation invoice arrives.

The availability and cost of internet bandwidth, and the pricing mechanism used for billing, can influence the best place to host a specific workload. For clients who want predictable and affordable monthly network expenses, the best choice is colocation.

Colocation has been an important IT infrastructure option for decades. Recently, as a direct response to the COVID pandemic, a new reason to use colocation has emerged.

COVID forced many employees to Work From Home (WFH) over the past two years. As WFH became more accepted, another use case for colocation was identified: the ability to quickly and cost-effectively place IT systems in a secure, conveniently located data center reduces risk when moving to a remote work environment.

More than 2 years after the onset of the pandemic, companies are changing how they work. Office leases are not being renewed. Smaller offices with flexible layouts are being set up to save money and to support hybrid work models where employees come to the office a few days a week. Some companies have completely abandoned their office to have employees work from home all the time.

For most companies, business cannot be conducted if critical computer systems are not available. The process of moving an office requires powering down IT equipment, so it is vitally important to prepare a plan that minimizes disruption.

Moving An Office Can Be Risky

Planning an office move can be stressful. The final decision not to renew an office lease is often made with only a few months left on a contract. Once a move date is set, the pressure is on to take care of a multitude of tasks, and to minimize the risk of disrupting critical business operations during the transition it is important to prepare a detailed plan.

Most organizations have migrated some computer workloads to the cloud. However, there are usually residual applications that are not a good fit for the cloud. For example, database applications that require a large amount of outbound data transfer are extremely expensive to host in the Public Cloud due to costly egress fees. Other applications require low latency or high security and thus should be placed locally, not in the cloud.

For those applications already provisioned through a public or private cloud, the move from an office should not be disruptive. Once internet service is available at the new location, the applications may be used.

Other workloads may be suitable for the cloud but may not have been migrated yet. These applications should not be moved to the cloud as part of the office move; it is too risky to add rehosting projects to the primary task of a major relocation. Instead, these workloads should be placed at the colocation facility temporarily until they can be safely migrated to the cloud at a future time.

Colocation Reduces Risk

With colocation it is possible to move workloads that are not suited for the cloud to a secure local data center. By decoupling the move of IT infrastructure from the rest of the office relocation, organizations can reduce the risk of a service interruption. Once computer systems have been placed at the colocation facility the rest of the office move can be completed at any time without concern about the day-to-day functioning of the business.

A growing number of companies in Connecticut and Westchester County that are planning an office downsizing or a move to WFH have used CAPS’ colocation services to reduce risk and provide a bridge to the future.

Pictured above is the Old Drake Hill Flower Bridge. Originally built in 1892, this bridge spans the Farmington River in Simsbury, Connecticut. Exactly one hundred years after construction, cars were banned and the bridge was designated for pedestrian use only. A few years later it was decorated with flower boxes.

What are the most important factors to consider when choosing a colocation service provider? Here is a short list:

  • Redundant power
  • Reliable air conditioning to control temperature and humidity levels
  • Resilient internet connectivity with automatic failover
  • Advanced security systems
  • Remote Hands services
  • Convenient location

Location and Cost Drive Colocation Selection

Power with backups, multiple environmental systems, high-availability internet services, security protection, and flexible support are must-have requirements for all colocation service providers. Data centers must check all these boxes to succeed in the competitive colocation business. Ultimately, the colocation facility’s location is the factor, other than cost, that dictates which data center is selected.

Which factors should be considered when choosing the location of a colocation facility? The facility should be close enough for staff to visit as needed, yet far enough away to reduce the risk that the same environmental event impacts both the facility and the primary office. The site should also be near major roads to minimize drive time. It is even better if the drive to the colocation facility is against traffic at the times employees typically visit the data center.

It is also best if the colocation provider is powered by a different electric utility than the one that powers the primary place of work. Though the total loss of utility power is rare, the consequences of such a loss can be devastating. The probability of two separate electric utilities losing power at the same time is far less than the chance of a total outage at either one.
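A back-of-envelope calculation makes the point. If each utility independently had, say, a 1% chance of a prolonged outage in a given year (an assumed figure for illustration), the chance of both being out at once would be:

```latex
P(A \cap B) = P(A) \times P(B) = 0.01 \times 0.01 = 0.0001 = 0.01\%
```

The caveat is independence: a single regional storm can take down both utilities at once, which is another reason geographic separation matters.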

Finally, here in Connecticut, colocation costs can vary a lot based on real estate costs. The cost per square foot for a data center in lower Fairfield County can be two or three times higher than the cost for the same amount of space in places like Shelton, where CAPS’ data center is located.

Higher Elevations Lower Risk

The data center’s elevation above sea level is another location-based factor to consider, especially in Connecticut. Our state has many low-lying areas close to the shoreline, rivers, and lakes. Though hurricanes and tornadoes can wreak havoc here, these extreme storms are rare. Floods, whether caused by storm surges or heavy rains, are much more common. The best way to avoid floods is to locate critical IT infrastructure at higher elevations.

All things being equal, it is best to aim for higher ground when looking for a lower-risk place for your critical IT infrastructure. Connecticut, unlike our neighbors to the north, is a relatively flat state; it ranks 36th among the states by highest elevation. Our highest point, at about 2,380 feet above sea level, lies on the southern slope of Mount Frissell in the northwest corner of the state.

So why not build a data center on Mount Frissell? There are data centers at very high elevations around the world, like the one in Tibet at 11,995 feet above sea level. Though the flood risk at such heights is minimal, building a data center on top of a mountain is very expensive. Air conditioning also costs more at higher elevations: the air is thinner, so more of it has to flow over electronic systems to remove the same amount of heat. Finally, since Connecticut has few tall mountains, we should probably leave Mount Frissell to our hikers.
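The thin-air penalty can be estimated roughly. Air density at about 12,000 feet is near 0.85 kg/m³ versus roughly 1.225 kg/m³ at sea level, and heat removal scales with the mass of air moved, so fans must push about 1.4 times the volume:

```latex
\frac{\dot{V}_{\text{altitude}}}{\dot{V}_{\text{sea level}}}
= \frac{\rho_{\text{sea level}}}{\rho_{\text{altitude}}}
\approx \frac{1.225}{0.85} \approx 1.4
```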

The CAPS data center in Shelton is head and shoulders above most of the other colocation sites in Connecticut. High above the Upper Valley at 290 feet above sea level, you can look down upon the restaurants and hotels along Bridgeport Avenue and watch the cars speeding along Route 8 from the top level of the parking garage adjacent to the data center.

The fact that CAPS’ clients have not experienced an unscheduled power outage in over 20 years is due, in part, to the location of our data center in a flood-free zone far above sea level.

The dictionary defines colocation as the placement of two or more things together. When the term is used with respect to IT infrastructure, most IT professionals know we are talking about specific data center services. A colocation facility is a data center where multiple clients can move their servers and other equipment to improve availability, increase security, and save money.

What is the difference between colocation and the Public Cloud? One way to answer this question is to consider the difference between living in an apartment and staying at a hotel. For those who love analogies, we can say Colocation is to an apartment as the Public Cloud is to a hotel room.

The analogy is timely because the market in Connecticut for houses and apartments is booming; just as there is growing interest in colocation. The COVID-19 pandemic drove many New York City residents to the Connecticut suburbs to live in a less congested environment. This led to a shortage of affordable single-family homes. Many who would like to purchase a home are now settling for an apartment as they wait for home prices to recede.

COVID-19 also spawned the Work From Home transformation. Even as the pandemic subsides, many companies plan to have their employees continue to work remotely. Some companies have decided to downsize their offices or shutter them completely to save money while employees work from home. In these cases, colocation provides a proper home for those IT systems that are not suitable for the Cloud.

Colocation is like renting an apartment in several ways. Whether renting an apartment or colocating IT systems, the client provides the infrastructure. Client-owned servers and related IT equipment are housed at the colocation data center just as tenants provide the furnishings for the apartments in which they live.

Though it is possible in both a colocation agreement and an apartment lease for the client to be billed directly for utilities, it is more common for these services to be bundled into the monthly fee.

Finally, the period of the lease is comparable for both apartment rentals and colocation agreements. Most leases for apartments, as well as colocation contracts, are signed for a period of 1 or more years.

The Public Cloud is more like staying at a hotel. Services from AWS, Azure, or Google Cloud provide processing, memory, storage, and connectivity resources to the client on demand. In a similar manner, a hotel guest expects their room to be outfitted with beds, a television, a refrigerator, linens, and more.

Whether ordering Public Cloud services or making a hotel reservation, arrangements can be made in a matter of minutes. In both cases contracts can be for a day or less; a long-term commitment is not required.

Hotels and Public Cloud providers offer a great deal of flexibility, but occasionally there can be surprises at the end of an engagement. Though most hotel expenses are predictable, there can be unexpected charges upon checkout. Who knew the cocktails and snacks in the in-room minibar would be so expensive? In a similar way, unanticipated cloud charges due to egress fees and peak-hour surcharges can create budget overruns that are difficult to explain to management.

CAPS has been a leading provider of colocation services to organizations in Connecticut and New York for over twenty-five years. If you are looking for a better place for your servers, please contact us.

Senior management does not like surprises, especially budget overruns. That is why colocation is so appealing to CIOs and CTOs at small and medium-sized businesses. Recognition that the Public Cloud can be more expensive than colocation is causing some organizations to repatriate workloads. The inability to accurately predict monthly expenses is another reason companies are choosing colocation over the Public Cloud.

Public Cloud Cost Overruns Are Common

A recent survey of 750 IT professionals by Pepperdata reported that one third had Public Cloud budget overages. In some cases actual monthly costs exceeded budget by as much as 40%. In 2019, NASA spent 53% more on Public Cloud services than it had budgeted. Much of the $30 million overrun was due to unexpected data egress fees. Though going way over budget at a large federal agency may not be a career buster, the consequences are likely to be more severe for IT professionals at a small or medium-sized company.

The inability to accurately predict monthly expenses stems from the pricing methodology used by Public Cloud vendors. Cloud services are billed based on actual resource utilization. While this sounds good (you only pay for what you use), the approach can wreak havoc with budgets. Pricing algorithms are complex, and monthly charges can vary a lot based on when services are used and where data flows.

Colocation Monthly Prices Are Fixed

Most colocation providers charge a fixed price for internet bandwidth services. The rate for an internet circuit with automatic failover to a backup circuit will be a fixed monthly fee based on the bandwidth (Mbps) of the circuit. Colocation customers know their internet charges will be the same from one month to the next. This is also true for monthly power and environmental charges.

Public Cloud providers typically price internet services based on the amount of data transferred during the month. Though there is often no charge for inbound data, the cost of outbound data transfer (egress) can be high. Data transfer charges may also vary based on when data is sent and which data centers are involved in the transmission. Though pricing is based on actual network utilization, it can be very difficult to forecast Public Cloud internet costs for a given month.
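The contrast between the two billing models can be sketched in a few lines of Python; every price below is an illustrative assumption, not a vendor quote.

```python
# A minimal sketch contrasting the two billing models described above:
# a fixed monthly colocation bandwidth fee versus metered Public Cloud
# egress. All prices are illustrative assumptions, not vendor quotes.

COLO_FIXED_MONTHLY = 500.00     # e.g., a committed 100 Mbps circuit
CLOUD_FREE_EGRESS_GB = 100      # monthly allowance before metering starts
CLOUD_EGRESS_PER_GB = 0.09      # illustrative outbound rate

def cloud_bill(egress_gb):
    """Metered model: pay per GB beyond the free allowance."""
    return max(0.0, egress_gb - CLOUD_FREE_EGRESS_GB) * CLOUD_EGRESS_PER_GB

# The colocation bill is flat; the cloud bill tracks monthly egress.
for month, egress_gb in [("Jan", 2_000), ("Feb", 6_500), ("Mar", 14_000)]:
    print(f"{month}: colo ${COLO_FIXED_MONTHLY:,.2f} vs "
          f"cloud ${cloud_bill(egress_gb):,.2f}")
```

The colocation line is identical every month, while the metered bill swings with traffic; in this sketch the March cloud bill is more than seven times January's.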

Public Cloud charges for compute and storage services can also vary based on when they are utilized. Sophisticated pricing models reward off-peak usage. In theory, users can save money by accessing services during slack times. However, many clients are not willing or able to adapt to take advantage of lower-rate periods. The result is higher expenses and greater variability from one month to the next.

Cost Management is an Ongoing Requirement for Public Cloud

Ongoing cost management is a requirement for Public Cloud users that colocation customers do not have to worry about. Unlike colocation, where monthly fees are the same from one month to the next, the variability of Public Cloud expenses creates an ongoing management responsibility. Many organizations assign someone the task of monitoring Public Cloud expenses each month to determine the cause of cost increases and to modify usage patterns if needed.

To address the Public Cloud cost management challenge, there are a growing number of cost management and cost optimization tools. Though each of the Public Cloud providers offers free tools, such as AWS Cost Explorer, Azure Cost Management, and GCP Billing, these tools require trained personnel to use them effectively. Third-party tools like Harness Cloud Cost Management have more capabilities than the free Public Cloud tools. However, these advanced solutions can be expensive and also require a commitment to have a trained employee oversee their use.

There are use cases where the Public Cloud is the best IT infrastructure choice. However, alongside the growing realization that the Public Cloud can be more expensive for certain workloads, the unpredictable nature of its monthly expenses often makes colocation the better choice.

For IT managers in Connecticut who would like to avoid the need to explain a big budget overrun to management, CAPS is pleased to offer colocation services with predictable monthly pricing from our secure data center in Shelton.

Hybrid Cloud is fast becoming the data architecture of choice. Hybrid Clouds incorporate a mix of on-premises, colocation, Public Cloud, and Private Cloud resources. Using orchestration software and networking, a flexible, optimized architecture can be built.

Characteristics of Public and Private Clouds

Public Cloud services such as AWS, Microsoft Azure, and Google Cloud offer flexibility and scalability with minimal capital expense. Services can be brought online in minutes via self-service portals, and a wide variety of processing and storage options are available. However, Public Cloud services employ a Shared Responsibility model that requires knowledge of complex, changing environments to assure adequate security. Pricing models are difficult to understand, and costs can increase unexpectedly due to egress fees. Latency can also be a problem with Public Clouds, as can ensuring compliance requirements are met.

Private Cloud is typically more expensive than Public Cloud but it offers better security, lower latency, and better compliance assurance. The cost of Private Cloud services is usually more transparent than Public Cloud services.

Colocation Characteristics

Colocation data centers are often selected to provide low-latency services. Security and compliance are better with colocation than with Public Clouds. Though capital expenses are higher than with Public or Private Clouds, colocation may not require much capital outlay if the IT systems to be used have already been purchased. Customized solutions, ongoing support, and predictable pricing are features of colocation. However, the ability to add or delete services immediately, along with self-service functions, is generally not available with colocation.

On Premises Characteristics

On Premises solutions are sometimes preferred. Systems installed onsite are well understood and have been proven over time. The infrastructure may already have been paid for, and adequate space may be available. Unique applications often perform best on premises, where latency can be minimized. However, availability may be jeopardized by lower levels of redundancy, and On Premises solutions are not an option when organizations close their offices to Work From Home.

Types of Workloads

There are many different types of workloads. Each workload may require different infrastructure features for optimal performance. Listed below are several common workload types and the infrastructure best suited to meet specific requirements.

  1. Websites

Websites benefit from the elasticity and scalability of Public Cloud solutions. In almost all cases the Public Cloud is the best choice for hosting websites. Exceptions are websites that require extremely low latency, have strict compliance demands, or generate large outbound data transfers where egress fees can become exorbitant.

  2. Financial Trading Applications

Private Cloud or Colocation services are usually preferred to provide low latency, high security, and high compliance solutions.

  3. SaaS Applications

Public Cloud services are usually the best choice for SaaS due to their low entry costs, scalability, and flexibility. In some cases, Private Cloud services are used to provide enhanced security and compliance.

  4. Workloads with Big Data Transfer Requirements

Applications that transfer large amounts of data out of a database, and applications transmitting large outbound video files, can quickly become extremely expensive when hosted on the Public Cloud because providers charge egress fees for outbound data transfers. Organizations are repatriating these workloads to colocation facilities to save money.

  5. Offsite Data Backup and Restoral

Offsite data backup is essential to protect against ransomware and other cyber breaches. Though the Public Cloud provides a low-cost option for storing data backups and for long-term data archival, the egress cost of downloading this data for tests or restoral can become excessive. In these cases, offsite data backup is best done at a colocation facility.

Each workload has different requirements for optimal performance. The flexibility of the Hybrid Cloud architecture makes it possible to host each application on the most appropriate infrastructure. Please contact CAPS to discuss your Hybrid Cloud colocation and Private Cloud service requirements.