SOC reports are audit reports that adhere to guidelines developed by the AICPA (American Institute of Certified Public Accountants). They are commonly used to provide an independent professional review of the operations of a service provider such as a data center. Let’s consider the value of SOC reports to clients seeking colocation, data backup, and business continuity services at data centers.

First, a little background about our company’s history with respect to SOC reports. CAPS has been providing data center infrastructure services in Connecticut since 1995. In 2009 we began contracting for annual independent audits. We have engaged with an approved auditor every year since then (that’s 14 years and counting) to provide our clients with a means to independently verify our data center operations.

SOC reports have evolved over the years. The AICPA first defined an audit report requirement known as SAS 1 in 1972. Two decades later, Statement on Auditing Standards No. 70 (SAS 70) was released. SAS 70 replaced SAS 1 and remained the standard until it was superseded in 2011 by SSAE 16 (Statement on Standards for Attestation Engagements No. 16). SSAE 16 defined the SOC 1, SOC 2, and SOC 3 (System and Organization Controls) reports. These standards were updated in 2017 when SSAE 18 was adopted.

Is 2 Better Than 1?

SOC 1 Type I and II, SOC 2 Type I and II, and SOC 3 are the current report types defined by SSAE 18. SOC 1 is a financial audit report that is primarily concerned with evaluating the suitability of the design and the operating effectiveness of the controls a service provider has in place. It is often used to fulfill the annual independent audit requirements imposed on financial organizations and publicly held companies by the Sarbanes-Oxley Act (SOX) of 2002. A SOC 1 Type II report covering an audit period of six or more months is typically the version used for data centers.

SOC 2 consists of 5 Trust Services categories. The first, Security, is mandatory. The four remaining categories, Availability, Processing Integrity, Confidentiality, and Privacy, are optional; each service provider may choose which, if any, of these categories to include in its SOC 2 audit. A SOC 2 Type II report covering an audit period of six or more months is typically the report used for data centers. SOC 2 reports are growing in popularity because of their focus on security. However, they are not considered adequate to fulfill the SOX requirements of public companies and other financial institutions; that remains the domain of SOC 1.

SOC 3 is a modified version of SOC 2 that excludes proprietary information and thus can be released without a Non-Disclosure Agreement (NDA). SOC 1 and SOC 2 reports include proprietary information about the audited company and are not to be released without an NDA.

A Non-Issue for Many

Though it takes time and money to prepare a SOC report each year, many of our clients are not interested in these independent audits. If they are not required by regulation to receive an independent audit of their data center services provider, they may not request a SOC report.

CAPS and Blue Hill Data Services have always been committed to providing high quality IT infrastructure services to our clients. The SOC reports we contract for each year offer a professional, independent evaluation of our data center operations. We are happy to share these SOC reports with clients and prospective clients who request them.

For many organizations, SOCs are not required. Just as some people get by wearing dress shoes without socks (especially here in Connecticut), many businesses get by without SOCs. Even so, SOCs are often a valuable addition.

Servers don’t last forever. When a technology refresh is required, there are several options. New servers can be purchased, or workloads can be migrated to the public cloud to avoid the need to purchase systems. APEX, a new service from Dell, offers an attractive alternative whenever a server refresh is required. It combines many of the advantages of the public cloud with some of the benefits of on-premises solutions.

Benefits Versus the Cloud

The Dell APEX program is a new “Pay Per Use” service that makes it possible to procure new servers without incurring capital expenses. It offers the flexibility and scalability of the public cloud while addressing some of the cloud’s problems. Dell APEX solutions may offer better performance, stronger security, and lower latency than cloud-based implementations. They may also make it easier to achieve compliance. Finally, monthly expenses with Dell APEX are known in advance, a big advantage over monthly cloud expenses, which can be unpredictable and highly variable.

With APEX, clients order the systems they need and Dell installs them either at the client’s premises or at the site of an approved colocation service provider. A monthly fee is charged, but Dell retains ownership and is responsible for maintaining the equipment. The agreement is similar to a lease, except that clients can add or remove functionality as needed via the Dell APEX console. APEX agreements are typically for 3 years, and monthly fees are adjusted so customers pay only for the resources they use. This provides flexibility and scalability normally available only with cloud services.
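To make the “pay only for the resources they use” idea concrete, here is a minimal sketch of a baseline-plus-usage billing model. The commitment level and all rates are illustrative assumptions, not Dell’s actual APEX price structure.

```python
# A minimal sketch of baseline-plus-usage billing. The commitment level and all
# rates are illustrative assumptions, not Dell's actual APEX price structure.

BASE_COMMITMENT_TB = 50       # capacity the client commits to (hypothetical)
BASE_RATE_PER_TB = 40.00      # monthly rate per committed TB (hypothetical)
OVERAGE_RATE_PER_TB = 55.00   # rate for usage above the commitment (hypothetical)

def monthly_fee(used_tb: float) -> float:
    """The committed baseline is always billed; usage above it is metered."""
    base = BASE_COMMITMENT_TB * BASE_RATE_PER_TB
    overage = max(0.0, used_tb - BASE_COMMITMENT_TB) * OVERAGE_RATE_PER_TB
    return base + overage

for used in (35, 50, 62):     # a light month, an exact month, a heavy month
    print(f"{used} TB used -> ${monthly_fee(used):,.2f}")
```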

Dell’s APEX service is similar to HPE’s GreenLake pay-per-use service, which was introduced about a year before APEX. Lenovo also offers a pay-per-use service. These services are expected to grow in popularity over the next few years; they offer advantages over the cloud while making it possible for users to refresh technology without increasing capital expenses.

Dell Certifies CAPS for APEX

Recently a CAPS colocation client decided to order servers and associated equipment from Dell via the APEX program. First, the Dell team qualified the CAPS data center as suitable for the APEX program. This included verifying the dimensions of every door between the loading dock and the data center to be sure Dell cabinets could be moved from the delivery truck to their final location; Dell ships completely configured and tested cabinets to the client’s facility or to their colocation service provider’s data center. Power availability, access to internet carriers, data center security, and technical support services were also evaluated before the data center was authorized for the APEX program.

After a series of Zoom planning meetings, the Dell APEX system was delivered to CAPS’ Shelton, CT data center. Then a team consisting of the client’s IT personnel, CAPS’ system engineers, and Dell engineers installed the new system. Since power and internet services had already been pre-installed, the cutover was completed quickly.

As we enter 2023, we expect to see more Dell APEX installations at our colocation facility. We believe the opportunity to get all new Dell servers and related equipment to replace aging systems for a reasonable monthly fee with ongoing support from Dell and the ability to quickly scale services up or down will be increasingly popular in the year ahead.

Did you know CAPS was one of the first companies in Connecticut to provide Business Continuity services? In 1995 the company began offering a secure alternate workplace for companies that wanted to minimize the risk of service disruptions. Though much has changed over the 27 years since CAPS first opened its doors, the need to manage risk is more important than ever.

Risk management is big in Connecticut. The state is home to many financial advisors and is the headquarters of some of the country’s leading insurance companies. All businesses in our state must plan to avoid outages that can threaten their very viability. Business Continuity service providers help organizations manage risk by providing backup facilities to limit the impact of service disruptions.

Much of business continuity has to do with IT. Information technology and communications are essential to many organizations’ operations. The best business continuity service providers offer facilities designed to assure IT systems are always operational. These high-end alternate workplaces are available 24/7/365 and have comprehensive security systems in place. They also offer trained professionals to assist in business continuity planning and periodic testing to assure preparedness.

In the past decade, the risk of service disruption due to cyber breaches has grown dramatically, so data protection, backup, and recovery are now critical components of a business continuity plan. On-demand, conveniently located office space, uninterrupted power, and always-on internet service are the three other essential components of business continuity.

Though business continuity is a priority for most companies in Connecticut, each organization manages risk in its own way. The business impact of potential service disruptions varies from one company to the next. Each organization has its own business continuity risk appetite, based on the likelihood of a service disruption and the estimated cost of an outage. A comprehensive Business Impact Analysis (BIA) should be conducted periodically to calibrate this risk.
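One common way to put a number on that risk appetite during a BIA is an annualized loss expectancy (ALE) calculation. The sketch below uses hypothetical figures.

```python
# A minimal sketch of the annualized loss expectancy (ALE) calculation often
# used in a Business Impact Analysis. All figures are hypothetical.

def annualized_loss_expectancy(outages_per_year: float, cost_per_outage: float) -> float:
    """ALE = annual rate of occurrence x single-loss expectancy."""
    return outages_per_year * cost_per_outage

# Example: a disruption expected once every four years, costing $200,000 per event.
ale = annualized_loss_expectancy(outages_per_year=0.25, cost_per_outage=200_000)
print(f"Expected annual loss: ${ale:,.0f}")  # $50,000 per year
```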

Let’s review 5 of the most common Business Continuity approaches employed by Connecticut organizations, ranging from the most rudimentary to the most complete, lowest-risk solutions.

Work From A Public Place With WiFi

Relying on public WiFi at a local library or coffee shop for business continuity is not appropriate for any but the smallest organizations. Still, many companies rely on this approach to access the Public Cloud when operations are interrupted at their office. Security concerns make this a risky choice: conversations can be overheard, and WiFi communications can be intercepted.

Work From Home

Since much of work is now provisioned from the Public Cloud, there is a growing trend to depend on home offices for business continuity. If the home has power and internet service, this solution is both convenient and cost-effective. However, home-based business continuity is not optimal. Some workloads are not hosted in the cloud, so there can be gaps in what can be done from home. Residential power and internet services are prone to problems, and security is less robust. Relying on home offices for business continuity can also create client concerns, especially for financial companies that must publish their Business Continuity Plans on their websites as mandated by FINRA Rule 4370.

Work From Another Corporate Office

Organizations with multiple offices may develop Business Continuity plans where employees work from other offices in case of an outage at their primary place of work. This approach can be effective if the other offices are not exposed to the same outage conditions and are within commuting distance.

Work From A Shared Office Facility

Some organizations reserve alternate workspace from shared office space companies like Regus, which has many locations in Connecticut where a private office can be rented. The office may be furnished with a company’s systems and is usually available for use 24/7/365. However, generator power is not always available, and multiple internet service providers with automatic failover are not typical. Internet services, unless specially provisioned, are shared with other tenants, which limits bandwidth and security. Colocation services are not available at shared office facilities, and technical support is not provided.

Work From An Alternate Workplace At A Secure Data Center

For organizations that have a low appetite for Business Continuity risk, a secure alternate workplace such as CAPS’ facility in Shelton, CT provides the best alternative. Clients may reserve Shared Seats, Dedicated Seats, or a combination of both. Colocation services are available at the same data center to maximize system availability. The high-security business continuity workspace is powered by redundant UPS and generator systems with 24/7/365 access for authorized personnel. Redundant internet service with automatic failover provides high availability communications with the level of bandwidth required by each client. Trained professionals assist in the planning and periodic testing of each organization’s unique business continuity plan. They can quickly configure each client’s unique workstation user interface by employing the Virtual Desktop Infrastructure (VDI) at the data center.

For more than a quarter century CAPS has been a leading provider of business continuity services to organizations in Connecticut. Clients include investment companies, banks, and other financial institutions. There are also hospitality companies and other non-financial institutions that have decided to minimize their business continuity risk. Business Continuity, Colocation, Data Backup and Recovery, and Private Cloud services are all available from CAPS. Please contact us if you have lost your appetite for risk.

Colocation data centers provide tenants with high availability, secure infrastructure hosting. They do this by leveraging investments in redundant power, environmental, networking, and security systems. Continuous monitoring of key parameters is critical to assure data centers function properly. Alerts are generated in real-time whenever a threshold is exceeded. Though there are many conditions that can be tracked, here are 7 of the most important parameters to be monitored.

Power

Power is the most important parameter to monitor in a data center. It is so vital that most facilities have several layers of redundancy to automatically back up the utility power from the electric company. When primary power is interrupted, Uninterruptible Power Supply (UPS) systems take over, and generators back up the UPS systems to provide longer-term power until utility service is restored. Though failover from utility power to UPS to generator is automatic, alerts are issued to data center personnel as soon as a problem is detected so they can determine the cause of the outage.

Heat

Temperature ranges are monitored in the data center to make sure air conditioning systems are functioning properly. Elevated temperatures can damage IT systems and interrupt operations. Alerts are issued in real-time whenever a temperature threshold is exceeded in the data center.

Humidity

Relative humidity is also measured and reported whenever a reading is outside of an established range. Too much or too little humidity can affect the performance of servers and other information technology systems.
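As an illustration of how threshold-based alerting for temperature and humidity works, here is a minimal sketch. The readings are simulated and the ranges are illustrative assumptions, not CAPS’ actual monitoring setup; real data centers use dedicated monitoring platforms for this.

```python
# A minimal sketch of threshold-based environmental alerting. The sensor
# readings are simulated and the safe ranges are illustrative assumptions.
import random

THRESHOLDS = {
    "temperature_f": (64.0, 81.0),
    "relative_humidity_pct": (40.0, 60.0),
}

def read_sensor(name: str) -> float:
    """Stand-in for a real sensor query (e.g., over SNMP or Modbus)."""
    simulated = {
        "temperature_f": random.uniform(60.0, 90.0),
        "relative_humidity_pct": random.uniform(30.0, 70.0),
    }
    return simulated[name]

def check_environment() -> list[str]:
    """Return an alert message for every reading outside its safe range."""
    alerts = []
    for sensor, (low, high) in THRESHOLDS.items():
        value = read_sensor(sensor)
        if not low <= value <= high:
            alerts.append(f"ALERT: {sensor} = {value:.1f}, outside [{low}, {high}]")
    return alerts

for message in check_environment():
    print(message)  # in production this would page the on-call engineer
```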

Network

Colocation data centers provide a variety of internet bandwidth services. Since connectivity is essential, automatic failover to a redundant backup service is frequently employed. Network performance parameters such as packet loss, as well as total loss of internet service, are monitored and reported.

Colocation clients have the option of procuring their own internet connectivity, or they may procure these services from the data center. Clients who get internet services through the colocation data center benefit from automatic failover to a backup internet service provider circuit. Some colocation providers also monitor each client’s internet service availability. An alert may be issued if service is unavailable for a specified period of time (typically two minutes), and clients may be contacted to let them know there may be a problem with their firewall or some other issue impacting their internet access.
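Here is a minimal sketch of that kind of sustained-outage check. The endpoint and intervals are hypothetical; the point is that an alert fires only after service has been unreachable for the full two-minute window.

```python
# A minimal sketch of per-client availability monitoring: alert only after the
# service has been unreachable for a sustained window (two minutes here).
# The host, port, and timings are hypothetical.
import socket
import time

HOST, PORT = "client.example.com", 443   # hypothetical client endpoint
CHECK_INTERVAL_S = 15
ALERT_AFTER_S = 120                      # the "typically two minutes" threshold

def reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

down_since = None
while True:
    if reachable(HOST, PORT):
        down_since = None                # service is up; clear the outage timer
    else:
        down_since = down_since or time.monotonic()
        if time.monotonic() - down_since >= ALERT_AFTER_S:
            print(f"ALERT: {HOST} unreachable for {ALERT_AFTER_S}s; contact client")
            down_since = time.monotonic()  # reset so we don't re-alert every cycle
    time.sleep(CHECK_INTERVAL_S)
```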

Fire

Fire suppression systems are installed at locations throughout the data center. These systems alert the data center engineers and the building supervisors in the event that a fire suppression system is activated.

Water

Monitors are placed in the raised floor areas in the data center to detect water and other fluids.

Data Center Access

People with authorized security cards may enter the facility at any time. However, all entry to the data center is logged and the activities of people in the data center are video recorded 24/7/365. Doors to the data center will issue an alarm whenever they are opened by someone without proper credentials or whenever doors are left open for more than a few seconds. When data center access alerts are issued, staff can review video images to determine if activity in the data center requires their attention.

Colocation providers such as CAPS rely on proactive monitoring to assure that availability, performance, and security are maintained at the standards required by our clients. Much like the wild rabbits of Connecticut, whose very survival depends on staying alert, our sensors work continuously to make sure all systems are functioning properly.

Reliable offsite data backup and restoration is essential for responding quickly to potential disruptions. Whether online data is compromised by ransomware or something else, the ability to recover a current, clean copy is the key to minimizing costly business interruptions.

Backup as a Service (BaaS) is popular with organizations looking to ensure they can recover critical data quickly. Service providers employ software-based tools to automatically copy production data at predefined intervals. Data is often encrypted as copies are sent to one or more remote storage sites. Tests are performed to verify that a backup copy has completed successfully. Then, if needed, the backup can be restored to resume normal operations.
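As a rough illustration of that copy, encrypt, and verify cycle, here is a minimal Python sketch. The paths and key handling are hypothetical, and commercial BaaS tools do far more, but the essential steps are the same.

```python
# A minimal sketch of the copy / encrypt / verify cycle. The paths and key
# handling are hypothetical; commercial BaaS tools do far more.
import hashlib
import tarfile
from pathlib import Path

from cryptography.fernet import Fernet  # pip install cryptography

def run_backup(source_dir: Path, dest_dir: Path, key: bytes) -> Path:
    archive = dest_dir / "backup.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:   # 1. copy production data
        tar.add(source_dir, arcname=source_dir.name)
    encrypted = dest_dir / "backup.tar.gz.enc"   # 2. encrypt the offsite copy
    encrypted.write_bytes(Fernet(key).encrypt(archive.read_bytes()))
    # 3. verify: decrypting the offsite copy must reproduce the original archive
    restored = Fernet(key).decrypt(encrypted.read_bytes())
    assert hashlib.sha256(restored).digest() == hashlib.sha256(archive.read_bytes()).digest()
    return encrypted

key = Fernet.generate_key()  # in practice the key would live in a key vault
run_backup(Path("/data/production"), Path("/mnt/offsite"), key)
```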

There are many options for offsite data backup services in Connecticut. When evaluating different services, we recommend checking to see if the following three criteria are met.

  1. Data backup services should be full featured, easy to use, and cost-effective
  2. The service provider should have sufficient IT infrastructure to meet the client’s requirements
  3. Technical support should be available to address specific challenges

Powerful Backup Software Works Well and is Affordable

CAPS has been delivering offsite data backup services for years using several different tools. For the past few years we have been using Veeam’s Availability Suite software. Our engineers are impressed with the software’s versatility and ease of use. With Veeam we can configure a backup solution tailored to each client’s unique requirements. The software handles most environments and can restore data at a granular level, e.g., an individual Virtual Machine or file. Training is straightforward, so clients get up to speed quickly. The Veeam solution is also cost-effective.

Multiple Data Centers for Backup with Optimal Physical Separation

The IT infrastructure available from suppliers of BaaS in Connecticut varies from one provider to the next. Some have a single data center. Those with multiple data centers can architect a variety of offsite backup alternatives with primary and secondary backup sites.

In cases where a service provider has multiple data centers, the physical separation between data centers is important to consider. For example, one Connecticut provider has two data centers located about ten miles apart. This is a concern because a disaster that affects one data center will probably affect the other.

Another Connecticut service provider has a secondary data center more than 1,000 miles away. This great distance increases both network and travel costs. Greater distance between data centers also increases the time it can take to create remote backups.

CAPS believes the physical separation of its three data centers across the metropolitan New York region is ideal. The CAPS data center in Shelton, CT is about 67 miles from our data center in New York and about 120 miles from our data center in New Jersey.

Knowledgeable Support Willing to Address Unique Needs

Larger data backup service providers usually do not provide much individualized support to clients. Smaller service providers may not have the resources to tailor customized backup solutions. A mid-sized BaaS provider, such as CAPS, can deliver individualized data backup and recovery services.

There are many different providers of offsite data backup services in Connecticut. With a little research you should be able to find the one that is just right for your organization.

We often get questions at cocktail parties when people find out we work at a data center. They may ask us to explain colocation or to discuss the difference between a Private Cloud and the Public Cloud. The best offsite data backup practices are another popular topic. Sometimes, however, the questions come from a darker place: “What really happens at your data center late at night?”

After the sun goes down and the workday is over, many of the IT systems hosted at CAPS’ Shelton data center are still quite active. Some client workloads run around the clock. For others there are peak periods and slack times. Regardless, most clients’ servers must be available 100% of the time.

“Does anybody work at the data center overnight?” is a typical question. Some clients come to the data center in the middle of the night to work. This is to minimize the impact on their customers as they modify systems. In other cases, they work at night because it is when they can get a block of time free from the distractions that beset an IT infrastructure professional.

Access and Security

Another question has to do with how security is maintained in the middle of the night. Of course, nobody can gain access to the data center, at any time, without proper credentials, which must be established in advance. When clients arrive after hours, they must first use the security card issued to them by CAPS to open one of the main doors to the building, which are locked at night. All visitors are monitored by the guards at the central security desk and captured on the building’s video recording system.

Once inside the building, clients require their security card to pass through the data center’s main entrance. They then proceed through two more security card access doors before entering the data center itself. As clients move through the facility they are recorded by the data center video recording system. These videos are kept for 60 days for subsequent review if necessary.

Once inside the data center, clients proceed to where their systems are located. They use the unique key issued to them by CAPS to unlock their cage or cabinet door and gain access to their systems. The CAPS support team keeps a copy of each key in case a client’s key is lost.

Monitoring and Response

People also ask, “What happens if an environmental alert is issued at night?” The systems that monitor electrical power, temperature, humidity, and internet service issue alerts in real-time to the system engineers responsible for maintaining the data center. These trained professionals all live within a half hour of the data center and are on call 24/7/365. The CAPS engineers are also supported by a Network Operations Center staffed around the clock at company headquarters in Pearl River, New York.

As the cocktail parties extend into the wee hours, we occasionally get more whimsical questions. “Do you keep the lights on in the data center at night? Aren’t servers afraid of the dark?” “No,” we answer patiently, “the servers are very accustomed to the dark.” To save energy, we turn on the lights in the data center only when somebody is on site, even during the day.

Pictured above: a full moon over the Heublein Tower in Simsbury, CT.

July is usually the hottest month of the year in Connecticut, so this is a good time to consider how data centers cope with elevated temperatures.

Heat is a byproduct of the power provided to servers and other IT systems at a data center. Effective heat management is essential because excessive heat can damage these systems and disrupt operations.

For example, record heat in Europe (104 degrees F) recently forced the temporary closure of data centers in the UK. Both Google’s and Oracle’s London data centers were powered down to prevent significant damage to servers and other equipment that could have caused prolonged outages.

There are many things that can be done to manage heat in a data center. Here is a list of some of the most important things you can do.

  1. Provide Enough Cooling Capacity for Your Data Center
  2. Maintain Air Conditioning Systems
  3. Design Your Data Center to Optimize Cooling
  4. Disperse High Heat Generating Cabinets
  5. Continuously Monitor Heat at Critical Locations Throughout the Data Center
  6. Respond Immediately to Heat Alerts
  7. Deploy Additional Localized Cooling To Address Heat Spikes

Let’s consider each of these recommended steps in detail.

First, the data center must have adequate cooling capacity in its air conditioning systems to handle the maximum power consumption and resulting heat generation that can be anticipated. Provisioning more cooling capacity than will ever be needed provides a safety factor and is good practice.

Having adequate cooling capacity is not enough. Maintaining Computer Room Air Conditioning (CRAC) units, condensers, and other air handling systems is an ongoing requirement to make sure these vital heat management systems function properly.

The floor layout of the data center can also be used to manage heat. Rows of cabinets are positioned so the rear sides of alternating rows face each other. In this Hot Aisle/Cold Aisle configuration, cooled air from the CRAC systems is delivered first to the front of the server cabinets. Heat from the powered equipment in each cabinet is transferred to the cooled air. The warmed air exits the back of the cabinets and returns to the CRAC systems, where it is cooled once again and sent back to continue the process. The Hot Aisle/Cold Aisle design is proven to be more efficient than a layout where cool air and hot air intermix in a single aisle.

Dispersion of higher heat generating cabinets is another way to minimize the impact of high temperatures in a data center. The amount of heat output by the systems in a cabinet can be highly variable. Some processor intensive servers consume a lot of power and thus generate a lot of heat. Other equipment may be much cooler. When possible, dispersing higher heat cabinets throughout the data center can minimize concentrations of heat.

Continuous temperature monitoring throughout the data center is essential. Colocation data centers establish a target temperature range where servers operate safely but energy is not wasted; the cooler the target, the more energy cooling requires. The key is to set a temperature target that is cool enough to protect IT systems but not so cool that energy costs are excessive. When monitoring systems determine a temperature is above the established threshold, an alert message is issued. Data center personnel receive alerts immediately at any time of the day or night.

When an alert is issued, data center professionals respond as quickly as possible to determine the cause of the elevated temperature. Once the cause is identified, they work to rectify the problem so the equipment returns to a safe operating temperature.

Sometimes specialized local cooling systems are employed to address a temperature spike. Data center engineers can quickly set up these systems to direct additional cooling at the cabinet or cabinets where elevated temperatures have been reported. This prevents a serious problem while the source of the higher temperature is tracked down.

Managing heat in a data center is an essential ongoing responsibility. Proper air conditioning systems coupled with an experienced staff ensure operations without disruptions. CAPS’ team is proud it has successfully managed the heat of over 20 Connecticut summers without experiencing a single unscheduled data center outage.

Colocation, Public Cloud, and Private Cloud are the three primary alternatives to hosting IT infrastructure on premises. Internet bandwidth availability and price are among the most crucial factors to consider when determining where to locate IT systems.

Bandwidth is the capacity of a communications circuit to transmit data. It is typically expressed in Megabits per Second (Mbps) or Gigabits per Second (Gbps). The bandwidth of a communications circuit is analogous to the maximum number of cars that can travel on a highway. This is all too familiar to people in Connecticut. Our little state is home to some of the most congested roads in the U.S.
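To make the bandwidth figures that follow concrete, here is a quick worked example relating circuit speed to transfer time. Note the bits-versus-bytes conversion: file sizes are quoted in bytes, circuit speeds in bits per second.

```python
# A quick worked example: how long does a transfer take at a given bandwidth?
# File sizes are measured in bytes, circuit speeds in bits: 1 byte = 8 bits.

def transfer_seconds(size_gb: float, bandwidth_mbps: float) -> float:
    size_megabits = size_gb * 1000 * 8     # GB -> megabits (decimal units)
    return size_megabits / bandwidth_mbps  # ideal time, ignoring protocol overhead

for mbps in (25, 100, 1000):
    print(f"10 GB at {mbps:>4} Mbps: ~{transfer_seconds(10, mbps):,.0f} seconds")
# 10 GB takes ~3,200 s at 25 Mbps, ~800 s at 100 Mbps, and ~80 s at 1 Gbps.
```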

Broadband services (defined as bandwidth of at least 25 Mbps download and 3 Mbps upload) became affordable in the early 2000s. Prices that were $20 or more per Mbps per month twenty years ago have declined to less than $5 per Mbps per month. As prices came down, adoption of broadband services rose dramatically. By 2004 more than half of all U.S. internet users had replaced their dial-up modems (typically 56 Kbps) with broadband services.

High Bandwidth Circuits Enable Remote Data Centers

AWS introduced the first Public Cloud in 2006, more than 10 years after CAPS first opened its data center in Shelton, CT, and just as broadband services were becoming affordable. Higher bandwidth circuits were required to make remote data centers viable: they kept network latency low enough that user response time remained acceptable even though data was transmitted over longer distances.

Leveraging virtualization technology and lower cost bandwidth, Public Cloud vendors built large data centers in locations where both power costs and taxes were low. Economies of scale made it possible for Public Cloud vendors to provide low introductory prices for data services. Infrastructure as a Service (IaaS) took the industry by storm by offering an inexpensive way to create internet-based businesses that required no capital expense. IaaS is popular because it is flexible, scalable, and low cost (at least initially).

Public Cloud and Colocation Billing Differs

There is a difference in the way Public Cloud and Colocation providers charge for internet bandwidth. Public Cloud vendors typically bill for monthly total data transfer whereas colocation providers charge for the bandwidth rate provided.

Public Cloud providers monitor the amount of data transferred during a month (typically in Gigabytes). Both inbound and outbound data transfer is counted. Most Public Cloud providers charge nothing for inbound data transfer. They usually allow a certain level of outbound data transfer but then charge for every outbound byte transferred after that. The problem with this approach is that egress fees can ramp up quickly. Another problem is that Public Cloud data transfer fees vary from month to month and can be difficult to predict.

No Surprises With Colocation

Colocation providers offer fixed monthly internet bandwidth pricing for a specific guaranteed data rate. Clients can order the amount of bandwidth they expect to need, and if they decide to change it, they can typically increase or reduce their rate within a day. The benefit is that bandwidth costs are pre-established: there are no surprises when the monthly colocation invoice arrives.
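The contrast is easy to see with an illustrative calculation. The cloud rates below are hypothetical, loosely modeled on common tiered egress pricing, and not any vendor’s actual price list.

```python
# An illustrative comparison of the two billing models. The cloud rates are
# hypothetical, not any vendor's actual price list.

FREE_EGRESS_GB = 100          # monthly outbound allowance (hypothetical)
EGRESS_RATE_PER_GB = 0.09     # charge per GB beyond the allowance (hypothetical)
COLO_FLAT_MONTHLY = 500.00    # fixed fee for a guaranteed data rate (hypothetical)

def cloud_monthly(egress_gb: float) -> float:
    """Cloud bill varies with outbound transfer; the colo bill never changes."""
    return max(0.0, egress_gb - FREE_EGRESS_GB) * EGRESS_RATE_PER_GB

for egress in (100, 2_000, 20_000):   # a quiet, a typical, and a busy month
    print(f"{egress:>6} GB out -> cloud ${cloud_monthly(egress):>9,.2f} "
          f"vs colo ${COLO_FLAT_MONTHLY:,.2f}")
```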

The availability and cost of internet bandwidth and the pricing mechanism used for billing can influence the best place to host a specific workload. For clients who want predictable and affordable monthly network expenses, the best choice is colocation.

Colocation has been an important IT infrastructure option for decades. Recently, as a direct response to the COVID pandemic, there is a new reason to use colocation.

COVID forced many employees to Work From Home (WFH) over the past 2 years. As WFH became more accepted, another use case for colocation emerged: the ability to quickly and cost-effectively place IT systems in a secure, conveniently located data center reduces risk when moving to a remote work environment.

More than 2 years after the onset of the pandemic, companies are changing how they work. Office leases are not being renewed. Smaller offices with flexible layouts are being set up to save money and to support hybrid work models where employees come to the office a few days a week. Some companies have completely abandoned their office to have employees work from home all the time.

For most companies, business cannot be conducted if critical computer systems are not available. The process of moving an office requires powering down IT equipment so it is vitally important to prepare a plan that minimizes disruption.

Moving An Office Can Be Risky

Planning an office move can be stressful. The final decision to not renew an office lease is often made with only a few months left on a contract. Once a move date is set, the pressure is on to take care of a multitude of tasks. To minimize the risk of disruption of critical business operations during a transition it is important to prepare a detailed plan.

Most organizations have migrated some computer workloads to the cloud. However, there are usually residual applications that are not a good fit for the cloud. For example, database applications that require a large amount of outbound data transfer are extremely expensive to host in the Public Cloud due to costly egress fees. Other applications require low latency or high security and thus should be hosted locally, not in the cloud.

For those applications already provisioned through a public or private cloud, the move from an office should not be disruptive. Once internet service is available at the new location, the applications may be used.

Other workloads may be suitable for the cloud but may not have been migrated yet. These applications should not be migrated to the cloud as part of the office move. It is too risky to add these types of rehosting projects to the primary task of a major office move. These workloads should be placed at the colocation facility temporarily until they can be safely migrated to the cloud at a future time.

Colocation Reduces Risk

With colocation it is possible to move workloads that are not suited for the cloud to a secure local data center. By decoupling the move of IT infrastructure from the rest of the office relocation, organizations can reduce the risk of a service interruption. Once computer systems have been placed at the colocation facility the rest of the office move can be completed at any time without concern about the day-to-day functioning of the business.

A growing number of companies in Connecticut and Westchester County planning an office down-sizing or a move to WFH have used CAPS’ colocation services to reduce risk and provide a bridge to the future.

Pictured above is the Old Drake Hill Flower Bridge. Originally built in 1892, this bridge spans the Farmington River in Simsbury, Connecticut. Exactly one hundred years after construction, cars were banned and the bridge was designated for pedestrian use only. A few years later it was decorated with flower boxes.

What are the most important factors to consider when choosing a colocation service provider? Here is a short list:

  • Redundant power
  • Reliable air conditioning to control temperature and humidity levels
  • Resilient internet connectivity with automatic failover
  • Advanced security systems
  • Remote Hands services
  • Convenient location

Location and Cost Drive Colocation Selection

Power with back-ups, multiple environmental systems, high availability internet services, security protection, and flexible support are must-have requirements for all colocation service providers. Data centers must check all these boxes to succeed in the competitive colocation business. Ultimately, the colocation facility’s location is the factor, other than cost, that dictates which data center is selected.

Which factors should be considered when choosing the location of a colocation facility? The facility should be close enough for staff visits as needed. Yet it should be far enough away to reduce the risk of the same environmental events that might impact the primary office location. The site also should be near major roads to minimize drive time. It is even better if the drive to the colocation facility is against traffic during those times when employees typically visit the data center.

It is also best if the colocation provider is powered by a different electric utility than the one that powers the primary place of work. Though the total loss of utility power is rare, the consequences of such a loss can be devastating. The probability of two separate electric utilities losing power at the same time is far less than the chance of a total outage at either one.

Finally, here in Connecticut colocation costs can vary a lot based on real estate costs. The cost per square foot for a data center in lower Fairfield County can be 2 or 3 times higher than the cost for the same amount of space in places like Shelton, where CAPS’ data center is located.

Higher Elevations Lower Risk

The data center’s elevation above sea level is another location-based factor to consider, especially in Connecticut. Our state has many low-lying areas close to the shoreline, rivers, and lakes. Though hurricanes and tornadoes can wreak havoc here, these extreme storms are rare. Floods, whether caused by storm surges or heavy rains, are much more common. The best way to avoid floods is to locate critical IT infrastructure at higher elevations.

All things being equal, it is best to aim for higher ground when looking for a lower risk home for your critical IT infrastructure. Connecticut, unlike our neighbors to the north, is a relatively flat state; we rank 36th among the states by highest elevation. Our highest point, on the slope of Mount Frissell in the northwest corner of the state, is 2,379 feet above sea level.

So why not build a data center on Mount Frissell? There are data centers at very high elevations around the world, like the one in Tibet at 11,995 feet above sea level. Though the flood risk at such heights is minimal, building a data center on top of a mountain is very expensive. Air conditioning also costs more at high elevations: the air is thinner, so more of it must flow over electronic systems to remove the same amount of heat. Finally, since Connecticut has few tall mountains, we should probably leave Mount Frissell to our hikers.
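A rough back-of-the-envelope calculation, using the sensible-heat relation and approximate standard-atmosphere densities, shows why: moving the same heat at roughly 12,000 feet takes about 40 percent more airflow than at sea level. The heat load and temperature rise below are assumed figures.

```python
# A rough illustration of why thinner air needs more airflow. Removing heat Q
# requires a mass flow m = Q / (cp * dT); the volumetric flow is m / density.
# Densities are approximate standard-atmosphere values; other figures are assumed.

CP_AIR = 1005.0    # J/(kg*K), specific heat of air
DELTA_T = 10.0     # K, temperature rise across the equipment (assumed)
HEAT_KW = 100.0    # heat load to remove (assumed)

def airflow_m3_per_s(heat_kw: float, density_kg_m3: float) -> float:
    mass_flow = heat_kw * 1000 / (CP_AIR * DELTA_T)   # kg/s of air required
    return mass_flow / density_kg_m3                  # m^3/s the fans must move

sea_level = airflow_m3_per_s(HEAT_KW, 1.225)   # air density at sea level, kg/m^3
altitude = airflow_m3_per_s(HEAT_KW, 0.85)     # air density near 12,000 ft, kg/m^3
print(f"Sea level: {sea_level:.1f} m^3/s; 12,000 ft: {altitude:.1f} m^3/s "
      f"({altitude / sea_level - 1:.0%} more airflow)")
```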

The CAPS data center in Shelton is head and shoulders above most of the other colocation sites in Connecticut. High above the Upper Valley at 290 feet above sea level, you can look down on the restaurants and hotels along Bridgeport Avenue and watch the cars speeding along Route 8 from the top level of the parking garage adjacent to the data center.

The fact that CAPS’ clients have not experienced an unscheduled power outage in over 20 years is due, in part, to the location of our data center in a flood-free zone well above sea level.