Three years after COVID, organizations have adjusted to a new reality. Many workplaces are now hybrid: some employees work remotely full time, while others come to the office several days a week. Moving IT systems out of an office to a third-party data center is known as colocation. Colocation can ease the transition to a hybrid workplace by decoupling IT from the office.

The signs of a changing working world are everywhere. Office vacancy rates remain high, and many companies are either not renewing their leases or downsizing. A recent WalletHub survey found that 12.7% of full-time employees work from home all the time, with an additional 28.2% on a hybrid schedule, splitting their time between their company’s office and home.

Hybrid Workplaces Are Here to Stay

Experts believe this shift will persist because many employees enjoy the benefits of working from home. They save time and money by not commuting, and some complete personal tasks during the day while working from home. The imperative to work from home during COVID proved that remote work can be done productively. Though some companies have ordered employees back full time, many have accepted a hybrid model, and employees in many industries now expect to work from home at least a few days a week.

Nationwide, the top 5 industries employing remote work are information technology, healthcare, sales, account management, and consulting. These industries are prevalent in Connecticut. Our state is also home to many financial services companies that have adopted hybrid work practices.

Relocating to a new office requires a lot of planning. Leases must be negotiated and signed, improvements may be required at the new office, and furnishings and IT systems must be moved. The timing of a move can be challenging. It may be necessary to vacate the old office before the new office is completed and ready for occupancy.

Colocation Decouples IT From The Rest of the Move

Moving to a new office or downsizing an existing one can disrupt critical information technology functions. Relocating these systems in advance to a colocation data center decouples the IT move from the rest of the relocation. The organization’s computers can be moved after hours in an orderly manner before the rest of the office relocates. This ensures business continuity and significantly reduces stress on the IT department. Once the systems are in a colocation data center, the organization can make future office changes without impacting its information technology systems.

Several new colocation clients have come to our data center in Shelton to facilitate changes they are making to support their hybrid workplaces. One client closed an office in Hartford and relocated employees to the remaining office in Fairfield County while moving servers and associated systems to our data center. Another client, a consulting company in Stamford, closed its offices and had all employees work from home after moving its IT systems to CAPS. Another is about to close its Norwalk office after colocating its IT systems to CAPS. Its employees will either work from home or spend a few days a week at another office in the state.

The new hybrid workplace saves money and increases worker satisfaction. A growing number of companies in Connecticut are leveraging colocation to provide the flexibility they need to make changes to their offices without compromising business continuity.

High Performance Computing (HPC) is in use across Connecticut. Our state is home to a variety of data intensive industries that rely on HPC systems. Though it is possible to host HPC instances in the Public Cloud, there are benefits to locating these powerful systems at colocation data centers.

High Performance Computing, as the name suggests, employs fast multi-core CPUs or GPUs (graphics processing units) along with high-speed storage systems and memory to process information quickly. The market for HPC systems is growing as the demands of artificial intelligence, image processing, and large database applications surge.

Connecticut is home to industries that are investing in HPC. Higher education leaders such as Yale and the University of Connecticut have ongoing HPC programs in place. In-state life sciences companies including those in the biopharmaceutical, medical imaging, and health research industries are taking advantage of these powerful computer systems. Our state is also home to many insurance and finance companies that require these powerful systems. Aerospace, defense contractors, and high-end manufacturers located in our state also are increasing their use of HPC systems.

Colocation Costs Less Than Public Cloud For HPC

Colocation is a better choice than the Public Cloud for HPC applications for several reasons. First, colocation is often much less expensive. Though it is possible to configure cloud instances with GPUs and high-performance memory and storage, the Public Cloud has not been optimized for these types of workloads.

A recent article “Colocation vs Cloud: SEO Firm Finds Cloud to be Cost Prohibitive for its Clusters of Powerful High-Density Computers” (Network World 4/4/23) documented the cost disparity.

The article describes how a search engine optimization SaaS company estimated it would cost an additional $400 million over 3 years to operate in the Public Cloud. The cost of procuring HPC infrastructure in the AWS Public Cloud was compared to purchasing 850 HPC Dell servers and hosting them at a colocation facility.

Avoid Public Cloud Latency

Latency is also an important consideration. Most Public Cloud data centers are located hundreds of miles from Connecticut. For example, AWS’s Northern Virginia data center is over 300 miles from most locations in Connecticut. Data sent over the internet typically experiences, on average, a 0.82 ms delay for every 100 miles traversed, so the additional round-trip latency imposed by the Public Cloud could be over 4.8 ms versus a colocation facility in Connecticut. When organizations are investing heavily to achieve optimal performance, it does not make sense to slow things down, even by a few milliseconds, with the latency burden of the Public Cloud.
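As a back-of-the-envelope check, the distance figure above can be turned into a quick estimate. This is a simplified sketch that ignores routing overhead and congestion; it only applies the per-mile delay cited in the text.

```python
# Quick estimate of the extra round-trip delay from hosting far away,
# using the ~0.82 ms of one-way delay per 100 miles cited above.
MS_PER_100_MILES = 0.82

def round_trip_latency_ms(distance_miles: float) -> float:
    # Traffic covers the distance twice: request out, response back.
    return 2 * (distance_miles / 100) * MS_PER_100_MILES

# Northern Virginia is over 300 miles from most Connecticut locations.
print(f"{round_trip_latency_ms(300):.2f} ms")  # 4.92 ms
```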

Much like a schoolteacher can pick out bright students just by observing them, it is often possible to identify an HPC system when walking through a data center. High Performance Computing systems generate more heat since they consume more electricity to run their powerful servers, and they are often noisier because their internal fans run constantly to dissipate that heat.

Fortunately, CAPS has plenty of available floor space, power, and cooling capacity to handle the growing High Performance Computing needs of Connecticut and the Northeast region. Our data center in Shelton is centrally located to minimize latency. Please contact us to discuss your HPC hosting requirements.

March 31st is World Backup Day. World Backup Day was inaugurated in 2011 to create awareness about the importance of data backup and recovery. We celebrate this day at CAPS every year because offsite data backup is one of the core IT infrastructure services we offer from our data center in Shelton.

Yet, even today, a surprising number of organizations do not regularly back up their data. The World Backup Day website includes a graphic stating that 21% of businesses have never executed a data backup. This figure matches results from a 2020 Infrascale survey of 500 executives at small and medium-sized businesses. A 2022 Acronis survey of 6,600 IT managers found an even higher percentage of negligent organizations: 10% back up their data every day, 15% do so once or twice a week, and 34% back up monthly. That leaves an astounding 41% that rarely or never back up.

Risky Business

This is risky given the increasing exposure to data loss we are experiencing due to rising instances of cyber breaches. Data can also be lost due to human error, equipment failures, and environmental disasters. Many seem to be relying on the hope they will not need to recover their data.

Though hoping for the best is not an effective way to manage risk, it does seem to be a common practice. In many cases organizations have decided to take a chance rather than dedicate the funds needed for adequate data backup and recovery.

CIOs and CTOs face an ongoing challenge getting funding for various IT projects, and disaster recovery is one of the hardest things to get funded. Senior management, looking to avoid spending money, frequently underfunds data backup initiatives, convincing themselves the odds of a serious outage are low. “Hope springs eternal in the human breast.” These words, penned by the poet Alexander Pope in 1733, describe the approach taken by many executives today regarding investments in data backup.

World Backup Day focuses on data backup, but simply backing up data is not enough to minimize risk. The frequency of backups is critically important. Per the survey cited earlier, 34% of organizations back up their data only once a month. That means up to four weeks of data can be lost if a breach occurs just before a monthly backup.
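The arithmetic behind this risk is simple: worst-case data loss grows linearly with the backup interval. A small sketch, using a hypothetical daily change rate of 5 GB:

```python
# Worst case, a failure strikes just before the next scheduled backup,
# so everything written since the last backup is lost.
def data_at_risk_gb(backup_interval_days: float, daily_change_gb: float) -> float:
    return backup_interval_days * daily_change_gb

# Hypothetical organization generating 5 GB of new or changed data per day.
for name, interval in [("daily", 1), ("weekly", 7), ("monthly", 30)]:
    print(f"{name}: up to {data_at_risk_gb(interval, 5):.0f} GB at risk")
```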

Data Recovery Is Most Important

Most important is the ability to recover data so that systems can be restored and brought back online. Some believe making multiple backups in different locations and on different storage mediums is sufficient. However, unless the ability to recover these backups in a timely manner has been tested there can be no assurance the backup plan will work when needed.

So why do we think digitalization could foster more investment in data backup and recovery? Digitalization, as defined by Gartner, is “the use of digital technologies to change a business model and provide new revenue and value-producing opportunities; it is the process of moving to a digital business.”

Digitalization is top of mind for CIOs and CTOs, as well as the senior managers who dole out the funds for IT investments. Though there are not many examples of successful digitalization projects yet, the number is growing. IT leaders who create a new revenue source for their organization are far more likely to get approval for their funding requests. As IT transitions from a cost center to a profit center, its ability to get financial support increases.

Funding Data Backup

We hope you take time to contemplate data backup and recovery on March 31st. If your senior management has not been supportive of your data backup funding requests in the past, we hope you bring a digitalization project to a successful completion in the year ahead. In so doing, you will increase the likelihood of funding your data backup and recovery initiatives.

The Public Cloud is widely popular but can be very expensive. Rising costs, complex pricing models, and the inability to predict monthly bills are forcing many organizations to look for ways to save money. A recent article in The Register told the story of a SaaS provider that saved $1.2 million per year by repatriating some of its workloads from AWS to servers hosted at a colocation facility.

Companies may just now be realizing that the cost savings expected from moving to the Public Cloud have not materialized. In fact, costs may be out of control. The key to saving money is to identify workloads that are less costly in a colocation facility or an on-premises data center versus those that are cost-effective in, or require the special properties of, the Public Cloud. It sounds simple, but it does take some analysis.

Where Public Cloud Is The Best Choice

Public Cloud is great for some workloads. Websites that experience rapidly changing user demand can take advantage of the elasticity and self-service provisioning capabilities of the Cloud to scale up or down as needed. Though most websites are good candidates for the Public Cloud, there are some exceptions. If your website primarily serves a local geographic area and if latency is a concern, then the Public Cloud may not be the best choice. Placing web servers in a local colocation data center will reduce latency and provide better response times for website visitors who are located nearby.

Software development teams frequently benefit from the flexibility of the Public Cloud. Compute, memory, and storage resources can be added as needed to help the DevOps team keep on schedule. QA can spin up test environments to perform continuous testing. Tests can be run against different IT configurations to predict performance. Once software has been completed, there may no longer be a need for the cloud resources used during development and testing. Savings can be achieved by periodically checking to see which Public Cloud resources are still needed after certain phases of development are completed.
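One way to act on this advice is a periodic audit of dev/test resources. The sketch below is illustrative: the inventory and the 30-day idle window are hypothetical, and a real audit would pull this data from the cloud provider’s APIs rather than a hard-coded list.

```python
from datetime import date, timedelta

# Hypothetical inventory; a real audit would query the provider's APIs.
resources = [
    {"name": "qa-test-env-1", "last_used": date(2023, 1, 5)},
    {"name": "build-runner", "last_used": date(2023, 3, 20)},
]

def stale_resources(inventory, today, max_idle_days=30):
    """Flag resources idle longer than the allowed window for review."""
    cutoff = today - timedelta(days=max_idle_days)
    return [r["name"] for r in inventory if r["last_used"] < cutoff]

print(stale_resources(resources, date(2023, 3, 31)))  # ['qa-test-env-1']
```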

Email servers are also a good choice for the Public Cloud. O365 and Google Workspace leverage the Public Cloud for their email and office application services. Clients like the universal accessibility and convenience of these SaaS offerings. Software updates and patches are automatically provisioned. Data backup is also included, at least for a modest period of time. However, clients requiring email backups beyond 90 days may need to set up an offsite data backup server at a colocation or on-premises facility.

In general, the Public Cloud is the best choice when workloads vary in terms of the compute and storage resources they require. Conversely, colocation can generate cost savings when the IT requirements of a particular workload are more predictable and constant.

Where Colocation Can Save Money

Database servers are usually the best place to look for savings versus the Public Cloud. Not only are many databases stable in size, but Public Cloud providers usually charge data transfer (egress) fees every time data is exported from their storage systems to a device outside their cloud environment. CAPS has clients that have repatriated databases from the Public Cloud, achieving more than $100,000 a year in savings by hosting their database applications at our colocation facility.
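To see how egress fees add up, consider a rough cost model. The per-GB rate and monthly volume below are illustrative assumptions for the sake of example, not any provider’s actual pricing.

```python
# Illustrative egress cost model; rate and volume are assumptions.
EGRESS_RATE_PER_GB = 0.09  # assumed internet egress rate, $/GB

def monthly_egress_cost(gb_exported: float, rate_per_gb: float) -> float:
    return gb_exported * rate_per_gb

monthly_gb = 50_000  # hypothetical: 50 TB exported from cloud storage per month
cost = monthly_egress_cost(monthly_gb, EGRESS_RATE_PER_GB)
print(f"${cost:,.0f}/month, ${cost * 12:,.0f}/year")  # $4,500/month, $54,000/year
```

Even at modest volumes, a recurring per-GB charge compounds into a significant annual expense that a flat-rate colocation circuit avoids.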

Other server types that can be placed cost-effectively at a colocation data center include Domain Name System (DNS) servers and Dynamic Host Configuration Protocol (DHCP) servers. These servers have modest and stable processing and storage requirements. It may be possible to host these types of workloads on older, lower performance servers to minimize the investment required.

It is possible to save money on spiraling Public Cloud expenses. Evaluate the types of workloads you are hosting in the Public Cloud to determine if you still need some of these Public Cloud resources and if repatriating these workloads can provide meaningful savings. Of course, the cost of hosting new workloads in both the Public Cloud and at a colocation data center or on-premises should be considered before deciding where they should reside.

SOC reports are audit reports that adhere to guidelines developed by the AICPA (American Institute of Certified Public Accountants). They are commonly used to provide an independent professional review of the operations of a service provider such as a data center. Let’s consider the value of SOC reports to clients seeking colocation, data backup, and business continuity services at data centers.

First, a little background about our company’s history with respect to SOC reports. CAPS has been providing data center infrastructure services in Connecticut since 1995. In 2009 we began contracting for annual independent audits. We have engaged with an approved auditor every year since then (that’s 14 years and counting) to provide our clients with a means to independently verify our data center operations.

SOC reports have evolved over the years. The AICPA first defined an audit report requirement known as SAS 1 in 1972. Two decades later, SAS 70 (Statement on Auditing Standards No. 70) was released. This document replaced SAS 1 and remained the standard until it was replaced in 2011 by SSAE 16 (Statement on Standards for Attestation Engagements No. 16). SSAE 16 defined the SOC 1, SOC 2, and SOC 3 System and Organization Controls reports. These standards were updated in 2017 when SSAE 18 was adopted.

Is 2 Better Than 1?

SOC 1 Type I and II, SOC 2 Type I and II, and SOC 3 Type II are the current standards defined by SSAE 18. SOC 1 is a financial audit report that is primarily concerned with evaluating the suitability of the design and operating effectiveness of the controls a service provider has in place. It is often used to fulfill the annual independent audit requirements imposed on financial organizations and publicly held companies by the Sarbanes-Oxley Act (SOX) of 2002. A SOC 1 Type II report covering an audit over 6 or more months is typically the version of the report used for data centers.

SOC 2 consists of 5 Trust Services categories. The first, Security, is mandatory. The four remaining categories, Availability, Processing Integrity, Confidentiality, and Privacy, are optional; each service provider may choose which, if any, to include in its SOC 2 audit. A SOC 2 Type II report covering an audit over 6 or more months is typically the report used for data centers. SOC 2 reports are growing in popularity because of their focus on security. However, they are not considered adequate to fulfill the SOX requirements of public companies and other financial institutions; that remains the domain of SOC 1.

SOC 3 is a modified version of SOC 2 that excludes proprietary information and thus can be released without a Non-Disclosure Agreement (NDA). SOC 1 and SOC 2 reports include proprietary information about the audited company and are not to be released without an NDA.

A Non-Issue for Many

Though it takes time and money to prepare a SOC report each year, many of our clients are not interested in these independent audits. If they are not required by regulation to receive an independent audit of their data center services provider, they may not request a SOC report.

CAPS and Blue Hill Data Services have always been committed to providing high quality IT infrastructure services to our clients. The SOC reports we contract for each year offer a professional, independent evaluation of our data center operations. We are happy to share these SOC reports with clients and prospective clients who request them.

For many organizations, SOC reports are not required. But much like socks with dress shoes (a choice some forgo, especially here in Connecticut), optional does not mean without value: SOC reports are often a valuable addition.

Servers don’t last forever. When a technology refresh is needed, there are several options: new servers can be purchased, or workloads can be migrated to the public cloud to avoid buying systems. APEX, a new service from Dell, offers an attractive third alternative. It combines many of the advantages of the public cloud with some of the benefits of on-premises solutions.

Benefits Versus the Cloud

The Dell APEX program is a new “pay per use” service that makes it possible to procure new servers without incurring capital expenses. It offers the flexibility and scalability of the public cloud while addressing some of the cloud’s problems: Dell APEX solutions may offer better performance, security, and latency than cloud-based implementations, and they may make it easier to achieve compliance. Finally, monthly expenses with Dell APEX are known in advance, a big advantage versus monthly cloud bills, which can be unpredictable and highly variable.

With APEX, clients order the systems they need and Dell installs them either at the client’s premises or at the site of an approved colocation service provider. A monthly fee is charged, but Dell retains ownership and is responsible for maintaining the equipment. The agreement is similar to a lease, except clients can add or remove functionality as needed via the Dell APEX console. APEX agreements are typically for 3 years, and monthly fees are adjusted so customers only pay for the resources they use. This provides the flexibility and scalability normally available only with cloud services.

Dell’s APEX service is similar to HPE’s GreenLake pay-per-use service, which was introduced about a year before APEX. Lenovo also offers a pay-per-use service. These services are expected to grow in popularity over the next few years: they offer advantages versus the cloud while making it possible for users to refresh technology without increasing capital expenses.

Dell Certifies CAPS for APEX

Recently, a CAPS colocation client decided to order servers and associated equipment from Dell via the APEX program. First, the Dell team qualified the CAPS data center as suitable for APEX. This included verifying the dimensions of all doors between the loading dock and the data center to be sure Dell cabinets, which ship fully configured and tested, could be transported from the delivery truck to their final location in the data center. Power availability, access to internet carriers, data center security, and technical support services were also evaluated before the data center was authorized for the APEX program.

After a series of Zoom planning meetings, the Dell APEX system was delivered to CAPS’ Shelton, CT data center. Then a team consisting of the client’s IT personnel, CAPS’ system engineers, and Dell engineers installed the new system. Since power and internet services had already been pre-installed, the cutover was completed quickly.

As we enter 2023, we expect to see more Dell APEX installations at our colocation facility. We believe the opportunity to get all new Dell servers and related equipment to replace aging systems for a reasonable monthly fee with ongoing support from Dell and the ability to quickly scale services up or down will be increasingly popular in the year ahead.

Did you know CAPS was one of the first companies in Connecticut to provide Business Continuity services? In 1995 the company began offering a secure alternate workplace for companies that wanted to minimize the risk of service disruptions. Though much has changed over the 27 years since CAPS first opened its doors, the need to manage risk is more important than ever.

Risk management is big in Connecticut. The state is home to many financial advisors and is the headquarters of some of the country’s leading insurance companies. All businesses in our state must plan to avoid outages that can threaten their very viability. Business Continuity service providers help organizations manage risk by providing backup facilities to limit the impact of service disruptions.

Much of business continuity has to do with IT. Information technology and communications are essential to many organizations’ operations. The best business continuity service providers offer facilities designed to assure IT systems are always operational. These high-end alternate workplaces are available 24/7/365 and have comprehensive security systems in place. They also offer trained professionals to assist in business continuity planning and periodic testing to assure preparedness.

In the past decade, the risk of service disruption due to cyber breaches has grown dramatically so data protection, backup, and recovery are now critical components of a business continuity plan. On demand, conveniently located office space with uninterrupted power and always-on internet service are the three other essential components of business continuity.

Though business continuity is a priority for most companies in Connecticut, each organization manages risk in its own way. The business impact of potential service disruptions varies from one company to the next. Each organization has its own business continuity risk appetite which is based on the likelihood of a service disruption and the estimated cost to the company of an outage. A comprehensive Business Impact Analysis (BIA) should be conducted periodically to calibrate risk.
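A BIA often quantifies risk with the classic annualized loss expectancy formula: expected outages per year multiplied by the cost of a single outage. A quick illustration, with hypothetical figures:

```python
# Annualized Loss Expectancy (ALE) = Annual Rate of Occurrence (ARO)
# x Single Loss Expectancy (SLE), a standard risk-quantification metric.
def annualized_loss_expectancy(outages_per_year: float, cost_per_outage: float) -> float:
    return outages_per_year * cost_per_outage

# Hypothetical: one serious outage every five years at $250,000 per incident.
ale = annualized_loss_expectancy(0.2, 250_000)
print(f"Expected annual loss: ${ale:,.0f}")  # Expected annual loss: $50,000
```

A figure like this gives management a concrete number to weigh against the annual cost of a business continuity service.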

Let’s review 5 of the most common Business Continuity approaches employed by Connecticut organizations ranging from the most rudimentary to the most complete and lowest risk solutions.

Work From A Public Place With WiFi

Relying on public WiFi at a local library or coffee shop for business continuity is not appropriate for any but the smallest organizations. Still, many companies rely on this approach to access the Public Cloud when operations are interrupted at their office. Security concerns make this a risky choice: conversations can be overheard, and WiFi communications can be intercepted.

Work From Home

Since much of work is now provisioned from the Public Cloud, there is a growing trend to depend on home offices for business continuity. If the home has power and internet service, this solution is both convenient and cost-effective. However, home-based business continuity is not optimal. Some workloads are not hosted in the cloud, so there can be gaps in what can be done from home. Home power and internet services are more prone to problems, and security is less robust. Relying on home offices for business continuity can also create client concerns, especially for financial companies that must publish their Business Continuity Plans on their websites as mandated by FINRA Rule 4370.

Work From Another Corporate Office

Organizations with multiple offices may develop Business Continuity plans where employees work from other offices in case of an outage at their primary place of work. This approach can be effective if the other offices are not exposed to the same outage conditions and are within commuting distance.

Work From A Shared Office Facility

Some organizations reserve alternate workspace from shared office space companies like Regus, which has many locations in Connecticut where a private office can be rented. The office may be furnished with a company’s systems and is usually available 24/7/365. However, generator power is not always available, and multiple internet service providers with automatic failover are not typical. Internet services, unless specially provisioned, are shared with other tenants, which limits bandwidth and security. Colocation services are not available at shared office facilities, and technical support is not provided.

Work From An Alternate Workplace At A Secure Data Center

For organizations that have a low appetite for Business Continuity risk, a secure alternate workplace such as CAPS’ facility in Shelton, CT provides the best alternative. Clients may reserve Shared Seats, Dedicated Seats, or a combination of both. Colocation services are available at the same data center to maximize system availability. The high security business continuity workspace is powered by redundant UPS and generator systems with 24/7/365 access for authorized personnel. Redundant internet service with automatic failover provides high availability communications with the level of bandwidth required by each client. Trained professionals assist in the planning and periodic testing of each organization’s unique business continuity plan. They can quickly configure each client’s unique workstation user interface by employing the Virtual Desktop Infrastructure (VDI) at the data center.

For more than a quarter century CAPS has been a leading provider of business continuity services to organizations in Connecticut. Clients include investment companies, banks, and other financial institutions. There are also hospitality companies and other non-financial institutions that have decided to minimize their business continuity risk. Business Continuity, Colocation, Data Backup and Recovery, and Private Cloud services are all available from CAPS. Please contact us if you have lost your appetite for risk.

Colocation data centers provide tenants with high availability, secure infrastructure hosting. They do this by leveraging investments in redundant power, environmental, networking, and security systems. Continuous monitoring of key parameters is critical to assure data centers function properly. Alerts are generated in real-time whenever a threshold is exceeded. Though there are many conditions that can be tracked, here are 7 of the most important parameters to be monitored.

Power

Power is the most important thing to monitor in a data center. Power is so vital that most facilities have several layers of redundancy to automatically back up the utility power that comes from the electric company. When primary power is interrupted, Uninterruptible Power Supply (UPS) systems take over. Generators back up the UPS systems for longer term power until utility power is restored. Though power backup is automatic from utility power to UPS to generator, alerts are issued to data center personnel when a problem is first detected so they can determine the cause of the power outage.

Heat

Temperature ranges are monitored in the data center to make sure air conditioning systems are functioning properly. Elevated temperatures can damage IT systems and interrupt operations. Alerts are issued in real-time whenever a temperature threshold is exceeded in the data center.

Humidity

Relative humidity is also measured and reported whenever a reading is outside of an established range. Too much or too little humidity can affect performance of servers and other information technology systems.

Network

Colocation data centers provide a variety of internet bandwidth services. Since connectivity is essential, automatic failover to a redundant backup service is frequently employed. Network performance parameters such as packet loss counts and total loss of internet service are monitored and reported.

Colocation clients have the option of procuring their own internet connectivity, or they may procure these services from the data center. Clients who get internet services through the colocation data center benefit from automatic failover to a backup internet service provider circuit. Some colocation providers also monitor each client’s internet service availability, issuing an alert if service is unavailable for a specified period of time (typically two minutes). Clients may then be contacted to let them know there may be a problem with their firewall or some other issue impacting internet access.
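The alerting rule described here, waiting out a short grace period before notifying anyone, can be sketched in a few lines. The two-minute threshold follows the example above; everything else is illustrative.

```python
from datetime import datetime, timedelta
from typing import Optional

ALERT_THRESHOLD = timedelta(minutes=2)  # grace period from the example above

def should_alert(down_since: Optional[datetime], now: datetime) -> bool:
    """Alert only once an outage has lasted at least the grace period."""
    return down_since is not None and (now - down_since) >= ALERT_THRESHOLD

outage_start = datetime(2023, 3, 31, 12, 0, 0)
print(should_alert(outage_start, outage_start + timedelta(minutes=1)))  # False
print(should_alert(outage_start, outage_start + timedelta(minutes=3)))  # True
```

The grace period prevents brief blips, such as a router reboot, from paging staff unnecessarily.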

Fire

Fire suppression systems are installed at locations throughout the data center. These systems report alerts to the data center engineers and to the building supervisors in the event that a fire suppression system is deployed.

Water

Monitors are placed in the raised floor areas in the data center to detect water and other fluids.

Data Center Access

People with authorized security cards may enter the facility at any time. However, all entry to the data center is logged and the activities of people in the data center are video recorded 24/7/365. Doors to the data center will issue an alarm whenever they are opened by someone without proper credentials or whenever doors are left open for more than a few seconds. When data center access alerts are issued, staff can review video images to determine if activity in the data center requires their attention.

Colocation data centers such as CAPS rely on proactive monitoring to assure that availability, performance and security is maintained at the standards required by our clients. Much like the wild rabbits of Connecticut whose very survival depends on keeping alert, our sensors work continuously to make sure all systems are functioning properly.

Reliable offsite data backup and restoration is essential to responding quickly to potential disruptions. Whether online data is compromised by ransomware or something else, the ability to recover a current, clean copy is the key to minimizing costly business interruptions.

Backup as a Service (BaaS) is popular for organizations looking to ensure they can recover critical data quickly. Service providers employ software-based tools to automatically copy production data at predefined intervals. Data is often encrypted as copies are sent to one or more remote storage sites. Tests are performed to verify that a backup copy has been successfully completed. Then, if needed, the backup can be restored to resume normal operations.
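The copy-then-verify step at the heart of this workflow can be sketched in a few lines. This is a simplified illustration of the principle, not how any particular BaaS product works: the function names are hypothetical, and a production service would also handle encryption in transit, retention schedules, and restores.

```python
import hashlib
import shutil
from pathlib import Path


def sha256(path: Path) -> str:
    """Compute a file's SHA-256 digest, reading in chunks to limit memory use."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def backup_and_verify(source: Path, dest_dir: Path) -> Path:
    """Copy source into dest_dir, then confirm the copy matches byte-for-byte.

    Raises RuntimeError if the verification checksums differ, so a silently
    corrupted backup is caught at backup time rather than at restore time.
    """
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / source.name
    shutil.copy2(source, dest)  # copy2 preserves timestamps and metadata
    if sha256(source) != sha256(dest):
        raise RuntimeError(f"backup verification failed for {source}")
    return dest
```

Scheduling this at predefined intervals and pointing `dest_dir` at remote storage captures, in miniature, what BaaS tools automate at scale.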

There are many options for offsite data backup services in Connecticut. When evaluating different services, we recommend checking to see if the following three criteria are met.

  1. Data backup services should be full featured, easy to use, and cost-effective
  2. The service provider should have sufficient IT infrastructure to meet the client’s requirements
  3. Technical support should be available to address specific challenges

Powerful Backup Software Works Well and is Affordable

CAPS has been delivering offsite data backup services for years using several different tools. For the past few years, we have been using Veeam’s Availability Suite software. Our engineers are impressed with the software’s versatility and with its ease of use. With Veeam we can configure a backup solution tailored to each client’s unique requirements. The software handles most environments and includes the ability to restore data on a granular level, e.g., an individual virtual machine or file. Training is straightforward, so clients get up to speed quickly. Also, the Veeam solution is cost-effective.

Multiple Data Centers for Backup with Optimal Physical Separation

The IT infrastructure available from suppliers of BaaS in Connecticut varies from one provider to the next. Some have a single data center. Those with multiple data centers can architect a variety of offsite backup alternatives with primary and secondary backup sites.

In cases where a service provider has multiple data centers, the physical separation between data centers is important to consider. For example, one Connecticut provider has two data centers located about ten miles apart. This is a concern because a disaster that affects one data center will probably affect the other.

Another Connecticut service provider has a secondary data center more than 1,000 miles away. This great distance increases both network and travel costs. Greater distance between data centers also increases the time it can take to create remote backups.

CAPS believes the physical separation of its three data centers across the metropolitan New York region is ideal. The CAPS data center in Shelton, CT is about 67 miles from our data center in New York and about 120 miles from our data center in New Jersey.

Knowledgeable Support Willing to Address Unique Needs

Larger data backup service providers usually do not provide much individualized support to clients. Smaller service providers may not have the resources to tailor customized backup solutions. A mid-sized BaaS provider, such as CAPS, can deliver individualized data backup and recovery services.

There are many different providers of offsite data backup services in Connecticut. With a little research you should be able to find the one that is just right for your organization.

We often get questions at cocktail parties when people find out we work at a data center. They may ask us to explain colocation or to discuss the difference between a Private Cloud and the Public Cloud. The best offsite data backup practices are another popular topic. However, sometimes questions come from a darker place: “What really happens at your data center late at night?”

After the sun goes down and the workday is over, many of the IT systems hosted at CAPS’ Shelton data center are still quite active. Some client workloads run around the clock. For others there are peak periods and slack times. Regardless, most clients’ servers must be available 100% of the time.

“Does anybody work at the data center overnight?” is a typical question. Some clients come to the data center in the middle of the night to work. This is to minimize the impact on their customers as they modify systems. In other cases, they work at night because it is when they can get a block of time free from the distractions that beset an IT infrastructure professional.

Access and Security

Another question has to do with how security is maintained in the middle of the night. Of course, nobody can gain access to the data center, at any time, without proper credentials, which must be established in advance. When clients arrive after hours, they first must use the security card issued to them by CAPS to open one of the main doors to the building, which are locked at night. All visitors are monitored by the guards at the central security desk and captured on the building’s video recording system.

Once inside the building, clients require their security card to pass through the data center’s main entrance. They then proceed through two more security card access doors before entering the data center itself. As clients move through the facility they are recorded by the data center video recording system. These videos are kept for 60 days for subsequent review if necessary.

Once inside the data center, clients proceed to where their systems are located. They use the unique key issued to them by CAPS to unlock their cage or cabinet door and gain access to their systems. The support team at CAPS keeps a copy of each key in case a client’s key is lost.

Monitoring and Response

People also ask, “What happens if an environmental alert is issued at night?” The systems that monitor electrical power, temperature, humidity, and internet service issue alerts in real time to the system engineers responsible for maintaining the data center. These trained professionals, who all live within a half hour of the data center, are on call 24/7/365. The CAPS engineers are also supported by a Network Operations Center staffed around the clock at company headquarters in Pearl River, New York.

As the cocktail parties extend into the wee hours, we occasionally get more whimsical questions. “Do you keep the lights on in the data center at night? Aren’t servers afraid of the dark?” “No,” we answer patiently, “the servers are very accustomed to the dark.” To save energy, even during the day, we turn on the lights in the data center only when somebody is on site.

Pictured: a full moon over the Heublein Tower in Simsbury, CT.