Let us face it: colocation is not for everyone. More than 99% of all companies in Connecticut are small businesses with twenty-five employees or fewer. Very few of these are candidates for colocation, since smaller companies typically have only a few servers and rarely invest in colocation.

If less than 1% of the companies in the state are potential colocation clients, what are the characteristics of those companies that host their IT systems at our data center? We decided to look at the demographics of our colocation clients to see what they have in common.

Size Matters

Company size, measured by the number of employees, is an important parameter. All our colocation clients have more than ten employees, and only a few have fewer than 25. About 30% have between 25 and 50 employees, while the majority, around 60%, have more than fifty.

Another way to parse our colocation clients is by industry. Connecticut is home to many industries; financial services, defense, aviation, insurance, and life sciences are some of the most prominent. About 80% of all colocation clients at CAPS come from one of four segments: investments, business services, consumer products, and software development. Our clients are distributed evenly across these segments, with about 20% of our colocation accounts in each of the four groups.

Location, Location, Location

Proximity to our data center is another key factor. More than 70% of our clients have headquarters within a forty-five-minute drive of Shelton. Many customers have offices in lower Fairfield County. Our data center is especially convenient for them since their travel to CAPS is in the opposite direction of rush hour traffic.

The structure of corporate IT leadership is also significant. Most of our colocation accounts have a senior IT manager, either a CIO or CTO, on their staff. More than 75% of our clients employ information technology management. The others typically rely on support from Managed Service Providers (MSPs) to oversee their IT systems.

How companies procure Internet connectivity is also a differentiator. CAPS allows clients to order Internet services directly from any of the carriers that serve our region. Colocation customers may also order their Internet bandwidth directly from CAPS. Clients who purchase Internet services from CAPS benefit from our ability to automatically fail over from one carrier to another if service on the primary circuit is interrupted. Though more than 80% of our colocation clients have chosen to purchase Internet services from CAPS, the option to purchase from carriers is becoming increasingly popular, especially for high-bandwidth circuits.

Flexibility, Availability, and Savings

Finally, we can segment our clients based on their primary reason for choosing to move their IT systems to our colocation facility. Downsizing offices and migrating to remote work-from-home operations has put a lot of pressure on IT departments. Since the pandemic, about 50% of all clients have selected colocation to provide more flexibility as they change their office configurations. Another 30% of our clients have selected colocation to increase system availability and/or to improve data security. Finally, about 20% have chosen colocation to save money versus the excessive cost of certain cloud implementations.

Though colocation is not for everybody, we are honored to provide a high availability, secure, and cost-effective home for those who require the special hosting services we offer.

UPS systems are a key part of colocation data centers. Let us review the functions they perform and consider the effect of new battery technology, cyber-crime, and AI on these critical systems.

Since even a momentary loss of power can disrupt the flow of data, the Uninterruptible Power Supply (UPS) is the most critical subsystem in a data center. It provides a continuous source of clean power to IT equipment. When power from the electric company is interrupted, the UPS immediately takes over, keeping IT systems operating until the data center’s generators come online. The generators then carry the load until utility service is restored.
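To make the sequence concrete, here is a minimal sketch, in Python, of the source-selection logic described above. The names and the example timeline are ours for illustration only; real transfer switches implement this decision in hardware, far faster than any software loop.

    # Minimal sketch of the power-transfer sequence described above.
    # All names and the example timeline are illustrative.

    UTILITY, UPS_BATTERY, GENERATOR = "utility", "ups_battery", "generator"

    def select_power_source(utility_ok: bool, generator_online: bool) -> str:
        """Return which source should feed the IT load right now."""
        if utility_ok:
            return UTILITY        # normal operation: utility feeds the load
        if generator_online:
            return GENERATOR      # generators carry the load during an outage
        return UPS_BATTERY        # batteries bridge the gap until generators start

    # Example: utility fails, then generators take a short time to come online.
    timeline = [(True, False), (False, False), (False, True), (True, True)]
    for utility_ok, generator_online in timeline:
        print(select_power_source(utility_ok, generator_online))
    # utility -> ups_battery -> generator -> utility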

UPS Failures Cause Most Data Center Outages

A recent study by the Ponemon Institute reported that UPS failures are the most common cause of data center downtime. UPS failures often result from poor electrical connections. Vibrations caused by mechanical systems such as rotating power supply fans and air conditioning systems can loosen connections over time. The best way to avoid UPS failures is to perform preventive maintenance at least twice a year. During maintenance, technicians should inspect all electrical connections, inspect and replace air filters, and replace capacitors and batteries as needed.

The UPS room is the noisiest place at our Shelton colocation data center. More than 100 lead-acid batteries are stacked from floor to ceiling in a large cinder-block room that was not designed to dampen sound waves. The noise comes from the air conditioning systems and fans that operate continuously to keep the batteries and electrical components cool.

New Battery Technology

Uninterruptible power supply technology is evolving. UPS batteries are now available with varied materials and performance characteristics. Most data centers still employ UPS systems with lead-acid batteries. Though these batteries are highly dependable and cost-effective, lithium-ion and nickel-cadmium batteries offer superior power density. They require less space, have longer runtimes, and can operate at higher temperatures. Building a new data center may justify investment in batteries incorporating newer materials. However, most data centers will keep their lead-acid batteries until more capacity is required and/or they run out of space.

Exposure to cyber security breaches is a growing problem. Historically, UPS systems were not network-attached and so were immune from cyber-attacks. Today many data centers have installed tools that monitor UPS systems continuously and issue alerts when problems occur. These tools, though valuable for increasing UPS availability, can also increase the risk of a breach if care is not taken to secure all outward-facing Internet ports.

AI Stresses UPS Systems

The growth of Artificial Intelligence impacts UPS systems. AI applications typically consume far more power than other workloads, so as a colocation data center takes on more AI clients, total power consumption rises. The way AI applications consume power also differs from other applications. With AI, power demand can increase dramatically at any time: “step load” increases, or instantaneous changes in power demand, are more frequent with AI workloads. UPS systems must be able to fulfill these peak power demands. As Artificial Intelligence applications proliferate in a data center, UPS and generator capacity should be increased to support the higher peak demand. Higher power demands will also put stress on the UPS system’s batteries and other components, reducing their effective service life.
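As a rough illustration of why step loads matter for capacity planning, consider the back-of-the-envelope sketch below. All figures are hypothetical, not CAPS measurements.

    # Hypothetical sizing example: AI workloads add large instantaneous
    # "step loads". All figures are illustrative.

    base_load_kw = 400    # steady-state IT load
    ai_step_kw = 150      # worst-case instantaneous jump from an AI job
    headroom = 1.25       # design margin so the UPS never runs at its limit

    peak_kw = base_load_kw + ai_step_kw
    required_ups_kw = peak_kw * headroom

    print(f"Peak demand: {peak_kw} kW")                            # 550 kW
    print(f"UPS capacity to provision: {required_ups_kw:.0f} kW")  # 688 kW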

Colocation data centers employ different systems to provide secure, dependable, and cost-effective hosting services. UPS systems may be the single most important system in a data center. Though these systems are based on mature technology, changes in other technologies such as AI are having a significant impact on the future of UPS.

Fire detection and suppression technology has been an essential component of the data center industry since its inception. The ability to identify and extinguish fires quickly with minimal impact on people and IT systems is vitally important wherever computers are installed.

Though technology has improved dramatically over the years, there are new challenges. The frequency of data center fires is likely to rise as power concentrations ramp up to meet the energy needs of AI. The effects of environmental regulations, lawsuits, and even patent expirations are all forcing change on the data center fire suppression industry.

In the beginning, portable fire extinguishers and water-based sprinkler systems were the primary tools used to combat data center fires. Halon systems were introduced in the 1960s and were the predominant fire suppression technology for about 30 years. Though effective at fighting fires, Halon was dangerous and damaging to the ozone layer. When deployed, it can cause suffocation by displacing the oxygen people need to breathe. The EPA banned the production of new Halon in 1994.

FM-200 Is a Highly Effective Clean Agent

Clean Agents were the next step in the evolution of fire suppressants. The first Clean Agent was FM-200, introduced by DuPont in 1994. FM-200 is highly effective at mitigating fires. It works by removing heat from the fire and inhibiting the chemical reactions necessary for combustion. FM-200 is stored as a liquefied gas under pressure, typically in cylinders. It evaporates upon release and forms a gas that quickly fills the protected area. It does not harm the ozone layer, and it is fast and effective at putting out fires while being safe for IT equipment and people. FM-200 leaves no residue, so cleanup after deployment is easy.

Novec 1230 is another popular Clean Agent fire suppressant, brought to market by 3M in 2004. It is considered safer for the environment than FM-200: it has a much lower Global Warming Potential (GWP), a short atmospheric lifetime, and low ozone depletion potential. However, Novec 1230 is more expensive than FM-200 and requires more storage tank space to protect the same area.

Environmental Regulations Impact Fire Suppressants

Both FM-200 and Novec 1230 are impacted by environmental regulations. The American Innovation and Manufacturing (AIM) Act was signed into law on December 27, 2020. It authorized the EPA to take actions to reduce the hydrofluorocarbons released into the atmosphere.

To comply with the new EPA regulations, FM-200 production began a “phase-down” in 2022 that mandates reduced output of the popular fire suppressant. By 2037, only 15% of the amount of FM-200 produced annually in 2021 may be brought to market.

Even though Novec 1230 does not contain hydrofluorocarbons, the AIM Act had an impact on it too. On December 20, 2022, 3M announced that it would discontinue all production of Novec 1230 by the end of 2025. The company made this decision based on the changing regulatory environment, the hundreds of lawsuits it was forced to defend, and the fact that its patents relating to Novec 1230 were about to expire.

Gradual Evolution

So, what does the future of data center fire suppression look like? Both FM-200 and Novec 1230 are being phased out, but this process will take a long time. Though future production of the two products will be restricted, the EPA has not mandated that they be taken out of service. Since both have shelf lives of 20 years or more, they can continue to be used for the near future in fire suppression systems that are already in service. In fact, the environmental impact of replacing existing FM-200 and Novec 1230 systems is worse than keeping those systems in place. This will give the data center fire detection and suppression industry time to develop the next generation of products.

Everybody is talking about ChatGPT, an artificial intelligence (AI) tool that has dominated the news for more than a year. This web-based application provides answers to questions, or prompts, in seconds. In many cases, ChatGPT’s responses are accurate and helpful.

However, ChatGPT must be scrutinized because its information can be incomplete, misleading, or incorrect. Like a know-it-all who corners you at a cocktail party, ChatGPT may provide an answer that sounds good but may not be factual.

We decided to put ChatGPT through its paces by asking some questions about a subject we know well – colocation. We asked the AI tool three questions. The responses came back immediately and were mostly helpful. We graded the responses on the following scale: an “A” for answers that were factually correct and helpful, a “B” for answers that were probably true but not well supported, and a “C” for answers that were incorrect.

The History of Colocation

The first question we asked was “What is the history of colocation?” ChatGPT did a respectable job answering this question. It provided a synopsis of the computer industry starting with mainframes in the 1950s. It asserted that colocation came into being in the mid-1990s, after PCs and the Internet became ubiquitous. By that time companies were looking for cost-effective and scalable solutions to house their servers, and colocation reduced their capital expenses. We gave ChatGPT an “A” for the quality of its answer.

Next, we asked “What is the history of colocation in the state of Connecticut?” The answer to this question was not as helpful. ChatGPT produced some well-written paragraphs that stated it did not have much additional information about Connecticut’s colocation history. It did note Connecticut’s history of innovation and the high concentration of financial services companies as factors contributing to the growth of colocation in our state. This response earned a grade of “B.”

Finally, we asked “Can you provide the names of the first companies to offer data center colocation services in Connecticut?” Here is where the AI tool came up short. First it made the excuse that it did not have access to any real-time data because its database only includes information collected before January 2022. Then it offered the names of three prominent colocation service providers that we know were not among the first in Connecticut.

ChatGPT acknowledged its answer was weak and admitted that it did not have any specific information about the first colocation service providers in Connecticut. It suggested referring to historical records or industry publications. Though it was somewhat forthcoming about how little it knew, we still believe it provided false information. It thus earned a “C” for its answer.

Colocation Pioneers in Connecticut

So which companies were the first to offer colocation services in Connecticut? We researched the websites of the colocation service providers in our state and determined the following. The first colocation companies in Connecticut began providing these IT hosting services in the mid-to-late nineties. These pioneers were all small or medium-sized service providers. One started in 1995, another in 1997, and a third in 1999. All three companies are still in the colocation business in Connecticut.

We learned the early history of CAPS from “Barky” (not his real name), a retired CAPS employee who visits our data center from time to time. He started his career at CAPS in 2000 and told us CAPS first offered colocation services that same year. The company was founded five years earlier to provide business continuity services: alternate workspaces where clients could operate their businesses if their primary data center was not functioning.

CAPS’ first colocation services were delivered to large financial organizations that were legally required to operate continuously. The penalties for lost transactions were severe, so these companies asked if CAPS could provide a “hot recovery” service. This led to the development of the colocation services that today constitute CAPS’ primary business.

Though the CAPS data center in Shelton employs a highly redundant design and advanced automated monitoring technology, our skilled human resources are essential to our extraordinary service delivery record. Let us look at the role people play in powering our facility.

Proper levels of power, temperature, humidity, security, and fire protection are all key to data center operations. Of these factors, a constant source of sufficient electrical power is the most critical requirement for successful operations. If there is an interruption in the flow of adequate electrical power, data centers immediately stop functioning.

Data center design includes several critical infrastructure components to deliver the power needed to assure operations. These include automatic transfer switches, redundant Uninterruptible Power Supply (UPS) systems, and redundant generator systems.

In addition to these critical infrastructure systems, skilled human resources are vitally important. On-site professionals monitor infrastructure systems, oversee ongoing maintenance, and plan and implement new client installations.

At our data center in Shelton, we are fortunate that Charley (not his real name) is a key part of our team. An electrical contractor whose company did the electrical work for major hospitals and office buildings around the state, he is a licensed E1 Master electrician who applies his extensive experience to make sure our data center is always up and running.

People Provide Additional Protection

Let us look at the kinds of things Charley does. Most days he gets to the data center early to inspect the various systems. A variety of automated tools monitor all critical infrastructure systems continuously, sending real-time alerts whenever a threshold is exceeded. Still, human oversight provides an important additional level of protection.

On a given day Charley may check the Computer Room Air Conditioning (CRAC) systems for alerts or signs of a leak. He may inspect the Voice and Data Room (VDR), the UPS room, or the diesel generator systems. He also walks through the data center and checks Power Distribution Units (PDUs) in client cabinets to gauge current flows and determine if any equipment is reporting an alarm.

On a regular basis Charley uses a voltmeter, a thermal sensor, and other tools to inspect the electric panels and PDUs in the data center. Elevated temperatures detected at the various connection points may indicate an electrical component is degrading and a technician should replace it.

If an infrastructure system requires maintenance, Charley oversees the work performed by the skilled technicians who support our systems. His experience is often an immense help when dealing with subcontractors. In many cases he has known these people for a long time and can help get a particular problem resolved quickly and cost-effectively.

When a new colocation client is about to move in, Charley once again has a role to play. Each client has unique power requirements. Do they require 20-amp or 30-amp service? Would they like A/B power or is a single power circuit sufficient? Is a special connector required for a client’s systems? Having a licensed electrician who can provide professional guidance and complete any required wiring is invaluable.

Decades Without a Power Outage

Connecticut is known as “The Land of Steady Habits.” This may be due to the traits associated with our Puritan forebears. People in our state are known for being industrious and doing things the right way. Though some might say that ongoing human surveillance of our data center is overkill, given all the automated monitoring tools we deploy, there is no dispute about the results achieved by Charley and the rest of our team. Our data center in Shelton has not experienced an unscheduled power outage in over 20 years.

There are more than 100 Managed Service Providers (MSPs) in Connecticut. The state also has 17 colocation data centers. Let us consider the changing ways MSPs and colocation providers collaborate to deliver critical IT infrastructure services to Connecticut businesses.

MSPs provide a variety of information technology consulting services to clients. Some MSPs specialize in specific technologies such as virtualization, cloud migration, or cyber security. Others are experts in applications such as Salesforce CRM software or Workday ERP solutions.

Most MSPs in Connecticut serve small and mid-sized businesses. This is not surprising since there are more than 360,000 small businesses in Connecticut. This represents over 99% of all companies in the state.

Frequently MSPs fulfill the role of the Chief Technology Officer at organizations that do not have a senior IT leader on staff. In these cases, the MSP handles everything from setting up user accounts, to configuring servers, to managing firewalls, to overseeing software updates.

MSP Services Track Technology

Over the years the services provided by MSPs have evolved as technologies have changed. For example, years ago small businesses needed help setting up their email systems. This often meant the MSP had to be knowledgeable about Microsoft Exchange and how to use this on-premises application to set up and manage mailboxes for corporate accounts.

Today, as companies have moved to a cloud-based Software-as-a-Service (SaaS) model for office applications, MSPs have pivoted to provide the support their clients now need. Though Microsoft 365 simplifies initial email configuration and relieves end-users of the on-going need to update software, MSPs are now kept busy with cloud connectivity, security, and data backup projects.

The relationship between MSPs and colocation data centers has also changed with new generations of technology. Before the cloud, MSPs counseled clients to move onsite IT systems to a colocation data center to improve performance, strengthen security, and increase flexibility. They may have procured the colocation services for resale to their clients, or they may have directed clients to a colocation provider they endorsed.

Recently, as cloud adoption has grown dramatically, MSPs have devoted increasing amounts of their attention to helping clients move to the cloud. Initially, most MSPs promoted one of the big three cloud providers (AWS, Microsoft Azure, or Google). As experience with the cloud grew, leading MSPs developed expertise configuring hybrid cloud solutions that incorporated services from more than one public cloud provider as well as private cloud and onsite infrastructure.

MSPs Have Renewed Interest in Colocation

As cloud use has become more widespread, MSPs and their clients have learned that there are workloads that are not well-suited for the cloud. They may not perform efficiently, or they may be much more expensive in the cloud than at a colocation data center or on premises. With this change, MSPs have once again adapted to serve their clients’ needs.

Recently, CAPS has seen a marked increase in the number of MSPs who are looking for colocation services, for themselves or their clients, for those workloads that are not well suited to the cloud.

Cabinets and cages are essential structural components in a colocation facility. They are where a client’s information technology systems are installed. Though decidedly “low tech,” these infrastructure elements provide a secure environment where air flow, temperature, and humidity levels may be controlled. Let us unlock some of the nuances of these enclosures as we consider their role in colocation.

Cabinets are normally included by the colocation service provider as part of the monthly service contract. The most common cabinets used in data centers are black ones that are 24 inches wide by 42 inches deep by 73.5 inches tall (42U). A variety of other sizes and colors are available.

Keys Provide Security

Cabinets include front and rear doors with keys or combination locks. CAPS keeps spare keys for all our colocation clients in case they forget to bring their key when they visit our data center.

Cabinets may be partitioned into multiple bays to provide cost-effective, secure access for more than one colocation tenant. Half cabinets are available with two doors for two separate clients, each bay having 20U capacity. Four-door 44U cabinets are also available, providing 11U for each of four clients.

Racks are a lower cost alternative to cabinets. Racks do not have doors with locks, so they are much less secure. As such, racks are usually only used to mount equipment within a cage. Cages include a locked door to restrict access to the racks within the cage.

Cages are custom-built using steel mesh. As such, they are more costly than cabinets for most requirements. If a client needs to colocate one or two cabinets, a cage will usually be too expensive. For those who want to place three or more cabinets in a colocation data center, a cage may be the best choice. The lead time before cutting over at a colocation facility will be longer for cages than for cabinets because it takes time to build a cage. The one-time installation cost will also be higher.

Though data centers usually supply the cabinets, racks, cages, and Power Distribution Units (PDUs) for a colocation engagement, the client provides the rest of the equipment. It is usually the client’s responsibility to install their servers, switches, firewalls, and other systems in the cabinets and racks provided by the colocation data center.

Client Responsibilities

Clients are responsible for connecting cables to their systems. Since cabling can impact airflow, and thus temperature control in cabinets, data centers may provide guidance about cabling practices.

Clear labeling of cables and equipment in a colocation cabinet can help when technicians are called upon to service equipment or to make a configuration change. This is important for a client’s service personnel, but it can be particularly helpful when Remote Hands services are invoked because the colocation support team may not be familiar with a client’s equipment and cabling layout.

Colocation tenants have some discretion about what they store in their cabinets and cages. Spare parts and cables are often kept within colocation space. It is important, however, to maintain proper airflow, avoid creating a fire hazard, and keep colocation cabinets clean and organized.

Cabinets and cages are the building blocks of colocation. Please contact CAPS if you are interested in relocating your IT systems to our secure data center in Shelton. We will be happy to give you a tour of the data center and point out examples of how cabinets and cages have been used by our clients.

Firewalls are critical security devices that filter Internet communications to block packets from blacklisted IP addresses and applications. Firewalls can be hardware-based or software-based. There are even virtual firewalls whose software is hosted on instances in the public cloud.

Firewalls are ubiquitous. These security devices are found in corporate data centers, colocation facilities, and as part of public cloud infrastructure. Firewall implementations vary across these different venues. In this article we will review the factors to consider when deploying firewalls at a colocation data center.

Firewalls Can Be a Single Point of Failure

Firewalls at colocation facilities typically are hardware-based network units. Their components usually include processors, SSD storage, communication ports, fans, and power supplies. They are manufactured with highly reliable electronic components and, aside from their fans, typically contain no moving parts, though some older firewalls may include HDD storage. Even so, they are subject to failure over time, especially when exposed to extreme temperatures or power spikes. Colocation facilities maintain optimal temperature, humidity, and power conditioning to extend the life of firewalls and other electronic systems.

Most firewalls in colocation data centers are configured as stand-alone units. As such, they constitute a single point of failure. If a firewall goes down, operations will be interrupted. For high availability (HA) requirements, redundant firewalls may be configured so that operations are not disrupted if one of the firewalls stops functioning.
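The active/passive logic behind an HA pair can be sketched in a few lines of Python. This toy version uses hypothetical names and omits the heartbeat links and state synchronization that real firewalls rely on; it only shows the failover decision itself.

    # Sketch of the active/passive decision behind an HA firewall pair.
    # Real firewalls use heartbeat links and session-state sync; the
    # names and health flags here are purely illustrative.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Firewall:
        name: str
        healthy: bool

    def active_unit(primary: Firewall, secondary: Firewall) -> Optional[Firewall]:
        """Pick which unit should carry traffic; None means a total outage."""
        if primary.healthy:
            return primary
        if secondary.healthy:
            return secondary      # failover: the secondary takes over
        return None               # both down: the single-point-of-failure case

    fw_a, fw_b = Firewall("fw-a", True), Firewall("fw-b", True)
    print(active_unit(fw_a, fw_b).name)   # fw-a
    fw_a.healthy = False
    print(active_unit(fw_a, fw_b).name)   # fw-b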

New firewall software is released as needed to provide operating system upgrades and bug fixes. Some colocation data centers offer enhanced services that may include performing firewall updates. If the data center does not provide firewall maintenance services, clients can remotely manage the update, or they can go to the colocation facility to manually perform the upgrade.

Many of the leading firewall manufacturers have products in use by CAPS’ colocation clients. Cisco (ASA and Meraki MX), Fortinet (FortiGate), Palo Alto Networks (Next-Generation PA Series), and SonicWall are among the vendors with products installed at the Shelton, CT data center. These firewalls vary in functionality, performance, and price. Higher-priced units typically offer more advanced packet filtering and faster processing to enable better security and higher data throughput.

Next Generation Firewalls (NGFW) monitor Application Layer (Layer 7) data to provide greater protection than basic firewalls, which monitor only Network Layer (Layer 3) and Transport Layer (Layer 4) data. NGFW products perform deep packet inspection (DPI) and check for malware signatures in real time to identify activity that resembles known malicious attacks.
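The difference is easy to see in a toy example. The blocklist and malware signature below are invented for illustration; real NGFW products rely on large, continuously updated rule and signature databases.

    # Toy contrast between a basic firewall rule (Layers 3/4: addresses
    # and ports) and an NGFW-style check (Layer 7: the payload itself).
    # The blocklist and signature are invented for illustration.

    BLOCKED_IPS = {"203.0.113.7"}           # drop by source address
    BLOCKED_PORTS = {23}                    # drop by destination port (telnet)
    MALWARE_SIGNATURES = [b"EVIL_PAYLOAD"]  # known-bad byte pattern

    def basic_filter(src_ip: str, dst_port: int) -> bool:
        """Layer 3/4 decision: allow or drop on headers alone."""
        return src_ip not in BLOCKED_IPS and dst_port not in BLOCKED_PORTS

    def ngfw_filter(src_ip: str, dst_port: int, payload: bytes) -> bool:
        """Adds Layer 7 deep packet inspection on top of the basic checks."""
        if not basic_filter(src_ip, dst_port):
            return False
        return not any(sig in payload for sig in MALWARE_SIGNATURES)

    # A packet with clean headers but a malicious payload passes the
    # basic filter yet is caught by deep packet inspection:
    print(basic_filter("198.51.100.5", 443))                        # True
    print(ngfw_filter("198.51.100.5", 443, b"...EVIL_PAYLOAD..."))  # False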

Colocation Providers Offer Different Firewall Services

Colocation service providers offer various service options regarding firewalls. Many colocation data centers do not get involved in managing client firewalls; the responsibility for configuring and maintaining firewalls remains with the client. Though colocation staff may power-cycle a firewall as part of their Remote Hands services, they will not manage it.

Those offering managed firewall services assume responsibility for configuring and maintaining a client’s firewalls. The decision to outsource critical security functions should only be made after thoroughly vetting the colocation service provider to make sure they have the knowledge and commitment to provide the service required.

Firewalls are an important security component whether your IT systems are hosted on-premises, at a colocation facility, or in the public cloud. If colocation is an option you are considering, we hope you will contact the team at CAPS.

Internet connectivity is an essential component of colocation services. Over the past two decades much has changed regarding colocation bandwidth. Prices have dropped dramatically, and available bandwidth has risen. In addition to the original model, which charges a fixed price for a guaranteed data rate, some colocation providers now offer variable pricing plans based on the amount of data transferred each month.

In the early days, when CAPS first opened its data center in Shelton, most clients had limited bandwidth. Twenty years ago, affordable Internet circuits ranged from 1 to 10 Mbps. Today, as connectivity costs have plummeted, it is common for colocation clients to employ 100 Mbps, 1 Gbps, 10 Gbps, or even higher capacity circuits.

Redundant Bandwidth Available

Most colocation clients at CAPS purchase Internet bandwidth directly from CAPS at a fixed rate per month. They get a committed data rate and an automatic failover Internet circuit for backup. Client Internet traffic is provisioned over redundant infrastructure and monitored continuously. If packet losses or retransmission rates exceed an alert threshold, traffic automatically transfers to the backup circuit using Border Gateway Protocol (BGP) routing.
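The threshold logic behind such a failover can be sketched as follows. In practice BGP handles the actual rerouting between carriers; this illustrative snippet, with invented values, only shows the test that would trigger it.

    # Illustrative sketch of loss-triggered failover. BGP performs the
    # real rerouting; this only shows the threshold test. Values invented.

    LOSS_THRESHOLD = 0.02    # fail over if more than 2% of probes are lost

    def packet_loss(sent: int, received: int) -> float:
        return (sent - received) / sent

    def choose_circuit(primary_loss: float) -> str:
        return "backup" if primary_loss > LOSS_THRESHOLD else "primary"

    samples = [(100, 100), (100, 99), (100, 95)]  # (probes sent, answered)
    for sent, received in samples:
        loss = packet_loss(sent, received)
        print(f"loss={loss:.1%} -> route via {choose_circuit(loss)} circuit")
    # 0.0% and 1.0% stay on the primary; 5.0% triggers the backup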

Circuits provisioned from CAPS provide symmetric bandwidth. This means the data rate is the same on the transmit and receive paths of the circuit and will be at the contracted rate, give or take a small percentage. Though consumer Internet service providers also advertise symmetric bandwidth, they are more likely to deliver lower transmit and receive rates at times. This is because Internet service providers typically oversubscribe their bandwidth across customers. Though they may advertise a 1 Gbps circuit, the actual available bandwidth at any given moment may be half this rate or less, depending on the data transmitted by all clients sharing the circuit.
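A quick back-of-the-envelope example shows how oversubscription works. The figures below are invented for illustration, not numbers from any particular provider.

    # Hypothetical oversubscription math. A carrier can sell the same
    # upstream capacity to many subscribers because they rarely all
    # transmit at full rate at the same time.

    upstream_gbps = 10      # capacity the provider actually has
    subscribers = 40        # each sold an "up to 1 Gbps" circuit
    advertised_gbps = 1

    ratio = subscribers * advertised_gbps / upstream_gbps
    worst_case_mbps = upstream_gbps / subscribers * 1000

    print(f"Oversubscription ratio: {ratio:.0f}:1")       # 4:1
    print(f"Per-subscriber rate if all transmit at once: "
          f"{worst_case_mbps:.0f} Mbps")                  # 250 Mbps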

Clients May Save Money Purchasing Bandwidth Directly

Though CAPS sells Internet connectivity as part of its colocation service, we also allow clients to purchase their Internet bandwidth directly from an Internet service provider. Clients who purchase a circuit from an Internet service provider will not be able to rely on our backup failover circuit for redundancy, but they may save money.

The Internet service provider market is extremely competitive. Carriers often offer great deals for Internet bandwidth. Recently some of our clients have decided to order an Internet circuit directly from a provider offering an attractive monthly rate. They recognize they may not actually be able to transmit and receive data at the full rate at all times. Even if they only get half that rate, it may still fulfill their requirements while saving them money.

Clients who opt to purchase low-priced Internet circuits take advantage of the fact that some colocation providers, like CAPS, allow them to procure their bandwidth directly. The flexibility to choose from a variety of suppliers makes it possible for clients to save money for those Internet services where redundancy is not required. We expect this new trend to continue as clients strive to save money while providing the data connectivity their organizations require.

Email has dramatically increased worker productivity. Ever since Ray Tomlinson sent the first networked email in 1971 during the development of ARPANET, email has grown exponentially. Today, with over 3 billion email user accounts worldwide, it is arguably the most popular software application in existence.

Email is Important but Data Backup Rates Are Low

However, organizations that regularly execute complete email data backups are few and far between. The unique characteristics of email and the low perceived cost of lost email explain the minimal backup rate. Still, increasing legal and regulatory exposure may change that in the future.

It has been nearly 20 years since Google’s Gmail was launched. Today, adoption has exploded to about 1.5 billion accounts. This represents roughly 49% of the current email market. Though Google is primarily used by individuals, due to its ease of use and free price tag, small businesses and even some large organizations rely on Gmail.

The BlackBerry, released in 2002, was the first mobile device to include the ability to send and receive email. Now, Apple’s iCloud Mail, delivered through mobile devices as well as notebooks, laptops, and desktops, has about 850 million users, which represents about 28% of the email market.

In third place, in terms of market share, with about 400 million accounts, is Microsoft’s Outlook/M365 service. M365 is one of the world’s most popular Software as a Service (SaaS) applications. Microsoft leveraged its dominance in the corporate market with its Word, Excel, and PowerPoint franchise to pivot from its boxed Outlook email product to the cloud based M365 office productivity solution. This strategy has helped Microsoft’s Azure Public Cloud close the gap in competing with Amazon’s AWS service.

Email Services Provide Limited Backup and Recovery

Many email users depend on their email service provider to retain email copies. For example, it is estimated that 70% of M365 email clients rely on the default email retention provided by Microsoft. The percent of subscribers who use the standard retention provided by Google and Apple is even higher.

Most email services store emails for a set time or until a certain amount of storage has been used. When either threshold is exceeded, older emails are deleted. Unless emails are copied to long-term storage, they will not be retrievable after the retention period expires or the storage capacity is exceeded.
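The two retention triggers can be sketched as follows. The thresholds below are hypothetical; each provider sets its own.

    # Sketch of the two retention triggers described above: emails age out
    # after a fixed period, or the oldest are dropped once a storage quota
    # is exceeded. Thresholds are hypothetical.

    from datetime import datetime, timedelta

    RETENTION = timedelta(days=365)    # time-based threshold
    QUOTA_BYTES = 15 * 10**9           # storage-based threshold (15 GB)

    def emails_to_delete(emails, now):
        """emails: list of (received_at, size_bytes) tuples, oldest first."""
        doomed = [e for e in emails if now - e[0] > RETENTION]  # aged out
        kept = [e for e in emails if e not in doomed]
        used = sum(size for _, size in kept)
        for e in kept:                 # still over quota: drop oldest first
            if used <= QUOTA_BYTES:
                break
            doomed.append(e)
            used -= e[1]
        return doomed

    now = datetime(2024, 6, 1)
    inbox = [(datetime(2022, 1, 10), 2 * 10**9),   # aged out
             (datetime(2023, 9, 1), 9 * 10**9),
             (datetime(2024, 2, 1), 8 * 10**9)]    # 9 + 8 GB exceeds quota
    print(len(emails_to_delete(inbox, now)))       # 2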

Without a proper backup, it is not possible to recover emails that are accidentally deleted or to retrieve earlier versions of an email that has been edited over time. Proper email backup ensures both can be recovered.

Emails Can Be Evidence in Legal Disputes

Companies are legally obligated to keep archives of all business-critical and sensitive information, and an estimated 75% of such information resides in company emails. As the popularity of email has increased, so has its use in settling legal matters. Emails may be used to establish legal contracts, and they may be brought into court as evidence. Electronic Discovery, or eDiscovery, is a growing part of many law practices. Therefore, companies of all sizes must develop an appropriate data retention plan to assure they can retrieve the documents they need, including emails, to protect their interests in court.

Regulatory compliance also requires, in a growing number of instances, the ability to retrieve email. For example, the Sarbanes-Oxley Act of 2002 requires all publicly held companies, and the accountants they employ, to store emails for at least five years.

Email differs from other applications with respect to data backup and recovery. Though it may be very important to be able to retrieve an archived email to respond to a corporate, regulatory, or legal request, a quick recovery may not be necessary. Unlike a database application that must be continuously updated for a company’s operations to function properly, the response to a request for an archived email can take days or weeks without causing a lot of problems.

CAPS uses Veeam’s powerful data backup and recovery tools to provide long term offsite data storage and cost-effective recovery of emails and other critical data. Please contact us for a free review of your current email backup practices.