
Buy or Build: Why Colocation Wins

The question of whether to build a data center or purchase space from a colocation partner continues to perplex IT stakeholders. While each company needs to evaluate its individual needs, there is a growing case for choosing colocation simply because it makes more sense from both an infrastructure and an investment perspective.

Some of the top reasons that IT leaders continue to choose colocation over building new infrastructure are:
• The colocation partner can support the density needed for companies’ increasing data output.
• The colocation partner can provide the scale and availability companies require.
• Colocation makes more sense financially.

More Data Means More Density

Today’s workers use an increasing number of devices, tools, and networks to do their job. This has the following impact on IT resources:
• Increased device usage and network traffic exponentially increase the amount of data businesses handle on a daily basis.
• Workers have higher expectations about readily available and top-speed connections and tools.

The high-density computing available through a data center is better able to meet these demands of the modern workforce.

Redundancy and Scale Are Expensive

Due to the large amount of data that organizations manage nowadays, the data center’s infrastructure needs to be equipped with numerous physical and technological features. This translates into budget-busting necessities such as physical security, redundant equipment, cooling and environmental controls, and distributed power supplies. Colocation partners can provide the scaled resources required to support increasingly high-density computing more efficiently and inexpensively.

Colocation is a Smarter Investment

Even for organizations that have a higher IT budget and can support an internal data center infrastructure, colocation should still be carefully considered for the following reasons:
• The money spent building a data center could instead buy a more secure and powerful setup in a colocation space.
• Data centers specialize in data availability and security; organizations that want to provide the best connectivity to their workers should consider the quality services offered by colocation partners.
• The human resources required to operate an on-site data center could be better allocated to other, more directly revenue-generating initiatives.

From the perspective of data management, computing resources, and financial practicality, colocation is often the better option for today’s businesses. As the case for colocation continues to grow, more IT professionals are beginning to see that the advantages of colocation could far outweigh those of building an internal infrastructure.


The 3 Advantages Colocation Has Over Cloud Hosting

When making a long-term investment in a hosting solution, many decision-makers still struggle with the choice of colocation vs. cloud hosting. While each option has its advantages, numerous factors must be considered in order to make the best decision.

There is no doubt that cloud hosting is the less expensive option in the short term, but strategic, forward-thinking decision-makers should carefully consider whether the cheaper option today will result in greater expense down the road. Beyond its long-term financial benefits, colocation also offers the ability to customize technical and security infrastructure and to retain control over the environment.

Make a Long-Term Investment

As a business grows, its needs will evolve and IT professionals will want to adjust infrastructure accordingly. For businesses that choose cloud hosting, the rented servers may be adequate for their initial needs. Colocation, however, satisfies both the present and future needs of a business.

With colocation, businesses can consider how their data management may change as their data load grows. Businesses don’t need to constantly pay for additional scale or continue to use the same hardware used by the cloud hosting provider, which may not meet evolving needs. IT can add to the infrastructure and install additional firewalls, security apps, and other solutions as needed. 
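The trade-off described above can be made concrete with a simple cumulative-cost sketch. All figures below are hypothetical placeholders, not real market prices:

```python
def cumulative_cost(upfront, monthly, months):
    """Total spend on a hosting option after a given number of months."""
    return upfront + monthly * months

# Hypothetical figures for illustration only.
cloud = cumulative_cost(upfront=0, monthly=2_000, months=36)
colo = cumulative_cost(upfront=30_000, monthly=800, months=36)

print(f"Cloud over 3 years: ${cloud:,}")       # $72,000
print(f"Colocation over 3 years: ${colo:,}")   # $58,800
```

With these placeholder numbers, the cheaper short-term option becomes the more expensive one over a three-year horizon, which is exactly the calculation decision-makers should run with their own quotes.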

Customize Security Needs

Colocation provides more flexibility than cloud hosting when it comes to security and privacy needs. While cloud hosting is still a secure option, colocation eliminates the additional risks that come with having the data center team perform upgrades, troubleshoot hardware issues, and otherwise handle the environment. Security risks are introduced simply by exposing the environment to more people. By retaining ownership of environment maintenance, businesses gain an additional layer of security and peace of mind with colocation.

Retain Control

When it comes to data in the cloud, control is key. The ability to control the specific servers, hardware, applications, and vendors is very important for a business that values quality and security, and this is only available through colocation. By choosing, purchasing, and owning all pieces of the infrastructure, users can ensure that they are using only the technology they choose — while still reaping the broader benefits of the data center provider.

For businesses that value a good long-term investment, strong security, and the ability to retain control over hardware type and management, colocation is the wiser choice.

Overcome the Constraints of Continuous Delivery

One of the most pressing issues that software development organizations strive to overcome is the challenge of providing users with a quality product in a quick and cost-effective manner. In order to solve this problem and better satisfy their customers, a growing number of companies are adopting a continuous delivery model for software development and release.

The adoption of continuous delivery does not come without constraints, but these obstacles can be overcome with some strategic changes at the organizational and process level. By adopting a continuous delivery model, organizations are better able to satisfy customer needs, stay ahead of the competition, and provide the most innovative solutions.

Current Delivery Constraints

The constraints to continuous delivery are spread throughout the organization and penetrate to the project, program, and enterprise level. In order to understand the benefits of continuous delivery, it is helpful to explore some of the current people- and process-related constraints in providing quality solutions to customers at an accelerated pace.

Three specific challenges that delay product releases and prevent continuous delivery are related to process, infrastructure, and governance.

Lengthy Processes: Manual testing, task-based checklists for release management, and other project management-related delays cause bottlenecks that prevent progress.
Infrastructure: Software organizations using physical servers and performing manual configurations face cost- and labor-related constraints that take time and effort away from the core project focus (moving the solution toward release).
Governance: Releases are delayed while awaiting approval from committees or senior executives.

Organizations can significantly mitigate these constraints by adopting a few core continuous delivery practices.

Core Focus Areas for Continuous Delivery

While successful adoption requires a high-level examination of all processes across the organization, a few best practices stand out when moving toward a continuous delivery methodology.

Agility: While continuous delivery doesn’t necessarily require an agile development organization, it does require the software team to strategically focus on simultaneous, incremental builds. Optimize efficiency by ensuring that one specific group or process is not holding up an entire release cycle.
Investment: New software tools and technologies are required in order to overcome some of the constraints presented by manual processes and infrastructure. Replacing manual testing roles with automated tools and upgrading infrastructure to virtualized platforms are two ways to make an investment in continuous delivery.
Automation: The QA role is critical in any software organization, but it also causes one of the most significant delays in the release cycle. Save time by replacing manual testing teams with automated suites for unit, acceptance, regression, and functional testing. Automation also helps speed up infrastructure-related delays; leverage automated provisioning tools and workstations wherever possible.
Release strategy: Continuous delivery requires strategic re-structuring of the entire release strategy. Software development teams must design and execute software development processes in a way that takes into account known time constraints and prioritizes both quality and speed. Stakeholders can start this process by first adopting a continuous delivery perspective, and secondly by using this perspective to examine and reconsider the existing release process.
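To make the automation point above concrete, here is a minimal sketch of an automated test that replaces a manual QA pass. The `apply_discount` function is a hypothetical stand-in for real business logic; the tests run unattended under pytest or any similar runner:

```python
def apply_discount(price, percent):
    """Hypothetical business logic under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_typical_discount():
    # Regression-style check: known input, known output.
    assert apply_discount(100.0, 25) == 75.0

def test_rejects_invalid_percent():
    # Acceptance-style check: invalid input is refused, not silently accepted.
    try:
        apply_discount(100.0, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

Once checks like these run on every commit, the release cycle no longer waits for a manual test pass.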

Continuous delivery benefits both development teams and the organization as a whole by encouraging efficiency, enabling adoption of the latest technologies, and providing customers with quicker access to the solutions they need.

Four Ways That Colocation Supports High-Density Data Centers

The data burden of today’s networked world is increasingly heavy as more backend power is required to support critical business functions. While data center centralization and consolidation efforts through means such as virtualization are helpful in meeting these increased demands, organizations with high-density data center setups are challenged to constantly provide the infrastructure to support ever-evolving data needs.

Data center colocation can help IT departments satisfy the data demand by powering high-density setups in an efficient and streamlined manner.

Here are four reasons why colocation is the right solution for some high-density setups:

Increased capacity

Colocation vendors design their facilities and architecture to accommodate very large workloads, and this requires enormous power. Colocation providers are uniquely positioned to regularly and consistently provide the amount of power needed to support these workloads. This is partly because colocation vendors often have a network of interconnected data centers to help ensure that each customer gets the power and space needed without sacrificing quality or service.

Streamlined delivery

Even for those organizations that may have access to large amounts of energy to power their high-density setups, it is unlikely that they have the ability to streamline the delivery of that energy in the appropriate manner.

Colocation providers use the most advanced power delivery solutions to provide optimal energy to each rack, and this eliminates the small energy drain that can occur with other power delivery systems. The sophisticated power technology available through colocation partners provides unprecedented levels of performance for high-density setups.

Better connections

Network latency is a primary concern for IT departments as more data moves through less network space.

Colocation providers build their infrastructure to support high-frequency exchanges with minimal latency. The quality and automation of connections resulting from this setup are only available through a colocation provider.

Diverse power sources

One of the biggest limitations in hosting a high-density system within the internal infrastructure of an organization is the inability to support the required power redundancy. For example, a single backup generator may not be sufficient to support the high-density setup if the primary energy source fails.

Colocation partners solve this problem by diversifying their power streams and providing multiple levels of redundancy across data centers. Colocation partners help ensure consistent access to power by providing not just backup generators but also backup power sources from various utility vendors, local power resources, and on-site streams of power.
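The value of diverse power streams can be framed as simple probability. The failure rates below are illustrative assumptions, and treating the sources as independent is itself a simplification (real power sources can share failure modes):

```python
def combined_failure_probability(failure_probs):
    """Probability that every redundant power source fails at once,
    assuming failures are independent (a simplifying assumption)."""
    result = 1.0
    for p in failure_probs:
        result *= p
    return result

# Hypothetical per-source failure probabilities during an outage window.
single_backup = combined_failure_probability([0.01])
diverse_sources = combined_failure_probability([0.01, 0.01, 0.01])

print(single_backup)    # one generator: a 1-in-100 residual risk
print(diverse_sources)  # three independent sources: roughly 1-in-a-million
```

Even with these made-up numbers, the point holds: each additional independent power stream multiplies down the chance of a total outage.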

Looking ahead

As IT organizations continue to move toward high-density computing setups, data center colocation is becoming an increasingly crucial component of the modern IT infrastructure. Colocation partners are uniquely equipped to provide the power capacity, quality, delivery, and diversity necessary for a high-density data center setup.

Steps to Save Money With Colocation Services

By now, most people either know of or have had experience with cloud technology. It is certainly revolutionizing the way businesses operate in the 21st century. Its applications are nearly unlimited, but embracing all of its possibilities can be difficult without the benefit of data center colocation.

Colocation plans offer advanced solutions for network burdens by establishing a centralized location for transmitting and receiving the constant stream of information from transactions, social media, and internal and vendor communications. Many businesses find colocation services can solve their own data center limitations and problem points.

But no matter the type of business considering colocation, the most important question in any capitalist endeavor is how to incorporate methodologies that solve workplace inefficiencies and reduce operational costs without overspending on the solution. And that is never easy.

A Big Investment

Web-based businesses and those that have already purchased servers to store and process their data are ahead of the curve, but protecting that investment is important. It would be madness to purchase expensive, high-tech equipment and then allow it to transform into a useless pile of dust-covered circuitry.

But oftentimes, limited facility space relegates this crucial apparatus to enclosed, cramped areas with improper environmental controls, inadequate power supplies, and other influences that are harmful to servers. The resources simply aren’t available to construct or purchase a state-of-the-art housing complex to meet data center needs. Moreover, it’s not always practical to purchase additional servers, despite frustrating disruptions in network functionality.

Colocation to the Rescue

Colocation offers business network solutions by storing privately owned servers in a secure facility that features plenty of room to “breathe” and precise environmental controls that regulate temperature and humidity.

In addition, most provider plans will include server lease options, which allow businesses to receive the benefits of disaster recovery, safe storage, continuity planning, and all of the other advanced IT functionality of colocation without having to purchase server equipment.

Colocation Comparisons

Every business decision requires proper analysis, and choosing a colocation provider is no different. Before initiating contact, consider the following to create accurate cost comparisons.

• What are the current problem areas?

• What is the colocation goal?

• Will it be better to purchase or lease servers?

• How much storage space is really needed?

• How long will it take to make changes to storage capacity?

Once internal goals have been outlined, here’s how to find the most cost-effective colocation plan with potential providers:

• Find out how much it will cost to transfer servers to another provider or back to the business facility before signing with a provider.

• Determine whether or not IT staff can manage business servers remotely. If leasing, can the team take over after the initial setup?

• Choose add-ons wisely because services like back-up and extended support can add up fast.

• Consider sharing rack space with another business. Most providers will offer this option, but don’t forget to factor in anticipated growth. If a half rack will suffice, go for it! You’ll save money. But if analysis projects substantial growth in the next few years, prepare for that contingency.

• Where is the provider’s physical location? It can make a difference. Rural areas offer lower costs but sometimes have bandwidth limitations and monitoring and compliance difficulties. Metropolitan locations are more expensive but offer convenience and heightened security.
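The half-rack-versus-growth question above reduces to straightforward arithmetic once quotes are in hand. The rates and fees below are hypothetical placeholders:

```python
def rack_plan_cost(monthly_rate, months, migration_fee=0):
    """Total cost of a colocation plan over part of a contract term."""
    return monthly_rate * months + migration_fee

# Hypothetical plan: start with a half rack, then pay to migrate to a
# full rack after a year, versus leasing a full rack from day one.
half_then_full = rack_plan_cost(600, 12) + rack_plan_cost(1_000, 24, migration_fee=1_500)
full_from_start = rack_plan_cost(1_000, 36)

print(f"Half rack, then upgrade: ${half_then_full:,}")   # $32,700
print(f"Full rack from day one: ${full_from_start:,}")   # $36,000
```

With these placeholder rates the half rack still wins, but a larger migration fee or faster growth can flip the answer, which is why the projection matters.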

The answers to these initial queries can help direct the course of colocation and contain the costs of the service.



Data Center Connectivity: 5 Things to Consider in Colocation Plans

Colocation services help businesses keep pace with demanding innovations in IT technology. New techniques in data transfer can make finding appropriate network solutions difficult, but colocation partnerships provide the key to getting information to end users efficiently.

A good provider will help companies mitigate challenges from cloud computing, video streaming, advanced web hosting and other activities required by today’s global marketplace. However, these services have the potential to rack up extraneous costs. Obviously, a major goal of implementing new solutions is to produce ROI, so it’s simply futile if the solution designed to fix a problem ends up further complicating things.

This is where data center connectivity can play a crucial role and increase efficiency in colocation plans. Here are five things to consider:
1. Cloud Computing. The reception and integration of data from various sources, both on physical servers and in cloud data centers, often makes cloud computing vexing. Data from locations around the globe can be hard to manage due to time delays, which hurt performance.

But colocation plans are perfect for cloud-related data because they establish a centralized location for data reception and transmission. When coupled with advanced connectivity plans that merge high performance interconnects and strategic operator networks, businesses can seamlessly integrate data.

2. Big Data. Connectivity is vital for solving the network burdens from analyzing massive amounts of data in real time or storing it for later. A colocation plan includes the necessary interconnects and data center configuration to deliver the appropriate bandwidth for supporting big data.

3. Video Performance. Video performance is highly affected by latency. Video data packets consume a significant share of a network’s bandwidth, making streams susceptible to dropped packets.

But connectivity options designed for colocation make it easy to resolve video performance problems. They feature intelligent routing and high-bandwidth network resources that minimize latency issues. These networks are the choice of many automated traders, for whom milliseconds are incredibly important.

4. Routing Options. Getting data closer to the client base is becoming more difficult as the global marketplace expands. Connectivity strategies combined with tactical operator networks provide superior interconnection service, so choose wisely when considering a colocation provider. The regional operator network availability will have a crucial impact on transmission performances.

5. Elasticity. Intelligent colocation options offer flexibility for network systems instead of just increased bandwidth. They give businesses workable solutions and the advanced resources needed to meet upcoming tech needs without having to resort to expensive system purchases.

Before choosing a colocation provider, be sure to consider the data center connectivity solutions included in the service. Colocation can offer the applicable IT solutions for businesses to improve their network performance, and the right data center connectivity will help establish better service and lower operational costs.

Reinventing the CISO in a Changing IT Security Landscape

The cyber world is on alert following recent high-profile security breaches and hacking incidents. Weak IT systems used to automate business processes, combined with the growth of online trading, have opened the floodgates for hackers.

Historically, the role of the chief information security officer (CISO) focused on all things IT. CISOs spent their time selecting, deploying and overseeing IT solutions. Some roles were even comparable to today’s IT security administrator jobs — guarding firewalls, negotiating with software vendors over antivirus solutions, scanning and clearing viruses from infected computer devices, and more. Many duties were completed to simply keep regulators at bay.

Multi-dimensional roles

Today, the CISO is a part of a much bigger picture — in security and in business. What can a successful information security officer bring to the corporate table?

Business Enabler  

The new CISO is not just an IT steward, but also a business enabler. The role now warrants a seat in the C-suite, a place in boardrooms, and a part in IT decision-making with regard to systems availability and business performance. The CISO must understand business processes at all levels to be able to integrate the right machines and technology.

The Missing Link

In many organizations, IT and business still can’t see eye to eye. With IT security now a priority, the new role of the CISO links the executive hierarchy to the individual business units. This new role calls for a second link – the bottom link – where more proactive collaboration between IT analysts and business managers can happen in each department.

Risk Manager

As an advocate for security, the new CISO is tasked as a risk manager. The role now requires identifying vectors of vulnerability and weakness in the security system and providing immediate solutions to mitigate risks. The CISO and team maintain access logs to establish traceable audit trails for easier determination of accountability. The CISO is likewise expected to explore opportunities to deliver enterprise IT systems and networks in a secure manner that is compliant with applicable regulations.

Influencer, Protector, Responder

These three new roles of the CISO were identified in a recent IBM survey. It revealed that organizations are looking at security with a holistic approach and are elevating the CISO to a more strategic position. Influencers are characterized as those who are confident and strategically prepared to influence business performance. Protectors are those with a strategic plan to prioritize security. And Responders are considered those who focus largely on protection and compliance.

Fundamental skills and competencies

Executives with a computer science or computer engineering background and experience in IT security at large enterprises are good candidates. Cybersecurity solutions product specialists and computer degree graduates with corporate IT experience can also fit into the role.

A deep technical background and experience is a must, but business acumen is another important consideration. CISOs must integrate IT into business to improve the performance of people, machines, processes, and the bottom line.

Managing IT Risk: The Special Case of Privileged Users

Acts of fraud in the workplace cost companies around $145,000 per incident, according to a report from the Association of Certified Fraud Examiners. From theft and security breaches to tarnished reputations, businesses have more than enough reason to take swift, strong action.

Internal perpetrators and leaks cause much of the damage yet go largely unchecked, while security budgets focus more on external threats. In fact, less than half of IT departments dedicate funds to combat internal threats, according to a recent Raytheon-commissioned survey.


Privileged Users: Your Best Assets and Highest Risks

Every company needs privileged users with greater access to the most valuable and sensitive IT resources and restricted company info. Privileged users include many of your best and most important employees, but their accounts need special security attention for several reasons:

  • High-level IT professionals need access to data and information too valuable to go unprotected.
  • These privileged users make attractive targets for outsiders to infiltrate.
  • These employees are often skilled and knowledgeable in the ways of hiding their fraudulent behavior.
  • Privileged users may have multiple user accounts on the network or multiple employees may have access to the same administrator accounts. IT departments need the power to accurately attribute user actions.
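The attribution problem in the last bullet can be sketched as a checkout log for a shared account: record who holds the credential and when, and every action maps back to one person. All names and times here are hypothetical:

```python
from datetime import datetime

# Hypothetical checkout log for a shared "admin" account:
# (employee, checkout start, checkout end).
checkouts = [
    ("alice", datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 12, 0)),
    ("bob",   datetime(2024, 5, 1, 13, 0), datetime(2024, 5, 1, 17, 0)),
]

def attribute_action(action_time, checkouts):
    """Return the employee holding the shared account when an action occurred."""
    for user, start, end in checkouts:
        if start <= action_time <= end:
            return user
    return None  # unattributable -- itself a red flag worth alerting on

print(attribute_action(datetime(2024, 5, 1, 10, 30), checkouts))  # alice
```

Commercial privileged account management tools do far more than this, but the core idea is the same: no shared credential should ever produce an action that cannot be traced to an individual.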

Why We All Must Worry About Internal Threats

Failing to protect against internal risks presents a major problem for any company. Malicious intent does not even need to be involved. Privileged users with high-level access have the power to topple networks and steal or destroy data, but even a perfectly loyal employee can become a security risk.

Without proper monitoring and security protocols, those with high security privileges can become the next target of an attack. External attackers have more to gain from users with broader access: any employee may fall victim to a malicious attack, but a compromised privileged user exposes far more. Thus, organizations must grapple with how to tactfully and effectively prevent and address internal threats.

Handling the Security Risks of Privileged Users

IT leaders should devise a plan to combat privileged user threats, beginning with a common sense view of behavior in the workplace. You need to know who has access, and you need to know who has taken action when something has gone wrong.

Monitoring can help prevent internal damage, but policies should be clearly defined. The risk of false alarms is what creates the need for high-quality security tools and auditing: no one enjoys a security scare that leads to a top employee being wrongly accused of theft. Companies need video evidence and reliable user data to avoid this problem.

The latest technologies for internal threat protection include privileged account management (PAM) tools. Used in conjunction with clear policies and monitoring systems that help analyze the context of user actions and the intent of possible attacks, they can greatly diminish the billion-dollar problem of internal fraud.

More Than a Handful: Seven Prime Risks of the Cloud

It’s never wise to jump into something new without learning about the risks involved. Cloud computing offers a wealth of potential for both empowering businesses and cutting costs, but because it is a newer technology, IT leaders must thoroughly understand the cloud’s inherent and specific risks in order to prepare their companies for successful deployment.

Assessing the Risks

Risk assessment of any cloud solution should look at the many different varieties of services in the cloud, the differences in providers, and the needs of the given industry.

Decision-makers should consider these specific risk areas:


System Compatibility

Cloud implementation must be carefully planned before deployment. If it’s not, existing systems that cannot function and communicate with cloud services can cause frustrating or devastating problems. Without ensuring full compatibility, businesses could face unforeseen costs for reverting to other systems, fixing the compatibility issues, and losing time that could be spent elsewhere.


Regulatory Compliance

Any compliance issues in an organization’s industry or location must be addressed when deploying cloud services. This especially poses an issue when using cloud services located in another country or for organizations with very specific and strict compliance requirements. There are cloud solutions to fit almost all compliance standards, but the issues should be addressed in the planning phase.


Security Vulnerabilities

Virtual environments create security vulnerability risks for two main reasons: more data is being transmitted across networks and platforms, and organizations are housing data in a new physical location. A complete security assessment should precede cloud deployment, and a rigorous security management program should be in place over time. Communication between the company and the service provider is vital to monitoring ongoing security risk.

Reliable Performance

Some downtime will occur with any service, but the right cloud provider for a given organization will work to develop guidelines and service schedules that do not interfere with regular business. Service should be reliable and supported to mitigate any issue and provide near-perfect uptime.
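“Near-perfect uptime” is usually quantified as a percentage SLA, and it is worth translating those percentages into minutes before signing. A quick sketch:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def allowed_downtime_minutes(uptime_percent):
    """Minutes of downtime per year permitted by an uptime SLA."""
    return MINUTES_PER_YEAR * (1 - uptime_percent / 100)

for sla in (99.0, 99.9, 99.99):
    minutes = allowed_downtime_minutes(sla)
    print(f"{sla}% uptime allows about {minutes:.0f} minutes of downtime per year")
```

The jump from 99% to 99.99% is the difference between days and minutes of outage per year, which is why the exact SLA figure deserves scrutiny.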


Provider Viability

If your cloud provider changes its service offerings significantly or goes bankrupt, will your business be able to move forward? Risk management for cloud services should stress the importance of portability wherever possible, or a plan for easing the burden of your next big switch. Better yet, the cloud provider should appear positioned to provide service indefinitely.

Vetting Your Options

When you use a relatively new technology like cloud solutions, it can be difficult to assess whether you’re getting your money’s worth and whether your services are in line with what the market has to offer. Until cloud technologies and the market have matured and become more standardized, expect to put in some effort to regularly survey other providers and options, while weighing the work it would take to shift gears.

Growing to Scale

Cloud gives businesses the power to disrupt markets and create new revenue streams. Will your provider be able to grow with you? Cloud services are generally very scalable, but not all providers are equally prepared for the task. Think about your future needs when choosing a provider; you can reduce some risk by planning for growth and finding a provider with robust, diverse services.

CIO 101: Systems Are Never Fully Secure

One of the biggest mistakes a CIO can make is to assume that their systems are fully protected from security threats. It is a costly assumption that many CIOs make at some point; like any leader, however, CIOs are better off learning from such mistakes than repeating them.

Learning from Mistakes

Just a couple of months ago in Cambridge, CIOs from around the country gathered to participate in the MIT Sloan CIO Summit. While there, they were asked to discuss a significant failure that they had made during the course of their careers.

One of the most notable responses came from Fidelity Enterprise CTO Stephen Neff.

Neff discussed his time at Salomon Brothers and the early days of his career. He related how the firm relied on a double backup system: the backup itself had a backup. The firm believed this was sufficient coverage to protect its data. After all, while one backup could be corrupted or lost, the odds of corrupting two backups were considered so low as to be negligible.

Against All Odds

As it turned out, those odds were considerable. Upon review, it became clear that the mirrored site was corrupted, and the backup, which hadn’t been updated with crucial software, wasn’t backing data up at all.

The entire system wasn’t working properly. Fortunately, the problem was discovered before any major damage was done, and the data was able to be recovered from disks.

However, the experience taught Neff that no system is foolproof and that making assumptions such as those made at Salomon Brothers can lead to very costly mistakes.

As Neff said, “Stability isn’t a given. You might think your organization’s systems are stable, but you have to test them constantly to be sure.”

Even the Best Plans Fall Apart

Neff’s story illustrates that even the best security and backup plans can fall apart because of any number of factors. In this case, it was because of factors that were out of his control.

It also highlights what many IT professionals feel in the current environment that is promoting cloud-based solutions. For many, these systems represent an enormous risk because they operate out of their direct control. Stories like Neff’s underscore this concern, and it is something that each IT professional will need to address in terms of his or her organization’s specific needs and requirements.

Plan for Every Error

The bottom line is this: whether an organization is using internal or cloud-based solutions, it is important to factor in everything from human error to mechanical failures. Though CIOs can take steps to prepare for any eventuality, the truth is that all bases will never be 100% covered.

Things can and do happen. It’s important to ensure that an organization’s IT policies are consistent, constant, and always evolving. While not foolproof, this is the best way to ensure that an organization doesn’t make the same mistakes that others have made.