Today's Data Centers - Driven by Business Applications
Companies, government agencies, and non-profit organizations face the challenge of harnessing the Internet of Things (IoT), digital e-commerce, mobile devices, and data analytics to accelerate sales growth and increase bottom-line profits. To enable this technological transformation, these enterprises are viewing their data centers as a "compute resource" for their business. As such, data center designs are increasingly defined by "business applications" and less by active hardware infrastructure products (switches, servers, and storage devices).
Likewise, to enable a business to be agile, scalable, and elastic, many enterprise organizations are adopting a "hybrid data center strategy": data is computed and stored in on-premises data centers as well as in private clouds, public colocation sites, and/or public cloud hosting sites. The decision on where to store and compute data, on-premises or at a public site, is increasingly driven by business applications. What do you want to manage and control within your company due to the importance, sensitivity, and/or confidentiality of the data, and what do you want to outsource to a public data center site to lower operating cost, improve staff resource management, and increase agility and elasticity?
While companies require their data centers to be flexible and agile to address business requirements, they also realize that they no longer have an open "check book." IT managers who design and operate data centers face the challenge of striking the right balance: providing enough capacity, agility, and scalability within the data center to let big data accelerate business results, while optimizing operations to reduce total cost of ownership. In addition, by operating a data center efficiently, companies also free up underutilized capacity to enable further scalability and agility.
For example, IT managers are now starting to deploy Software-Defined Networking (SDN) to speed the deployment of business applications across their data center sites. The software within SDN switches adds further intelligence to the routing and management of data within a data center, leading many to say that data centers are now defined by the "software," not so much by the hardware or network equipment. SDN provides a new way to operate, orchestrate, and automate data management by enhancing workload placement and data traffic flow.
One example is Cisco's Application Centric Infrastructure (ACI) platform, the company's SDN offering: a data center and cloud solution built around an application-aware network policy model. The foundation for ACI is the Cisco Nexus 9000 family of switches. With Cisco's ACI platform, data center designs are moving to a two-tier "spine-and-leaf" architecture, which adds agility and scalability to the data center site. It also results in much longer fiber links between the spine core switches and the leaf "top-of-rack" switches. These fiber links must have low latency and a clear migration path to higher data rates of 40/100G and beyond. This physical architecture change in the data center has led industry standards bodies to develop WideBand Multimode Fiber (OM5) to enable higher data rates, 100/400G, between the spine (core switching) and leaf (edge switching) tiers of SDN designs. Enabling technologies include (1) advanced optical modulation, (2) wavelength division multiplexing, and (3) enhanced parallel optics.
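To make the spine-and-leaf trade-off concrete, the sketch below sizes a small, hypothetical two-tier fabric: in a full mesh, every leaf switch uplinks to every spine switch, so the fiber link count and the leaf oversubscription ratio fall out of simple arithmetic. All port counts and speeds here are illustrative assumptions, not Nexus 9000 specifications.

```python
# Sketch: sizing a hypothetical 2-tier spine-and-leaf fabric.
# Port counts and link speeds are illustrative assumptions only.

def spine_leaf_fabric(spines: int, leaves: int,
                      downlinks_per_leaf: int, downlink_gbps: float,
                      uplink_gbps: float) -> dict:
    """In a full mesh, each leaf connects to every spine, so a leaf
    needs one uplink port per spine switch."""
    fabric_links = spines * leaves                       # spine<->leaf fibers
    leaf_down_capacity = downlinks_per_leaf * downlink_gbps
    leaf_up_capacity = spines * uplink_gbps
    return {
        "fabric_links": fabric_links,
        "oversubscription": leaf_down_capacity / leaf_up_capacity,
    }

# Hypothetical pod: 4 spines, 16 leaves, 48 x 10G server ports per
# leaf, and one 40G uplink from each leaf to each spine.
result = spine_leaf_fabric(4, 16, 48, 10, 40)
print(result["fabric_links"])        # 64 fiber links to install
print(result["oversubscription"])    # 3.0 (480G down : 160G up)
```

Note how quickly the fiber count grows: adding spines or leaves multiplies the number of long structured-cabling runs, which is exactly why the article stresses a clear migration path for those links.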
Likewise, IT managers are ubiquitously utilizing virtual machines (VMs) to increase the compute usage of servers and storage equipment. By combining VMs with SDN, data centers are using more converged infrastructure solutions (combining compute, storage, and network systems into a single, optimized computing package) as well as hyper-converged solutions (adding applications onto converged solutions) to continue improving operational efficiencies. These converged solutions have the benefit of increasing the compute and storage density of the data center, but the downside of increasing the power and cooling required to operate them properly.
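The consolidation gain from virtualization can be illustrated with a toy placement algorithm: a simple first-fit heuristic packs VM workloads onto as few hosts as possible, which is one common (simplified) way VM schedulers raise per-server utilization. The capacities and demands below are made-up units for illustration.

```python
# Sketch: first-fit VM consolidation, a simplified illustration of how
# virtualization raises per-server utilization. Units are made up.

def first_fit(vm_demands, host_capacity):
    """Place each VM on the first host with enough spare capacity,
    opening a new host when none fits. Returns per-host loads."""
    hosts = []
    for demand in vm_demands:
        for i, load in enumerate(hosts):
            if load + demand <= host_capacity:
                hosts[i] += demand
                break
        else:
            hosts.append(demand)   # no existing host fits; add one
    return hosts

# Twelve workloads that once meant twelve lightly loaded servers:
vms = [4, 2, 6, 3, 5, 2, 4, 1, 3, 2, 6, 2]
loads = first_fit(vms, host_capacity=16)
print(len(loads))   # 3 hosts instead of 12
```

The flip side, as noted above, is that each of those densely packed hosts now draws far more power and emits far more heat than a lightly loaded server, concentrating the cooling problem.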
As data centers adopt more converged applications, and as they consolidate and grow in size, the power and cooling required to operate them continue to increase. It is not uncommon for a large enterprise or colocation/hosting data center to be the single largest consumer of electricity in the city in which it resides. Further, given the cost of building a new data center, optimizing capacity has become one of the top challenges faced by data center operators as they seek to stay within their current footprint while accommodating increasing loads. This has led IT and facility management to work very closely together on the energy strategy for operating their data centers. To improve energy conservation and power utilization, many companies have looked towards: (1) renewable energy and free-air cooling (solar, hydro plants, etc.), (2) comprehensive cabinet/containment thermal management solutions, (3) intelligent data center IoT environmental sensors, and/or (4) advanced in-rack intelligent power distribution units (PDUs). These devices are typically enabled by data center infrastructure management (DCIM) software systems that provide dashboards and reports to monitor, manage, and control the cooling and power of the data center.
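A representative metric on such DCIM dashboards is Power Usage Effectiveness (PUE): total facility power divided by the power delivered to IT equipment, where 1.0 would mean every watt goes to compute. The minimal sketch below computes it from hypothetical readings; the numbers are not from any real sensor feed.

```python
# Sketch: computing PUE (Power Usage Effectiveness), a common DCIM
# efficiency metric. Readings below are illustrative assumptions.

def pue(it_kw: float, cooling_kw: float, other_kw: float) -> float:
    """PUE = total facility power / IT equipment power.
    'other' covers lighting, power-distribution losses, etc."""
    total_kw = it_kw + cooling_kw + other_kw
    return total_kw / it_kw

# Hypothetical monthly averages for one data hall:
print(round(pue(it_kw=800, cooling_kw=320, other_kw=80), 2))  # 1.5
```

Tracking this ratio per cabinet or per pod is what lets the intelligent PDUs and environmental sensors mentioned above translate into actionable energy savings.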
It has been my experience that the design and operation of data centers are becoming increasingly complex with the adoption of converged/hyper-converged infrastructure solutions and industry consolidation. Instinctively, IT managers focus on network infrastructure and software applications, while facility managers focus on the heavy equipment used to supply the power and cooling for the data center. Unfortunately, the passive physical infrastructure (cabling, cabinets, PDUs, pathways, thermal containment systems, etc.) can often be an afterthought in a data center design and/or overlooked during operations. This is a big mistake. If you want a data center that is truly agile, scalable, and operationally efficient, it is critical that you deploy a robust passive physical infrastructure foundation that: (1) is based on proven reference designs and architectures, (2) provides pre-configured solutions to ensure interoperability with active network equipment, (3) delivers a clear migration path to higher data rates (40G/100G and beyond), (4) offers intelligent thermal containment and power management systems at a cabinet/pod level, and (5) monitors and controls environmental conditions and power distribution at the cabinet level to improve IT provisioning.
Data centers will play a critical role in a company's ultimate success story: serving as the compute resource to run the business applications for the new digital economy. Going forward, IT managers will be tasked with designing and operating data centers, including the passive physical infrastructure, that are both agile and scalable enough to meet evolving business requirements, while continuously looking for ways to drive down cost and improve operational efficiency.