Wednesday, May 4, 2016

SEVEN REASONS FOR DATA CENTER FAILURE


Improper System Authorization
In a data center environment, only a very few administrators (if any) should have full, unrestricted authorization to access all systems. Access should be tightly regulated.


Ineffective Fallback Procedures
One of the major steps that is most often ignored when planning maintenance windows is the fallback procedure. Often the documented process is not consistently vetted and does not fully revert all changes to their original state.


Making Too Many Changes
Administrators should avoid making too many changes at a time during a maintenance window. When administrators are under pressure to complete a large number of tasks in a short period, mistakes are bound to occur. In addition, when many changes are made within the same time frame, troubleshooting post-change problems becomes far more difficult.


Insufficient, Old, Or Misconfigured Backup Power
Power failure is the best-known cause of data center downtime. Power outages happen all the time, so redundant power sources such as battery and/or generator power are needed as backup. The challenge is that batteries are sometimes not replaced in a timely manner, generators are not tested, and power-failure tests are not performed. All of these oversights can result in redundant power being unavailable when it is needed.


Cooling Failures
Data centers generate a huge amount of heat, which is why cooling is so important. It is therefore essential that temperature sensor readings and alerts are sent to administrators, so there is sufficient time to implement backup cooling procedures.

Changes Outside Maintenance Windows
Sometimes a request comes in to make a slight change to a server or piece of network equipment. While data center protocol technically requires this change request to pass through the change-control committee, people feel it can easily be made outside of a formal change-control process and maintenance window. That may often be true, but quite often a minor change has unforeseen implications.


Hanging Onto Legacy Hardware
Hardware is likely to fail at some point, and the longer hardware is kept, the more likely it is to fail. This is common knowledge, yet we still have critical applications running on very old hardware. These problems are usually the result of the lack of a structured, comprehensive plan for migrating onto a new hardware or software platform.

AFRICA'S LARGEST DATA CENTER

Africa’s largest data center is presently being constructed at Teraco Data Environment’s Johannesburg site in South Africa. The 27,500 m² project is an extension of an existing colocation building that was built seven years ago. The build is estimated to take 18 months, and the facility is expected to become operational by the end of 2016. It promises to have a dynamic free cooling system.

The size-limiting factor for the facility was the power constraint set down by the local council in Isando, just 19 km (12 miles) from Johannesburg. An increase of 10 MVA has allowed expansion to 18,500 m² of utility space and 9,000 m² of white space, and the data center now has a total of 16 MVA of power available. This ensures that the data center can be adequately powered, cooled, and maintained.


The site is situated very close to Oliver Tambo Airport and shares the same resilient grid. In order to ensure uptime, Teraco has been given approval to store 210,000 liters of diesel onsite in accordance with an Environmental Impact Assessment. It is estimated that this quantity of diesel will enable the data center to run for a minimum of 40 hours at maximum load if the grid suffers an outage.
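
As a rough back-of-the-envelope check on those figures (using only the 210,000 liters and 40 hours quoted above, not any published Teraco specification), the implied full-load fuel burn rate works out as follows:

```python
# Back-of-the-envelope check of the generator fuel burn rate implied by the
# quoted figures: 210,000 liters of diesel lasting at least 40 hours at full load.
diesel_stored_liters = 210_000
min_runtime_hours = 40

max_burn_rate = diesel_stored_liters / min_runtime_hours  # liters per hour at full load
print(f"Implied maximum burn rate: {max_burn_rate:,.0f} L/h")  # ~5,250 L/h
```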


TULIP DATA CENTER - LARGEST DATA CENTER IN ASIA


The data center has been designed based on recommendations from the telecommunications industry associations. It is provisioned with up to 66 kV line power drawn from two substations in a high-tension format (7.5 MW). To date, the center has received government sanction to draw up to 40 MW of power; it has up to 7.5 MW available, while actual usage is only around 4 MW. In terms of its energy specification, the PUE comes to 1.9 when the facility is used to its maximum capacity, and presently it is in the range of 1.4-1.6.
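
As a quick illustration of what those PUE figures mean (PUE is total facility power divided by IT equipment power), here is a minimal sketch using the roughly 4 MW of current usage quoted above and the midpoint of the stated 1.4-1.6 range; the resulting IT-load split is an illustration, not a published Tulip figure:

```python
# PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
# Hypothetical split, assuming ~4 MW total usage and a PUE of 1.5 (midpoint of
# the 1.4-1.6 range quoted above). The IT-load figure is illustrative only.
total_facility_power_mw = 4.0
pue = 1.5

it_power_mw = total_facility_power_mw / pue          # power reaching IT equipment
overhead_mw = total_facility_power_mw - it_power_mw  # cooling, UPS losses, lighting, ...

print(f"IT load: {it_power_mw:.2f} MW, overhead: {overhead_mw:.2f} MW")  # ~2.67 MW / ~1.33 MW
```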

N+N redundancy is maintained for every component in the data center, including the transformers. The facility is also equipped with Novec 1230 extinguishers, fire-retardant paint on doors that can withstand fire for up to 2-3 hours, and 200-400 kVA UPS units with backup batteries that can carry the load for 20 minutes at full load.

Tulip Data Center claims to be the first of its kind, with a distinct PAHU room, separate IT and electrical rooms, and no non-IT entry into the server room. The data center follows a closed-containment cooling system, where the entire rack space sits in a glass enclosure so that only the contained space, rather than the whole room, needs to be cooled. The chiller plant is located on the roof of the building. The water runs in a closed loop, so the same water is used over and over again. The data center is spread over 900,000 sq ft, with the server built-up area being around 45,000 sq ft. Every floor plate is about 20,000 sq ft, holding up to 600 racks at 6 kW per rack.

Tuesday, May 3, 2016

THE WORLD’S LARGEST DATA CENTER - THE LAKESIDE TECHNOLOGY CENTER

The Lakeside Technology Center (350 East Cermak) is a 1.1 million square foot multi-tenant data center hub owned by Digital Realty Trust. It was initially developed by the R.R. Donnelley Co. to house the printing presses for the Yellow Book and Sears Catalog, and the building was converted to telecom use in 1999. Today it is the nerve center for Chicago’s commodity markets, housing data centers for financial firms attracted by the wealth of peering and connectivity providers among the building's 70 tenants.



The major attributes of this infrastructure include four fiber vaults and three electric power feeds, which provide the building with more than 100 megawatts of power. The facility is presently the second-largest power customer of Commonwealth Edison, trailing only Chicago’s O’Hare Airport. Grid power is backed up by more than 50 generators throughout the building, fueled by numerous 30,000-gallon tanks of diesel.


One of the most distinctive features of the facility is its cooling system, which is supported by an 8.5 million gallon tank of refrigerated, brine-like liquid. The massive tank acts as thermal energy storage for the Metropolitan Pier and Exposition Authority (MPEA), serving the nearby McCormick Place Exposition Center and Hyatt Regency Hotel as well as the building itself. Thermal energy storage helps reduce costs by running chillers during off-peak hours, when power rates are lower.

FACEBOOK’S WIND POWERED DATA CENTER


Facebook reported that it is building a large $1 billion data center in Fort Worth, Texas. The facility, already under construction, will be Facebook’s fifth data center. It will draw wind power from a large wind farm that is also under construction, on 17,000 acres of land in Clay County about 90 miles from the data center, and it will be 100% powered by clean energy.

Presently, Facebook’s Iowa data center is also powered with energy from a nearby wind farm. Wind energy is among the cheapest and most widely deployed forms of clean energy around the world; one gigawatt is roughly the output of a large coal or natural gas plant. Facebook isn’t just focused on clean energy for its data centers; it has also constructed them to be very energy efficient, using outdoor air for cooling (instead of power-hungry air conditioners) along with energy-efficient servers and facility designs.

Facebook’s strategy is one of the few examples of an Internet company using clean power to run web services. Facebook will most likely purchase the wind power at a fixed low rate over several decades; if grid energy prices rise, this will help Facebook curb its energy bill. Facebook is also aiming for a new goal of powering its operations with 50 percent renewable energy by the end of 2018. The data center will also utilize the latest OCP (Open Compute Project) technology, including Yosemite for compute, and fabric, Wedge, and 6-pack at the network layer.

Thursday, April 21, 2016

AMAZON’S SNOWBALL



Although high-speed Internet connections like T3 are available in many parts of the world, transferring large amounts of data, such as terabytes or petabytes, from an existing data center to the cloud is still an uphill task. To solve this challenge, Amazon introduced AWS (Amazon Web Services) Import/Export Snowball, a service built around appliances that are owned and maintained by Amazon. AWS Import/Export Snowball is faster, cleaner, simpler, more efficient, and more secure than earlier AWS Import/Export options, and customers do not have to buy storage devices or upgrade their network.
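
To see why shipping an appliance can beat the wire, here is a rough, hedged comparison; the 100 TB data set size and the assumption of fully sustained link utilization are illustrative choices, not AWS figures:

```python
# Back-of-the-envelope comparison: copying a data set over a network link vs. shipping it.
# The 100 TB data set size and the assumed sustained throughputs are illustrative only.
data_tb = 100
data_bits = data_tb * 1e12 * 8  # terabytes -> bits (decimal TB)

links = {
    "T3 (~45 Mbps)": 45e6,
    "1 Gbps": 1e9,
}

for name, bits_per_second in links.items():
    seconds = data_bits / bits_per_second      # idealized: 100% sustained utilization
    days = seconds / 86_400
    print(f"{name:15s} -> about {days:,.0f} days to move {data_tb} TB")
```

Even on a dedicated 1 Gbps link running flat out, 100 TB takes on the order of nine days; a T3 would need the better part of a year, which is the gap Snowball is meant to close.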


Snowball is designed for the transfer of huge amounts of data to AWS (Amazon Web Services) on a one-time or recurring basis. The Snowball appliance is purpose-built for efficient data storage and transfer. It is rugged enough to withstand a 6 G jolt and, at 50 lbs, light enough to be lifted by one person. It is entirely self-contained, with 110-volt power and a 10 Gigabit network connection on the back and an E Ink display/control panel on the front. It is resistant to adverse weather conditions and acts as its own shipping container, able to go from the mail room to the data center and back without any packing or unpacking that would slow things down. In addition to being physically rugged and tamper-resistant, Snowball is highly sensitive to tampering attempts. Multiple Snowballs can be connected together, and they are durable enough that companies do not need to worry about what happens when a Snowball is dropped into the mail.

Limitations of Snowball

-  In order to ensure a tight security process, data transfers must be completed within 90 days of the Snowball being prepared.

-  Snowball only supports data transfers into the us-east-1 US East (N. Virginia) and us-west-2 US West (Oregon) regions.

-  The default service limit for the number of Snowballs that customers can have at one time is 1.

-  The Snowball appliance must not be tampered with, and it must not be opened at all, except for the two access panels in the front and the back.


Wednesday, April 20, 2016

ROBOTICS IN DATA CENTERS



Robotics can be defined as a branch of technology that deals with the design, construction, operation, and application of robots. Robots are programmable mechanical devices designed to perform one or more tasks repeatedly, with speed and precision, without human intervention.


Robotics is a rising trend across many industries as the technology has progressed in recent years, and the movement toward robots stands to make a major contribution in the data center sector. As dependence on the data center continues to grow, full software and hardware automation through robotics is becoming a necessity. A robot-driven, lights-out data center would use rail-based robotics capable of traversing the entire facility, so the modern data center would no longer be constrained by horizontal expansion space; robotics lets data centers literally scale upwards. The ability to use space in the best possible manner is always a challenge for data center providers, so being able to scale both horizontally and vertically is a huge advantage. At the moment, it seems clear that robotics will not (at least not for quite some time) remove the need for human interaction within the data center. Rather, robotics will help automate monotonous human labor, freeing up professionals to do bigger and better things while enabling a more automated environment and increased productivity.

Developing such a platform will not be easy. One of the major challenges is that there really isn’t an archetype or schematic for such a design today, and there are feasibility questions to consider as well. Here we take a look at some of the challenges and benefits.

CHALLENGES

-  Currently there is no server hardware designed to be operated by machines, so customized hardware would need to be designed for this purpose.

-  Cabling would have to be done in a unique format, and close attention would need to be paid to how wiring and cabling are designed in a robotics environment.

-  The upfront cost of automation is huge.



BENEFITS

-  Robotics allows the data center to be extremely efficient with space, because it helps the data center grow vertically instead of just horizontally.

-  Robotics helps drastically reduce the need for lighting.

-  An enormous decrease in downtime. This is made possible by a structured, fully functional operational flow in which every aspect of data center management can be controlled and calculated.

-  The machines in the data center can run at a slightly higher temperature, because in a fully integrated, lights-out data center the environmental variables can be shifted toward greater efficiency and easier control of the hardware.

Monday, April 18, 2016

JUNIPER’S METAFABRIC

The new data center landscape is largely virtualized and spread across multiple, geographically distributed sites and public, private, and hybrid cloud environments. Building, connecting, and securing this pool of computing power is an intricate task, and it requires a data center network suited to the job. That begins with an architecture optimized for the cloud era and an infrastructure that provides agility, automation, and simplicity.

Juniper’s MetaFabric architecture provides a simple, smart, and open model for building high-performance, highly reliable data center and cloud networks. It enables new applications, services, and technologies to be deployed quickly and easily, with secure and prompt access to data. The MetaFabric architecture delivers an agile and highly efficient network foundation for complex physical and virtual data centers and clouds. MetaFabric transforms data center networks in the following ways:

-  MetaFabric reduces the complexity of network deployment, operations, and management.

-  It maximizes the flexibility of data center networks by allowing integration with any other data center or cloud environment and preventing vendor lock-in.

-  It helps optimize staff time and improve application performance through analytics and actionable insights.


At the core of the MetaFabric architecture is the ability to support the entire data center infrastructure, whether physical or virtual, in an effective and flexible manner. It provides the performance, scalability, reliability, and security necessary to support business-critical operations.

Sunday, April 17, 2016

HYBRID CLOUD: MULTI-TENANT DATA CENTERS



A hybrid cloud is a cloud computing environment that combines an on-premises private cloud with third-party public cloud services, with orchestration between the two platforms. Hybrid clouds allow workloads to move between private and public clouds as computing needs and costs change, giving businesses greater flexibility and a wider variety of data deployment options. For example, an organization can deploy an on-premises private cloud to host sensitive or critical workloads, but use a third-party public cloud provider, such as Google Compute Engine, to host less critical resources, such as test and development workloads.



According to Chris Sharp of Digital Realty, the future of the enterprise lies in how to leverage and monetize data. This puts increasing pressure on companies to distribute data easily between all necessary parties, which can be complex, especially as most companies today require a combination of dedicated physical data centers, private cloud deployments, and public cloud deployments. In order to move data in and out of data centers quickly and securely, companies are beginning to rely on elastic hybrid clouds made available by data center and colocation providers.

Michael Custer of Chatsworth Products likewise states that there will hardly be any business in 2016 not directly or indirectly involved in some combination of cloud, colocation, and on-premises compute and storage. How structured that definition of hybrid deployment needs to be remains to be seen. Insourcing and outsourcing trends are ultimately cyclical: when nearly everyone jumps on the same technology boat, the competitive advantage is soon lost and the next disruptive ship arrives with much excitement.

Thursday, April 14, 2016

HOW TO REDUCE DATA CENTER NOISE



If you have been to a data center, you have probably experienced the high level of noise that accompanies it. Since data center equipment mainly comprises servers, UPSes, and cooling equipment, data center managers have to come up with ways to deal with the noisy environment. On average, most data centers have noise levels ranging from 70 dB to 85 dB. In this range it is extremely difficult to communicate on the floor or on the phone, and many people working in a data center environment experience low-level headaches as a result of the high noise levels.

Here are some ways in which we can reduce the noise levels in our data centers.

-  First, identify the sources of excessive noise other than the servers. Take a walk around the data center with your colleagues and determine where conversation is audible and where it is not.

-  Once you have identified the sources, call the equipment dealer or manufacturer and see if they have a solution. Sometimes replacing or changing a single part makes a huge difference.

-  Cooling solutions are often responsible for a noisy data center. The noise level depends on the type and number of fans inside the racks: the higher the count, the higher the noise (see the sketch after this list for how individual sources add up). If the fans are making too much noise, replace them with quieter ones.

-  Server fan noise can also be contained by enclosing the racks within an aisle containment system. The containment barriers trap the sound and prevent it from reverberating around the rest of the data center.

-  For data centers that do not have a drop ceiling, installing acoustic ceiling tiles can be very helpful. Sound-deadening panels, tiles, and baffles can be suspended from the ceiling and walls so that sound waves do not reflect back into the data center. Make sure they are fiber-free, properly fire-rated, and designed to reduce reverberation and prevent noise buildup.

-  Finally, keep disposable ear plugs and noise-cancelling headphones at the entrance, especially for visitors and for those who do not come in often.
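
As a rough illustration of why fan count drives noise so strongly (referenced in the cooling bullet above), sound levels in decibels add logarithmically rather than linearly; the 65 dB per-fan figure below is hypothetical, not a measurement from any particular data center:

```python
import math

def combined_spl(levels_db):
    """Combine individual sound pressure levels (dB) from incoherent noise sources."""
    return 10 * math.log10(sum(10 ** (level / 10) for level in levels_db))

# Hypothetical example: one rack fan at 65 dB vs. twenty identical fans.
one_fan = combined_spl([65])
twenty_fans = combined_spl([65] * 20)
print(f"1 fan: {one_fan:.1f} dB, 20 fans: {twenty_fans:.1f} dB")  # ~65.0 dB vs ~78.0 dB
```
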
FACEBOOK’S DISAGGREGATION ACTION

Disaggregation can be defined as the separation of a whole system into its components. One way to offer data center operators flexibility is to separate data center equipment, in particular servers, into resource components. This improves their ability to cover times of high demand while ensuring optimal utilization.

An application of disaggregation can be seen in the backend infrastructure that populates a Facebook user’s news feed. Until last year, Multifeed, the news feed backend, was made up of uniform servers, each with the same amount of memory and CPU capacity. The query engine that fetches data for the news feed, called Aggregator, uses a lot of CPU power. The storage layer it pulls data from, called Leaf, keeps the data in memory so it can be delivered faster, and so it uses a lot of memory. The former version of a Multifeed rack contained 20 servers, each running both Aggregator and Leaf. To keep up with user growth, Facebook engineers kept adding servers and eventually realized that while the CPUs on those servers were heavily utilized, much of the memory capacity was sitting idle.

To remove this inefficiency, Multifeed was redesigned, both in the way the backend infrastructure was set up and in the way the software used it. Separate servers were designed for the Aggregator and Leaf functions: Aggregator servers with lots of compute, and Leaf servers with lots of memory. This led to a 40 percent efficiency improvement in how Multifeed used CPU and memory resources. The infrastructure went from a CPU-to-RAM ratio of 20:20 to 20:5 or 20:4 – a 70 to 80 percent reduction in the amount of memory that needs to be deployed.
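
As a quick arithmetic check on those ratios (using only the numbers quoted above), going from 20 units of RAM to 5 or 4 units implies a reduction of roughly 75 to 80 percent:

```python
# Memory reduction implied by the CPU-to-RAM ratios quoted above.
baseline_ram_units = 20           # original ratio 20:20 (CPU:RAM)
disaggregated_ram_units = [5, 4]  # new ratios 20:5 and 20:4

for ram in disaggregated_ram_units:
    reduction = (baseline_ram_units - ram) / baseline_ram_units
    print(f"20:{ram} -> {reduction:.0%} less memory deployed")  # 75% and 80%
```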

Wednesday, April 13, 2016

TOWER-SHAPED 65-STOREY DATA CENTER

Recently, two 28-year-old architects, Marco Merletti and Valeria Mercuri, envisioned a 65-story cylindrical data center that uses a series of pods to house servers, which are retrieved for service in much the same way an automated parking garage moves cars. The tall structure aims to make maximum use of natural cooling: instead of constructing a flat, stretched-out complex, they propose a data center with a height similar to that of a skyscraper.


The design integrates sustainable technology that can efficiently cool hundreds of thousands of servers while leveraging automated processes. The data center was designed with Iceland in mind, which makes it suitable for use by both U.S. and European companies. It can be powered by hydropower and geothermal energy, and Iceland's proximity to the Arctic Circle means natural cooling will play a huge role. The designers view the building "as a giant 3D motherboard" on which components can be placed, updated, and replaced. The building has a modular, cylindrical design. The servers sit in pod units, with 24 pods assigned to each floor. The pods are moved by an automated handling system to the ground floor, where technicians service and maintain them. The pod design was influenced by the Car Towers at the Autostadt in Wolfsburg, Germany, where new Volkswagen cars are temporarily moved into towers that automate the storage process. On a smaller scale, the tower's design resembles the Apple Mac Pro.

Like a radiator, the data tower is designed to have the maximum contact surface with the outside. The pods hooked onto the circular structure of the tower form a series of vertical blades. Because the air inside the tower is hotter than the air outside, the design creates a chimney effect: the hot air inside the tower rises and cold air from outside is drawn in. The incoming cold air passes through the pods and cools the servers. Some of the waste heat is captured to warm other parts of the building and to heat nearby greenhouses.




Tuesday, April 5, 2016





FACEBOOK’S DATA CENTER FABRIC
Facebook’s data center fabric idea was geared toward making the entire data center building one high-performance network, instead of a hierarchically oversubscribed system of clusters. To achieve this, Facebook took a disaggregation approach: instead of large devices and clusters, the network was broken up into small identical units called server pods, with uniform high-performance connectivity created between all pods in the data center. A pod is a layer-3 micro-cluster; it is not defined by any hard physical properties but is simply a standard “unit of network” on the new fabric. Each pod is served by a set of four devices called fabric switches.




Figure 1: A sample pod

What makes this design unique is the smaller size of the new unit: each pod has only 48 server racks, and this quantity is the same for all pods. It’s an efficient building block that fits nicely into various data center floor plans, and it needs only basic mid-size switches to aggregate the TORs. The smaller port density of the fabric switches makes their internal architecture very simple, modular, and robust.
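
As a small illustrative sketch (not Facebook's actual tooling), the pod described above can be modeled to count its uplinks, under the assumption that each of the 48 top-of-rack switches connects to all four fabric switches:

```python
# Minimal model of the pod described above: 48 TOR switches, assumed to be
# uplinked to each of the four fabric switches. Illustrative only.
RACKS_PER_POD = 48
FABRIC_SWITCHES_PER_POD = 4

uplinks = [(f"tor-{rack:02d}", f"fabric-{fsw}")
           for rack in range(1, RACKS_PER_POD + 1)
           for fsw in range(1, FABRIC_SWITCHES_PER_POD + 1)]

print(f"Uplinks per pod: {len(uplinks)}")  # 48 * 4 = 192
print("Example link:", uplinks[0])         # ('tor-01', 'fabric-1')
```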

THE NETWORK TOPOLOGY
The network topology was designed using a “top down” approach: thinking in terms of the overall network first, then mapping the necessary actions onto individual topology elements and devices. The fabric was built using standard BGP4 as the only routing protocol, and to keep things simple, only a minimal set of routing protocol features was used. This leveraged the performance and scalability of a distributed control plane for convergence, while offering tight and granular control over routing propagation, and it maintained compatibility with a broad range of existing systems and software. At the same time, a centralized BGP controller was developed that can override any routing path on the fabric by pure software decisions. Facebook calls this flexible hybrid approach “distributed control, centralized override.”

THE PHYSICAL INFRASTRUCTURE

In spite of the large number of fiber strands, the physical and cabling infrastructure of the fabric is simpler than the logical network topology drawings might suggest. Multiple Facebook infrastructure teams worked together to optimize the third-generation data center building designs for fabric networks, shortening cabling lengths and enabling rapid deployment. This infrastructure was first implemented in the new building layout of the Altoona data center.

Monday, April 4, 2016

IMPROVING THE EFFICIENCY OF DATA CENTERS THROUGH MACHINE LEARNING

Machine learning can be defined as a method of teaching computers to make and improve predictions or behaviors based on data. It can also be described as pattern recognition: teaching a program to recognize and respond to patterns. Google has recently adopted machine learning to improve its data centers.

The modern data center (DC) is a complex interplay of mechanical, electrical, and control systems. The sheer number of possible operating configurations and nonlinear interdependencies makes it a daunting task to understand and optimize energy efficiency, and one of the most intricate challenges is power management. Growing energy costs and environmental responsibility have placed the DC industry under increasing pressure to improve its operational efficiency. Applying machine learning algorithms to existing monitoring data goes a long way toward improving data center efficiency: the enormous amount of data generated by a large-scale DC across thousands of sensors is seldom used for purposes other than monitoring. Advances in processing power and monitoring capabilities create a large opportunity for machine learning to guide best practice and improve DC efficiency.

Machine learning suits the DC environment because of the complexity of plant operations and the abundance of existing monitoring data. The modern large-scale DC has a wide variety of mechanical and electrical equipment, along with associated set points and control schemes. The interactions between these systems and the various feedback loops make it extremely difficult to accurately predict DC efficiency using traditional engineering formulas. For example, a change in the cold aisle temperature set point produces load variations in the cooling infrastructure (chillers, cooling towers, heat exchangers, and pumps), which in turn cause nonlinear changes in equipment efficiency. Atmospheric weather conditions and equipment controls also affect the resulting DC efficiency. Standard formulas used for predictive modeling are likely to produce errors because they cannot capture such complex interdependencies. In addition, the sheer number of possible equipment combinations and set point values makes it an uphill task to determine where optimal efficiency lies.

To address these challenges, a neural network is chosen as the mathematical framework for training DC energy-efficiency models. Neural networks are a class of machine learning algorithms that mimic cognitive behavior through interactions between artificial neurons. They are useful for modeling very complicated systems because the user does not need to predefine the feature interactions in the model, which would assume particular relationships within the data. Instead, the neural network searches for patterns and interactions between features to automatically generate a best-fit model. The same class of techniques is applied in fields such as speech recognition, image processing, and autonomous software agents. As with most learning systems, model accuracy improves over time as new training data is acquired.
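
As a minimal sketch of this approach (not Google's actual model), a small neural-network regressor can be trained to predict PUE from a handful of sensor features; the feature names and the synthetic training data below are assumptions made purely for illustration:

```python
# Minimal sketch of a neural-network PUE model using scikit-learn.
# The feature names and synthetic training data are illustrative assumptions,
# not Google's actual model or telemetry.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 2000

# Hypothetical sensor features: IT load (MW), outside air temperature (C),
# cold-aisle set point (C), number of chillers running.
X = np.column_stack([
    rng.uniform(2.0, 8.0, n),
    rng.uniform(-5.0, 35.0, n),
    rng.uniform(18.0, 27.0, n),
    rng.integers(1, 6, n),
])

# Synthetic PUE with a mildly nonlinear dependence on the features plus noise.
pue = (1.1 + 0.004 * X[:, 1] ** 2 / 35 - 0.005 * (X[:, 2] - 18)
       + 0.02 * X[:, 3] + rng.normal(0, 0.02, n))

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32),
                                   max_iter=2000, random_state=0))
model.fit(X, pue)

# Predict PUE for one hypothetical operating point.
sample = [[5.0, 20.0, 22.0, 3]]
print(f"Predicted PUE: {model.predict(sample)[0]:.2f}")
```

The same pattern scales to real telemetry: historical sensor readings become the feature matrix, measured PUE becomes the target, and the trained model can then be queried to compare candidate set points before applying them.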




Friday, January 29, 2016

FLOATING DATA CENTERS
The concept of floating data centers refers to building data centers on a floating platform, either a ship or a barge (a flat-bottomed boat used for carrying heavy loads). The idea behind this technology is to power the data centers using the motion of the ocean and to use sea water to cool the servers.
A couple of companies have been involved with this technology. Between 2010 and 2012, Google started to build what was rumored to be a floating data center: a 4-storey structure on a barge in San Francisco Bay. The original Google concept was to use the motion of ocean surface waves to generate electricity, with a cooling system based on sea-powered pumps and seawater-to-freshwater heat exchangers. The concept envisioned floating data centers located 3 to 7 miles out at sea, giving Google the flexibility to move a data center to where it is needed without dismantling it or building a new one. Since 2013, however, nothing definite has been heard from Google about this project.

Figure 1: Google's proposed floating data center

Recently, a company called Nautilus Data Technologies has adopted this idea for its data centers. The Nautilus design takes a different approach from Google’s by mooring the barges at a pier, which removes the possibility of harvesting wave power for electricity. The IT equipment is housed inside modular data halls on the deck, containing servers in racks with rear-door cooling units. Just below the deck, in a watertight hold, sit the cooling distribution units, UPS units, and electrical and mechanical equipment.

Figure 2: The Nautilus Data Technologies proposed floating data center

The cooling system has two separate piping loops and a heat exchanger. Cool water from the bay enters through an intake a couple of feet below the barge, is filtered to keep out fish and other debris, and then flows to the heat exchanger. On the other side of the heat exchanger, a fresh-water cooling loop feeds the water-cooled rear-door systems on the racks. The intake system uses copper plating and titanium piping to reduce the effect of salt water on the equipment. Nautilus claims to have worked closely with the Navy to address the humidity and condensation issues that arise on a floating vessel.
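
To get a feel for the water flow such a two-loop system implies, here is a rough sketch using the standard sensible-heat relation Q = ṁ·cp·ΔT; the 1 MW heat load and 6 °C temperature rise are hypothetical figures chosen for illustration, not Nautilus specifications:

```python
# Rough sizing sketch for the bay-water loop: how much water flow is needed to
# carry away a given heat load. The 1 MW load and 6 C temperature rise are
# hypothetical illustrations, not Nautilus Data Technologies figures.
heat_load_w = 1_000_000        # 1 MW of IT heat rejected through the heat exchanger
cp_water = 4186.0              # specific heat of water, J/(kg*K)
delta_t = 6.0                  # allowed temperature rise of the bay water, K

mass_flow = heat_load_w / (cp_water * delta_t)   # kg/s, from Q = m_dot * cp * dT
volume_flow_lpm = mass_flow * 60                 # ~1 kg of water per liter

print(f"Required flow: {mass_flow:.1f} kg/s (~{volume_flow_lpm:,.0f} L/min)")  # ~40 kg/s
```
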
Nautilus believes this is a timely solution to the problem of drought in California, claiming that 100% of the water used in the process is returned directly to the body of water rather than being lost through evaporation.
The floating data center concept comes with the advantage of reducing overall infrastructure cost, especially for cooling, and it also slashes the huge cost of real estate and property taxes.
On the other hand, some experts think that floating data centers are at high risk of water damage.