Wednesday, May 4, 2016

SEVEN REASONS FOR DATA CENTER FAILURE


Improper System Authorization
In a data center environment, only a very few administrators (if any) should have full, unrestricted authorization to access every system in the data center. Access should be tightly regulated.
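As a rough, generic illustration (not any particular product's mechanism), this kind of restriction is often expressed as role-based rules, where a small, explicitly named role holds broad access and everyone else is scoped to their own systems. The Python sketch below shows the idea; all role names, users, and systems are invented for the example.

```python
# Minimal role-based access sketch; roles, systems, and users are
# hypothetical examples, not a real data center's policy.
ROLE_PERMISSIONS = {
    "network_admin": {"core-switch-01", "edge-router-02"},
    "storage_admin": {"san-array-01"},
    "dc_superuser":  {"core-switch-01", "edge-router-02", "san-array-01"},  # keep this role rare
}

USER_ROLES = {
    "alice": "network_admin",
    "bob":   "storage_admin",
}

def can_access(user: str, system: str) -> bool:
    """Return True only if the user's role explicitly grants the system."""
    role = USER_ROLES.get(user)
    return role is not None and system in ROLE_PERMISSIONS.get(role, set())

print(can_access("alice", "core-switch-01"))  # True
print(can_access("alice", "san-array-01"))    # False: outside her role's scope
```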


Ineffective Fallback Procedures
One of the major steps most often ignored when planning a maintenance window is the fallback procedure. The documented process is usually not consistently vetted and does not fully revert all changes to their original state.
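One way to keep a fallback procedure honest is to record every forward step together with the step that undoes it, so a failed window can be rolled back in reverse order. The sketch below is only an illustration of that structure; the change descriptions and the plain print statements stand in for real automation.

```python
# Hypothetical change plan: each forward step is paired with its rollback,
# so reverting replays the undo steps in reverse order.
change_plan = [
    {"apply": "install patch KB-1234 on web-01", "rollback": "uninstall patch KB-1234 on web-01"},
    {"apply": "update firewall rule 42",          "rollback": "restore firewall rule 42 from backup"},
]

def execute(plan):
    done = []
    try:
        for step in plan:
            print("APPLY:", step["apply"])   # replace with real automation
            done.append(step)
    except Exception:
        # Fall back: undo the completed steps in reverse order.
        for step in reversed(done):
            print("ROLLBACK:", step["rollback"])
        raise

execute(change_plan)
```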


Making Too Many Changes
Administrators should avoid making too many changes at a time during a maintenance window, as this can be very taxing. When administrators are under pressure to complete a large number of tasks within a short period, mistakes are bound to occur. In addition, when many changes land in the same window, troubleshooting post-change problems becomes far more difficult.


Insufficient, Old, Or Misconfigured Backup Power
Power failure is the most common cause of data center downtime. Outages happen all the time, so redundant power sources such as batteries and/or generators are needed as backup. Problems arise when batteries are not replaced in a timely manner, generators are not tested, and power-failure tests are not performed. All of these oversights can result in redundant power being unavailable when it is needed.


Cooling Failures
Data centers generate a huge amount of heat, which is why cooling is so important to any data center. It is therefore important to ensure that temperature sensor readings and alerts reach administrators early enough to give them sufficient time to implement backup cooling procedures.
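A simple way to buy that lead time is a threshold check on sensor readings that pages administrators well before a hard limit is reached. The sketch below is illustrative only; the thresholds and the alert function are assumptions, not any vendor's monitoring system.

```python
# Hypothetical thresholds in degrees Celsius; tune to the facility's design limits.
WARN_C = 27.0      # start notifying admins
CRITICAL_C = 32.0  # trigger backup cooling procedures

def alert(message: str) -> None:
    # Stand-in for an email/SMS/paging integration.
    print("ALERT:", message)

def check_temperature(sensor_id: str, reading_c: float) -> None:
    if reading_c >= CRITICAL_C:
        alert(f"{sensor_id}: {reading_c:.1f} C - CRITICAL, start backup cooling")
    elif reading_c >= WARN_C:
        alert(f"{sensor_id}: {reading_c:.1f} C - warning, investigate airflow/cooling")

check_temperature("rack-17-top", 28.3)
```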

Changes Outside Maintenance Windows
Sometimes a request comes in to make a slight change to a server or a piece of network equipment. While data center protocol technically requires the change request to pass through the change-control committee, people often feel it can safely be made outside of a formal change-control process and maintenance window. That may usually be true, but quite often a minor change has unforeseen implications.


Hanging Onto Legacy Hardware
Hardware is likely to fail at some point or another; in fact, the longer hardware is kept, the more likely it is to fail. This is common knowledge, yet critical applications still run on very old hardware. These problems usually stem from the lack of a structured, comprehensive plan for migrating to a new hardware or software platform.

AFRICA'S LARGEST DATA CENTER

Africa’s largest data center is presently being constructed at Teraco Data Environments’ Johannesburg site in South Africa. The 27,500 m² project is an extension of an existing colocation building that was built seven years ago. The build is estimated to take 18 months, and the data center is expected to become operational by the end of 2016. It promises a dynamic free cooling system.

The size-limiting factor for the facility was the power constraint set down by the local council in Isando, just 19 km (12 miles) from Johannesburg. An increase of 10 MVA has allowed expansion to 18,500 m² of utility space and 9,000 m² of white space, and has brought the site’s total available power to 16 MVA. This ensures that the data center can be adequately powered, properly cooled, and well maintained.


The site is situated very close to Oliver Tambo Airport and shares the same resilient grid. To ensure uptime, Teraco has been given approval to store 210,000 liters of diesel onsite in accordance with an Environmental Impact Assessment. This quantity of diesel is estimated to keep the data center running for a minimum of 40 hours at maximum load if the grid suffers an outage.
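Taking those figures at face value, the implied fuel burn rate is easy to estimate. The arithmetic below assumes roughly constant consumption at maximum load, so treat it as an order-of-magnitude figure rather than a generator specification.

```python
# Rough arithmetic from the figures above; real generator consumption
# varies with load, so this is only an order-of-magnitude estimate.
diesel_litres = 210_000
runtime_hours = 40
burn_rate = diesel_litres / runtime_hours
print(f"Implied burn rate at full load: {burn_rate:,.0f} litres/hour")  # ~5,250 L/h
```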


TULIP DATA CENTER - LARGEST DATA CENTER IN ASIA


The data center has been designed based on telecommunications industry association recommendations. It is provisioned with up to 66 kV line power drawn from two sub-stations in a high-tension format (7.5 MW). The center has received government sanction to draw up to 40 MW of power; today it has up to 7.5 MW available, while usage is only around 4 MW. In terms of energy efficiency, the PUE comes to 1.9 when the facility is used effectively at its maximum capacity, and at present it is in the range of 1.4-1.6.
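For context, PUE (power usage effectiveness) is simply total facility power divided by the power delivered to IT equipment, so figures like those above can be sanity-checked with one line of arithmetic. The split between IT load and overhead below is assumed purely for illustration.

```python
# PUE = total facility power / IT equipment power.
# Illustrative numbers only: assume ~4 MW total draw with ~2.7 MW reaching IT gear.
total_facility_mw = 4.0
it_equipment_mw = 2.7
pue = total_facility_mw / it_equipment_mw
print(f"PUE = {pue:.2f}")  # ~1.48, within the 1.4-1.6 range quoted above
```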

N+N redundancy is maintained for every component needed in the data center, including the transformers. The facility is also equipped with Novec 1230 extinguishers, fire-retardant paint on doors that can withstand a blazing fire for 2-3 hours, and 200-400 kVA UPS units with back-up batteries that can carry the load for 20 minutes at full load.

Tulip Data Center claims to be the first of its kind, with a distinct PAHU room, separate IT and electrical rooms, and no non-IT entry into the server room. The facility follows a closed-containment cooling system, in which the entire rack space sits inside a glass enclosure so that only the containment, rather than the whole room, has to be kept cool. The chiller plant is located on the roof of the building, and the water follows a closed loop, so the same water is used over and over again. The data center as a whole is spread over 900,000 sq ft, with a server built-up area of around 45,000 sq ft. Every floor plate offers about 20,000 sq ft, or up to 600 racks at 6 kW per rack.

Tuesday, May 3, 2016

THE WORLD’S LARGEST DATA CENTER - THE LAKESIDE TECHNOLOGY CENTER

The Lakeside Technology Center (350 East Cermak) is a 1.1 million square foot multi-tenant data center hub owned by Digital Realty Trust. It was initially developed by the R.R. Donnelley Co. to house the printing presses for the Yellow Book and Sears Catalog, and was converted to telecom use in 1999. Today it is the nerve center for Chicago’s commodity markets, and it houses data centers for financial firms attracted by the wealth of peering and connectivity providers among the 70 tenants.



The infrastructure’s major attributes include four fiber vaults and three electric power feeds, which provide the building with more than 100 megawatts of power. The facility is presently the second-largest power customer of Commonwealth Edison, trailing only Chicago’s O’Hare Airport. Grid power is backed up by more than 50 generators throughout the building, fueled by numerous 30,000-gallon tanks of diesel.


One of the most peculiar features of the facility is its cooling system, which is supported by an 8.5 million gallon tank of a refrigerated brine-like liquid. The massive tank acts as thermal energy storage for the Metropolitan Pier and Exposition Authority (MPEA), including the nearby McCormick Place Exposition Center and Hyatt Regency Hotel as well as the building itself. Thermal energy storage can help to reduce costs by running chillers during off-peak hours when power rates are lower.
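To see why shifting chiller load to off-peak hours matters, a rough cost comparison helps. The tariff and daily chiller energy below are invented for the example and are not Commonwealth Edison's actual rates.

```python
# Hypothetical figures: charge the thermal store at night, draw it down by day.
chiller_energy_mwh_per_day = 24.0   # assumed daily chiller energy
peak_rate = 0.12                    # $/kWh, assumed
off_peak_rate = 0.06                # $/kWh, assumed

cost_peak = chiller_energy_mwh_per_day * 1000 * peak_rate
cost_off_peak = chiller_energy_mwh_per_day * 1000 * off_peak_rate
print(f"Daily saving from off-peak chilling: ${cost_peak - cost_off_peak:,.0f}")  # $1,440/day here
```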

FACEBOOK’S WIND POWERED DATA CENTER


Facebook has reported that it is building a large $1 billion data center in Ft. Worth, Texas. The facility, already under construction, will be Facebook’s fifth data center. It will utilize wind power from a large wind farm that is also under construction on 17,000 acres of land in Clay County, about 90 miles from the data center, and will be 100% powered by clean energy.

Presently, Facebook’s Iowa data center is also powered by energy from a nearby wind farm. Wind energy is among the cheapest and most widely deployed forms of clean energy in the world; one gigawatt is roughly the output of a large coal or natural gas plant. Facebook isn’t focused only on clean energy for its data centers; it has also built them to be very energy efficient by using outdoor air for cooling (instead of power-hungry air conditioners) along with energy-efficient servers and facility designs.

Facebook’s strategy is one of the few examples of an Internet company using clean power to run web services. Facebook will most likely purchase the wind power at a fixed low rate over several decades; if grid energy prices rise, this will help Facebook curb its energy bill. Facebook is also aiming for a new goal of powering its operations with 50 percent renewable energy by the end of 2018. The data center will also utilize the latest OCP (Open Compute Project) technology, including Yosemite for compute, and fabric, Wedge, and 6-pack at the network layer.

Thursday, April 21, 2016

AMAZON’S SNOWBALL



Although high-speed Internet connections like T3 are available in many parts of the world, transferring large amounts of data, on the order of terabytes or petabytes, from an existing data center to the cloud is still an uphill task. To solve this challenge, Amazon introduced AWS (Amazon Web Services) Import/Export Snowball, which is built around appliances owned and maintained by Amazon. AWS Import/Export Snowball is faster, cleaner, simpler, more efficient, and more secure than earlier AWS Import/Export options, and customers do not have to buy storage devices or upgrade their network.
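A back-of-the-envelope comparison shows why shipping an appliance beats the wire for data at this scale. The calculation below assumes an ideal link running at 100% utilization, so real transfers would take even longer.

```python
# Naive transfer time for a given data volume over a link at full utilization.
def transfer_days(terabytes: float, link_mbps: float) -> float:
    bits = terabytes * 1e12 * 8           # decimal terabytes to bits
    seconds = bits / (link_mbps * 1e6)
    return seconds / 86_400

print(f"100 TB over a T3 (45 Mbps):  {transfer_days(100, 45):.0f} days")     # ~206 days
print(f"100 TB over 1 Gbps:          {transfer_days(100, 1000):.1f} days")   # ~9.3 days
print(f"100 TB locally at 10 Gbps:   {transfer_days(100, 10000):.1f} days")  # under a day onto a Snowball
```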


Snowball is designed for transferring huge amounts of data to AWS (Amazon Web Services) on a one-time or recurring basis. The Snowball appliance is purpose-built for efficient data storage and transfer. It is rugged enough to withstand a 6 G jolt and, at 50 lbs, light enough to be lifted by one person. It is entirely self-contained, with 110-volt power and a 10 Gb network connection on the back and an E Ink display/control panel on the front. It is weather-resistant and serves as its own shipping container, so it can travel from the mail room to the data center and back without any packing or unpacking to slow things down. In addition to being physically rugged and tamper-resistant, the Snowball detects tampering attempts. Multiple Snowballs can be connected together, and they are durable enough that companies need not worry about what happens when they drop them into the mail.
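Snowball jobs are ordered through the AWS console or API. Below is a minimal sketch using boto3's Snowball client; the bucket ARN, address ID, and IAM role ARN are placeholders, and the parameter values should be checked against the current AWS documentation before use.

```python
import boto3

# Illustrative Snowball import job; all identifiers below are placeholders.
snowball = boto3.client("snowball", region_name="us-west-2")

response = snowball.create_job(
    JobType="IMPORT",
    Resources={"S3Resources": [{"BucketArn": "arn:aws:s3:::example-import-bucket"}]},
    Description="Bulk import from on-premises data center",
    AddressId="ADID00000000-0000-0000-0000-000000000000",  # shipping address created beforehand
    RoleARN="arn:aws:iam::123456789012:role/snowball-import-role",
    SnowballCapacityPreference="T80",
    ShippingOption="SECOND_DAY",
)
print("Created job:", response["JobId"])
```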

Limitations of Snowball

•  To keep the security process tight, data transfers must be completed within 90 days of the Snowball being prepared.

•  Snowball only supports data transfers into the us-east-1 US East (N. Virginia) and us-west-2 US West (Oregon) regions.

•  The default service limit for the number of Snowballs that a customer can have at one time is 1.

•  The Snowball appliance must not be compromised. It must not be opened at all, except for the two access panels on the front and the back.


Wednesday, April 20, 2016

ROBOTICS IN DATA CENTERS



Robotics can be defined as the branch of technology concerned with the design, construction, operation, and application of robots. Robots are programmable mechanical devices designed to perform one or more tasks repeatedly, with speed and precision, without human intervention.


Robotics is a rising trend across many industries as the technology has progressed in recent years, and the movement toward robots will make a major contribution in the data center sector. As dependence on the data center continues to grow, full software and hardware automation becomes increasingly necessary. A robot-driven, lights-out data center would use rail-based robotics capable of reaching every part of the facility, so the modern data center is no longer constrained by horizontal expansion space; robotics lets data centers literally scale upward. Making the best possible use of space is always a challenge for data center providers, so the ability to scale both horizontally and vertically is a huge advantage.

At the moment it is fairly clear that robotics will most likely not (at least not for quite some time) eliminate the need for human interaction within the data center. Rather, robotics will help automate monotonous human labor, freeing up professionals to do bigger and better things through a more automated environment and increased productivity.

Developing such a platform will not be easy. One major challenge is that there really isn’t an archetype or schematic for such a design today, and there are feasibility challenges to consider as well. Here we take a look at some of the challenges and benefits.

CHALLENGES

•  Currently there is no server hardware designed to be operated by machines, so customized hardware would need to be designed for this purpose.

•  Cabling would have to be done in a very different format; close attention would need to be paid to design factors around wiring and cabling in a robotics environment.

•  The upfront cost of automation is huge.



BENEFITS

•  Robotics allows the data center to be extremely efficient with space, as it helps the data center grow vertically instead of just horizontally.

•  Robotics helps drastically reduce the need for lighting.

•  An enormous decrease in downtime, made possible by a structured, fully functional operational flow that allows every aspect of data center management to be controlled and calculated.

•  The machines in the data center can run at slightly higher temperatures, because within a fully integrated, lights-out data center, environmental variables can be shifted toward even greater efficiency and easier control of hardware.