Six Ways to Strike Back Against Data Center Power Inefficiency

Google and other companies have leveraged AI to study performance data from their data center environmental monitoring systems, relative to the power and cooling cycles within their facilities, in order to build a profile of their energy usage. Through artificial intelligence, those energy profiles then became algorithms that allowed facility managers to apply well-timed instructions to the building’s mechanical and electrical plants. While this is all fairly easy to understand, it glosses over the fact that this hyperscale provider was already hyper-efficient.

Google’s AI journey began at a level of efficiency that most of us have to work hard to attain. For those of us whose focus is on getting to a sub-1.5 PUE, here are six thoughts on current practices for designing efficiency into new data center facilities.
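
As a quick refresher, PUE (power usage effectiveness) is simply total facility power divided by the power delivered to IT equipment, so a facility drawing 1,500 kW overall to support a 1,000 kW IT load is running at a PUE of 1.5. A minimal sketch of the calculation, using illustrative numbers rather than measured data:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

# Illustrative numbers only: a 1,000 kW IT load in a facility drawing 1,500 kW total.
print(pue(total_facility_kw=1500, it_load_kw=1000))  # 1.5
```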

  1. New LED lighting: Continued advances in lighting technology have not only driven better visibility in the rack row but also allowed operators to eke out more energy savings.
  2. Higher operating temperatures: Once controversial, the idea of your cold aisles running warmer has become possible through broader operating ranges of IT equipment, and advances in remote monitoring technologies.
  3. Free air cooling versus CRAC or CRAH: Rethinking cooling has shifted the geographic position of data centers around the globe to more northern latitudes, optimizing the number of free cooling days available to the facility.  When evaluating sites, keep in mind a change of venue can have a tremendous impact on the bottom line.
  4. Distributing at higher voltages: Three-phase distribution is more efficient, and implementing it at higher voltages makes it even more so (a quick back-of-the-envelope comparison follows this list).  Manufacturers of IT gear and electrical equipment have spent the past decade making more products available that support higher voltages within the data center space, including 415V PDU solutions.
  5. Workload consolidation through containerization: Containers have allowed for computing in an even smaller footprint, reducing the need to provide cooling for large volumes of space, while virtualization has reduced the number of computing devices needed in the first place.
  6. Power monitoring: Switched intelligent PDUs with remote management capabilities can measure and manage IT infrastructure power by providing the following (a simple polling sketch follows this list):

     - Data collection of power consumption at the outlet, device, and cabinet level
     - Support for reporting, alarming, and smart load-shedding capabilities
     - Environmental monitoring via temperature and humidity sensors
     - Switching off unused or underutilized assets (zombie servers, storage, load balancers, etc.) or rebooting them remotely
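
Two of these points lend themselves to quick illustrations. For point 4, three-phase power scales with the line-to-line voltage, so moving a rack feed from 208 V to 415 V at the same breaker size roughly doubles the power available per circuit (while still presenting about 240 V line-to-neutral to the server power supplies). The circuit size and 80% continuous derating below are illustrative assumptions, not a design recommendation:

```python
import math

def three_phase_kw(line_to_line_volts: float, amps: float) -> float:
    """Real power on a balanced three-phase circuit at unity power factor: sqrt(3) * V * I."""
    return math.sqrt(3) * line_to_line_volts * amps / 1000

# Illustrative 30 A branch circuits, loaded to 80% continuous per common practice.
continuous_amps = 30 * 0.8
print(f"208 V feed: {three_phase_kw(208, continuous_amps):.1f} kW")  # ~8.6 kW
print(f"415 V feed: {three_phase_kw(415, continuous_amps):.1f} kW")  # ~17.3 kW
```

For point 6, the sketch below shows the general shape of a polling loop that collects per-outlet readings and flags likely zombie servers for review. The outlet names, thresholds, and read function are hypothetical stand-ins; a real deployment would query the PDU vendor's SNMP, Redfish, or REST interface and feed the readings into a DCIM or monitoring tool.

```python
# Hypothetical sketch: thresholds and the read function below are stand-ins,
# not a real PDU vendor API.
IDLE_WATTS_THRESHOLD = 60      # below this, the attached device looks suspiciously idle (assumed value)
SAMPLES_BEFORE_FLAGGING = 96   # e.g., 24 hours of 15-minute samples

def read_outlet_watts(outlet: str) -> float:
    """Placeholder: swap in a real query against the PDU's SNMP/Redfish/REST interface."""
    return 0.0  # stand-in value, not a real reading

def flag_zombie_outlets(outlets: list[str], history: dict[str, list[float]]) -> list[str]:
    """Record one power sample per outlet and return outlets that have stayed idle."""
    zombies = []
    for outlet in outlets:
        history.setdefault(outlet, []).append(read_outlet_watts(outlet))
        recent = history[outlet][-SAMPLES_BEFORE_FLAGGING:]
        if len(recent) == SAMPLES_BEFORE_FLAGGING and max(recent) < IDLE_WATTS_THRESHOLD:
            zombies.append(outlet)  # candidate for remote switch-off or reboot, pending review
    return zombies
```

Flagged outlets are a starting point for investigation, not an automatic shut-off list; the value of a switched PDU is that the remediation, whether a reboot or a power-off, can then be done remotely.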

While better data center efficiency through artificial intelligence is already a reality, there are still plenty of measures that sub-hyperscale facilities can take to reduce their PUE.

About the Author

Marc Cram is director of new market development for Server Technology, a brand of Legrand (@Legrand). A technology evangelist, he is driven by a passion to deliver a positive power experience for the data center owner/operator. He earned a bachelor’s degree in electrical engineering from Rice University and has more than 30 years of experience in the field of electronics. Follow him on LinkedIn or @ServerTechInc on Twitter.
