My advice was for home lab use -- if you're putting hardware in a data center, of course use rack-mounted hardware.
First, rack-mounted hardware is going to have more business-oriented features, such as remote management, redundant power supplies, multiple hard drive bays, ECC RAM, etc.
Second, for power usage: a "home lab" system typically sits idle most of the time, only drawing significant CPU power when you're actually using it (say, when Plex is transcoding video). Businesses, on the other hand, aren't going to rack a bunch of servers that sit idle most of the time; they actually use them for something, they get regular and significant traffic, and so the machines are loaded up and rarely idle. Under that kind of load, any power savings you might get from a "regular" consumer system would be negligible.
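To put a rough, purely illustrative number on that (wattages and rates assumed, not measured): if a consumer box idles around 30 W and an older rack server idles around 120 W, the 90 W gap works out to roughly 0.09 kW x 24 h x 365 days = ~790 kWh per year, or about $95/year at $0.12/kWh. That's worth caring about at home, but it disappears once both machines are running near full load around the clock.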
Lastly, unless something has changed recently, in a data center you typically pay for a certain capacity of electricity on your rack, not necessarily the amount of electricity you actually use. As long as you don't exceed your power allotment, you're good. This is typical of commercial power in general.
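As a hypothetical example: if your colo contract gives you a 20 A, 120 V circuit, you're paying for roughly 2.4 kW of capacity whether your gear draws 400 W or 2 kW, so a machine that idles frugally doesn't shrink the bill; all that matters is keeping your peak draw under the allotment.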
On Sat, Jul 22, 2023, at 7:31 PM, Keith Smith via PLUG-discuss wrote:
Hi,
During a past thread someone talked about commercial servers being noisy
and using a lot of electricity. I assume the electricity usage would
mean more heat as well.
Was this about home lab/office use, or was it a general statement?
This raises the question: why not use consumer-grade hardware in a data
center instead of noisy, hot commercial equipment that also uses more
electricity?
Thanks!!
Keith