Domain Name / Hosting

Sesso sesso at djsesso.com
Tue Mar 22 13:07:36 MST 2016


Yeah, I worked at GoDaddy when they had those little boxes. Yes, the industry has gone mostly virtual, which is understandable. However, there are still clients that want actual hardware. I sell just as much hardware in my own business as I do virtual. My day job sells about the same, and we actually own our own datacenters. The clients that buy hardware are usually large companies that can afford it. You are right, many clients don't need it, but they want it lol. They are also signing 3-year contracts on these servers. 



Jason

> On Mar 22, 2016, at 12:45 PM, Michael Butash <michael at butash.net> wrote:
> 
> That (simple/dumb customers), and the fact that most of their customer base really *does not need* dedicated services for what they are doing.  It doesn't fit their business model, or the technology models around that business, when consumer CPUs are still 2-4 cores each while server parts run 12-16 cores per socket, dual socket, and most can take 192-384GB of RAM.
> 
> TLDR:
> 
> Most people probably have the delusion that a "dedicated server" is just that, a server, but the reality was GD's (and others like them) bare metal servers were just generic consumer Shuttle SFF PC boxes on bakers racks as far as the eye could see, which meant no IPMI, no remote console (outside an OS), absolutely nothing pluggable aside from USB, and rather a pain to deal with provisioning- or maintenance-wise.  When someone's system died, a kid in a DC got paged out to rip the box apart and troubleshoot it, which isn't easy on consumer gear.  They were great when launched in ~2004 for cost/power/heat, and until fairly recently still were, but proved ultimately unsustainable: any part that failed required some DC tech to perform surgery on an SFF case packed with parts, even RAID cards, which is simply never fun.  It also ends up costing far too much to maintain over time in total opex at scale.
> 
> Even then, providing dedicated hardware was a challenge, even looking at real (rack) servers as an evolution: dealing with IPMI quirks, and securing the network against users with local root access (harder than one might think across various network hardware).  Once a box was handed off to the customer, keeping them from shooting themselves in the foot simply went out the window, whether that was not backing up their own server, or say doing rm on root, or trying to ARP poison/MITM the LAN around them and drawing security ire.
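
As an aside on the ARP poison/MITM point: providers mostly deal with that on the switch side (port security, dynamic ARP inspection), but a host-side watcher is one way to see it happening. A rough sketch, assuming a Linux box with scapy installed and root privileges to sniff; this is illustrative, not anything GD actually ran:

    #!/usr/bin/env python3
    # Rough host-side ARP watcher: warn when an IP starts answering from a new
    # MAC, which is what an ARP-poisoning neighbor looks like from the LAN.
    # Assumes a Linux host with scapy installed and root privileges to sniff.
    from scapy.all import sniff, ARP

    seen = {}  # ip -> first MAC we saw it answer from

    def check(pkt):
        if pkt.haslayer(ARP) and pkt[ARP].op == 2:   # op 2 = "is-at" reply
            ip, mac = pkt[ARP].psrc, pkt[ARP].hwsrc
            if ip in seen and seen[ip] != mac:
                print(f"possible ARP poisoning: {ip} moved {seen[ip]} -> {mac}")
            seen.setdefault(ip, mac)

    sniff(filter="arp", prn=check, store=False)
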
> 
> Even if hardware is "dedicated", the industry movement is to simply give you a VM on dedicated hardware, adding a hypervisor shim as a control plane on the box, which at the very least makes inventory, provisioning, maintenance, and, more importantly, network control at the raw hardware level easy.  It also allows providers to bill for usage vs. blanket floodgates, so hey, if you want to pay for a whole server of 24 cores and 192GB of RAM on a 10G link, they'll sell you the cycles/bandwidth for sure, and it'll be about the cost of 8 of those Shuttle "dedicated" boxes.  
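
To make the "hypervisor shim" idea concrete: below is a minimal sketch of defining a whole-host guest through libvirt's Python bindings, assuming python-libvirt and a KVM/QEMU host. The name and sizes are made up for illustration (a real guest would also need disks and NICs in the XML); the point is that the provider keeps a control plane underneath the "dedicated" machine.

    #!/usr/bin/env python3
    # Sketch: define one "dedicated" guest that spans the whole box via libvirt,
    # so the provider keeps inventory/console/network control underneath it.
    # Assumes python-libvirt and a local KVM/QEMU host; values are illustrative.
    import libvirt

    DOMAIN_XML = """
    <domain type='kvm'>
      <name>dedicated-guest-01</name>
      <memory unit='GiB'>192</memory>
      <vcpu>24</vcpu>
      <os><type arch='x86_64'>hvm</type></os>
    </domain>
    """

    conn = libvirt.open('qemu:///system')    # provider-side control plane
    dom = conn.defineXML(DOMAIN_XML)         # register the guest persistently
    print("defined:", dom.name(), "active:", dom.isActive())
    conn.close()
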
> 
> For GD, it also meant they could finally get rid of data centers full of odd bakers racks and dumpsters full of old/odd/non-standard consumer Shuttle hardware, and deal instead with standard rack-server form-factor hardware built to be maintained operationally.
> 
> VMs for hosting just make sense; anything dedicated will never be "cheap" out of the pure reality that it doesn't make sense to offer 2-4 core hardware systems, or to maintain them as stand-alone systems.  It's why everyone suddenly became a "cloud" years ago; GD was just late to the party.
> 
> -mb
> 
> 
> On 03/22/2016 11:34 AM, Sesso wrote:
>> I asked an employee about it and he said, "our clients are too dumb to realize that they aren't getting a bare metal server."
>> 
>> Jason
>> 
>> Sent from my iPhone


