Domain Name / Hosting

Michael Butash michael at butash.net
Tue Mar 22 14:23:38 MST 2016


A buddy of mine had a dedicated server somewhere that he used to run 
ESXi on, since he wanted his own farm of VMs - it makes sense for 
that, if expensive at ~$300/mo.  Another had a beefy Xen VM instance, 
ran OpenVZ inside of it, and accomplished the same thing, if with more 
sharing/strain/micromanagement, and much cheaper than "dedicated" as a 
lab (he even resold some of it).

Then you get the ignorant gamers who want lower ping, yet don't 
understand the concept of a queue or a buffer, or that their "cheap 
but dedicated" provider uses low-end, oversubscribed switches for a 
network, causing the same latency whether it's a VM or dedicated.  
Sadly, they'd likely see better networking out of a VM sitting on a 
10g port/switch, as long as their provider doesn't oversubscribe the 
host box drastically.
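
If you want to actually see that effect rather than argue about it, 
tail latency is the thing to measure, not a single ping.  Here's a 
rough Python 3 sketch (the host/port are placeholders - point it at a 
box you control) that times TCP handshakes and prints median vs. 
99th-percentile latency; an oversubscribed switch or host tends to 
show up as a fat tail rather than a bad average:

#!/usr/bin/env python3
# Rough latency sampler: times TCP handshakes to a host/port you
# control and prints median vs. tail latency.  Queueing on an
# oversubscribed switch shows up in the tail (p99), not the average.
import socket
import statistics
import time

HOST = "example.com"   # placeholder: your VM or "dedicated" box
PORT = 22              # any open TCP port (ssh here)
SAMPLES = 200

def connect_ms(host, port, timeout=2.0):
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

samples = []
for _ in range(SAMPLES):
    try:
        samples.append(connect_ms(HOST, PORT))
    except OSError:
        pass  # treat drops as missing samples rather than crashing
    time.sleep(0.05)

samples.sort()
if samples:
    p50 = statistics.median(samples)
    p99 = samples[int(len(samples) * 0.99) - 1]
    print("samples={} p50={:.1f}ms p99={:.1f}ms".format(len(samples), p50, p99))
else:
    print("no successful connections")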

I suppose if they're paying, someone might as well be the beneficiary 
of their abundance of cash, in any regard.

The point was more the cost vs. benefit of going dedicated vs. VM, or 
at least considering a VM in lieu of what one might get with something 
"dedicated", post sticker shock of it.

I like/use DigitalOcean; they made GD's prices look absurd by 
comparison when I was looking initially.  With SSDs, even a 
single-core system was great to use, even with intensive I/O (like 
Minecraft).
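
For what it's worth, standing one up is just an API call.  A minimal 
Python sketch against DigitalOcean's v2 API - the droplet name is made 
up, the region/size/image slugs are examples that may have changed 
(check their API docs), and you'd supply your own token:

#!/usr/bin/env python3
# Minimal sketch: create a DigitalOcean droplet via the v2 API.
# Needs the 'requests' package and an API token from your DO account.
import os
import requests

API = "https://api.digitalocean.com/v2/droplets"
TOKEN = os.environ["DO_TOKEN"]  # export DO_TOKEN=... before running

droplet = {
    "name": "minecraft-1",        # hypothetical droplet name
    "region": "nyc3",             # example region slug
    "size": "1gb",                # example size slug (single core, SSD-backed)
    "image": "ubuntu-14-04-x64",  # example image slug
}

resp = requests.post(API, json=droplet,
                     headers={"Authorization": "Bearer {}".format(TOKEN)})
resp.raise_for_status()
print("created droplet id", resp.json()["droplet"]["id"])

The same thing can be done from their web panel or the doctl CLI; the 
API just makes it easy to script a lab up and down as needed.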

-mb



On 03/22/2016 01:07 PM, Sesso wrote:
> yeah I worked at godaddy when they had those little boxes. Yes, the industry has gone mostly virtual which is understandable. However, there are still clients that want actual hardware. I sell just as much hardware in my own business as I do Virtual. My day job sells about the same and we actually own our own datacenters. The clients that buy hardware are usually large companies that can afford it. You are right, many clients don’t need it but they want it lol. They are signing 3 year contracts on these servers also.
>
>
>
> Jason
>
>> On Mar 22, 2016, at 12:45 PM, Michael Butash <michael at butash.net> wrote:
>>
>> That (simple/dumb customers), plus the fact that most of their customer base really *does not need* dedicated services for what they are doing.  It doesn't meet their business model, or the technology models around that business, when consumer CPUs are still 2-4 cores each while server parts are at 12-16 per socket, dual socket, and most can take 192-384gb of RAM.
>>
>> TLDR:
>>
>> Most people probably have this delusion that a "dedicated server" is just that, a server, but the reality was that GD's (and others like them) bare-metal servers were just generic consumer Shuttle SFF PC boxes on bakers racks as far as the eye could see, which meant no IPMI, no remote console (outside an OS), absolutely nothing pluggable aside from USB, and rather a pain to deal with provisioning- or maintenance-wise.  When someone's system died, a kid in a DC got paged out to rip the box apart and troubleshoot it, which isn't easy on consumer gear.  They were great when launched in ~2004 for cost/power/heat, and until fairly recently still were, but proved ultimately unsustainable, as any part that failed required some DC tech to perform surgery on an SFF case packed with parts, even raid cards, which is simply never fun.  It also ends up costing far too much to maintain over time in total opex at scale.
>>
>> Even then, providing dedicated hardware was a challenge, even looking at real (rack) servers as an evolution: dealing with IPMI quirks, and securing the local network from root-access users (harder than one might think across various network hardware).  And once a box was handed off to the customer, any hope of keeping them from shooting themselves in the foot simply went out the window - not backing up their own server, doing rm on root, or trying to arp poison/mitm the LAN around them and drawing security ire.
>>
>> Even when hardware is "dedicated", the industry movement is to simply give you a VM on dedicated hardware, adding a hypervisor shim as a control plane on the hardware, at the very least making inventory, provisioning, maintenance, and, more importantly, network control at the raw hardware level easy.  It also allows providers to bill for usage vs. opening blanket floodgates - so hey, if you want to pay for a whole server of 24 cores and 192gb of RAM on a 10g link, they'll sell you the cycles/bandwidth for sure, and it'll be about the cost of 8 of those Shuttle "dedicated" boxes.
>>
>> For GD, it also meant they could finally get rid of data centers full of odd bakers racks and dumpsters full of old/odd/non-standard consumer Shuttle hardware, and deal instead with standard rack-server form-factor hardware built to be maintained operationally.
>>
>> VMs for hosting just make sense; anything dedicated will never be "cheap", for the pure reality that it doesn't make sense to offer 2-4 core hardware systems, or to maintain them as stand-alone systems.  That's why everyone suddenly became a "cloud" years ago - GD was just late to the party.
>>
>> -mb
>>
>>
>> On 03/22/2016 11:34 AM, Sesso wrote:
>>> I asked an employee about it and he said, "our clients are too dumb to realize that they aren't getting a bare metal server."
>>>
>>> Jason
>>>
>>> Sent from my iPhone
>> ---------------------------------------------------
>> PLUG-discuss mailing list - PLUG-discuss at lists.phxlinux.org
>> To subscribe, unsubscribe, or to change your mail settings:
>> http://lists.phxlinux.org/mailman/listinfo/plug-discuss



