Re: Domain Name / Hosting

Author: Michael Butash
Date:  
To: plug-discuss
Subject: Re: Domain Name / Hosting

That (simple/dumb customers), plus the fact that most of
      their customer base really *does not need* dedicated services
      for what they are doing.  It doesn't fit their business model,
      or the technology model around that business, when consumer
      cpus are still 2-4 cores each while server cpus run 12-16 per
      socket, dual socket, and most boards can take 192-384gb of ram.


      TLDR:


      Most people probably have this delusion that a "dedicated server"
      is just that, a server, but the reality was that GD's (and others
      like them) bare-metal servers were just generic consumer Shuttle
      SFF pc boxes on bakers racks as far as the eye could see, which
      meant no IPMI, no remote console (outside an os), absolutely
      nothing pluggable aside from usb, and a real pain to deal with
      provisioning- or maintenance-wise.  When someone's system died, a
      kid in a dc got paged out to rip the box apart and troubleshoot
      it, which isn't easy on consumer gear.  They were great when
      launched in ~2004 for cost/power/heat, and until fairly recently
      still were, but proved ultimately unsustainable: any part that
      failed required some dc tech to perform surgery on an SFF case
      packed with parts, even raid cards, which is simply never fun.
      It also ends up costing far too much to maintain over time in
      total opex at scale.


      Even then, providing dedicated hardware was a challenge even
      looking at real (rack) servers as an evolution: dealing with
      ipmi quirks, and securing the network from users with local
      root access (harder than one might think across various network
      hardware).  Once a box was handed off to the customer, all
      control simply went out the window, so there was no keeping them
      from shooting themselves in the foot, like not backing up their
      own server, running rm on root, or trying to arp poison/mitm the
      lan around them and drawing security ire.


      Even if the hardware is "dedicated", the industry movement is to
      simply give you a vm on dedicated hardware, adding a hypervisor
      shim as a control plane on the hardware, at the very least making
      inventory, provisioning, maintenance, and, more importantly,
      network control at the raw hardware level easy.  It also allows
      providers to bill for usage vs. blanket floodgates, so hey, if
      you want to pay for a whole server of 24 cores and 192gb of ram
      on a 10g link, they'll happily sell you the cycles/bandwidth,
      and it'll be about the cost of 8 of those Shuttle "dedicated"
      boxes.
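      The usage-billing point above can be sketched with some
      back-of-the-envelope arithmetic.  The per-unit rates below are
      made-up placeholders (not any provider's real pricing); the point
      is only that per-resource billing scales smoothly from a small
      slice up to the whole box, where a flat "dedicated server" price
      cannot.

```python
# Back-of-the-envelope usage-based billing sketch.
# All rates are hypothetical placeholders, not real provider pricing.

def monthly_usage_bill(vcpus, ram_gb, bandwidth_gbps,
                       vcpu_rate=10.0, ram_rate=2.0, bw_rate=50.0):
    """Bill per resource actually allocated, rather than a flat
    'dedicated box' price."""
    return vcpus * vcpu_rate + ram_gb * ram_rate + bandwidth_gbps * bw_rate

# The "whole server" from the post: 24 cores, 192gb ram, 10g link.
full_box = monthly_usage_bill(24, 192, 10)

# A small slice of the same box, billed only for what it uses.
small_vm = monthly_usage_bill(2, 4, 1)

print(full_box, small_vm)
```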


      For GD, it also meant finally getting rid of data centers full
      of odd bakers racks, and dumpsters full of old/odd/non-standard
      consumer Shuttle hardware, in favor of standard rack-server
      form-factor hardware built to be maintained operationally.


      VMs for hosting just make sense; anything dedicated will never
      be "cheap", for the simple reality that it doesn't make sense to
      offer 2-4 core hardware systems, or to maintain them as
      stand-alone systems.  That's why everyone suddenly became a
      "cloud" years ago; GD was just late to the party.


      -mb



      On 03/22/2016 11:34 AM, Sesso wrote:

I asked an employee about it and he said, "our clients are
        too dumb to realize that they aren't getting a bare metal
        server."



Jason

        Sent from my iPhone

---------------------------------------------------
PLUG-discuss mailing list -
To subscribe, unsubscribe, or to change your mail settings:
http://lists.phxlinux.org/mailman/listinfo/plug-discuss