Re: Domain Name / Hosting

Author: James Dugger
Date:  
To: Main PLUG discussion list
Subject: Re: Domain Name / Hosting
I'm a developer at GoDaddy on one of those shared hosting platform
teams. I haven't seen any "PCs on bakers racks"; those must be a thing of
the past. What I do see are Dell PowerEdge rack servers, "fully pluggable".
We don't buy servers in single quantities; we buy whole preconfigured 42U
racks at a time. The racks are shipped directly to our datacenters in
Arizona, the US East Coast, Europe, and Asia.


Our cloud offering just went live yesterday, at prices comparable to
DigitalOcean. We are partnering with Bitnami for packaged server builds,
and the cloud is connected to our domain services. See the reviews below.

http://www.techmeme.com/160321/p6#a160321p6

http://techcrunch.com/2016/03/21/godaddy-debuts-aws-style-servers-and-apps-to-build-test-and-scale-cloud-services/

Some things can be more important than just being cheap, like uptime and
speed. GoDaddy ranks among the top 3 or 4 fastest providers on both
Windows and Linux platforms.

http://cloudspectator.com/web-host-providers-performance-ranking-a-six-month-summary/



On Tue, Mar 22, 2016 at 1:07 PM, Sesso <> wrote:

> Yeah, I worked at GoDaddy when they had those little boxes. Yes, the
> industry has gone mostly virtual, which is understandable. However, there
> are still clients that want actual hardware. I sell just as much hardware
> in my own business as I do virtual. My day job sells about the same, and we
> actually own our own datacenters. The clients that buy hardware are usually
> large companies that can afford it. You are right, many clients don't need
> it, but they want it, lol. They are also signing 3-year contracts on these
> servers.
>
>
>
> Jason
>
> > On Mar 22, 2016, at 12:45 PM, Michael Butash <> wrote:
> >
> > That (simple/dumb customers), and the fact that most of their customer
> base really *does not need* dedicated services for what they are doing. It
> doesn't fit their business model, or the technology models around that
> business, when consumer CPUs are still 2-4 cores each while server parts
> run 12-16 per socket, dual socket, and most can take 192-384 GB of RAM.
> >
> > TLDR:
> >
> > Most people probably have the delusion that a "dedicated server" is
> just that, a server, but the reality was that GD's (and others like them)
> bare metal servers were just generic consumer Shuttle SFF PC boxes on
> bakers racks as far as the eye could see. That meant no IPMI, no remote
> console (outside an OS), absolutely nothing pluggable aside from USB, and
> rather a pain to deal with provisioning- or maintenance-wise. When
> someone's system died, a kid in a DC got paged out to rip the box apart
> and troubleshoot it, which isn't easy on consumer gear. They were great
> when launched in ~2004 for cost/power/heat, and until fairly recently
> still were, but they proved ultimately unsustainable: any part that failed
> required some DC tech to perform surgery on an SFF case packed with parts,
> even RAID cards, which is simply never fun. It also ends up costing far
> too much to maintain over time in total opex at scale.
> >
> > Even then, providing dedicated hardware was a challenge, even with real
> (rack) servers as an evolution: dealing with IPMI quirks, and securing
> the network from local root-access users (harder than one might think
> across various network hardware). Once handed off to the customer, that
> largely went out the window trying to keep them from shooting themselves
> in the foot, like not backing up their own server, or say, running rm on
> root, or trying to ARP poison/MITM the LAN around them and drawing
> security ire.
> >
> > Even if hardware is "dedicated", the industry trend is to simply give
> you a VM on dedicated hardware, adding a hypervisor shim as a control
> plane on the hardware, which at the very least makes inventory,
> provisioning, maintenance, and, more importantly, network control at the
> raw hardware level easy. It also allows providers to bill for usage vs.
> blanket floodgates, so hey, if you want to pay for a whole server of 24
> cores and 192 GB of RAM on a 10G link, they'll sell you the
> cycles/bandwidth for sure, and it'll be about the cost of 8 of those
> "dedicated" Shuttle boxes.
> >
> > For GD, it also meant finally getting rid of data centers full of odd
> bakers racks and dumpsters full of old/odd/non-standard consumer Shuttle
> hardware, in favor of standard rack-server form-factor hardware built to
> be maintained operationally.
> >
> > VMs for hosting just make sense; anything dedicated will never be
> "cheap", for the simple reason that it doesn't make sense to offer 2-4
> core hardware systems, or to maintain them as stand-alone systems. That's
> why everyone suddenly became a "cloud" years ago; GD was just late to
> the party.
> >
> > -mb
> >
> >
> > On 03/22/2016 11:34 AM, Sesso wrote:
> >> I asked an employee about it and he said, "Our clients are too dumb to
> realize that they aren't getting a bare metal server."
> >>
> >> Jason
> >>
> >> Sent from my iPhone
> > ---------------------------------------------------
> > PLUG-discuss mailing list -
> > To subscribe, unsubscribe, or to change your mail settings:
> > http://lists.phxlinux.org/mailman/listinfo/plug-discuss
>
>




--
James

LinkedIn: http://www.linkedin.com/pub/james-h-dugger/15/64b/74a/