I had 1 GB of swap.
It seems Ohava realized they had made a mistake when they provisioned a
large group of nodes in October: they (1) rebuilt my node and (2) publicly
announced on their web site an offer to upgrade everyone to 40 GB of disk
space. My node was rebuilt last night, and df now shows:
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/ubuntu--vg-root   19G  1.8G   16G  10% /
none                         4.0K     0  4.0K   0% /sys/fs/cgroup
udev                         235M  4.0K  235M   1% /dev
tmpfs                         50M  368K   49M   1% /run
none                         5.0M     0  5.0M   0% /run/lock
none                         246M     0  246M   0% /run/shm
none                         100M     0  100M   0% /run/user
/dev/vda1                    236M   68M  156M  31% /boot
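
If anyone wants to double-check how the 20 GB is carved up, here is a
minimal sketch at the LVM layer, assuming the stock Ubuntu layout (a
volume group named ubuntu-vg; names on your node may differ):

    sudo pvs   # physical volumes backing the group, with total size
    sudo vgs   # volume group total and free space
    sudo lvs   # the root and swap logical volumes carved out of it

The root LV, the swap LV, and the 236M /boot partition (which sits
outside LVM) should together account for essentially all of the disk.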
There is also 1 GB of swap, according to top:
top - 07:36:27 up 8:30, 1 user, load average: 0.00, 0.01, 0.05
Tasks: 72 total, 1 running, 71 sleeping, 0 stopped, 0 zombie
%Cpu(s):  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem: 501804 total, 484280 used, 17524 free, 53080 buffers
KiB Swap: 1044476 total, 3472 used, 1041004 free. 353988 cached Mem
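
For a quicker read of just the memory and swap totals, the standard
tools would show the same numbers (assuming stock Ubuntu, where both are
installed by default):

    free -m     # memory and swap totals in MiB
    swapon -s   # per-device swap summary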
The memory is a little short (it should be 512 MB), but that comes down
to a disagreement over whether 1 KB = 1,000 bytes or 1 KB = 1,024 bytes,
which I won't win.
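
Roughly, the same decimal-versus-binary arithmetic covers both numbers
(my back-of-the-envelope figures, not Ohava's):

    512 MB RAM:  512 * 10^6 bytes / 1024  = 500,000 KiB  (top reports 501,804 KiB)
    20 GB disk:  20 * 10^9 bytes / 2^30   ~ 18.63 GiB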
So, it seems like an "honest" mistake on their part, unless I am missing
something.
Thanks!
Mark
On Thu, Oct 16, 2014 at 10:03 PM, Sesso <jason@tier1media.net> wrote:
> How much swap do you have?
>
>
> On Oct 16, 2014, at 9:41 PM, Mark Phillips <mark@phillipsmarketing.biz>
> wrote:
>
> I signed up for a free VPS on Ohava - 20GB is what is advertised. When I
> logged into the system, df -h showed this:
>
> Filesystem                   Size  Used Avail Use% Mounted on
> /dev/mapper/ubuntu--vg-root  6.6G  1.8G  4.6G  28% /
> none                         4.0K     0  4.0K   0% /sys/fs/cgroup
> udev                         235M  4.0K  235M   1% /dev
> tmpfs                         50M  368K   49M   1% /run
> none                         5.0M     0  5.0M   0% /run/lock
> none                         246M     0  246M   0% /run/shm
> none                         100M     0  100M   0% /run/user
> /dev/vda1                    236M   68M  156M  31% /boot
>
> I queried the support group, and they sent me instructions to add 10
> more GB.
>
> df now shows:
>
> Filesystem                   Size  Used Avail Use% Mounted on
> /dev/mapper/ubuntu--vg-root   17G  1.8G   14G  12% /
> none                         4.0K     0  4.0K   0% /sys/fs/cgroup
> udev                         235M  4.0K  235M   1% /dev
> tmpfs                         50M  368K   49M   1% /run
> none                         5.0M     0  5.0M   0% /run/lock
> none                         246M     0  246M   0% /run/shm
> none                         100M     0  100M   0% /run/user
> /dev/vda1                    236M   68M  156M  31% /boot
>
> The support group said:
>
>
> Our apologies on the confusion. This is a current bug in the machines
> being spun up, but we definitely offer (and want to help you get) the
> full amount of space. Every instance gets 20GB partitioned to them.
> There is some overhead in some of the other partitions of disk space,
> so / won't ever show the full 20GB, as small parts of the 20GB are
> allocated elsewhere.
>
> The instructions sent to you are for extending the / partition an
> additional 10GB. Since there was already some amount of storage there,
> the result after completing the instructions (and the correction to the
> error that we made when we originally sent you instructions on expanding
> the lvm volume..."sudo lvextend -L+10G /dev/ubuntu-vg/root") should get
> you as close as possible to 20GB (~18.75GB) on /root while allowing for
> the overhead.
> I understand the disk-size calculation issue: a 20 GB drive is really
> only 18.63 GiB. But shouldn't df show 18.63 GB and not 17 GB? Is the
> discrepancy (1.63 GB, or 8.75% of the drive) due to formatting the disk
> and installing Ubuntu Server?
>
> Thanks,
>
> Mark
---------------------------------------------------
PLUG-discuss mailing list - PLUG-discuss@lists.phxlinux.org
To subscribe, unsubscribe, or to change your mail settings:
http://lists.phxlinux.org/mailman/listinfo/plug-discuss