SSDs

Lisa Kachold lisakachold at obnosis.com
Tue Apr 9 20:37:48 MST 2013


Hey Bryan,

As usual, your opinion, backed by long experience, is beneficial to the
Unix/Linux/enterprise IT community.

With all the misinformation about disk performance comparisons with
SSDs, I am interested in talking to those actually doing the benchmarks.  I
have peers in the storage blog community who happily parrot this study or
that, but as I chimed in initially, I have not had sufficient SSDs to run
my own tests comparable to standard enterprise performance tests.

*I would be interested to see more specifics related to your testing:
drive size and manufacturer, test tools, TRIM, smartd, and other kernel
settings, including distro version and third-party or manufacturer
drivers.*

This would be great "blog" material, actually, so I hesitate to request it
for PLUG email list ingestion alone.  But it's certainly timely in an
arena with a good deal of misinformation.

I suggest anyone looking into the enterprise storage benefits of SSDs look
at the FAQs from SanDisk, Crucial, or OCZ and *read between the lines
carefully*.

Enterprise solid state drives have yet to be widely available in the shops
I contract for.  While I am not specifically a storage engineer, I, like
many Linux/Unix administrators, must wade through a great deal of industry
chaff to get to the core of this relatively new technology.  It's not too
rare anymore that I have to recommend a solution and clear up a great deal
of confusion, for example for a shop with a gung-ho, hobbyist,
entrepreneur type hell-bent on SSDs in a pair of Dell R610s that were
joining a farm of PE1850s, PE1950s, PE2850s, and PE2950s purchased used
(Westech Recyclers).  The discussion came down to an evaluation of a
multitude of studies of SSDs compared with controller-based Dell RAID, and
showed no appreciable benefit (to offset a huge financial buy-up option
with Dell).  I have since worked with two other large shops with the
wherewithal to purchase SSDs, where the storage and server architects
agreed that SSDs were not yet cost effective in a blade or controller-card
enterprise environment.

Since I use Fedora for my desktop (Dell/Sony/HP systems), I referred to
https://www.modnet.org/fedora-optimization-performance-ssd/, which
discusses TRIM optimization and performance in fair detail.  Note: if you
use Ubuntu, see http://askubuntu.com/questions/18903/how-to-enable-trim.
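For reference, on filesystems that support it (ext4 on recent kernels), TRIM can be enabled per-mount with the discard option in fstab.  This is only a sketch; the UUID and options below are placeholders for your own SSD partition:

```
# /etc/fstab -- example entry; the UUID is a placeholder
UUID=xxxx-xxxx  /  ext4  discard,errors=remount-ro  0  1
```

Alternatively, you can trim mounted filesystems by hand with fstrim from util-linux, e.g. sudo fstrim -v /.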

What I have seen with one Dell Inspiron 17R with an "add-on" 32GB SSD
(a Dell "special option"): when booted into a Linux performance benchmark,
no significant read/write speed improvement was realized.  As I mentioned,
this system must remain Windows 7 (due to enterprise contracting
requirements), where the SSD performance is SLUG SLOW!

Here's what I did and what I found:

Test System:

Dell Inspiron 17R, 3rd Generation Intel® Core™ i7-3630QM (6MB cache, up to
3.4 GHz)

1TB 5400RPM SATA HDD + 32GB mSATA SSD w/ Intel Smart Response

Test Medium/Process:

Clean boot into Knoppix, manually mount the 32GB SSD drive, then run:
sudo hdparm -t /dev/sda

The Dell solid state drive (Crucial?) did not perform any faster than the
1TB SATA.

Other optimization considerations (which don't apply in my test model due
to simple boot-ISO testing) appear below, in addition to the TRIM
optimization.

*NoAtime Tweak*

By default Linux will write the last accessed time attribute to files. This
can reduce the life of your SSD by causing a lot of writes. The noatime
mount option turns this off.
Open your fstab file:
sudo vi /etc/fstab

Ubuntu uses the relatime option by default. For your SSD partitions
(formatted as ext3), replace relatime with noatime in fstab.  A reboot is
required for the change to take effect.
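As a sketch, the edited fstab entry might look like this; the UUID, mount point, and other options are placeholders for your own SSD partition:

```
# before:  UUID=xxxx-xxxx  /  ext3  relatime,errors=remount-ro  0  1
UUID=xxxx-xxxx  /  ext3  noatime,errors=remount-ro  0  1
```

Note that on Linux, noatime also covers directory access times, so nodiratime need not be added separately.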

*Ramdisk Tweak*
Using a ramdisk instead of the SSD to store temporary files will speed
things up, but will cost you a few megabytes of RAM.
Open your fstab file:
sudo vi /etc/fstab

Add this line to fstab to mount /tmp (temporary files) as tmpfs (temporary
file system):
tmpfs /tmp tmpfs defaults,noatime,mode=1777 0 0

Reboot to effect the changes. Running df, you should see a new line with
/tmp mounted on tmpfs:

tmpfs 513472 30320 483152 6% /tmp

*Firefox Cache Tweak*
Firefox puts its cache in your home partition. By moving this cache into
RAM you can speed up Firefox and reduce disk writes. Complete the previous
tweak to mount /tmp in RAM, and you can put the cache there as well.
Open about:config in Firefox. Right click in an open area and create a new
string value called browser.cache.disk.parent_directory. Set the value to
/tmp.
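The same preference can also be preloaded from a user.js file in your Firefox profile directory; this is a sketch, and the profile path below is a placeholder that varies per installation:

```
// ~/.mozilla/firefox/<profile>/user.js -- profile path is a placeholder
user_pref("browser.cache.disk.parent_directory", "/tmp");
```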

*I/O Scheduler Tweak*
An I/O scheduler decides which applications get to write to the disk when.
Since SSDs are different than a spinning hard drive, not all I/O schedulers
work well with SSDs.
The default I/O scheduler in Linux is cfq (completely fair queuing). cfq
works well on hard disks, but it's known to cause problems on the Eee PC's
SSD: while one application is writing a large file to disk, any other
application that tries to write hangs until the first write finishes.

The I/O scheduler can be changed on a per-drive basis without rebooting.
Run this command to get the current scheduler for a disk and the
alternative options:
cat /sys/block/sda/queue/scheduler

You’ll probably have four options, the one in brackets is currently being
used by the disk specified in the previous command:
noop anticipatory deadline [cfq]

Two are better suited to SSDs: noop and deadline. Using one of these in
the same situation, the application will still hang, but only for a few
seconds instead of until the disk is free again. Not great, but much
better than cfq.

Here’s how to change the I/O scheduler of a disk to deadline:
echo deadline > /sys/block/sda/queue/scheduler

(Note: these kernel-tunable commands need to be run as root; plain sudo
does not work because the redirection is performed by your unprivileged
shell, not by the command. Run sudo -i to get a root prompt if you have a
problem.)

Of course you will replace sda with the disk you want to change, and
deadline with any of the available schedulers. This change is temporary and
will be reset when you reboot.

If you’re using the deadline scheduler, there’s another option you can
change for the SSD. This command is also temporary, and it is likewise a
per-disk option:
echo 1 > /sys/block/sda/queue/iosched/fifo_batch

You can apply the scheduler you want to all your drives by adding a boot
parameter in GRUB, but the menu.lst file is regenerated whenever the
kernel is updated, which would wipe out your change. Instead, I added
commands to rc.local to do the same thing.

Open rc.local:
sudo gedit /etc/rc.local

Put any lines you add before the exit 0. I added six lines for my Eee PC,
three to change sda (small SSD), sdb (large SSD), and sdc (SD card) to
deadline, and three to get the fifo_batch option on each:
echo deadline > /sys/block/sda/queue/scheduler
echo deadline > /sys/block/sdb/queue/scheduler
echo deadline > /sys/block/sdc/queue/scheduler
echo 1 > /sys/block/sda/queue/iosched/fifo_batch
echo 1 > /sys/block/sdb/queue/iosched/fifo_batch
echo 1 > /sys/block/sdc/queue/iosched/fifo_batch

Reboot to run the new rc.local options.
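Another way to accomplish the same thing without rc.local, assuming your distro uses udev, is a rule keyed on the kernel's rotational flag, so that only non-rotating disks get deadline. This is a sketch; the file name is arbitrary:

```
# /etc/udev/rules.d/60-ssd-scheduler.rules (file name is arbitrary)
# Match non-rotational block devices (SSDs) and set the deadline scheduler
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="deadline"
```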


*Kopt Tweak*
It’s possible to add boot parameters to menu.lst that won’t be wiped out by
an upgrade. Open menu.lst  (Backup this file before you edit it):
sudo vi /boot/grub/menu.lst

The kopt line gives the default parameters to boot Linux:
# kopt=root=UUID=6722605f-677c-4d22-b9ea-e1fb0c7470ee ro

Leave this line commented and add any extra parameters to it.  To change
the I/O scheduler, use the elevator option:
elevator=deadline

Append that to the end of the kopt line, then save and close menu.lst.
Run update-grub to apply your change to the menu:
sudo update-grub

*Quick and Dirty Performance Test*
Using hdparm to test the read performance of your disk:
sudo hdparm -t /dev/sda

See Ubuntu: https://wiki.ubuntu.com/MagicFab/SSDchecklist
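Note that hdparm -t only exercises reads. For a rough sequential-write number, dd with fdatasync is a common companion test; the file path below is just an example, and results on a tmpfs-backed /tmp or a heavily cached mount will be misleading:

```shell
# Write 64 MB and force it to disk so the page cache doesn't inflate the
# figure; dd prints the elapsed time and throughput when it finishes.
dd if=/dev/zero of=/tmp/ssd-write-test bs=1M count=64 conv=fdatasync
rm /tmp/ssd-write-test
```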

On 9 Apr 2013 11:30, "Bryan O'Neal" <Bryan.ONeal at theonealandassociates.com>
wrote:

> It depends on the drive of course, but we have found them to be
> significantly faster, offering over 10X the IOPS on average. This
> is in comparison to buffered 15K enterprise drives.
>
>
> On Tue, Apr 2, 2013 at 1:24 PM, Lisa Kachold <lisakachold at obnosis.com>wrote:
>
>> I have also found that, contrary to their claims, some of the SSDs are
>> SLOWER than regular enterprise drives.
>> I believe this new technology leaves a lot to be desired.
>>
>>
>> On Tue, Apr 2, 2013 at 12:27 PM, Derek Trotter <expat.arizonan at gmail.com>wrote:
>>
>>> I have a question about SSDs.  I've read that they, like USB thumb
>>> drives, can be written to only a certain number of times before they
>>> fail.  What is the expected lifetime of an SSD?  They're terribly
>>> expensive if they're only going to last 2 or 3 years.
>>>
>>> Derek
>>>
>>> --
>>> "I get my copy of the daily paper, look at the obituaries page, and if
>>> I’m not there, I carry on as usual."
>>>
>>> Patrick Moore
>>>
>>> ---------------------------------------------------
>>> PLUG-discuss mailing list - PLUG-discuss at lists.phxlinux.org
>>> To subscribe, unsubscribe, or to change your mail settings:
>>> http://lists.phxlinux.org/mailman/listinfo/plug-discuss
>>
>>
>>
>>
>> --
>>
>> (503) 754-4452 Android
>> (623) 239-3392 Skype
>> (623) 688-3392 Google Voice
>> it-clowns.com
>> Chief Clown
>>
>>
>>
>
>
>

