<div dir="ltr"><p>Hey Bryan,</p>
<p>As usual, your opinion, backed by long experience, is beneficial to the Unix/Linux/enterprise IT community.</p>
<p>With all the misinformation about disk performance comparisons with SSDs, I am interested in talking to those actually doing the benchmarks. I have peers in the storage blog community who happily parrot this study or that, but as I chimed in initially, I have not had enough SSDs to run my own tests comparable to standard enterprise performance tests.</p>
<p><b><i><font size="4">I would be interested to see more specifics related to your testing: drive size and manufacturer, test tools, TRIM, smartd, and other kernel settings, including distro version and third-party or manufacturer drivers.</font></i></b></p>
<p>This would be great "blog" material, actually, so I hesitate to request it for the PLUG email list alone. But it's certainly timely in an arena with a good deal of misinformation.</p><p>
I suggest anyone looking into the enterprise storage benefits of SSDs look at the FAQs from SanDisk, Crucial, or OCZ and <u>read between the lines carefully</u>.</p><p>Enterprise solid state drives have yet to be widely available in the shops I contract for. While I am not specifically a storage engineer, I, like many Linux/Unix administrators, must wade through a great deal of industry chaff to get to the core of this relatively new technology. It's not too rare anymore that I have to recommend a solution and clear up a great deal of confusion; one shop had a gung-ho, hobbyist entrepreneur type hell-bent on SSDs in a pair of Dell R610s that were joining a farm of used PE1850s, PE1950s, PE2850s, and PE2950s (purchased from Westech Recyclers). The discussion came down to an evaluation of a multitude of studies comparing SSDs with controller-based Dell RAID, which showed no appreciable benefit to offset a huge financial buy-up option from Dell. I have since worked with two other large shops with the wherewithal to purchase SSDs, where the storage and server architects agreed that SSDs were not yet cost effective in a blade or controller-card enterprise environment.</p>
<p>Since I use Fedora for my desktop (Dell/Sony/HP systems), I referred to <a href="https://www.modnet.org/fedora-optimization-performance-ssd/">https://www.modnet.org/fedora-optimization-performance-ssd/</a>, which discusses TRIM optimization and performance in fair detail. Note: if you use Ubuntu, see <a href="http://askubuntu.com/questions/18903/how-to-enable-trim">http://askubuntu.com/questions/18903/how-to-enable-trim</a>.</p>
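<p>For a quick hands-on check, a mounted filesystem can be trimmed on demand with fstrim from util-linux (the mount point here is an assumption; substitute the one backed by your SSD):<br>sudo fstrim -v /<br>Alternatively, the discard mount option in fstab enables continuous TRIM, provided the kernel, filesystem (e.g., ext4), and drive all support it.</p>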
<p>What I have seen with one Dell Inspiron 17R with an "add-on" 32GB SSD (a Dell special option): when booted into Linux for a performance benchmark, no significant read/write speed improvement was realized. As I mentioned, this system must remain Windows 7 (due to enterprise contracting requirements), where the SSD performance is SLUG SLOW!</p>
<p>Here's what I did and what I found:</p><p>Test System:</p><p>Dell Inspiron 17R, 3rd Generation Intel® Core™ i7-3630QM (6MB cache, up to 3.4GHz)</p>
<p>1TB 5400RPM SATA HDD + 32GB mSATA SSD w/ Intel Smart Response</p>
<p>Test Medium/Process:</p><p>Clean boot into Knoppix, manual mount of the 32GB SSD drive, then run: sudo hdparm -t /dev/sda</p><p>The Dell solid state drive (Crucial?) did not perform any faster than the 1TB SATA drive.</p>
<p>Other optimization considerations (which don't apply in my test model due to simple boot-ISO testing) appear below, in addition to the TRIM optimization.</p><p><b>NoAtime Tweak</b></p><p>By default, Linux writes the last-accessed time attribute to files. This can reduce the life of your SSD by causing a lot of writes. The noatime mount option turns this off.<br>
Open your fstab file:<br>sudo vi /etc/fstab<br><br>Ubuntu uses the relatime option by default. For your SSD partitions (formatted as ext3), replace relatime with noatime in fstab. A reboot is required for the change to take effect.<br>
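For example, an fstab entry with noatime might look like this (the UUID is a placeholder, not a real value):<br>UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx / ext3 noatime,errors=remount-ro 0 1<br>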
<br><b>Ramdisk Tweak</b><br>Using a ramdisk instead of the SSD to store temporary files will speed things up, but will cost you a few megabytes of RAM.<br>Open your fstab file:<br>sudo vi /etc/fstab<br><br>Add this line to fstab to mount /tmp (temporary files) as tmpfs (temporary file system):<br>
tmpfs /tmp tmpfs defaults,noatime,mode=1777 0 0</p><p>Reboot to effect the changes. Running df, you should see a new line with /tmp mounted on tmpfs:</p><p>tmpfs 513472 30320 483152 6% /tmp</p><p><b>Firefox Cache Tweak</b><br>
Firefox puts its cache in your home partition. By moving this cache into RAM you can speed up Firefox and reduce disk writes. Complete the previous tweak to mount /tmp in RAM, and you can put the cache there as well.<br>Open about:config in Firefox. Right-click in an open area and create a new string value called browser.cache.disk.parent_directory. Set the value to /tmp.<br>
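If you prefer to set it from a file, the same preference can go in a user.js in your Firefox profile directory (a sketch; the profile path varies per install):<br>user_pref("browser.cache.disk.parent_directory", "/tmp");<br>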
<br><b>I/O Scheduler Tweak</b><br>An I/O scheduler decides which applications get to write to the disk, and when. Since SSDs are different from spinning hard drives, not all I/O schedulers work well with them.<br>The default I/O scheduler in Linux is cfq (completely fair queuing). cfq works well on hard disks, but it's known to cause problems on the Eee PC's SSD: while writing a large file to disk, any other application that tries to write hangs until the first write finishes.<br>
<br>The I/O scheduler can be changed on a per-drive basis without rebooting. Run this command to get the current scheduler for a disk and the alternative options:<br>cat /sys/block/sda/queue/scheduler<br><br>You'll probably have four options; the one in brackets is the one currently being used by the disk specified in the previous command:<br>
noop anticipatory deadline [cfq]<br><br>Two are better suited to SSDs: noop and deadline. With either of these, in the same situation the application will still hang, but only for a few seconds instead of until the disk is free again. Not great, but much better than cfq.<br>
<br>Here's how to change the I/O scheduler of a disk to deadline:<br>echo deadline > /sys/block/sda/queue/scheduler<br><br>(Note: kernel-tuning commands like this need to be run as root; plain sudo does not work, because it is the shell performing the redirection, not the command, that needs root. Run sudo -i to get a root prompt if you have trouble.)<br>
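An alternative that does work under sudo is to let tee perform the privileged write (same device assumed as above):<br>echo deadline | sudo tee /sys/block/sda/queue/scheduler<br>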
<br>Of course, replace sda with the disk you want to change, and deadline with any of the available schedulers. This change is temporary and will be reset when you reboot.<br><br>If you're using the deadline scheduler, there's another option you can change for the SSD. This command is also temporary and is also a per-disk option:<br>
echo 1 > /sys/block/sda/queue/iosched/fifo_batch<br><br>You can apply the scheduler you want to all your drives by adding a boot parameter in GRUB, but the menu.lst file is regenerated whenever the kernel is updated, which would wipe out your change. Instead, I added commands to rc.local to do the same thing.<br>
<br>Open rc.local:<br>sudo gedit /etc/rc.local<br><br>Put any lines you add before the exit 0. I added six lines for my Eee PC: three to change sda (small SSD), sdb (large SSD), and sdc (SD card) to deadline, and three to set the fifo_batch option on each:<br>
echo deadline > /sys/block/sda/queue/scheduler<br>echo deadline > /sys/block/sdb/queue/scheduler<br>echo deadline > /sys/block/sdc/queue/scheduler<br>echo 1 > /sys/block/sda/queue/iosched/fifo_batch<br>echo 1 > /sys/block/sdb/queue/iosched/fifo_batch<br>
echo 1 > /sys/block/sdc/queue/iosched/fifo_batch<br><br>Reboot to run the new rc.local commands.<br><br><b>Kopt Tweak</b><br>It's possible to add boot parameters to menu.lst that won't be wiped out by an upgrade. Open menu.lst (back up this file before you edit it):<br>
sudo vi /boot/grub/menu.lst<br><br>The kopt line gives the default parameters used to boot Linux:<br># kopt=root=UUID=6722605f-677c-4d22-b9ea-e1fb0c7470ee ro<br><br>Leave this line commented out and append any extra parameters to it. To change the I/O scheduler, use the elevator option:<br>
elevator=deadline<br><br>Append that to the end of the kopt line, then save and close menu.lst. Run update-grub to apply your change:<br>sudo update-grub<br><br><b>Quick and Dirty Performance Test</b><br>
Use hdparm to test the read performance of your disk:<br>sudo hdparm -t /dev/sda<br><br></p><p>See Ubuntu: <a href="https://wiki.ubuntu.com/MagicFab/SSDchecklist">https://wiki.ubuntu.com/MagicFab/SSDchecklist</a></p>
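<p>To compare the two drives back to back, run the test against each device node (sda/sdb here are assumptions; confirm yours with lsblk):<br>sudo hdparm -t /dev/sda # 1TB SATA HDD<br>sudo hdparm -t /dev/sdb # 32GB mSATA SSD</p>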
<p style>On 9 Apr 2013 11:30, "Bryan O'Neal" <<a href="mailto:Bryan.ONeal@theonealandassociates.com" target="_blank">Bryan.ONeal@theonealandassociates.com</a>> wrote:<br></p><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div dir="ltr">It depends on the drive of course, but we have found them to be significantly faster offering over 10X IOPS on average, This is comparison to buffered 15K enterprise dries.</div><div class="gmail_extra"><br>
<br><div class="gmail_quote">On Tue, Apr 2, 2013 at 1:24 PM, Lisa Kachold <span dir="ltr"><<a href="mailto:lisakachold@obnosis.com" target="_blank">lisakachold@obnosis.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div dir="ltr">I have also found that in deference to their claims some of the SSDs are SLOWER than regular enterprise drives. <div>I believe this new technology leaves a lot to be desired.</div></div><div class="gmail_extra">
<br><br><div class="gmail_quote"><div>On Tue, Apr 2, 2013 at 12:27 PM, Derek Trotter <span dir="ltr"><<a href="mailto:expat.arizonan@gmail.com" target="_blank">expat.arizonan@gmail.com</a>></span> wrote:<br>
</div><div><div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
I have a question about SSDs. I've read that, like USB thumb drives, they can be written to only a certain number of times before they fail. What is the expected lifetime of an SSD? They're terribly expensive if they're only going to last 2 or 3 years.<span><font color="#888888"><br>
<br>
Derek<br>
<br>
-- <br>
"I get my copy of the daily paper, look at the obituaries page, and if I’m not there, I carry on as usual."<br>
<br>
Patrick Moore<br>
<br>
---------------------------------------------------<br>
PLUG-discuss mailing list - <a href="mailto:PLUG-discuss@lists.phxlinux.org" target="_blank">PLUG-discuss@lists.phxlinux.org</a><br>
To subscribe, unsubscribe, or to change your mail settings:<br>
<a href="http://lists.phxlinux.org/mailman/listinfo/plug-discuss" target="_blank">http://lists.phxlinux.org/<u></u>mailman/listinfo/plug-discuss</a></font></span></blockquote></div></div></div><span><font color="#888888"><br>
<br clear="all"><div><br></div>-- <br>
<div><br></div><a href="tel:%28503%29%20754-4452" value="+15037544452" target="_blank">(503) 754-4452</a> Android<br><a href="tel:%28623%29%20239-3392" value="+16232393392" target="_blank">(623) 239-3392</a> Skype<br><a href="tel:%28623%29%20688-3392" value="+16236883392" target="_blank">(623) 688-3392</a> Google Voice<br>
**<br><a href="http://it-clowns.com" target="_blank">it-clowns.com</a> <br>Chief Clown<br><br><br><br><br><br><br><br><br><br>
<br><br><br><br>
</font></span></div>
<br>---------------------------------------------------<br>
PLUG-discuss mailing list - <a href="mailto:PLUG-discuss@lists.phxlinux.org" target="_blank">PLUG-discuss@lists.phxlinux.org</a><br>
To subscribe, unsubscribe, or to change your mail settings:<br>
<a href="http://lists.phxlinux.org/mailman/listinfo/plug-discuss" target="_blank">http://lists.phxlinux.org/mailman/listinfo/plug-discuss</a><br></blockquote></div><br></div>
<br>---------------------------------------------------<br>
PLUG-discuss mailing list - <a href="mailto:PLUG-discuss@lists.phxlinux.org" target="_blank">PLUG-discuss@lists.phxlinux.org</a><br>
To subscribe, unsubscribe, or to change your mail settings:<br>
<a href="http://lists.phxlinux.org/mailman/listinfo/plug-discuss" target="_blank">http://lists.phxlinux.org/mailman/listinfo/plug-discuss</a><br></blockquote></div>
</div>