(OT) Questions About SSDs for a Laptop

Michael Butash michael at butash.net
Fri Sep 5 06:32:53 MST 2014


Not to discourage your learning, but here's how I build my SSDs on both
my desktop and laptops now universally (assuming I can cram 2 disks
in).  I've refined this over several years of trial and error with SSDs
and various OSes.  I made a variation for UEFI booting for my asus that
wouldn't do legacy, but this should work for any non-uefi/mbr build.

I wouldn't mind some peer review on the process anyway; it's well
annotated with why I did things so I can remember later.  This was for
my last stab at Ubuntu, which never was fully successful; I then just
applied it to Mint, and my sanity was much better for it.

<doc>

## boot the ubuntu desktop cd, when at desktop, hit ctrl-alt-t and spawn a console

## if you need to wipe the disks, use a security erase on them
## you will sometimes need to unfreeze drives, a suspend and awaken works

sudo hdparm -I /dev/sda | grep froz
sudo hdparm -I /dev/sdb | grep froz
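
## if a drive shows "frozen" here, a quick suspend/resume usually
## unfreezes it so the security commands below are accepted (optional
## sketch; the 10-second rtcwake timer is just an example)

sudo rtcwake -m mem -s 10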

## make sure these are the right disks, they will be wiped.

sudo hdparm --user-master u --security-set-pass PasSWorD /dev/sda
sudo hdparm --user-master u --security-erase PasSWorD /dev/sda

sudo hdparm --user-master u --security-set-pass PasSWorD /dev/sdb
sudo hdparm --user-master u --security-erase PasSWorD /dev/sdb
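
## optional sanity check - after the erase, the Security section of the
## identify output should show "not enabled" again before moving on

sudo hdparm -I /dev/sda | grep -A8 Security
sudo hdparm -I /dev/sdb | grep -A8 Security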

## at console, issue the following to install mdadm:

sudo apt-get install mdadm

## next, issue fdisk to partition the disks from terminal:

## http://askubuntu.com/questions/8592/how-do-i-align-my-partition-table-properly

## block size (file system block size, ex. 4096 or 4k)
## erase block size (usually 4096 or 4k)
## stripe size (same as mdadm chunk size, ex. 128k)
## stride: stripe size / block size (ex. 128k / 4k = 32)
## stripe-width: stride * #-of-data-disks (ex. 2-disk RAID 1 is 1 data disk; 32*1 = 32)

sudo fdisk -S32 -H32 -u /dev/sda
n
p

+250M

n
p



a
1
t
1
da
t
2
da
p
w

sudo fdisk -S32 -H32 -u /dev/sdb
n
p

+250M

n
p
2



a
1
t
1
da
t
2
da
p
w
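
## optional check that both disks ended up with matching, aligned
## partition tables before building the raid

sudo fdisk -lu /dev/sda /dev/sdb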

## build the raid now using mdadm

mdadm --create /dev/md0 --auto=yes --force --name=boot0 --level=1 --chunk=128 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --auto=yes --force --name=spv0 --level=1 --chunk=128 --raid-devices=2 /dev/sda2 /dev/sdb2

## or if building with one missing

mdadm --create /dev/md0 --auto=yes --force --name=boot0 --level=1 --chunk=128 --raid-devices=2 /dev/sda1 missing
mdadm --create /dev/md1 --auto=yes --force --name=spv0 --level=1 --chunk=128 --raid-devices=2 /dev/sda2 missing
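
## optional check that the arrays assembled (a raid1 resync may still be running)

cat /proc/mdstat
mdadm --detail /dev/md0
mdadm --detail /dev/md1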

## create the secure physical volume (spv0), change $pw-user*.  LUKS provides
## key slots 0-7: 0-3 admin, 4-7 user.
## we control slots 0-3, users are given 4-7 (whether multi-user or a single local user)

## http://java-hamster.blogspot.com/2012/04/aligning-partitions-lvm-and-encrypted.html
## http://newspaint.wordpress.com/2012/09/21/full-disk-encryption-on-xubuntu-precise-12-04/
## http://security.stackexchange.com/questions/40208/recommended-options-for-luks-cryptsetup
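
## optionally benchmark the ciphers first (newer cryptsetup only) - on an
## aes-ni capable cpu, aes-xts should come out well ahead of the default

cryptsetup benchmark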

## default, uses essiv-256, cpu-intensive

## cryptsetup --align-payload=8192 luksFormat /dev/md/spv0
## Intel AES-NI optimized, sha256, 256bit

cryptsetup --align-payload=8192 -c aes-xts-plain64 -h sha256 -s 256 luksFormat /dev/md1
YES
$pw-slot0-diskmaster0

## add secondary user key, change $pw-user*

cryptsetup luksAddKey --key-slot 4 /dev/md1
$pw-slot4-user0

## confirm there are two slots, the master (0) and user (4)

cryptsetup luksDump /dev/md1

## remove a key slot (e.g. the user slot 4)

cryptsetup luksKillSlot /dev/md1 4

## unlock the spv0

cryptsetup luksOpen /dev/md/spv0 spv0

<pass>
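
## optional check that the mapping is open and using the expected cipher

cryptsetup status spv0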

## create pv for lvm on spv0

## http://java-hamster.blogspot.com/2012/04/aligning-partitions-lvm-and-encrypted.html

pvcreate --dataalignment 4m /dev/mapper/spv0

## create volgroup $hostname-vg0 on spv0 - use the hostname of the local device (tpm locked anyways in bios - theoretically)

## http://java-hamster.blogspot.com/2012/04/aligning-partitions-lvm-and-encrypted.html

vgcreate $hostname-vg0 -s 4m /dev/mapper/spv0

## create your logical volumes

lvcreate --size 3G --name root0 $hostname-vg0
lvcreate --size 3G --name swap0 $hostname-vg0
lvcreate --size 9G --name usr0 $hostname-vg0
lvcreate --size 3G --name var0 $hostname-vg0
lvcreate --size 1G --name varlog0 $hostname-vg0
lvcreate --size 64G --name home0 $hostname-vg0
lvcreate --size 64G --name ext0 $hostname-vg0
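
## optional check of the physical/volume/logical volumes just created
## (names assume the $hostname-vg0 group from above)

pvs
vgs
lvs $hostname-vg0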

## make ext4 partitions, match stripe/stride with md chunks
## need to add bit about setting inode counts here...

mkfs.ext2 -b 4096 -E stride=32,stripe-width=32 /dev/md0
mkfs.ext4 -b 4096 -E stride=32,stripe-width=32 /dev/mapper/$hostname--vg0-root0
mkfs.ext4 -b 4096 -E stride=32,stripe-width=32 /dev/mapper/$hostname--vg0-usr0
mkfs.ext4 -b 4096 -E stride=32,stripe-width=32 /dev/mapper/$hostname--vg0-var0
mkfs.ext4 -b 4096 -E stride=32,stripe-width=32 /dev/mapper/$hostname--vg0-varlog0
mkfs.ext4 -b 4096 -E stride=32,stripe-width=32 /dev/mapper/$hostname--vg0-home0
mkfs.ext4 -b 4096 -E stride=32,stripe-width=32 /dev/mapper/$hostname--vg0-ext0

tune2fs -c0 -i0 /dev/md0
tune2fs -c0 -i0 /dev/mapper/$hostname--vg0-root0
tune2fs -c0 -i0 /dev/mapper/$hostname--vg0-usr0
tune2fs -c0 -i0 /dev/mapper/$hostname--vg0-var0
tune2fs -c0 -i0 /dev/mapper/$hostname--vg0-varlog0
tune2fs -c0 -i0 /dev/mapper/$hostname--vg0-home0
tune2fs -c0 -i0 /dev/mapper/$hostname--vg0-ext0

## mkswap

mkswap /dev/mapper/$hostname--vg0-swap0

## make/mount target dir
mkdir /target
mount /dev/mapper/$hostname--vg0-root0 /target
mkdir /target/boot
mkdir /target/usr
mkdir /target/var
mkdir /target/home
mkdir /target/mnt
mkdir /target/mnt/ext0

mount /dev/mapper/$hostname--vg0-usr0 /target/usr
mount /dev/mapper/$hostname--vg0-var0 /target/var
mount /dev/mapper/$hostname--vg0-home0 /target/home
mount /dev/mapper/$hostname--vg0-ext0 /target/mnt/ext0
mount /dev/md0 /target/boot
mkdir /target/var/log
mount /dev/mapper/$hostname--vg0-varlog0 /target/var/log
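
## quick check that everything is mounted under /target where expected
## before launching the installer

mount | grep /target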

## continue the installer and get to partition, use "manual"

## enable all the partitions and set the mount structure

## set "yes" to boot failed raid

## continue installing

## at grub, use mbr to install

## before rebooting, vi the /etc/crypttab file and add contents for uuid

ls -la /dev/disk/by-uuid | grep md1 | awk '{ print $9 }' >> /target/etc/crypttab
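
## alternatively, blkid prints just the UUID if the ls/awk output needs
## cleanup (you still have to shape it into the crypttab line below)

blkid -s UUID -o value /dev/md1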

<>
# <target name> <source device>         <key file>      <options>
## example
## spv0    UUID=5f694a41-f8c6-4da1-8679-8263e8642eb1    none    luks,retry=1,discard
spv0    UUID=$uuid-here     none    luks,retry=1,discard

<>

## if you need/want to remount and chroot to install or fix, add the device dirs to the chroot and enter it

mount --rbind /proc /target/proc
mount --rbind /sys /target/sys
mount --rbind /dev /target/dev
mount --rbind /run /target/run

chroot /target
bash

apt-get update
apt-get install mdadm cryptsetup lvm2

## make your fstab and modify the /boot entry to use the correct uuid from the appended line

ls -la /dev/disk/by-uuid | grep md0 | awk '{ print $9 }' >> /etc/fstab
vi /etc/fstab

<>
# UNCONFIGURED FSTAB FOR BASE SYSTEM
proc    /proc    proc    defaults                        0 0
tmpfs    /tmp    tmpfs    defaults,noatime,nodiratime,mode=1777         0 0
/dev/mapper/$hostname--vg0-root0    /    ext4    defaults,noatime,nodiratime    0 1
UUID=218b2c98-3f7e-4008-950c-b99e3d6dabab    /boot    ext2    defaults,noatime,nodiratime    0 1
/dev/mapper/$hostname--vg0-usr0    /usr    ext4    defaults,noatime,nodiratime    0 2
/dev/mapper/$hostname--vg0-var0    /var    ext4    defaults,noatime,nodiratime    0 2
/dev/mapper/$hostname--vg0-varlog0    /var/log    ext4    defaults,noatime,nodiratime    0 2
/dev/mapper/$hostname--vg0-home0    /home    ext4    defaults,noatime,nodiratime    0 2
/dev/mapper/$hostname--vg0-ext0    /mnt/ext0    ext4    defaults,noatime,nodiratime,commit=600    0 2
/dev/mapper/$hostname--vg0-swap0    none    swap    sw,discard 0 0

<>

## ensure these are correct too

vi /etc/crypttab
vi /etc/mdadm/mdadm.conf

update-initramfs -k all -t
grub-install /dev/sda
grub-install /dev/sdb
update-grub
grub-install /dev/sda
grub-install /dev/sdb

## set the deadline scheduler now, and make a udev rule so it persists across boots

echo deadline > /sys/block/sda/queue/scheduler
echo deadline > /sys/block/sdb/queue/scheduler

vi /etc/udev/rules.d/60-ssd-scheduler.rules

<>
# set deadline scheduler for non-rotating disks

ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="deadline"
<>
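
## the rule can be applied without a reboot (optional)

udevadm control --reload-rules
udevadm trigger --subsystem-match=block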

## edit lvm.conf to change discards to =1

vi /etc/lvm/lvm.conf

<>
issue_discards = 1
<>

## set vm.swappiness to 1 now

<>
sysctl -w vm.swappiness=1
<>

## add into sysctl

vi /etc/sysctl.d/ssd-optimization.conf

<>
# Added so Linux kernel no longer attempts to enlarge the cache by paging applications out
# http://rudd-o.com/linux-and-free-software/tales-from-responsivenessland-why-linux-feels-slow-and-how-to-fix-that

vm.swappiness=1
<>

## test the disks; sync and drop caches first to get them close to zero

sync
echo 3 > /proc/sys/vm/drop_caches
mkdir -p /mnt/ext0/incoming
time dd if=/dev/zero of=/mnt/ext0/incoming/testfile count=1 bs=900M

sysctl -w vm.vfs_cache_pressure=100
find / > /dev/null
cp /mnt/ext0/incoming/testfile /mnt/ext0/incoming/testfile2
time find / > /dev/null

sysctl -w vm.vfs_cache_pressure=50
find /  > /dev/null
cp /mnt/ext0/incoming/testfile2 /mnt/ext0/incoming/testfile3
time find / > /dev/null

rm -f /mnt/ext0/incoming/testfile /mnt/ext0/incoming/testfile2 /mnt/ext0/incoming/testfile3

## add permanently into sysctl.d under the prior entry

vi /etc/sysctl.d/ssd-optimization.conf

<>

## add for filesystem caching

vm.vfs_cache_pressure=50
<>
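
## reload the file so both settings apply without a reboot (optional)

sysctl -p /etc/sysctl.d/ssd-optimization.conf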

</doc>

-mb


On 09/04/2014 10:19 AM, Mark Phillips wrote:
> Michael,
>
> Thanks again for your comments, they are very helpful. I have been 
> googling RAID1 and LVM and finding lots of good information.
>
> I really like your idea of a RAID1 for the two SSDs. Does it matter if 
> one is msata and one is not?
>
> I am trying to decide on the merits of using LVM with the RAID1, since 
> I only have 1 disk and I normally don't partition it so I don't have 
> to worry about running out of space until the disk is almost full. 
> Could you explain to me the benefit of using LVM + RAID1 for these two 
> drives? How would you partition the drives? My current drive has about 
> 420 GB of data in /home, about 9GB in /opt, and some misc stuff in 
> /var, all of which I need to transfer to the new system.
>
> Thanks,
>
> Mark
>
> P.S. One benefit of using both LVM and RAID1 is learning something new! ;)
>

