In addition, what is the value of the data? Is it worth someone's
effort to recover it? What would it cost you if that information fell
into the hands of someone nefarious?
One alternative to copying zeros to the drive is to copy from
/dev/urandom. (It's urandom you want - /dev/random blocks waiting for
entropy and is MUCH slower. If you find it takes HOURS to fill a few
megabytes, switch to the other random device.) That gives you data with
the 'hard to filter out' property of random noise, without the 14-hour
write times of multi-pass shredding.
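A minimal sketch of that (assuming /dev/sdX is the drive you actually
mean to wipe; double-check with lsblk first, since dd will cheerfully
overwrite whatever you point it at):

dd if=/dev/urandom of=/dev/sdX bs=1M   # one pass of random data over the whole drive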
That assumes hiding the data is worth the time, and that whoever gets
the computer is even likely to try to recover it.
On the bs issue, as George says, it depends entirely upon the drive. On
our SSDs I generally use 1MB, since our SSDs do very well on large-block
sequential writes - which is what you are getting as you make the block
size larger. However, there is a limit to the improvement you can get -
obviously even if the drive could go faster and faster, you still have
memory limitations and so forth.

(There are 2 primary LBA sizes for ATA commands. IIRC it's 48-bit and
28-bit, but I could have the numbers wrong. In any case, in the smaller
addressing range you can send a single ATA command to write 256 LBAs
using DMA, so you only pay the command decode (and ack/etc.) overhead
once per 256 LBAs, each of which usually holds 512 bytes. For 48-bit
mode, you get to write 65536 (or more, again I forget the exact numbers)
LBAs with a single command (ABLE TO LEAP 65536 LBAs in a single
command!!!! ;-) - but that assumes you've got enough memory not to send
yourself into starvation :-)

The point of this rabbit hole I'm in is twofold. One is to note that
256*512 is 128KB, so if your drive only does 28-bit (or whatever the
smaller mode is) addressing, then you can only do 128KB at a time
anyway. Two is to point out that a lot of the gain you get by going to
larger block sizes comes from avoiding all the command chatter and
latency on the SATA (or PATA or whatever) bus. (And three is to note
that above 256 LBAs your percentage of command vs. data overhead gets
lower and lower, so you may find you don't get much gain beyond 128KB
anyway - hmm, I feel an experiment coming - thanks, George, for already
writing the code for me! :-)
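If you want to check which addressing mode your drive supports, one way
(assuming hdparm is installed and /dev/sda is the drive in question) is
to look at the identify data, which lists the LBA28 and LBA48
addressable sector counts:

sudo hdparm -I /dev/sda | grep -i "addressable sectors"   # the LBA48 line appears only if supported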
Rusty
From: plug-discuss-bounces@lists.phxlinux.org On Behalf Of George Toft
Sent: Monday, August 19, 2013 7:41 AM
To: keith smith; Main PLUG discussion list
Subject: Re: shred vs writing zeros to wipe a drive
Stephen mentioned one aspect - the impact if someone recovers the data.
Another is the technical capability of the hard drive's recipient and
anyone else who gets the drive. Overwriting with 0's (or 1's) creates
a regular pattern that can be filtered out to retrieve the remnants of
the previous data, which is why the DoD standard is 7 passes. That
being said, I've never met anyone who had the technical capability to
retrieve data off drives once they've been overwritten.
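If you want that multi-pass behavior from the stock tools, GNU shred
will do it directly. A sketch (-n sets the number of random passes, -z
adds a final pass of zeros, -v shows progress; /dev/sdX is a stand-in
for your actual target):

shred -v -n 7 -z /dev/sdX   # 7 random passes, then a final zero pass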
As fate would have it, I've been developing a CD/USB-bootable image
whose sole purpose is DoD-wiping every drive in the system. It's like
DBAN (Darik's Boot and Nuke) on steroids.
As far as bs (block size) goes: in my experience, bs affects the speed
of the dd. Too small or too large and the time increases. For the ATA
drives I've dealt with, the sweet spot was 32K, but this depends
completely on drive type. Clearly YMMV, and you might experiment with
different values. Wrap the dd command in a loop and time each run.
Create a 1GB test file and then run something like this:
dd if=/dev/zero of=/tmp/testfile bs=1024 count=1024000  # ~1GB test file - close enough
BS=1024                                                 # starting block size; doubles each pass
while [ $BS -le 134217728 ]; do                         # stop after bs=128MB
  COUNT=$((1048576000/$BS))                             # keep the total bytes written constant
  echo BS=$BS
  time dd if=/dev/zero of=/tmp/testfile bs=$BS count=$COUNT
  BS=$(($BS+$BS))                                       # double the block size
  echo "-------"
done
When I ran this, I got speeds that varied by up to 50%. Finding the
right block size can save you several hours.
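One caveat: writing to /tmp/testfile times your filesystem and page
cache as much as the drive itself. For numbers closer to the raw
device, you can aim the same dd at the disk (destructive, so only on a
drive you are about to wipe anyway; /dev/sdX is a stand-in for yours);
with GNU dd, oflag=direct bypasses the page cache:

time dd if=/dev/zero of=/dev/sdX bs=1M count=1000 oflag=direct   # ~1GB straight to the device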
Regards,
George Toft
On 8/18/2013 8:19 PM, keith smith wrote:
Hi All,
I have an old computer that I am giving to a friend so I wanted
to wipe the drives in preparation for that.
The master is 250GB
The slave is 1TB.
I read a couple of articles that suggested using a rescue disk and
the shred utility to take care of this. I also read that shred is not
necessary - just writing all zeros to the drive is enough.
The rescue disk I am using is DVD disk one of CentOS 6.3.
I ran shred on the first drive. It took 4.5 hours to run 3 shred
passes plus 1 pass that writes zeros to the entire drive.
Command: shred -zv /dev/sda (this was on the master disk)
Then I ran : dd if=/dev/zero of=/dev/sda bs=16M
In one of the articles it showed the above command with bs=1M
Does the size of "bs" matter?
Also, what about the argument that shred is overkill?
Thanks!!
Keith
------------------------
Keith Smith
---------------------------------------------------
PLUG-discuss mailing list - PLUG-discuss@lists.phxlinux.org
To subscribe, unsubscribe, or to change your mail settings:
http://lists.phxlinux.org/mailman/listinfo/plug-discuss