shred vs writing zeros to wipe a drive
Matt Graham
mhgraham at crow202.org
Mon Aug 19 13:40:58 MST 2013
On 2013-08-19 10:39, Carruth, Rusty wrote:
> Don't know if I need to see the code, but I'm certainly curious to
> know how you made it slightly more efficient :-)
If you do "dd if=/dev/zero of=/dev/sdb bs=32k", then dd is constantly
reading from /dev/zero. Sure, reading from /dev/zero is fast, but
zeroes = calloc(1, 32768 * sizeof(char));
/* later, with WRITE_PTR an open FILE* for the target device */
fwrite(zeroes, 32768, 1, WRITE_PTR);
doesn't do *any* I/O to /dev/zero.
When doing the "write random data" pass, randbuffer contains 32K of
data read from /dev/urandom. After each block is written, randbuffer is
refilled with new random data only if rand() % 2 == 0. On average that
halves the I/O to /dev/urandom, at the price of having the same random
data repeated for more than one block in a mostly non-deterministic way.
Which means the program spends more time writing and less time reading.
> I figure /dev/urandom is at least one order of magnitude better
> than /dev/zero (in some base or other) :-) But, yes, if you REALLY
> want good random numbers use /dev/random and be prepared for a wait.
If you need more than about 256 bytes of stuff out of /dev/random,
you'll be waiting for a looooong time. At least IME. If you have
serious needs for lots of high-quality non-deterministic entropy, I
guess you build an RNG out of the alpha emitters in old smoke
detectors. (Fun for the whole family! :-) )
--
Crow202 Blog: http://crow202.org/wordpress
There is no Darkness in Eternity
But only Light too dim for us to see.
More information about the PLUG-discuss mailing list