It's probably not a Good Idea(TM) to retain an
(unencrypted) "lifetime collection of email."
Eventually, the man will use it against you.
Just ask JLF.
Tens of thousands of emails??? You should consider
switching to Exchange. Or joining mailanon. Either
method should effectively work to reduce the size of
your email archive, although the Exchange method may
reduce it in ways you weren't expecting. :)
> What is djb?
Dan J. Bernstein,
http://cr.yp.to/djb.html
qmail, daemontools, ucspi, djbdns, maildirs, multilog, ezmlm, ...
The more I use his stuff, the more I say to
myself, "Damn."
D
* On Wed, Apr 18, 2001 at 05:03:54PM -0700, Shawn T. Rutledge wrote:
> On Wed, Apr 18, 2001 at 03:51:41PM -0700, plug@arcticmail.com wrote:
> > mutt directly supports maildirs. qmail directly
> > supports maildirs. sendmail using procmail as its
> > delivery agent supports maildirs. maildirs do not
> > require ANY locking whatsoever and are thus safe to
> > use with NFS.
> >
> > djb is good, mmmmK? maildirs are good, mmmmK?
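> >
> > For instance (just a sketch, untested, and the paths are only
> > examples), a ~/.procmailrc along these lines should get procmail
> > delivering into a maildir - the trailing slash is what tells a
> > reasonably recent procmail (3.14 or so, if I remember right) to
> > use maildir format:
> >
> >     # create the maildir first, e.g. with qmail's maildirmake:
> >     #   maildirmake $HOME/Maildir
> >     # then deliver everything that falls through to it:
> >     DEFAULT=$HOME/Maildir/
> >
> > and the matching bits of ~/.muttrc so mutt treats it natively:
> >
> >     set mbox_type=Maildir    # new folders get created as maildirs
> >     set spoolfile=~/Maildir  # where new mail lands
> >     set folder=~/Maildir     # "+folder" shortcuts live under here
> >
> > No dotlocking, no fcntl over NFS, just rename()s.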
>
> And maildirs with tens of thousands of emails in them bring an
> ext2 filesystem to its knees too (I used Cyrus for about a year
> around 1996-1997... I couldn't do an ls in that directory anymore).
> They also waste a lot of slack space due to the block size. So I
> guess this is another reason I'd like to use ReiserFS, mmmmk?
> Is that good too?
>
> Would you put your entire lifetime collection of email in one big
> maildir or would you come up with some other archiving means?
> Right now, when my mail spool gets a few tens of thousands of
> emails and mutt takes a couple minutes to start up, I mv
> /var/spool/mail/rutledge to my home directory with name
> "receivedxx". The older ones get gzipped to save space. I'm up
> to received37 now (with a gap from received7 to received20, during
> the time I was using Cyrus - somewhere there's a tarball of that
> maildir). (And not all of those archives are so big; my rate
> of email consumption has accelerated as I'm on more and more lists.)
> But it's not the easiest stuff to search. If the filesystem and the
> tools could handle millions of files in a maildir, I'd rather do it
> that way. Maybe someday some filesystem will have compression
> built-in too.
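>
> (For the record, the rotation itself is nothing fancy - roughly the
> following, with the number bumped by hand each time; the username and
> paths are obviously specific to my setup:
>
>     mv /var/spool/mail/rutledge ~/received38   # next number in the series
>     gzip ~/received37                          # squash the previous one
>
> and zgrep at least makes the compressed ones greppable in place,
>
>     zgrep -l 'reiserfs' ~/received*.gz
>
> though that's still a long way from a real search.)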
>
> What is djb?
>
> > * On Wed, Apr 18, 2001 at 02:38:57PM -0700, Shawn T. Rutledge wrote:
> > > So I see the same old disclaimers are still on the web site about it
> > > not being stable enough for prime time, but it sure has been a long time...
> > > I remember JLF saying he was using it a while back. Anybody else?
> > >
> > > I have had too many problems which sometimes resulted in /home being
> > > offline. Right now it's on the primary Linux partition on my main
> > > workstation (the one where I went through all the install hassles last
> > > weekend) which is convenient for speed but very bad for reliability.
> > > I export it via NFS to my other systems. Usually I run mutt on top of
> > > screen on my gateway machine to read email from either work or home;
> > > and mutt cannot write to folders in my home directory because of NFS
> > > locking problems. The mutt developers seem to be really strict about
> > > doing what is correct rather than what works or is convenient... so this
> > > bug waits for a fix to NFS, which has been broken so long I don't expect
> > > much (seem to remember there being fundamental discord over how to do
> > > locking, and whether it even should work for NFS the same as it works for
> > > local disk). And, the web server depends on /home too; if it's not
> > > mounted, most of my web pages aren't available. So all of this makes
> > > me want to try Coda, for the following reasons -
> > >
> > > 1.) network filesystem without NFS's limitations, if I'm lucky
> > > 2.) I could use 2 or 3 servers, replicate and have enough redundancy
> > > to alleviate my fears about not doing backups often enough (and
> > > these days, a tarball of /home doesn't fit on one CDR anymore
> > > like it used to)
> > > 3.) the local cache should make it faster; and/or maybe it will be
> > > faster for other reasons too - NFS is not known for speed
> > > 4.) maybe I could use ReiserFS, and avoid the NFS bugs which occur when
> > > using ReiserFS and NFS together
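> > >
> > > (The NFS arrangement itself is nothing exotic, for reference - just
> > > /home exported read-write to the local boxes with a line in
> > > /etc/exports along these lines, hostnames being placeholders:
> > >
> > >     /home   gateway(rw)   tachyon(rw)
> > >
> > > It's the locking side of that - lockd/statd and mutt's dotlocks -
> > > that keeps falling over, not the export itself.)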
> > >
> > > The ideal distributed filesystem would be a peer-to-peer aggregating one
> > > which also ensures "n" levels of redundancy (configurable); it would
> > > aggregate free disk space on all the machines on the network. But such
> > > a thing doesn't exist in a way which can actually be mounted and used as
> > > a regular filesystem, AFAIK... so in the meantime there's Coda.
> > >
> > > My best candidate for a file server at this point is probably Tachyon,
> > > which has a 17 gig drive that is only getting used for MP3's and backups
> > > so far. I could put the MP3's in my home directory and use the entire
> > > disk just for /home. But I think Coda requires servers to have the same
> > > size disks if you're doing replication and failover, right? Because all
> > > the files have to exist on both drives? AFAIK it doesn't do aggregation.
> > > So maybe I should get a pair of nice new 40 gig drives instead - they
> > > sure are cheap enough. I would put one in Tachyon, the dual Celeron
> > > box, and another in a slower box (I have an extra P75 lying around, and
> > > an extra rack case with nothing in it; this box could also have about
> > > 6 CDROMs mounted in it, which is something else I've been wanting to do.)
> > > So my question is: which server should be the SCM, the faster but less
> > > reliable one (which could conceivably be used to do more stuff besides
> > > being an NFS server, and is likely to get its kernel upgraded from time
> > > to time) or the slow one that I get working once, get it stable and then
> > > not expect it to do anything else? Will Coda automatically figure out
> > > that one box is faster and try to get files from that one most of the time?
> > > Or will it prefer to get files from the SCM if possible and only if it's
> > > not available, fail over to the replica? Will the speed even matter
> > > or will it be quite fast because of the client-side cache?
> > >
> > > If speed doesn't matter a lot, I could even put a tertiary even-more-
> > > reliable machine (a 486 maybe) in a different room as yet another
> > > failover machine. It would have its own little UPS, ext2 filesystem
> > > (just in case Reiser develops problems) and absolutely no other tasks
> > > besides acting as a backup.
> > >
> > > Why is there a limit of 300 megs for client-side cache? (This wasn't
> > > explained on the web site; they just said not to do it.)
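> > >
> > > (For what it's worth, the cache size looks like it's just the second
> > > argument to venus-setup, in 1K blocks if I'm reading the docs right,
> > > so staying under their ceiling would be something like
> > >
> > >     venus-setup tachyon.mydomain 200000   # ~200 MB cache, under the 300 MB ceiling
> > >
> > > with the server name obviously being whatever the SCM ends up as.
> > > My guess is the limit has to do with venus keeping its cache metadata
> > > in RVM, which doesn't scale forever - but that is only a guess.)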
> > >
> > > Given that Coda puts all the files in one giant directory, is ReiserFS
> > > vastly superior, because, as I understand it, it's much more efficient than
> > > EXT2 for large directories? Are Coda and ReiserFS known to get along?
> > > Which version of Reiser should I run, the latest, or the latest even-
> > > numbered one?
> > >
> > > Do file conflicts arise often enough in practice to be a pest?
>
> --
> _______ Shawn T. Rutledge / KB7PWD ecloud@bigfoot.com
> (_ | |_) http://www.bigfoot.com/~ecloud kb7pwd@kb7pwd.ampr.org
> __) | | \________________________________________________________________