Bill, have a peek at Caldera. I recall a couple of reviews on a commercial product from them targeted at enterprise management of Linux desktops & servers. Ya, it costs USD$$$, but with 700 desktops to support you're likely already looking at, or already have, something like that to keep them all in sync with updates and whatnot?

Single point of failure is a tough problem to solve. I'll make the assumption that "single point of failure" ties in with uptime, high availability and fault-tolerant computing infrastructure. I've consulted for very few companies that ever fully grasp the cost of 99.999% uptime. Do you want to do this? I have done it, and it is not cheap. Please prepare management for the ordeal of writing many 100,000 to 500,000 USD checks to accomplish the task. Redundant routers, dual-NIC'd desktops, double ethernet wiring, geographically separated utility power feeds, ditto UPS & gen sets, ditto telecom connections. Two of everything? Hmmm, dude, you're getting the budget of your dreams!

Better to plan for failure, then document and PRACTICE recovering from failures. Eliminating single points of failure can be accomplished, but the expense to the business often exceeds the amount of risk. (I'll bet that you are looking to eliminate the single points of failure that are financially practical to solve, yes?)

Sun has a very nice server failover capability. The simple configuration is shared storage used by two "server" hosts: one is hot, one is a warm spare. The ARP'd ethernet address for the IP address all clients connect to for their NFS (or whatever?) can be moved to the warm spare when the hot server fails. (There's a rough sketch of that takeover step below the quoted message.) I think Microsoft's offering kinda works in this space too? There are a few products out there to implement this basic server failover for Linux systems, including SAN-style storage you can share between the servers. Considering the past year's dot-com bombs, you'll want to do some fresh research in this space. Even the big players are laying off people, and I would hate to purchase from a company that's on the brink of failure.

A really solid SAN that is accessible by multiple servers can solve a lot of the data/business-continuity headache. Something really HA/FT, like being able to lose a couple of power supplies and a handful of disks without kicking out your users.

On the cheap side, CODA sounds promising. Can it reconnect & re-authenticate to the warm-standby server? Can smb or nfs meet the requirement if, upon system failure, your users understand that they need only log out & log back in to keep working? (Kinda like they would need to do if the primary MS file server died and was replaced with a warm spare. There's an automounter sketch for the NFS-homes angle below as well.)

- tom e.

On 2 Oct 2002, Bill Warner wrote:

> Yes this is a great topic for the list and for Linux people in general.
>
> We have moved almost all of our old SCO UNIX servers to Linux and are
> looking into the possibility of moving our corporate desktops to Linux
> as well. Our UNIX team has been using Linux desktops for a long time so
> we maintain some of the needed things. I have been charged with the
> task of evaluating and putting together a working desktop system that
> will fill all the requirements that our win2k desktops do now.
>
> Some of the things we need are performance and no single point of
> failure. I seem to have problems with these things when it comes to
> setting up unified logins/home directories. I know I can setup smb
> authentication. I could use NFS homes but those both give a single
> point of failure.
>
> If anyone knows of a good way to support 700 Linux desktops I would be
> more than open to suggestions.
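
P.S. Here's the kind of thing I mean by "moving" the address to the warm
spare. This is only a hand-wavy sketch of the IP-takeover step, assuming
Linux boxes, an eth0 interface, a made-up floating service address of
192.168.1.50, and the iputils arping tool; the linux-ha.org "heartbeat"
package automates this dance (plus the monitoring) if you'd rather not
roll your own:

    #!/bin/sh
    # Run on the warm spare once it decides the hot server is dead.
    # SERVICE_IP is the floating address every client mounts from
    # (hypothetical value -- substitute your own).
    SERVICE_IP=192.168.1.50

    # 1. Take over the shared storage first (fsck + mount of the shared
    #    disk/SAN volume would go here -- details depend on your setup).

    # 2. Bring the floating address up as an alias on this box.
    /sbin/ifconfig eth0:0 $SERVICE_IP netmask 255.255.255.0 up

    # 3. Send gratuitous ARP so switches and clients update their ARP
    #    caches and start talking to this box instead of the dead one.
    /sbin/arping -U -I eth0 -c 3 $SERVICE_IP

    # 4. Start the services that live on the shared storage
    #    (init script name/path varies by distro).
    /etc/init.d/nfs restart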
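
P.P.S. On the NFS-homes / "just log back in" question: if the desktops
mount home directories through the automounter against that floating
address rather than a physical server name, then after a takeover the
clients are still pointing at a live box. A sketch, assuming autofs and a
hypothetical "nfs-homes" hostname that resolves to the floating IP:

    # /etc/auto.master
    /home   /etc/auto.home  --timeout=60

    # /etc/auto.home -- wildcard map, one entry covers every user
    *   -rw,hard,intr   nfs-homes:/export/home/&

With hard mounts the clients mostly just stall during the takeover and
pick up where they left off once the spare answers; worst case your users
are back to the log-out/log-in drill you described.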