On Mon, 16 Dec 2002, Jeffrey Pyne wrote:

> I wonder, what was the historical reason for making the default
> permissions of a new file 666 and a new directory 777?

I tell my students that they have to remember that Unix was created during the late 60s and early 70s to provide an OS for programmers who worked in a collaborative environment to write programs. Remote access to computers was minimal, and most of the early users were Members of Technical Staff at AT&T Bell Labs. Thompson trusted Ritchie, who trusted Kernighan, and so on; in other words, they computed in a trusted computing environment.

Because I wasn't there, I spammed Dennis Ritchie about this, and here is his reply. [He commented not only on default permissions, but also on the use of the 'x' bit on directories.]

The two questions really are: why allow (by default) general read and write permissions? Why the x permission for directories?

The first has to do with whether people (in a multiple-access environment) are happy about letting others read (or even write) their files. This really depends on a level of trust. More or less forever, there have been ways of restricting access (things like umask, which does cut off some of the access bits for all created files) and also mechanisms (introduced in Berkeley versions, but often carried through to later systems) that make new files inherit restrictions from the directories in which they are created.

The other question is about the x bit: this bit was originally intended to show that the file was executable. It is overloaded; going back even to Multics, it has been used for directories to say that you can look up a file in a directory if you know the name of the file (the directory is searchable) even if you are not allowed to read the whole directory contents.

G.D.Thurman [CS/CIS Instructor]
Scottsdale Community College
480.423.6110
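To illustrate the umask mechanism Ritchie mentions, here is a minimal C sketch for any POSIX system. The file name "demo.txt" and the 022 mask are just examples I picked; the point is that a program asks for the liberal 666 default at creation time and the kernel clears whatever bits the process's umask has set.

    /* Sketch: the 666 default filtered through umask. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        /* Clear group/other write bits for files this process creates. */
        mode_t old = umask(022);

        /* Ask for rw-rw-rw- (0666); the effective mode becomes
         * 0666 & ~022 = 0644, i.e. rw-r--r--. */
        int fd = open("demo.txt", O_CREAT | O_WRONLY, 0666);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        struct stat st;
        fstat(fd, &st);
        printf("requested 0666, umask 022, got 0%o\n",
               (unsigned)(st.st_mode & 0777));

        close(fd);
        umask(old);   /* restore the previous mask */
        return 0;
    }

Run as an ordinary user, this prints "got 0644": the application keeps asking for the trusting default, and each user's umask decides how much of it survives.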
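A companion sketch of the directory x bit he describes: with search (execute) permission but no read permission, a name you already know can still be looked up, while listing the directory fails. The names "lockbox" and "secret.txt" are made up for the example, and it should be run as a non-root user, since root bypasses these checks.

    /* Sketch: a directory that is searchable (--x) but not readable. */
    #include <stdio.h>
    #include <dirent.h>
    #include <fcntl.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        mkdir("lockbox", 0755);
        int fd = open("lockbox/secret.txt", O_CREAT | O_WRONLY, 0644);
        if (fd >= 0) close(fd);

        /* --x--x--x: lookups allowed, reading the directory is not. */
        chmod("lockbox", 0111);

        struct stat st;
        if (stat("lockbox/secret.txt", &st) == 0)   /* lookup by known name */
            printf("lookup by name works: size %lld\n",
                   (long long)st.st_size);

        if (opendir("lockbox") == NULL)             /* listing is denied */
            perror("listing the directory fails");  /* expect EACCES */

        chmod("lockbox", 0755);   /* restore so the directory can be cleaned up */
        return 0;
    }

The stat() on the known name succeeds while opendir() reports "Permission denied", which is exactly the Multics-derived distinction Ritchie points to.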