On Jan 8, 6:24am, George Toft wrote:
> So if I issue the cat command as described, and compare that against
> a ulimit command, I get the following discrepancy:
> [georgetoft@biff georgetoft]$ ulimit -a | grep files
> open files 1024
> [georgetoft@biff georgetoft]$ cat /proc/sys/fs/file-max
> 4096
> [georgetoft@biff georgetoft]$
>
> The number given by ulimit is the one that takes effect (243 virtual
> hosts * 4 log files per host = 972, plus a few other open files for
> mail, logs, login shells, etc. is pretty close to 1024), so I'm
> wondering what effect /proc/sys/fs/file-max has. The write-up you
> pointed me to seems to conflict with what I observed.
>
> Any clarification would be appreciated.

The ulimit command specifies a per-process limit. The value contained
in /proc/sys/fs/file-max is the overall limit for the entire system.
With the values that you cite above, it would only take four processes
to max out the limit contained in /proc/sys/fs/file-max.
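
For what it's worth, a program can read its own per-process limit with
getrlimit(). A minimal sketch:

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    /* RLIMIT_NOFILE is the per-process open file limit that
       ``ulimit -n'' reports; the soft limit is what's enforced. */
    if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
        printf("soft: %ld  hard: %ld\n",
               (long) rl.rlim_cur, (long) rl.rlim_max);
    return 0;
}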
For your application, it sounds to me like you need to increase your
per-process open file limit. (You may need to increase the overall
system limit too.) I notice that, as root, I'm able to do
``ulimit -n 2048'', after which ``ulimit -a'' shows that the limit
has been increased.
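
A program can raise the limit for itself the same way with
setrlimit(), roughly like this (raising the hard limit requires root,
just as with the shell command):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    /* Equivalent of ``ulimit -n 2048'': raise the soft (and here
       the hard) limit on open files.  Raising the hard limit
       requires root privileges. */
    rl.rlim_cur = 2048;
    rl.rlim_max = 2048;
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
        perror("setrlimit");
    return 0;
}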
I've done some tests (using calls to pipe(); a sketch appears below)
and have determined that I'm able to create more than 1024 open files
when I do this. However,
it seems to me that not all functions implemented by glibc will
support more than 1024 files. In particular, the select() function
relies on the fd_set data structure to know which file descriptors to
wait for. It appears to me that you can't use this function for file
descriptor numbers past 1023. In /usr/include/bits/types.h (which is
included by /usr/include/sys/types.h), I see the following:
/* Number of descriptors that can fit in an `fd_set'. */
#define __FD_SETSIZE 1024
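
The pipe() test I mentioned was roughly along these lines (a sketch,
not the exact program I used):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    int count = 3;   /* stdin, stdout, stderr are already open */

    /* Open pipes until the kernel refuses; each successful
       pipe() call consumes two descriptors. */
    while (pipe(fds) == 0)
        count += 2;
    printf("ran out after %d open descriptors\n", count);
    return 0;
}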
I think you're okay if your application uses poll() instead of
select(), though.
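
With poll() you pass an array of pollfd structs, so there's no
fixed-size bitmap involved; something like this works regardless of
how large the descriptor numbers are:

#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    struct pollfd pfd;

    /* Unlike select()'s fd_set, the descriptor number here can
       be anything the process has open, 5 or 5000 alike. */
    pfd.fd = STDIN_FILENO;
    pfd.events = POLLIN;

    if (poll(&pfd, 1, 5000) > 0 && (pfd.revents & POLLIN))
        printf("descriptor %d is readable\n", pfd.fd);
    return 0;
}
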
Kevin