automation of collecting web site size
Eric Johnson
ej@netasm.com
Fri, 13 Oct 2000 15:27:57 -0700 (MST)
On Fri, 13 Oct 2000 sinck@ugive.com wrote:
: _: There must be a way to get this info, without either sucking the whole
: _: site down, or having access to the webserver?
: _:
: _: Anyone have any ideas, suggestions, etc.?
: _
: _ lynx -dump -head $URL | grep Content-Length | cut -d: -f 2
:
: Hum, this could lead to amusing issues if it hits a cgi script (or
: other) that doesn't play nice with issuing Content-Length.
You bring up a good point. Keep in mind this was a suggestion to
help get Wes started, not a complete solution.
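Off the top of my head, a slightly more defensive version might look
something like this (untested sketch, and $URL is just whatever page
you're checking); it falls back to pulling the page and counting bytes
when there's no Content-Length to read:

  # Fall back to fetching the body when no Content-Length comes back
  # (e.g. a CGI response that doesn't send one).
  size=`lynx -dump -head "$URL" | grep -i '^Content-Length' | cut -d: -f2 | tr -d ' \r'`
  if [ -z "$size" ]; then
      size=`lynx -source "$URL" | wc -c`
  fi
  echo "$URL: $size"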
: Furthermore just getting the head doesn't tell you what else might be
: linked to it down the tree.
Huh? Are you saying sizeof(Content-Length) != sizeof(website)? That's
kind of restating the obvious, isn't it? I kind of presumed Wes would
understand the fragment had to be looped, called recursively, whatever;
and also called only if certain conditions he designed were met, like
your comment regarding CGI.
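Something like the following is the shape I had in mind, just as a
rough sketch (urls.txt stands in for whatever list of links the crawl
step spits out, and the cgi-bin check is only an example of the kind
of condition I mean):

  total=0
  while read url; do
      # skip anything that looks like a CGI hit, per your caveat
      case "$url" in
          *cgi-bin*) continue ;;
      esac
      size=`lynx -dump -head "$url" | grep -i '^Content-Length' | cut -d: -f2 | tr -d ' \r'`
      [ -n "$size" ] && total=`expr $total + $size`
  done < urls.txt
  echo "total bytes: $total"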
--Eric