Accessing web elements from cli

Eric Lee Green plug-discuss@lists.plug.phoenix.az.us
Fri, 13 Dec 2002 17:02:11 -0700


On Friday 13 December 2002 04:44 pm, Kyle Faber wrote:
> Almost exactly what I needed.  Is there a way to direct the output of that
> command to stdout instead of to a local file?

Try this (all on one line, sigh):

( printf "GET /some/urlpath.html / HTTP/1.0\n\n" ; sleep 10s ) | telnet 80 
somesite.com | grep "some string"
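
If you have netcat handy, the same trick works without telnet's connection 
chatter; a sketch along the same lines, assuming an 'nc' binary is installed:

# same request via netcat; depending on your nc, the sleep may not even be needed
( printf "GET /some/urlpath.html HTTP/1.0\n\n" ; sleep 10s ) | nc somesite.com 80 | grep "some string"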

This will work for most major sites (those that do not use name-based 
virtual hosting to put multiple sites on one host). For name-based virtual 
hosts, you might need to speak HTTP/1.1 so you can send a Host: header. Do 
something like this (from one of my own scripts):

---- snippet start ----
cat >/tmp/blognot.$$ <<EOF
GET /pingSiteForm?name=BadTux&url=http://badtux.net/news.php&submit=Submit HTTP/1.1
Host: newhome.weblogs.com
Connection: close

EOF

(cat /tmp/blognot.$$ ; sleep 10s) | telnet newhome.weblogs.com 80
---- snippet end ----
(Note that the above snippet is being run on FreeBSD, which is why I cannot 
use "printf" as I would do under Linux.) (Note: the GET is all one line; the 
Host: header is required for HTTP/1.1, and the Connection: close header tells 
the server to close the connection after it's finished sending the page.)
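
Under Linux I'd skip the temp file entirely, since "printf" handles the 
embedded newlines; a sketch of the same request collapsed to a one-liner, 
nothing assumed beyond the snippet above:

# single quotes keep the shell's hands off the & characters in the query string
( printf 'GET /pingSiteForm?name=BadTux&url=http://badtux.net/news.php&submit=Submit HTTP/1.1\nHost: newhome.weblogs.com\nConnection: close\n\n' ; sleep 10s ) | telnet newhome.weblogs.com 80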

Note that we're writing raw HTTP to the web server. I do this all the time to 
debug things or, as with the above script, to submit forms in an automated 
fashion to sites that need to be updated. You're saying that the whole World 
Wide Web is just a web browser telnet'ing into a fancy 'telnet' server and 
typing a command line and a few header lines at it? Well, yeah. Except for 
the SSL-secured sites, where there's more complex stuff going on between you 
and the page; neither 'wget' nor 'telnet' used directly will help there. 
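
If you really do need to poke at an SSL site by hand, openssl's s_client 
subcommand can stand in for telnet; a sketch, assuming your OpenSSL build 
includes it:

# -quiet suppresses the certificate chatter s_client normally prints
( printf "GET /some/urlpath.html HTTP/1.1\nHost: somesite.com\nConnection: close\n\n" ; sleep 10s ) | openssl s_client -quiet -connect somesite.com:443

The Connection: close header makes the server hang up when it's done sending, 
at which point s_client exits.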

-- 
Eric Lee Green                            egreen@disc-storage.com
Software Engineer,  DISC Inc.     http://www.disc-storage.com
              Automated Solutions for Enterprise Storage