[PLUG-Devel] Sequential Record Processing in Bash

Darrin Chandler dwchandler at stilyagin.com
Sat Sep 22 20:05:37 MST 2007


On Sat, Sep 22, 2007 at 06:55:20PM -0700, Joey Prestia wrote:
> I am trying to write a bash script. In it I pull records with cut, and 
> use wc -l to count the number of records I need to process, because the 
> file is a CSV file that changes often. I need to go through this list 
> one line at a time and process each line in numerous ways, but I can't 
> figure out how to advance my processing one line at a time. For 
> example, after I cut the data out, this is my file:
> 
> website1
> website2
> website3
> website4
> website5
>  
> Now I need to use wget to go to each website, but I can't figure out 
> how to pull one website at a time from this file and then go on to the 
> next. I heard that I might be able to use awk, but I still don't know 
> how to do it.

while IFS= read -r f; do
   # body runs once per line; quote "$f" so URLs with
   # special characters aren't word-split by the shell
   wget "$f"
done < myfile.csv

That's the basic form. There's more to it, such as setting the field
separator (IFS), reading multiple fields at a time, etc. If your file is
simple enough, this *can* completely replace cut. The body of the while
loop is executed once per line.
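For example, if each line of the CSV held two comma-separated fields
(the field names url and label below are just assumptions for
illustration), read can split them for you without cut:

# Hypothetical two-column CSV: url,label (names are assumptions)
while IFS=',' read -r url label; do
   echo "fetching $url ($label)"
   wget "$url"
done < myfile.csv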

In the end it may be worth your time to learn enough of any of awk,
Perl, Python, or Ruby to do this. All of them are better at text
processing than shell.
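As a sketch of the awk route, assuming the website is the first
comma-separated field of each line and the same hypothetical
myfile.csv as above:

# Print the first comma-separated field, then fetch each URL
awk -F',' '{print $1}' myfile.csv | while IFS= read -r url; do
   wget "$url"
done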

-- 
Darrin Chandler            |  Phoenix BSD User Group  |  MetaBUG
dwchandler at stilyagin.com   |  http://phxbug.org/      |  http://metabug.org/
http://www.stilyagin.com/  |  Daemons in the Desert   |  Global BUG Federation

