All your DATA are belong to us...

Alexander Henry alexanderhenry at cox.net
Mon Nov 28 20:14:19 MST 2005


On Sun, 27 Nov 2005 08:49:16 -0700, June Tate <june at theonelab.com> wrote:

> On Nov 27, 2005, at 8:13 AM, Ric Whitney wrote:
>
>>
>> Thanks George!
>>
>> Here's what went through my head while watching it:
>>
>> Marketing.
>> Research & Development
>> The mall figuring out who would buy what next and who was just shopping
>> with no intention of buying anything.
>>

I have seen prototypes like this in action.

The look and feel were quite different, as was the objective.  It was 2003
when I saw this.  They were trying to enhance security systems to spot
thieves, pickpockets, and shoplifters: give the guard more cameras, let him
sleep more, and flash a light on a terminal when something deviant happens.
In the movie Shadow showed us, when the first rectangle appears over that
woman, the first thing I thought was, "thief here".

I saw two designs.  Both were built so that the front end could be standard
camera systems, even grainy black-and-white ones.  In the first, a layer of
middleware used affine transforms to distinguish individual persons,
vehicles, and carried items.  The next layer of middleware would be a logic
engine that took text from the person-item-car parser and flagged persons
and behaviors of interest.
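
That logic-engine layer is the part I can picture most easily today.  Here's
a toy sketch in Python of what I imagine it doing, i.e. reading
person-item-car events out of the parser and flagging the suspicious ones.
The event format and the two rules are entirely my own guesses, not anything
from the demo:

from dataclasses import dataclass

@dataclass
class Event:
    time: float        # seconds since the start of the footage
    actor: str         # "person", "car", ...
    actor_id: int
    action: str        # "enters", "picks_up", "loiters", "exits", ...
    target: str = ""   # optional object of the action

def flag_events(events, loiter_threshold=120.0):
    """Return (event, reason) pairs worth lighting the guard's terminal for."""
    flagged = []
    carrying = {}      # actor_id -> items picked up and not put down
    first_seen = {}    # actor_id -> time the actor first appeared
    for ev in events:
        first_seen.setdefault(ev.actor_id, ev.time)
        if ev.action == "picks_up":
            carrying.setdefault(ev.actor_id, []).append(ev.target)
        elif ev.action == "puts_down" and ev.actor_id in carrying:
            if ev.target in carrying[ev.actor_id]:
                carrying[ev.actor_id].remove(ev.target)
        elif ev.action == "exits" and carrying.get(ev.actor_id):
            flagged.append((ev, "left carrying " + ", ".join(carrying[ev.actor_id])))
        elif ev.action == "loiters" and ev.time - first_seen[ev.actor_id] > loiter_threshold:
            flagged.append((ev, "loitering for over %d seconds" % loiter_threshold))
    return flagged

if __name__ == "__main__":
    demo = [Event(0.0, "person", 7, "enters"),
            Event(35.0, "person", 7, "picks_up", "handbag"),
            Event(60.0, "person", 7, "exits")]
    for ev, reason in flag_events(demo):
        print("t=%.0fs  person %d: %s" % (ev.time, ev.actor_id, reason))

A real system would have a whole rule base instead of two hard-coded checks,
but the shape is the same: parsed events in, flagged persons and behaviors out.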

Alternatively, you could have a neural network learn normal behavior, and
after the learning phase have the machine scan hours of footage and show
only the interesting stuff.
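
That part is easy to mock up today.  Here's a minimal sketch of the idea in
Python, swapping their neural network for a plain PCA model of "normal"
frames; the principle is the same (learn what normal looks like, then score
footage by how badly the model explains it), and the frames below are
synthetic:

import numpy as np

def learn_normal(frames, n_components=8):
    """Fit a low-rank model of 'normal' frames (each frame flattened to a vector)."""
    X = frames.reshape(len(frames), -1).astype(float)
    mean = X.mean(axis=0)
    # top principal components of the centred data
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def anomaly_scores(frames, mean, components):
    """Reconstruction error per frame: big error = the model hasn't seen this before."""
    X = frames.reshape(len(frames), -1).astype(float) - mean
    recon = (X @ components.T) @ components
    return np.linalg.norm(X - recon, axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    normal = rng.normal(100, 5, size=(200, 32, 32))     # quiet parking lot
    test = rng.normal(100, 5, size=(50, 32, 32))
    test[20] += 80 * (rng.random((32, 32)) > 0.9)       # something moves through frame 20
    mean, comps = learn_normal(normal)
    scores = anomaly_scores(test, mean, comps)
    threshold = scores.mean() + 3 * scores.std()
    print("interesting frames:", np.where(scores > threshold)[0])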

For the first one, they had the graphical problems worked out but still
needed the logic engine.  I saw a demo of a camera taping a parking lot
scene: a person's path crossed a car, and the computer worked out which blob
was which while they were overlapped.  This was a hard problem for them;
remember, they were working with 2D images.  I remember the computer turned
the car into a blue blob with a white center dot and the person into a red
blob with a white center dot.  When they intersected, the computer showed
one blob with approximate centers.  Then, when they separated again, the
computer quickly corrected the blobs and centers.
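
In modern terms the merge/split trick might look something like the Python
sketch below, which is my own reconstruction and certainly nothing like
whatever they were running in 2003: label the blobs, track each one by
centroid, dead-reckon an approximate center while the blobs are merged, and
re-attach each track to its nearest blob once they separate.  The scene, the
constant-velocity guess, and all the numbers are made up to illustrate the
idea:

import numpy as np
from scipy import ndimage

def dist2(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def frame_centroids(mask):
    """Centroids (row, col) of each connected foreground blob in a binary mask."""
    labels, n = ndimage.label(mask)
    return [ndimage.center_of_mass(mask.astype(float), labels, i) for i in range(1, n + 1)]

def update_tracks(tracks, centroids):
    """tracks: name -> {'pos': (r, c), 'vel': (dr, dc)}.
    While blobs are merged (fewer blobs than tracks) each track dead-reckons an
    approximate center; once they separate, each track claims the blob nearest
    its predicted position and its velocity is re-estimated."""
    if len(centroids) < len(tracks):
        for t in tracks.values():
            t["pos"] = (t["pos"][0] + t["vel"][0], t["pos"][1] + t["vel"][1])
        return tracks
    predicted = {name: (t["pos"][0] + t["vel"][0], t["pos"][1] + t["vel"][1])
                 for name, t in tracks.items()}
    pairs = sorted((dist2(c, predicted[name]), name, i)
                   for name in tracks for i, c in enumerate(centroids))
    taken_tracks, taken_blobs = set(), set()
    for _, name, i in pairs:
        if name in taken_tracks or i in taken_blobs:
            continue
        old = tracks[name]["pos"]
        tracks[name]["pos"] = centroids[i]
        tracks[name]["vel"] = (centroids[i][0] - old[0], centroids[i][1] - old[1])
        taken_tracks.add(name)
        taken_blobs.add(i)
    return tracks

if __name__ == "__main__":
    def scene(person_col):
        m = np.zeros((20, 40), bool)
        m[8:12, 18:26] = True                         # the parked "car"
        m[6:14, person_col:person_col + 3] = True     # the "person" walking across
        return m

    tracks = {"car": {"pos": (10.0, 22.0), "vel": (0.0, 0.0)},
              "person": {"pos": (10.0, 3.0), "vel": (0.0, 0.0)}}
    for col in range(2, 36, 4):                       # the person walks right through the car's blob
        cents = frame_centroids(scene(col))
        tracks = update_tracks(tracks, cents)
        print("col=%2d  blobs=%d  car=(%.1f,%.1f)  person=(%.1f,%.1f)"
              % ((col, len(cents)) + tuple(tracks["car"]["pos"]) + tuple(tracks["person"]["pos"])))

While the two blobs are one, both tracks just coast on their last velocity,
which is roughly the "approximate centers" behavior I remember seeing.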

They were making good progress on the NN one.  The filtered footage not only
compressed time down to the interesting moments, it also blotted out the
non-interesting parts of the image itself.  They trained the NN on a camera
that watched a hospital's parking lot.  In the filtered footage you saw an
all-green image with lots of stuff jumping in and out in fast forward: birds
flying, a car taking a u-turn badly, a guard pacing around a car and writing
a parking ticket.  Again, picture just the guard floating in an all-green
image.  It was meant to play in fast forward as well, so while you're
scanning the footage for crimes you'll only see the guard for half a second.
But it will grab your attention and make you stop, rewind, and play the
unfiltered section.
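
The playback trick itself is simple once you have per-frame scores like the
ones in the earlier sketch: drop the boring frames entirely, and paint any
pixel that still looks like the learned background a flat color so only the
odd thing stays visible.  A toy version, with made-up frames and thresholds:

import numpy as np

def filtered_playback(frames, scores, background, score_thresh,
                      pixel_thresh=25.0, fill=60.0):
    """Return (kept_frame_indices, masked_frames): the condensed review reel."""
    keep = np.where(scores > score_thresh)[0]
    masked = frames[keep].astype(float)
    for k in range(len(masked)):
        boring = np.abs(masked[k] - background) < pixel_thresh  # close to normal
        masked[k][boring] = fill                                # flat "all-green" value
    return keep, masked

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    background = np.full((24, 24), 100.0)               # learned "empty lot" image
    frames = background + rng.normal(0, 3, size=(300, 24, 24))
    frames[120, 5:12, 5:12] += 90                        # the one interesting moment
    scores = np.abs(frames - background).mean(axis=(1, 2))
    keep, reel = filtered_playback(frames, scores, background,
                                   score_thresh=scores.mean() + 4 * scores.std())
    print("condensed %d frames down to %s" % (len(frames), keep))

Play the kept frames back-to-back and you get exactly the effect I described:
an all-green reel where the guard pops in for half a second.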

-- 
Alexander
http://ahtechllc.com/

