Each TV scan line takes roughly 64 microseconds, each field about 17 ms, and each frame about 33 ms (these are NTSC figures; for PAL the line, field, and frame periods are about 64 microseconds, 20 ms, and 40 ms). Furthermore, the two fields that make up a frame may be interlaced. Given this, how does one interpret data taken with a shutter speed of 1/1000 sec (1 ms)?
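One way to see the difficulty is to put numbers on it. The sketch below assumes standard NTSC timing (63.5 microseconds per line, 262.5 lines per field); the point is that a 1 ms shutter exposes each field for only a small fraction of the field period, and the two interlaced fields of one frame are exposed a full field period apart:

```python
# Sketch of interlaced-video timing, assuming NTSC constants
# (63.5 us per line, 262.5 lines per field). Illustrative only.
LINE_US = 63.5            # duration of one scan line, microseconds
LINES_PER_FIELD = 262.5   # NTSC lines per field

FIELD_MS = LINE_US * LINES_PER_FIELD / 1000.0  # ~16.7 ms per field
FRAME_MS = 2 * FIELD_MS                        # ~33.4 ms per frame
SHUTTER_MS = 1.0                               # 1/1000 s exposure

# Each field is exposed for SHUTTER_MS, but the two fields that
# interlace into one frame are exposed one field period apart,
# so a moving target appears as two snapshots ~17 ms apart.
exposure_gap_ms = FIELD_MS
exposure_fraction = SHUTTER_MS / FIELD_MS

print(f"field period : {FIELD_MS:.1f} ms")
print(f"frame period : {FRAME_MS:.1f} ms")
print(f"gap between the two exposures in a frame : {exposure_gap_ms:.1f} ms")
print(f"fraction of field period actually exposed: {exposure_fraction:.1%}")
```

So a 1 ms shutter freezes motion within each field, but the resulting frame still mixes two exposures separated by roughly a field period, which matters whenever the subject moves between fields.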