If you stare at a fixed point for some time (30 seconds, for example), you may notice that the colors start to lose their brightness and shift toward a medium gray. Then, if you look at a flat region, you will see the negative image for a few seconds. This is called an afterimage.
The color receptors in our retina are called cones. Humans have three types of cones, roughly tuned to different colors: red, green and blue. Cones are a kind of neuron that transmits electrical spikes when it “sees” its preferred color. For example, “blue” cones spike when they are illuminated with blue light. Yellow excites both the “green” and the “red” cones (a mix of green and red light makes yellow), white excites all the cones (white is a mix of all colors), and black excites none.
The afterimage effect is caused by the adaptation of the cones. When a cone is illuminated by the same light for a long time (tens of seconds), it starts to “get tired” and spikes less. Then, when the illumination changes, it takes some time before we see the real colors again (that's why we see the negative image).
For example, if a region of the image is blue, the “blue” cones spike quite a lot at the beginning while the “red” and “green” cones do not spike. In this situation, our brain concludes that the color is blue (which is correct). After a while, the “blue” cones “get tired” (adapt), spike less and less, and our brain sees the region as grayer. If we then look at a white region, all the cones should spike quite a lot, but the “blue” ones are already “tired” (adapted) and do not spike so much. Our brain receives mostly the signals from the “red” and “green” cones and interprets the color as yellow, which is the negative of the color we were previously seeing.
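The blue-to-yellow relationship above is just the RGB complement. A minimal sketch (the `negative` helper is my own, purely for illustration):

```python
def negative(rgb):
    """Complement of an 8-bit RGB color: what the adapted eye reports on white."""
    r, g, b = rgb
    return (255 - r, 255 - g, 255 - b)

blue = (0, 0, 255)
print(negative(blue))  # (255, 255, 0), i.e. yellow: red + green, no blue
```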
In our daily life, our eyes move all the time (they perform saccades) and there is no time for this adaptation, so we don't see negative images. On the other hand, if the light in our room changes globally (for example, if we switch off the lights or switch on a red light), this adaptation helps us to “subtract” the overall illumination and see the real colors. When we look at a bright light at night (a camera flash, for example), we see a black spot for a while; this is also caused by the same adaptation.
On the top half you can see the original image and on the bottom half you can see the output image:
If you don't have the Java plug-in, you can watch a video of it.
This applet simulates the afterimage using a temporal high-pass filter. For each pixel we subtract a weighted average of all the previous frames of the video; it is a “weighted” average because newer values count more than older ones. We also simulate an automatic movement of the eye (press 'm' to switch it off).
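One common way to keep such a weighted average of all past frames is an exponential moving average, where each update discounts the old frames. A minimal per-pixel sketch, assuming that interpretation (the names `alpha` and `highpass` are mine, not the applet's):

```python
def highpass(frames, alpha=0.1):
    """Temporal high-pass: subtract an exponentially weighted average of
    previous frames from each frame. Each frame is a flat list of 8-bit
    pixel values; alpha sets the filter speed (larger = faster adaptation).
    Output is offset by 128 so "no change" maps to medium gray."""
    avg = [float(p) for p in frames[0]]  # start fully adapted to frame 0
    out = []
    for frame in frames:
        # Current pixel minus the adapted average, centered on gray (128).
        out.append([int(max(0, min(255, p - a + 128)))
                    for p, a in zip(frame, avg)])
        # Exponential update: newer values count more than older ones.
        avg = [(1 - alpha) * a + alpha * p for a, p in zip(avg, frame)]
    return out

# A pixel held at white then switched to black shows the afterimage:
# gray while adapted, a dark "negative" at the switch, then recovery.
result = highpass([[255]] * 5 + [[0]] * 5, alpha=0.5)
print([row[0] for row in result])
```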
This is the result with the US flag:
This is the result with the Lilac chaser optical illusion:
If you change the speed of the filter to fast (the 'k' key), you get a simple motion detector. When there is no movement the image is gray, and when there is movement you can clearly see which pixels have changed.
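With a fast filter the running average tracks the video closely, so any pixel whose output is far from mid-gray has just changed. A sketch of one filter step used this way (the `alpha` and `threshold` values are illustrative, not the applet's):

```python
def moving_pixels(prev_avg, frame, alpha=0.8, threshold=30):
    """One step of a fast temporal filter used as a motion detector.
    Returns (mask, new_avg): mask flags pixels that moved away from the
    running average by more than `threshold`."""
    mask = [abs(p - a) > threshold for p, a in zip(frame, prev_avg)]
    # Fast exponential update: the average quickly re-adapts to the new frame.
    new_avg = [(1 - alpha) * a + alpha * p for a, p in zip(prev_avg, frame)]
    return mask, new_avg

# First pixel is static (stays gray), second one jumps and is flagged.
mask, avg = moving_pixels([10.0, 10.0], [10, 200])
print(mask)  # [False, True]
```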
There is also the Peripheral drift illusion (in this example you see the “rotating snakes”), which could be related to these motion detectors. Here you can see that the result for a still image with eye movement is quite similar to that of an image that is actually moving:
Finally, a motion detector is especially useful with video. In this case, most of the image is still (the motion detector returns a gray image), but the letters are being written (white pixels):