As a kid I was fascinated by Magic Eye images. You stare at a mess of coloured dots long enough, cross your eyes slightly, and suddenly an image pops out (or sometimes sinks in).
A few days ago I thought about them again and asked myself: how do they actually work, and could they make a really nice screensaver? (the answer is yes)

No, my GPU is not broken. Can you see the hidden figure?
How Magic Eye Images Work
Magic Eye images, also called autostereograms, are surprisingly simple to create. For each row, we start with a repeating pattern of coloured pixels. The number of pixels before the pattern repeats is called the separation.
To encode depth, parts of the row use a slightly different separation. Regions meant to appear closer repeat a little sooner (smaller separation), while regions meant to appear farther repeat a little later (larger separation).
Human stereo vision comes mostly from horizontal differences between the two eyes. When both eyes see the same pattern but offset horizontally, the brain interprets that offset as depth.
So the trick is simply to force certain pixels to be identical at specific horizontal distances. When your eyes lock onto those matching pixels, the brain reconstructs a 3-D surface.
And that's it!
Tip: It's much easier to see the effect on large images.
Here is some pseudo code:
for y in range(height):
    color_constraints = UnionFind()
    for x in range(width):
        z = depth_mask[y][x]
        sep = depth_to_pixel_separation(z)
        left = x - sep // 2
        right = left + sep
        if 0 <= left < width and 0 <= right < width:
            color_constraints.join(left, right)
    groups = color_constraints.connected_components()
    for g in groups:
        color = random_texture_value()
        for x in g:
            output[y][x] = color
    for x in unset_values(output[y]):
        output[y][x] = random_texture_value()
UnionFind is one of my favorite data structures; I'm always happy when I get an opportunity to use it. Here it works as an efficient way to create sets of x-coordinates that all share the same pixel color, leading to the repetition I mentioned above.
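For completeness, here is a minimal sketch of such a union-find in Python (with path compression; the method names `join` and `connected_components` are chosen to match the pseudocode above, not any particular library):

```python
class UnionFind:
    """Minimal union-find over integer keys, with path compression."""

    def __init__(self):
        self.parent = {}

    def find(self, x):
        # First contact with x creates a singleton set.
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path compression
            x = self.parent[x]
        return x

    def join(self, a, b):
        self.parent[self.find(a)] = self.find(b)

    def connected_components(self):
        groups = {}
        for x in self.parent:
            groups.setdefault(self.find(x), []).append(x)
        return list(groups.values())
```

Joining x-coordinates that must share a color then falls out naturally: every chain of constraints collapses into one component, and each component gets a single random texture value.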
It's also worth noting that depth_to_pixel_separation doesn't have to map to just foreground and background. We can instead support multiple depth layers. This tends to work best on larger, high-resolution images, where there is enough space for the different separations to remain visually stable.
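One simple choice for that mapping is a linear one (a sketch; the base separation of 96 pixels and the 24-pixel shift range are illustrative assumptions, not values from any particular implementation):

```python
def depth_to_pixel_separation(z, base_sep=96, max_shift=24):
    """Map a depth value z in [0.0, 1.0] (0 = background, 1 = closest)
    to a pixel separation. Closer points must repeat sooner, so the
    separation shrinks as z grows."""
    return round(base_sep - z * max_shift)
```

With a binary mask this gives exactly two layers (96 and 72), but any intermediate z produces an intermediate layer for free.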
Reverse it
The next obvious question is: is this reversible?
Our brains are able to do it, so it stands to reason that we should be able to approximate it.
We can process the image again, scanline by scanline. But now instead of enforcing matches, we try to discover them.
For every position in a row we search for the horizontal shift where two pixel patches look the most similar. That shift corresponds to the separation used by the stereogram, which we can map back to a depth value.
In practice this means taking a small window of width 2w+1 around a pixel and comparing it with the same window shifted horizontally by different amounts.
Once we do this across the whole image, we recover a disparity map, which is just another way of saying a depth map.
$$ \text{score}(d) = \sum_{\Delta x=-w}^{w} \left| I(x + \Delta x,\, y) - I(x + d + \Delta x,\, y) \right| $$

$$ d^* = \arg\min_d\; \text{score}(d) $$

Conceptually this is similar to computing autocorrelation across each scanline and looking for the repeating separation distance.
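A direct (and deliberately slow) translation of this search into Python might look like the following, operating on a single grayscale scanline; the window half-width `w` and the disparity range are assumptions you would tune to the stereogram:

```python
def best_disparity(row, x, w=3, d_min=60, d_max=110):
    """Find the horizontal shift d whose window best matches the window
    around x, i.e. argmin over d of sum |row[x+dx] - row[x+d+dx]|."""
    best_d, best_score = d_min, float("inf")
    for d in range(d_min, d_max + 1):
        # Skip shifts where either window would run off the row.
        if x - w < 0 or x + d + w >= len(row):
            continue
        score = sum(abs(row[x + dx] - row[x + d + dx])
                    for dx in range(-w, w + 1))
        if score < best_score:
            best_d, best_score = d, score
    return best_d
```

Running this at every pixel and mapping the winning separation back through the inverse of `depth_to_pixel_separation` yields the disparity map.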
As you will see in the decoder below, this isn't perfect; the noise can be reduced by weighting matches towards the center of the window or by adding a smoothness constraint between neighbouring pixels.
Another way to reveal the hidden structure is to shift the image until the repeating pattern lines up. When the displacement matches the stereogram's separation, the hidden shape suddenly becomes visible.
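This "shift until it lines up" trick is easy to try yourself: subtract the image from a horizontally shifted copy of itself, and any region whose separation equals the shift cancels to zero, leaving a dark silhouette. A minimal pure-Python sketch (assuming a grayscale image as a list of rows):

```python
def reveal(img, shift):
    """Absolute difference between each row and a copy shifted left by
    `shift` pixels. Pixels whose repeat distance equals `shift` cancel
    to 0, so the matching region shows up as a flat dark area."""
    return [[abs(row[x] - row[x + shift]) for x in range(len(row) - shift)]
            for row in img]
```

Sweeping `shift` across plausible separations and watching for a shape to appear is essentially a manual version of the decoder above.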
Pretty remarkable that our brains can do this more or less subconsciously.
Out of curiosity, I also showed an autostereogram of a plus sign to a few LLMs. Gemini 3.1 Pro hallucinated an eraser, GPT-5.4 confidently reported a seahorse, while Sonnet 4.5 wrote code not too dissimilar to the decoder described above and correctly recovered the shape. 1
Once the generation pipeline was working, I naturally turned it into a small screensaver that uses the current time and looks like an IT support ticket waiting to happen. It's macOS only, I'm afraid, but you can try it here.
1. Shape extracted by Sonnet 4.5: