Imagine you are standing in a crowded room, lit by a dozen different lamps and overhead fixtures. Your brain processes the scene without effort, but your eyes are doing something even more remarkable. They act as twin organic mirrors, capturing a distorted but geometrically lawful miniature of everything in front of you. This isn't just a poetic thought; it is a consequence of basic physics. Because your eyes are set a few centimeters apart, each one sees a slightly different perspective. Yet the light reflecting off each cornea must obey the rules of the same room. If there is a green lamp to your left, both your left and right corneas will catch that green spark, just at slightly different angles on their curved surfaces.

In the digital age, this tiny glint of light, known to scientists as a "specular highlight," has become the ultimate battlefield between human truth and digital fakes. While generative AI is getting better at mimicking skin pores, flowing hair, and the rhythm of our speech, it is still a terrible physicist. An AI model can create a stunningly "real" face, but it doesn't actually understand the 3D shape of a room or how light rays bounce off a wet, curved surface. The result is a subtle glitch that forensic investigators now use to unmask even the most realistic deepfakes. By zooming into the pupils and mapping how light reflects, we can find the "mathematical ghost" that proves a person isn't really there.

The Corneal Mirror and the Physics of Light

The human eye is covered by the cornea, a clear, protective layer whose tear-coated surface behaves like a section of a polished sphere. When light hits this surface, it reflects much as it would off a high-end camera lens or a shiny Christmas ornament. Scientists call this "corneal specular reflection." In the real world, if you look at a computer screen, that screen creates a specific shape - usually a rectangle - on the surface of your eye. Because your eyes are part of one biological system in a single physical space, the reflections in both eyes must be "bilaterally consistent": they must show the same light sources, even if the curvature of each eye makes them look slightly different from one another.
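This bilateral-consistency rule can be sketched numerically. The toy model below is illustrative only, not a real forensic tool: it treats each cornea as a mirror sphere viewed by a distant camera, and uses the standard half-vector rule for mirror reflection - the specular highlight sits where the surface normal bisects the light and view directions. Under the distant-light simplification, both eyes must report the same highlight offset from the pupil center; a nearby lamp would shift them slightly, but predictably.

```python
import numpy as np

# Typical corneal radius of curvature, in millimetres (value is
# illustrative; the geometry is what matters).
CORNEA_RADIUS_MM = 7.8

def highlight_offset(light_dir, view_dir=np.array([0.0, 0.0, 1.0])):
    """For a distant light and viewer, a mirror-like sphere shows its
    specular highlight where the surface normal equals the half-vector
    between the light and view directions. Returns the (x, y) offset
    of that highlight from the corneal apex, in mm."""
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)
    v = view_dir / np.linalg.norm(view_dir)
    h = (l + v) / np.linalg.norm(l + v)   # half-vector = surface normal
    # Project the normal onto the image plane and scale by the radius.
    return CORNEA_RADIUS_MM * h[:2]

# A lamp up and to the left of the camera.
light = np.array([-1.0, 0.5, 1.0])

# Both corneas face the same distant lamp, so - despite sitting a few
# centimetres apart - they must show the same highlight offset.
left_eye = highlight_offset(light)
right_eye = highlight_offset(light)
print(left_eye, right_eye)
```

The point of the sketch is the constraint, not the numbers: whatever the scene, the two offsets are locked together by geometry, which is exactly what a deepfake generator fails to enforce.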

Generative AI works very differently. Most current models create images by predicting pixel patterns learned from millions of training photos. They are not "ray tracing" - calculating how light actually travels through a virtual world. Instead, they are essentially guessing what a human eye looks like based on a blurry average of every eye they have ever analyzed. Because these models often treat the left and right eyes as separate objects, or fail to account for the exact 3D positions of the light sources in a scene, they leave behind clues. They might put a square reflection in the left eye and a circular one in the right, or place reflections in spots that would be physically impossible given the distance between the pupils.

Mapping the Geometry of a Fake

To catch a deepfake, forensic experts don't just rely on a gut feeling. They use math to map the reflections across a digital grid. First, they isolate the corneal area of both eyes and zoom in for a high-resolution view. Then they identify the "specular highlights," the brightest points where light hits the eye. By calculating the offset between each highlight and the center of the pupil, investigators can tell whether the light is coming from a consistent direction. If the highlight in the left eye suggests a lamp is 45 degrees to the left, but the right eye shows the lamp directly in front, the image is physically impossible.
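A minimal sketch of that check, using hypothetical pixel coordinates for the pupil centers and detected highlights (the 15-degree threshold is chosen purely for illustration):

```python
import math

def highlight_angle(pupil_xy, highlight_xy):
    """Angle (degrees) of the specular highlight relative to the
    pupil centre, measured in image coordinates."""
    dx = highlight_xy[0] - pupil_xy[0]
    dy = highlight_xy[1] - pupil_xy[1]
    return math.degrees(math.atan2(dy, dx))

# Hypothetical detections from one video frame.
left  = highlight_angle(pupil_xy=(210, 305), highlight_xy=(218, 297))
right = highlight_angle(pupil_xy=(390, 306), highlight_xy=(398, 298))

# In a genuine photo the two angles should be close; a large gap
# flags a physically inconsistent pair of reflections.
if abs(left - right) > 15:
    print("suspicious: highlight directions disagree")
else:
    print("consistent highlight geometry")
```

Real pipelines also compare the highlights' distances from the pupil center and their shapes, but the direction test alone already rules out many fakes.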

This process involves looking at the "Intersection over Union" (IoU), a metric used to measure how well two shapes overlap. In a real photo, when you map the reflection from the right eye onto the one from the left (adjusting for the change in perspective), the shapes should line up almost perfectly. In many deepfakes, this score is shockingly low. The AI might get the eye color right, or even the "twinkle," but it fails to replicate the exact silhouette of the surroundings. This is especially true in video, where reflections must stay consistent as a person moves their head - a task that requires massive computing power to get right in every single frame.
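IoU itself is only a few lines of code. The sketch below builds two toy binary masks standing in for the extracted highlight shapes (mask extraction and the perspective alignment step are assumed to have happened already) and scores their overlap:

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection over Union of two boolean masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

# Toy 8x8 "reflection shapes": the highlight from the left eye, and
# the right-eye highlight after alignment, offset by one pixel.
a = np.zeros((8, 8), dtype=bool)
b = np.zeros((8, 8), dtype=bool)
a[2:5, 2:6] = True
b[2:5, 3:7] = True

print(round(iou(a, b), 3))  # 9 shared pixels / 15 total = 0.6
```

A genuine photo should score near 1.0 after alignment; scores far below that suggest the two eyes are reflecting different, invented worlds.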

| Feature of Reflection | Authentic Human Footage | AI-Generated (Deepfake) |
| --- | --- | --- |
| Light Source Shape | Matches the actual environment (e.g., a window or lamp). | Often generic blobs or mismatched shapes (e.g., star vs. circle). |
| Symmetry Between Eyes | Reflections in both eyes align with 3D physics. | Reflections often vary wildly between the left and right eyes. |
| Movement Dynamics | Reflections shift smoothly and logically as the head turns. | Reflections may jitter, disappear, or stay still during movement. |
| Environmental Logic | You can see the room the person is standing in. | Reflections are often "hallucinated" and don't match the background. |

The Challenge of Real-Time Deception

One of the most concerning uses of deepfake technology is in live video calls, where a scammer might pretend to be a CEO or a loved one to trick someone into a wire transfer. Because these videos are made on the fly, the AI has even less time to calculate complex physics. To fight this, researchers have developed "active probing" techniques. If you are on a video call and suspect the person is a deepfake, you can use software to subtly change the brightness or color of your own screen, or perhaps flash a specific pattern.

If the person is real, the light from your screen will reflect off their eyes in real-time, showing that specific color shift. A deepfake generator, which is busy trying to paste a face onto a different person's movements, usually cannot react fast enough to include these sudden external light changes. This creates a "glitch in the matrix" where the person’s face remains perfectly lit while the reflections in their eyes fail to match the changing light of the digital environment. This means the very monitor used to view the deepfake can also be the tool used to debunk it.
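The logic of active probing boils down to a correlation test: flash a known brightness pattern from the viewer's screen and ask whether the corneal region in the incoming video tracks it. In the sketch below, both signals are simulated stand-ins - real frame capture and eye-region cropping are assumed to happen elsewhere - so it shows the decision rule, not a deployable detector.

```python
import random

def correlation(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

random.seed(0)
# Screen brightness we flash, one value per frame.
probe = [random.random() for _ in range(60)]

# A live cornea reflects the screen, so its brightness tracks the probe.
real_eye = [0.2 + 0.5 * p for p in probe]
# A deepfake renders a steadily lit face and ignores the probe entirely.
fake_eye = [0.5] * 60

print(correlation(probe, real_eye) > 0.9)   # real feed follows the probe
print(correlation(probe, fake_eye) > 0.9)   # synthetic feed does not
```

In practice the probe pattern should be unpredictable (hence the random sequence), so a generator cannot pre-render the correct response.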

Why AI Still Struggles with Spheres

You might wonder why AI can write poetry and pass medical exams but fails to reflect a light bulb in an eyeball. The answer lies in how these models "see" the world. Most image-generating AIs are built to prioritize texture and local details over overall 3D geometry. The AI knows that an eye needs a wet look and a bright spot to look convincing. However, it doesn't "know" that the eye is a 3D object sitting in a 3D room. It is a master of 2D collage, not a master of physics.

Furthermore, training an AI to understand the way reflections wrap around surfaces is incredibly hard. For a reflection to be accurate, the AI would need to simulate the entire environment behind the camera, even the parts you can't see in the frame. While movie studios use ray tracing for big-budget visual effects, doing it in real-time for a deepfake requires more power than most home computers can handle. Until AI models can integrate a full physics engine into their process, the "eye test" remains one of the most reliable ways to tell biology from code.

Developing Your Own Forensic Intuition

While professional investigators use software to map pixels, you can develop a "forensic eye" by learning what to look for. The next time you see a video that feels slightly "off," don't look at the mouth or the hair - those are the parts the AI handles best. Instead, look into the eyes. Zoom in if possible. Ask yourself: Does the light in the left eye look like it came from the same source as the light in the right eye? If the person is supposedly outdoors, can you see the blue of the sky reflected in their pupils, or is it just a generic white dot?

Another clue is how the reflection "clips" or cuts off. Because the cornea is a sphere, reflections should wrap around its curve. AI often generates reflections as if the eye were a flat surface, meaning the light doesn't "stretch" properly as it moves toward the edge of the iris. You should also look for shadows. Sometimes an AI will put a reflection in the eye but forget to put the matching shadow on the eyelid. These tiny errors are the modern equivalent of a forged signature. They are the physical evidence of a digital lie, hidden in plain sight within the "windows to the soul."

The arms race between deepfake creators and detectives will likely continue for decades. As AI models grow more powerful, they may eventually learn to simulate the physics of the eye perfectly. However, this journey teaches us something deep about the link between technology and reality. No matter how advanced our simulations become, they are always struggling to catch up to the sheer complexity of the physical world. By learning to see the world through a reflection in an eye, we aren't just spotting fakes; we are gaining a deeper appreciation for the intricate, beautiful consistency of the light that surrounds us every day. Trust your eyes, but more importantly, trust the physics that governs them.


Eyes as the Gateway - Using the Physics of the Cornea to Spot AI Deepfakes


What you will learn in this nib: how to spot deepfake videos by examining the tiny light reflections in people's eyes, the physics behind those reflections, and simple visual and mathematical checks that separate real footage from fake.
