
There are undeniable benefits to AI that can "see." In the realm of Physical AI, vision is imperative. It allows systems to create truly personalized home automation, dramatically reduce energy waste, and crucially, detect or even prevent falls in seniors. But the question of how AI should see in our interior spaces is rarely asked. It is time we talk about it.
Today, major players are aggressively adding AI to their video feeds. Ring (Amazon) uses it in its doorbells, while Google integrates it across its Nest indoor and outdoor cameras. It is no surprise that these companies are leaning heavily into AI-enabled video; they already hold massive market share, and cameras are a natural hardware evolution for them [1].
However, as this technology moves from the front porch to the living room, a line is crossed. Some companies are placing these AI eyes in our most private spaces. Others, like Xfinity and Linksys, are pivoting to WiFi signals, monitoring presence through a medium that only seems invisible.
I mean, it’s just a WiFi signal, right?
Unfortunately, the answer is no. The problem with both cameras and WiFi sensing is effectively the same: Privacy.
It is unavoidable: AI needs to perceive the environment to act in it. But the method we choose to give AI "sight" will define whether our future feels safe and human, or invasive and watched.
When a camera-based AI looks at the real world, it isn’t just seeing furniture. It is observing people in their most intimate environments. Once it sees, it cannot unsee. That data is processed, often stored, and inherently accessible. We must ask: What is the right way for AI to sense human presence without turning a private home into the source material for the next data breach or viral TikTok hack?
Technologists love cameras because they are data-rich. They capture the "Who" (identity) and the "What" (activity). They allow models to recognize faces, interpret gestures, and identify specific actions. This makes predicting schedules and behaviors remarkably easy.
In the public sphere, this might be acceptable. In the private sphere, it is a surveillance nightmare.
If a company promised you next-generation safety but required a camera in every room to deliver it, you would likely say no, even with assurances of "local-only" storage or strong encryption. Why? Because the feeling of being watched is as intrusive as the act itself.
Recognizing the resistance to cameras, some companies have turned to WiFi sensing as a "privacy-friendly" alternative. This technology detects motion by analyzing how radio signals bounce around a room and how moving bodies disturb them.
It sounds clever, but it creates a false sense of security. A camera is obvious; we understand the risk. WiFi sensing is ambiguous. While it doesn't capture a JPEG, research shows that WiFi Channel State Information (CSI) can be used to infer surprisingly detailed activities, from keystrokes on a keyboard to subtle body movements [2]. It captures the "What" without you ever realizing you are being watched.
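To see how little "just a WiFi signal" hides, consider a toy sketch of the core idea behind CSI-based sensing: in a static room, per-subcarrier signal amplitudes stay nearly constant, while a moving body modulates them over time, so even a crude variance threshold reveals presence. The simulation and threshold below are illustrative assumptions, not any vendor's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_csi(n_packets=200, n_subcarriers=30, motion=False):
    """Simulate CSI amplitudes across subcarriers (toy model).
    A static room yields near-constant amplitudes; a moving body
    perturbs the multipath channel, modulating them over time."""
    base = 1.0 + 0.3 * np.sin(np.linspace(0, np.pi, n_subcarriers))
    noise = 0.01 * rng.standard_normal((n_packets, n_subcarriers))
    if motion:
        # Movement adds a slow, per-subcarrier amplitude oscillation.
        t = np.linspace(0, 4 * np.pi, n_packets)[:, None]
        noise += 0.2 * np.sin(t + rng.uniform(0, np.pi, n_subcarriers))
    return base + noise

def detect_presence(csi, threshold=0.05):
    """Flag motion when the mean per-subcarrier amplitude variation
    exceeds a calibrated threshold -- no camera, no consent prompt."""
    return csi.std(axis=0).mean() > threshold

print(detect_presence(simulate_csi(motion=False)))  # empty room
print(detect_presence(simulate_csi(motion=True)))   # someone moving
```

Real systems go far beyond this, fingerprinting fine-grained activities from the same signal [2], which is precisely why the "it's just WiFi" reassurance fails.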
We have begrudgingly accepted surveillance in public. We should not have to accept it at the dinner table.
So, if cameras are too intrusive and WiFi is too deceptive, what is left?
The answer lies in a technology that has existed for decades but is only now entering the smart home: mmWave (Millimeter Wave) Radar.
Originally developed for military applications in the mid-20th century and expanded into the automotive sector for collision detection in the 1970s and 90s, mmWave is a sensing technology, not a filming one [3].
Unlike a camera, mmWave does not capture faces, skin, or clothing. Instead, it creates a "point cloud": a sparse 3D representation of presence and movement. By analyzing the shape and motion of these points, modern AI can learn to distinguish individuals by gait (how they walk) and determine their "safety state" (standing, sitting, or falling).
Crucially, it delivers the "Who" (context/identity) and the "Safety State" without the intrusive "What" (reading a confidential document, getting dressed, having a private conversation).
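The privacy argument is easiest to see in code: everything a fall detector needs can be read off the bare geometry of a point cloud. The heuristic below is a deliberately simple sketch with made-up thresholds; production systems train learned models on point-cloud sequences, but the input is the same faceless data.

```python
import numpy as np

def classify_safety_state(points):
    """Classify a radar point cloud (N x 3 array of x, y, z in metres)
    into a coarse safety state from geometry alone -- no faces, no
    images, just the shape of a presence. (Heuristic sketch only;
    thresholds are illustrative assumptions.)"""
    z = points[:, 2]
    height = z.max() - z.min()   # vertical extent of the cluster
    centroid_z = z.mean()        # how high its mass sits
    if height > 1.2 and centroid_z > 0.7:
        return "standing"
    if height < 0.6 and centroid_z < 0.4:
        return "fallen"
    return "sitting"

rng = np.random.default_rng(1)

def person(z_low, z_high, n=80):
    """Synthesize a cluster of reflections between two heights."""
    xy = rng.normal(0.0, 0.2, (n, 2))
    z = rng.uniform(z_low, z_high, (n, 1))
    return np.hstack([xy, z])

print(classify_safety_state(person(0.1, 1.7)))  # upright adult
print(classify_safety_state(person(0.1, 1.0)))  # seated figure
print(classify_safety_state(person(0.0, 0.3)))  # person on the floor
```

Notice what the classifier never touches: identity, clothing, or what is on the table. That is the "Safety State" without the "What."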
It is the Goldilocks solution: precise enough for personalized automation and fall detection, but abstract enough to preserve human dignity. As we build the future of the smart home, we must demand technology that respects the sanctity of our private spaces.