So here's what happened.
I've got two Kinects (different generations — v1 from Xbox 360, v2 from Xbox One), a pile of old Android phones that were just sitting in a drawer, and an idea that probably sounds insane to anyone who isn't building human-computer symbiosis interfaces in their garage.
The Vision: What if UI wasn't about clicking buttons? What if you could just... intend something and it happened?
Not voice commands (too slow, too ambiguous).
Not traditional gesture control (too limited).
But actual biological intent recognition through sensor fusion.
The Hardware:
- Kinect v1 & v2 for skeletal tracking
- Android phones as distributed IMU arrays
- Heart rate sensor (pulse oximeter)
- Future: EEG headband for actual brainwaves
The Problem: I needed to actually wear these sensors to test the biometric feedback loops.
The Solution: Duct tape.
Yes, I currently have phones attached to my wrist and chest. Yes, my family thinks I've lost it. No, I will not stop.
Every UI paradigm has been about explicit commands. Click here. Type that. Say this. But that's not how humans actually think. We don't think "I will now click the File menu..." We think "I want a new document" and then our brain translates that intent into mechanical actions.
What if we could skip that translation layer?
What if the computer could recognize intent directly from biological signals — gesture, posture, heart rate variability, eye tracking, eventually EEG — and just do the thing?
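Here's a toy sketch of what that loop could look like. Everything in it is made up for illustration (FusedFrame, classify_intent, the little action table); the real classifier has to be learned rather than a pile of if-statements, and the feature values assume the sensor streams are already cleaned up and time-aligned.

```python
# Toy sketch of the intent loop: fused biological signals in, action out.
# All names here are illustrative, not an existing API.

from dataclasses import dataclass
from typing import Optional

@dataclass
class FusedFrame:
    wrist_velocity: float   # from Kinect skeletal tracking, m/s
    lean_forward: float     # torso pitch from a phone IMU, degrees
    hrv_rmssd: float        # heart rate variability, ms (stress proxy)

# Hypothetical intent -> action table; the user never issues these commands explicitly.
ACTIONS = {
    "reach":   lambda: print("bring the nearest window to the foreground"),
    "dismiss": lambda: print("minimize everything"),
    "focus":   lambda: print("silence notifications"),
}

def classify_intent(frame: FusedFrame) -> Optional[str]:
    """Rule-based stand-in for whatever model eventually does this for real."""
    if frame.hrv_rmssd < 20 and frame.lean_forward > 10:
        return "focus"      # stressed and leaning in: deep-work mode
    if frame.wrist_velocity > 1.5:
        return "dismiss"    # fast sweep of the arm
    if 0.3 < frame.wrist_velocity <= 1.5 and frame.lean_forward > 5:
        return "reach"
    return None

def on_frame(frame: FusedFrame) -> None:
    intent = classify_intent(frame)
    if intent:
        ACTIONS[intent]()

# One fused frame from the sensors
on_frame(FusedFrame(wrist_velocity=0.8, lean_forward=12.0, hrv_rmssd=45.0))
```

The details will change, but the shape is the whole thesis: signals in, intent out, action fired, with no explicit command anywhere in the middle.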
Current Status:
- ✓ Kinect sensor fusion working
- ✓ Android IMU streaming data (receiver sketch below)
- ✓ Basic gesture recognition
- ⏳ Heart rate variability for stress detection (RMSSD sketch below)
- ⏳ EEG integration
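For the IMU streaming item: the phones just push accelerometer and gyro readings over the LAN and the PC collects them. A minimal receiver sketch, assuming each phone sends small JSON packets over UDP in a format and on a port I picked arbitrarily (the sender app on the phone has to agree):

```python
# Minimal UDP receiver for the phone-side IMU streams.
# Assumption: each phone sends JSON like
#   {"id": "wrist", "ax": ..., "ay": ..., "az": ..., "gx": ..., "gy": ..., "gz": ...}
# at roughly 50 Hz. Packet format and port are my choices, not a standard.

import json
import socket

PORT = 5555  # arbitrary; must match the phone app

def listen() -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", PORT))
    latest: dict[str, dict] = {}  # last reading per phone, keyed by body location
    while True:
        data, addr = sock.recvfrom(1024)
        try:
            reading = json.loads(data)
        except json.JSONDecodeError:
            continue  # drop malformed packets; the duct tape extends to the protocol
        latest[reading.get("id", addr[0])] = reading
        # ...hand `latest` to the fusion loop here

if __name__ == "__main__":
    listen()
```

UDP on purpose: dropped packets don't matter at 50 Hz, because the fusion loop only ever cares about the latest reading per body location.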
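And for the heart rate variability item, which is still in progress: the usual starting point is RMSSD over the beat-to-beat (RR) intervals. A sketch of the math, assuming the pulse oximeter hands over intervals in milliseconds; the stress threshold is a placeholder, not a calibrated number:

```python
# RMSSD over RR intervals: the standard short-window HRV measure.
# Assumption: the pulse oximeter reports beat-to-beat intervals in milliseconds.

import math

def rmssd(rr_intervals_ms: list[float]) -> float:
    """Root mean square of successive differences between heartbeats."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def looks_stressed(rr_intervals_ms: list[float], threshold_ms: float = 25.0) -> bool:
    # Lower RMSSD generally means lower parasympathetic activity, i.e. more stress.
    return rmssd(rr_intervals_ms) < threshold_ms

# Example: a calm-ish 10-beat window (~70 bpm with healthy variability)
print(looks_stressed([850, 900, 830, 890, 845, 900, 840, 885, 850, 895]))
```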
And yes, when the compiler succeeds, Pickle Rick does a backflip.
Because science isn't about WHY. It's about WHY NOT.