Wouldn’t it be sweet if Google Project Glass had hands-free biofeedback control? Will we be able to control our future devices with our brain activity?
Lately, I’ve heard a lot about Google Project Glass. One of the things I’ve learned is that the glasses operate with a touch panel on the side of the frames. In this video, Sergey Brin lets someone else try them on, and it’s clear that this is true. While the project seems impressive enough, it got me wondering where technology could be headed if neurofeedback sensors became the basis of a brain-computer interface precise enough for everyday use.
Consider that Microsoft Kinect may soon find its way into your vehicle, letting us magically answer the phone in the car with a wave of our hands. What if camera sensors and neurofeedback sensors together could combine image and heat recognition with brain activity?
Overall, the evolution seems straightforward enough. We went from the mouse and keyboard to touch screens remarkably fast. Now, all of a sudden, Project Glass comes out just as the tablet and smartphone markets are starting to get a little saturated. Some folks still prefer buttons; others prefer keyboards. All of these input methods involve moving in some way. What if devices responded to the way we think?
The Application of Neurofeedback Sensors:
If Google Project Glass is on our heads, that seems like a great place to put a few specially engineered neurofeedback sensors. They could form a brain-computer interface, slowly learning how we think over time. Of course, this all sounds much more appealing when we talk about the experience of wearing augmented reality glasses.
The Neurofeedback Calibration Experience:
After opening your new Google glasses and putting them on, you would be prompted to go through a number of basic biofeedback calibrations for the brain-computer interface.
STOP – A Basic Example of Biofeedback Calibration
You would see a word like STOP while the neurofeedback sensors take a snapshot of your brain activity. At that point, the glasses would be calibrated to STOP playing a movie or STOP playing a song. After enough of these seemingly random snapshots, you would go through a series of tutorials to learn how to control the brain-computer interface by aligning your brain activity with other commands. How easy would it be to reproduce the same brain activity on command? Is it as simple as recalling a thought or a word?
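To make the calibration idea concrete, here is a minimal sketch of how a device might turn those snapshots into commands: record several labeled readings per command, average them into a template, and match new activity against the nearest template. Everything here is hypothetical — the command names, the simulated sensor readings, and the feature count are all made up for illustration, not anything Google has described.

```python
import random

COMMANDS = ["STOP", "PLAY", "CAPTURE"]
N_FEATURES = 8  # e.g. a few band-power readings per sensor (hypothetical)


def record_snapshot(command, rng):
    """Simulate one sensor reading while the wearer focuses on a command."""
    base = COMMANDS.index(command)  # each command gets a distinct signature
    return [base + rng.gauss(0, 0.1) for _ in range(N_FEATURES)]


def calibrate(rng, snapshots_per_command=20):
    """Average repeated snapshots into one template per command."""
    templates = {}
    for cmd in COMMANDS:
        samples = [record_snapshot(cmd, rng) for _ in range(snapshots_per_command)]
        templates[cmd] = [sum(col) / len(col) for col in zip(*samples)]
    return templates


def classify(reading, templates):
    """Return the command whose stored template is closest to the reading."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda cmd: dist(reading, templates[cmd]))


rng = random.Random(42)
templates = calibrate(rng)
print(classify(record_snapshot("STOP", rng), templates))
```

The design choice is the interesting part: the device never needs to understand *what* you are thinking, only whether your current activity looks more like the STOP snapshots than the PLAY ones — which is why the tutorial phase of repeating each command matters so much.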
Taking A Photo Is A Thought
When you think about taking a picture, it just happens. How amazing would this be? Could it be more magical than gesture and touch controls? Do we have enough control over our thoughts to use a technology like this? Consider this link (Google announce full resolution uploads). Then imagine how much more storage and bandwidth will be available in another five years. Live video surveillance of our entire lives could be around the corner — and if biofeedback were implemented, so could recording of our brain activity. Is augmented reality really worth it? Would a cyber reality become more real? Could brain activity be influenced by the neurofeedback sensors as well as recorded?
Certainly, with as many privacy concerns as we have right now over social media, there would be a handful of questions about Google project glass having access to our brain activity. If we could compose blogs with our minds, what other kinds of software would emerge to organize our thoughts? Are privacy concerns more important to you than the potential enhancements to your everyday life?
Will Google Project Glass Get Biofeedback Hands-Free Control Someday?
As for me, I can’t wait for the day that biofeedback augmented reality devices come out, but I have no idea if brain-computer interfaces are something Google is actually working on.
Would you buy a biofeedback-controlled augmented reality brain-computer interface? I’d probably be happy just to get my hands on the current offering of Project Glass.