Thoughts on AR glasses



What would AR glasses be like in 3 years?

Companies such as North, Intel, and Amazon have developed a variety of smart glasses, and rumor has it that Apple is going to release smart glasses as well.

After browsing through numerous video reviews of smart glasses, I started wondering what the future of smart glasses will look like and how we could leverage their advantages to design the next generation of computing systems.

I’m optimistic about AR glasses after seeing how drastically hand-tracking technology has improved, thanks to the neural interface technology developed by CTRL Labs and to advances in computer vision such as ConvNets (convolutional neural networks).
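To make the computer-vision side concrete, here is a minimal sketch of a ConvNet that classifies a cropped hand image into a handful of gesture classes. It is a toy illustration in PyTorch; the input size, class count, and architecture are my own assumptions, and real hand-tracking pipelines (keypoint regression, temporal models) are far more elaborate.

```python
import torch
import torch.nn as nn

class GestureConvNet(nn.Module):
    """Toy ConvNet: 64x64 RGB hand crop -> logits over gesture classes."""

    def __init__(self, num_gestures: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_gestures)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(start_dim=1))

# A dummy batch with one 64x64 RGB crop of a hand:
logits = GestureConvNet()(torch.randn(1, 3, 64, 64))
print(logits.shape)  # torch.Size([1, 5])
```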


Humble thoughts on a vision for AR glasses in 3 years


A user wears either lightweight smart glasses or contact lenses, optionally paired with a wearable such as a watch or a ring. Wearables are for those who want dynamic controls, with or without the added benefit of health tracking.

As more input methods become supported, a user can switch among voice commands, gestural controls, keyboard input, eye tracking, and wearable controls. I’d leave invasive BCIs out of this list for now, as they are a whole other conversation. :)

Supporting a variety of input methods accommodates numerous user scenarios and makes the glasses easier, and more appealing, to use daily. A rough sketch of this kind of input switching follows the example below.

For example, Mendy is wearing both smart glasses and a wrist wearable today, and she’s riding a bus to work. What would her interactions with her smart glasses look like?
  1. Mendy can issue voice commands silently when her hands aren’t free, since she doesn’t want to disturb other passengers. The hardware that supports this vision today is a patch-like computing system that picks up otherwise undetectable neuromuscular signals triggered by internal verbalizations. What if we could build this technology into longer ear hooks, for example?
  2. She can use eye tracking to navigate her apps and, thanks to neural interface technology, simply imagine her hand clicking or making different gestures to control the interfaces on her smart glasses.
  3. She can remotely control her laptop.
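Here is the sketch mentioned above: a rough illustration of how switching among these input modalities might look in software, assuming each backend (voice, gaze, gesture, wearable) exposes a common polling interface. All names here are hypothetical and purely illustrative.

```python
from enum import Enum, auto
from typing import Callable, Dict

class InputMode(Enum):
    VOICE = auto()
    GAZE = auto()
    GESTURE = auto()
    WEARABLE = auto()

class InputRouter:
    """Routes 'read the next user event' calls to the active modality."""

    def __init__(self, default: InputMode = InputMode.GAZE):
        self._handlers: Dict[InputMode, Callable[[], str]] = {}
        self.active = default

    def register(self, mode: InputMode, handler: Callable[[], str]) -> None:
        self._handlers[mode] = handler

    def switch(self, mode: InputMode) -> None:
        # E.g., drop to gaze + gestures when voice would disturb other riders.
        self.active = mode

    def poll(self) -> str:
        return self._handlers[self.active]()

router = InputRouter()
router.register(InputMode.GAZE, lambda: "dwell-select: Mail")
router.register(InputMode.VOICE, lambda: "silent command: open Mail")
router.switch(InputMode.VOICE)
print(router.poll())  # silent command: open Mail
```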

Magical moments


As we know, spatial computing bridges digital and physical spaces. Many cool use cases and magical moments can be unlocked because of this.

A user will be able to interact with their products and spaces through the most natural interactions, the ones humans have developed over hundreds of years.

How cool would it be if we could:

1. Replace physical products with digital content

For example, a user can open a digital book or newspaper and read, or check the time on a digital watch rendered on the wrist.




A user can place digital content in different places. I believe notes will work better if they are associated with spaces, because that leverages our spatial memory.
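As a toy illustration of space-anchored notes, here is a sketch that pins notes to named spatial anchors and surfaces whichever notes are near the point a user is looking at. The anchor structure, coordinates, and radius are assumptions for illustration, not any real AR SDK.

```python
import math
from dataclasses import dataclass, field

@dataclass
class SpatialAnchor:
    name: str
    position: tuple          # (x, y, z) in room coordinates; hypothetical units
    notes: list = field(default_factory=list)

def notes_near(anchors, point, radius=0.5):
    """Collect notes pinned within `radius` meters of a gaze/touch point."""
    return [note for anchor in anchors
            if math.dist(anchor.position, point) <= radius
            for note in anchor.notes]

fridge = SpatialAnchor("kitchen fridge", (1.2, 1.5, 0.3))
desk = SpatialAnchor("office desk", (4.0, 0.8, 2.5))
fridge.notes.append("Buy oat milk")
desk.notes.append("Ship the design review by Friday")

# Glancing at the fridge surfaces the note pinned there:
print(notes_near([fridge, desk], point=(1.3, 1.4, 0.3)))  # ['Buy oat milk']
```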


2. Control the internet of things with a glance

For example, a user can turn a device on or off with a glance, voice, or gestural controls, or trigger the control menu of a speaker by simply looking at it and then use their hands to interact with the digital menu.
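Here is a rough sketch of the glance-to-control idea: find the registered device that lies closest to the user’s gaze ray, then toggle it. The device registry, positions, and gaze vector below are hypothetical stand-ins rather than a real smart-home API, and the gaze direction is assumed to be a unit vector.

```python
import math

# Hypothetical registry of IoT devices and their room positions.
devices = {
    "living room lamp": {"position": (2.0, 1.0, 0.5), "on": False},
    "kitchen speaker":  {"position": (4.0, 1.2, 3.0), "on": False},
}

def gazed_device(origin, direction, max_angle_deg=5.0):
    """Return the device closest to the gaze ray, within a small cone."""
    best, best_angle = None, max_angle_deg
    for name, dev in devices.items():
        to_dev = tuple(p - o for p, o in zip(dev["position"], origin))
        dist = math.sqrt(sum(c * c for c in to_dev))
        cos = sum(d * t for d, t in zip(direction, to_dev)) / dist
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos))))
        if angle < best_angle:
            best, best_angle = name, angle
    return best

# Gazing straight at the lamp from eye height at the origin:
target = gazed_device(origin=(0.0, 1.0, 0.0), direction=(0.9701, 0.0, 0.2425))
if target:
    devices[target]["on"] = not devices[target]["on"]
    print(f"{target} -> {'on' if devices[target]['on'] else 'off'}")
```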



A user can pull digital content out of a laptop with a finger. Using multiple screens to boost productivity is a must-have. Until the graphics on AR glasses are good enough, we will still rely on our laptops or VR.


3. Improve communication

I believe this has been discussed many times, as it’s a great feature for connecting people around the world: a user can benefit from instant translation with XR.
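A minimal sketch of that translation loop: transcribe the incoming speech, translate it, and render the result as a caption in the wearer’s view. The three functions below are hypothetical placeholders standing in for real speech-recognition, machine-translation, and AR-rendering components.

```python
def transcribe(audio_chunk: bytes) -> str:
    # Placeholder for a streaming speech-recognition model.
    return "¿Dónde está la estación de tren?"

def translate(text: str, target_lang: str = "en") -> str:
    # Placeholder for a machine-translation model.
    return "Where is the train station?"

def render_caption(text: str) -> None:
    # Placeholder for drawing a caption near the speaker in the AR view.
    print(f"[caption] {text}")

def caption_loop(audio_chunks):
    for chunk in audio_chunks:
        render_caption(translate(transcribe(chunk)))

caption_loop([b"\x00\x01"])  # [caption] Where is the train station?
```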




Final thoughts


The technologies and use cases I mentioned in this article are public knowledge. I believe many of the ideas above have already been developed or are currently in development. I simply wanted to document some personal thoughts, and I hope they inspire my readers in some way. :)



© Qiao Huang. ALL RIGHTS RESERVED