Since the rise of social media and the iPhone, our memories have been deteriorating quickly.

"More in-person time with friends and family. Less political knowledge, but also less partisan fever. A small bump in one’s daily moods and life satisfaction." These are findings from a Stanford study of people who deleted their Facebook accounts.

Though it has brought many upsides, the personal computer and smartphone revolution has changed how we interact and what we remember.

And as society continues to reach new heights, we will eventually:

  1. replace computers with smart glasses, and
  2. allow technology to interpret our spoken words (conversational intelligence/AI).

We will need to find ways to shift the trajectory from technology that distracts from the human experience to technology that serves the human experience.

If successful, smart glasses and conversational intelligence would help people communicate and collaborate more effectively. Though mainstream adoption is 10+ years away, if we can nudge these technologies in a more personal direction, people will be smarter and better connected because of it.

Hurdles to Overcome

Prior to augmented reality and conversational intelligence becoming mainstream, we'll need better microphone technology and natural language processing tools to accurately transcribe speech, distinguish voices from one another, and understand sentiment.
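To make that pipeline concrete, here is a minimal, hypothetical sketch of the last step: scoring sentiment over a transcript that has already been transcribed and separated by speaker. The word lists, speakers, and utterances are invented for illustration; a real system would rely on trained speech and language models rather than keyword lookups.

```python
from collections import defaultdict

# Illustrative word lists only; a production system would use a trained
# sentiment model, not a handful of keywords.
POSITIVE = {"great", "love", "enjoyed", "happy", "excited"}
NEGATIVE = {"bad", "hate", "worried", "frustrated", "tired"}

def sentiment(utterance: str) -> int:
    """Score an utterance: +1 per positive word, -1 per negative word."""
    words = utterance.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# A diarized transcript as (speaker, utterance) pairs -- the kind of output
# a speech-to-text system that distinguishes voices would produce.
transcript = [
    ("Jennifer", "I really enjoyed the trip and the food was great"),
    ("Stephen", "I was worried about the deadline and felt tired"),
]

# Aggregate sentiment per speaker across the conversation.
scores = defaultdict(int)
for speaker, utterance in transcript:
    scores[speaker] += sentiment(utterance)
```

Even this toy version shows why the hurdles matter: the scoring is trivial, but it is only as good as the transcription and voice separation feeding it.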

We'll need creative solutions to state and federal laws that prevent recording without consent. And we'll need better batteries and algorithms to transmit, transcribe, and categorize speech.

What to Expect

Imagine you and a friend each have a pair of glasses, and out of the corner of your eye you see the conversation you're having being transcribed in real time. You receive suggestions based on your context: be it a better way to connect with the other person, an educational concept to learn, or a way to keep track of your experiences.

Picture Jennifer, a fictional individual from Paris. Jennifer likes to travel, and she begins by looking at the conversations she's had over the past week. These conversations are heatmapped by topic, sentiment, talk time, region, and tasks.

This is Jennifer's view prior to having an active conversation.

Next is Jennifer's profile, where she specifies what in life she aims to prioritize. Her conversations with others will naturally group by the topics she specifies.

This is Jennifer's view prior to having an active conversation.

Lastly are Jennifer's active conversations. She meets Stephen for a cup of coffee to discuss a project at work. As they talk, the system transcribes and categorizes her notes, and even reminds her to ask Stephen about his recent vacation. At any time, Jennifer can tap 'Log Task' to have the current topic and speech logged as an action item. The system takes messaging and email interactions into account, allowing Jennifer to connect 1:1 with Stephen rather than navigating a smartphone or computer mid-conversation.
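The categorize-and-log flow above can be sketched in a few lines. This is a hypothetical illustration, not a real API: the topic keywords are stand-ins for whatever Jennifer specifies in her profile, and a real system would classify topics with a language model rather than keyword matching.

```python
# Stand-ins for the topics Jennifer specifies in her profile.
# Keywords here are illustrative only.
TOPIC_KEYWORDS = {
    "work": {"project", "deadline", "meeting"},
    "travel": {"vacation", "trip", "flight"},
}

def categorize(utterance: str) -> str:
    """Route an utterance to the first profile topic whose keywords it mentions."""
    words = set(utterance.lower().split())
    for topic, keywords in TOPIC_KEYWORDS.items():
        if words & keywords:
            return topic
    return "uncategorized"

tasks = []

def log_task(utterance: str) -> dict:
    """'Log Task': capture the current speech and its topic as an action item."""
    item = {"topic": categorize(utterance), "note": utterance}
    tasks.append(item)
    return item

# Jennifer taps 'Log Task' mid-conversation:
item = log_task("Send Stephen the project brief before the deadline")
# item["topic"] == "work"
```

The same routing step is what lets her past conversations group naturally under the topics in her profile.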

We'll need to design an interface where everything in the periphery of Jennifer's eyes is relevant to her in-person surroundings, aiding and enhancing the 1:1 experience.

This is Jennifer's view from her smart glasses while having a conversation.

The video below overviews a few of the suggestions we'll need to figure out how to surface in glasses.

Though all of this feels inevitable, we still need to find ways to shift the trajectory from technology that distracts from the human experience to technology that serves the human experience.