These AI glasses promised to make me smarter, and all I got was Clippy for my face

This is Optimizer, a weekly newsletter sent every Friday from Verge senior reviewer Victoria Song that dissects and discusses the latest phones, smartwatches, apps, and other gizmos that swear they’re going to change your life. Optimizer arrives in our subscribers’ inboxes at 10AM ET. Opt in for Optimizer here. We’ll be off for the next two weeks and back on November 7th.

As I wrote last week, I’m rapidly running out of body parts to do my job. Part of being human is knowing when to ask for help, so a few months ago, I enlisted senior editor Sean Hollister — a fellow smart glasses nerd — to help me test Halo Glass, an always-listening AI companion that lives inside a pair of glasses.

Halo is the brainchild of two former Harvard students who made headlines last year after they rigged a pair of Ray-Ban Metas to dox strangers in real time. In August, AnhPhu Nguyen and Caine Ardayfio announced they were making a pair of always-on AI glasses that could listen to, record, transcribe, and then organically feed you the answers to questions relevant to your real-time conversations. It’s sort of a mix between Cluely, another AI startup that aims to help you “cheat on everything,” and Bee, an AI wearable that claims to act as your second memory. Instead of a pin or wristband, this lets you view answers discreetly inside a pair of smart glasses.

So of course I wanted to test them.

Sean and I chatted with Ardayfio, who told us that while Halo will eventually make its own hardware, for now, we’d be among the first to experience their app running on the Even Realities G1 Glasses. You may not have heard of Even Realities, but it was among the more impressive smart glasses makers at CES. All we’d have to do is try out the prototype, compare notes, and then write up our experience. Easy, right?

Some of the confusion is that the Halo Glass prototype uses third-party hardware. That led to some annoying troubleshooting.

The appeal to both of us was having a second memory. We are busy, occasionally forgetful people. Wouldn’t life — and our jobs — be a tad easier if we stopped forgetting that one thing we told our colleagues, bosses, and spouses we’d do? Wouldn’t interviewing sources be easier if, when they used an esoteric term, a definition might pop up in real time without having to break the flow of conversation?

It sure sounds nice, but always-on AI wearables present a boatload of ethical conundrums. Since this is an entirely new product category, the ethics of it all took us a bit by surprise. For starters, Sean lives in California — a state that legally requires both parties to consent to recording a conversation. Is he committing a crime if he wears these glasses without disclosing to everyone around him that he’s recording? And Sean’s wife has a job that requires confidentiality. An always-on recording device could jeopardize her livelihood if Sean forgets to turn it off while she’s working and he’s nearby. As a result, Sean couldn’t actually test these glasses at home. Meanwhile, my spouse is royally fed up with always-listening AI wearables after I reviewed Bee and it transcribed one of our fights. (To test Friend, I had to wear it outside the home.) Our solution was for each of us to wear a pair of the G1 glasses running Halo and to hop on a video call to test it with each other.

In theory, Halo works like this: In the app, you see a live running transcription of the conversation happening around you. Every once in a while, there’s a pop-up factoid about something referenced. For example, maybe you’re talking about animals native to Australia, and someone asks which is the most dangerous. That answer gets sent to your glasses, and you can look like a smartypants in your conversation. Once the conversation is over, you see a quick summary of it and some action items to address — similar to what you’d compile at the end of a meeting.

Screenshots of the Halo app compiled

The summaries on Sean’s transcript (first two) were more useful than the one on mine (last).
Screenshots: Halo

In practice, our call was ridiculous.

It kicked off with a 20-minute troubleshooting session involving multiple firmware updates and disconnections. I’ll spare you the details except this, because it’s just… the most awkward way imaginable to interact with AI: To summon the display, the G1 glasses require you to look up. You can adjust the required angle — a wise choice, since the default is 40 degrees. That’s sort of like just throwing your head back to look at the ceiling. We both adjusted to roughly 15 degrees, but it’s still a comically obvious trigger.

Wonky prototype hardware is forgivable when you’re exploring an idea. But the idea itself, that AI glasses can make you appear smarter without the person you’re talking to knowing, makes me uncomfortable.

I talked to Sean about my concerns. We rambled about whether smart glasses really help people stay present in the moment. We wondered, can you really be yourself if you know you’re being recorded? What level of disclosure is ethical? How do you protect the privacy of your loved ones who may not be as keen on this tech as you are?

It was a riveting conversation, save for all the times AI would butt in. At that point, one of us would have to throw our head back to view whatever alert had popped up. Imagine Sean and me, 30 minutes into the call, throwing our heads back like deranged sea lions barking on a pier.

It would sometimes interject useless trivia. For example, it showed me the definition of “ensconced” after I used it correctly. I was mildly offended that AI perhaps thought I didn’t know the meaning of that word in context. When I referenced Cluely, the Halo AI instead gave factoids about Clueless, the “1995 coming-of-age comedy film directed by Amy Heckerling.” Typical AI.

Close up of Even Realities G1 Glasses

Clippy for my face was not how I anticipated this going.

The worst was when Halo displayed a message explaining that mobile phones first arose in the 1970s and ‘80s. Sean must have said something about phones for me to receive this. I relayed the fact to Sean. Then he told me his glasses showed the same notification. The AI again alerted me that phones first arose in the 1970s and ‘80s. We were stuck in a hellish AI-powered ouroboros. We bobbed our heads some more.

A few times, Halo AI offered helpful facts. It surfaced the definition for “nits” when we were talking about the displays on smart glasses. It defined “doomerism” when Sean and I spiraled, pondering the implications of always-on recording on the lives of people around us.

But ultimately, using Halo was more of a distraction than an aid. The entire time, roughly 10 percent of my brainpower went to wondering when the assistant would butt in or disconnect. Rereading the transcript of our conversation in the app, I found so many dropped threads I wished we’d delved deeper into, if not for all the distractions.

Sean told me his interest in Halo was sparked by a very human desire to “remember better.” I’d bet anyone with a to-do list can relate; I felt the same way when testing the AI wearable Bee. And yet this conversation, with AI spitting the same factoids to each of us at the same time, just reminded me of Microsoft’s Clippy: always there, nagging you with tidbits that weren’t all that useful, and interrupting your train of thought just as you got going.

For now, I think I’ll take my imperfect hodgepodge of analog Post-its and to-do lists. I will settle for potentially looking dumb in a conversation by asking, “I’m sorry, what does that mean?” It’s not exciting, but I’d rather not bob my head the next time I need an answer.