According to a March 28 report from The New York Times, Meta will roll out a series of multimodal AI features for its Ray-Ban smart glasses starting next month. The features, which entered early testing in December last year, include translation, object recognition, animal and scenery recognition, and more.
Users simply say "Hey Meta" followed by a prompt or question to activate the glasses' built-in AI assistant, which responds through speakers integrated into the frame. In testing conducted by The New York Times across settings such as grocery stores, driving, museums, and zoos, the glasses correctly identified pets and artwork. The results are not "100% accurate," however; for example, the assistant may struggle to identify animals that are far away or inside cages.
For translation, the glasses currently support English, Spanish, Italian, French, and German.
As previously reported, Meta CEO Mark Zuckerberg revealed the feature in an interview with The Verge in September last year. He said people can "ask the Meta AI assistant questions all day," meaning it can answer wearers' questions about what they are looking at or where they are.
Meta's Chief Technology Officer, Andrew Bosworth, also demonstrated the AI assistant in a video, showing, for example, that it could accurately identify and describe a California-shaped sculpture on a wall. He also described other features, such as having the assistant add descriptions to captured photos or perform translation and summarization, capabilities that are also common in products from other companies like Microsoft and Google.
For now, the glasses' AI capabilities remain in a testing phase and are available only to "a small subset of users who have opted in" in the United States.
Source: IT Home