New GPT-4 app can be ‘life-changing’ for visually impaired people

The first app to integrate GPT-4’s image-recognition abilities has been described as ‘life-changing’ by visually impaired users.

Be My Eyes, a Danish startup, applied the AI model to a new feature for blind or partially sighted people. Named “Virtual Volunteer,” the object-recognition tool can answer questions about any image it’s sent.

Imagine, for instance, that a user is hungry. They could simply photograph an ingredient and request related recipes.

If they’d rather eat out, they can upload an image of a map and get directions to a restaurant. On arrival, they can snap a picture of the menu and hear the options. If they then want to work off the added calories in a gym, they can use their smartphone camera to find a treadmill.

“I know we are in the midst of an AI hype cycle right now, but several of our beta testers have used the phrase ‘life-changing’ when describing the product,” Mike Buckley, the CEO of Be My Eyes, tells TNW.


“This has a chance to be transformative in empowering the community with unprecedented resources to better navigate physical environments, address everyday needs, and gain more independence.”

Virtual Volunteer takes advantage of an upgrade to OpenAI’s software. Unlike previous iterations of the company’s vaunted models, GPT-4 is multimodal, which means it can analyse both images and text as inputs.
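To make the idea concrete, here is a minimal sketch of what a multimodal request to a GPT-4-style chat API looks like: an image is base64-encoded and sent alongside a text question in a single message. This is an illustrative assumption about the general payload shape, not code from Be My Eyes; field names follow OpenAI’s published chat format, but exact details vary by API version, and no network call is made here.

```python
import base64

def build_image_question(question: str, image_bytes: bytes) -> list:
    """Build a multimodal chat message: one text part plus one image part.

    The 'content' list mixes a text entry with an image_url entry, which
    is how image inputs are passed to multimodal chat models.
    """
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{encoded}"},
                },
            ],
        }
    ]

# Hypothetical usage: ask about a photo of ingredients.
messages = build_image_question(
    "What ingredients are in this photo?", b"\xff\xd8\xff"
)
print(messages[0]["content"][0]["text"])
```

In a real app this `messages` list would be sent to the model endpoint; the point here is simply that image and text travel together as one input, which is what “multimodal” means in practice.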

Be My Eyes jumped at the chance to test the new functionality. While image-recognition systems are nothing new, the startup had never been convinced by the performance of existing tools.

“From too many mistakes to the inability to converse, the tools available on the market weren’t equipped to solve many of the needs of our community,” says Buckley.

“The image recognition offered by GPT-4 is superior, and the analytical and conversational layers powered by OpenAI increase value and utility exponentially.”

Be My Eyes previously supported users exclusively with human volunteers. According to OpenAI, the new feature can generate the same level of context and understanding. But if the user doesn’t get a good response or simply prefers a human connection, they can still call a volunteer.