The MIT Media Lab has created a prototype for a wearable device, AlterEgo, that uses internal vocalisation to communicate with computers. Rather than reading your mind, it detects tiny movements of the muscles around your vocal cords and larynx when you say words silently to yourself.
This would overcome one of the main obstacles to using voice tech in public – namely, the embarrassment and noise of talking to yourself. It's a clunky-looking prototype, but it could be a tangible demonstration of how AI can amplify human capacity and make tasks easier, rather than replacing humans altogether.
AR is much more accessible than VR because it works through a phone screen, but this stifles how much people can actually interact with this 'reality'. Enter Leap Motion's 'virtual wearables', which use impressively accurate hand-tracking technology (plus a headset) to enable natural hand-based interactions.
This is a huge improvement on the UX of AR, where interactions are normally handled by controllers or learned trigger gestures. You can pick up, rotate and generally play with any virtual object the way you would a physical one. The project is open source, though only a prototype headset exists at this stage.
Project Zanzibar is a Microsoft Research platform that blurs the distinction between the digital and physical worlds via 'tangible interaction'. Think a digitally enabled Toy Story, with toys brought to life and given memories (but in a non-creepy-doll kind of way), plus a mat that can locate, sense and communicate with objects.
Because the objects carry RFID tags recording everything they do, the enhancement works in two ways: making things come to life (fire a toy cannon and see it explode on screen) and creating records and 'memory' for physical objects.
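The 'memory' idea above can be sketched in a few lines: a mat detects a tagged object and appends each interaction to that object's history. This is a minimal illustrative sketch, not Zanzibar's actual API – the class names, `detect` method and event strings are all hypothetical.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class TaggedObject:
    """A physical object identified by its RFID tag, accumulating a 'memory' of events."""
    tag_id: str
    history: List[Tuple[float, str]] = field(default_factory=list)

    def record(self, timestamp: float, event: str) -> None:
        self.history.append((timestamp, event))

class SensingMat:
    """Hypothetical mat that locates tagged objects and logs what they do."""
    def __init__(self) -> None:
        self.objects: Dict[str, TaggedObject] = {}

    def detect(self, tag_id: str, timestamp: float, event: str) -> TaggedObject:
        # Create the object record on first sighting, then append the event.
        obj = self.objects.setdefault(tag_id, TaggedObject(tag_id))
        obj.record(timestamp, event)
        return obj

mat = SensingMat()
mat.detect("cannon-01", 0.0, "placed on mat")
mat.detect("cannon-01", 2.5, "fired")
print(mat.objects["cannon-01"].history)  # two events now in the cannon's 'memory'
```

The point of the sketch is that the mat, not the toy, holds the persistent record – so any object with a tag gets a memory for free.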
Through RFID-tagged bibs, signal-broadcasting mats and cameras around the course, every one of the 30,000 runners in this year's Boston Marathon will get their own video of the race, featuring them as the star. This is courtesy of adidas's 'Here to Create Legend' campaign.
This is the extreme end of personalisation and user ‘generated’ content – literally. It's content creation as a genuine service, bringing value to people personally, and an impressive turnaround speed for content creation at scale.
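The mechanics behind that turnaround can be imagined as a simple matching problem: each bib ping at a checkpoint mat gives a timestamp, and footage from the camera at that checkpoint is cut to a short window around it. The sketch below is a guess at the shape of such a pipeline – the data, checkpoint names and window length are all illustrative, not adidas's actual system.

```python
from typing import Dict, List, Tuple

# Hypothetical RFID pings per bib: (checkpoint, seconds since the start gun).
pings: Dict[str, List[Tuple[str, float]]] = {
    "bib-1234": [("10k", 2520.0), ("halfway", 5400.0), ("finish", 11100.0)],
}

CAMERA_WINDOW = 10.0  # seconds of footage kept either side of each ping

def clip_plan(bib: str) -> List[Tuple[str, float, float]]:
    """Return (checkpoint, clip_start, clip_end) windows to cut for one runner."""
    return [(cp, t - CAMERA_WINDOW, t + CAMERA_WINDOW) for cp, t in pings[bib]]

print(clip_plan("bib-1234"))
```

Because each runner's video is just a deterministic function of their ping timestamps, 30,000 personalised edits can be produced automatically – which is what makes this scale of personalisation feasible at all.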
Unilever is testing a new video format that encourages people to watch adverts by donating to a charity of the viewer’s choice if they watch for at least 15 seconds. It’s a clear exchange and offers better transparency around advertising – people know brands want them to watch ads, and this gives them a clear value for doing so.
The format applies a level of gamification to avoid a sense of bribery: less ‘watch this and you can have that’, more language about unlocking the donation, tied to a countdown, with an interactive choice over which charity receives it. This draws the viewer in and shows the value of their role in the exchange.