by STIRworld | Nov 12, 2019
'The Augmented Human', an extensive study released by SPACE10, explores how increasing digital access has blurred the boundaries between people and devices. With digital interactions a way of life for most of us, it asks how far we have come, and where we are headed, in our relationship with technology.
While Part 1 explored how we got to the point where technology and humans interact seamlessly, here we cover the subsequent part of the report, which anticipates the components of the next paradigm shift.
The following excerpts have been taken from the report ‘The Augmented Human’, released by the Danish innovation lab.
A. THE COMPUTER INTERFACE – WHAT’S COMING NEXT?
“I feel there’s a really critical moment right now where human values and practices — people’s lived experiences — are going to have to be advocated for a lot more strongly, and those who create these technologies will have to be much more aware of their impact on individuals and societies,” says John Vines, Professor at Northumbria University’s School of Design.
In this paradigm, the experience of interaction will be much more smoothly integrated into everyday life.
In terms of product launches, the first half of the next decade should set the scene for how our relationship with our devices will cement itself. Apple is working on AR glasses; and Facebook, AI computing company Nvidia and Google have all committed publicly to releasing glasses within the next few years.
For now, the technology is quite immature. Constrained by the mobile phone’s hardware, it suffers from a narrow field of view, the so-called viewport problem. But as more competitors enter the market, the performance of the underlying technology should only improve.
All in the gestures: Companies such as Leap Motion and Nimble VR (recently acquired by Oculus) are developing technology that uses bodily interactions and gestures to make the connection between the physical and the digital feel more real.
Alexa, cook me a potato: In September 2018, Amazon unveiled dozens of new products. They all have one thing in common: they can be controlled with your voice. In recent years, the fidelity of the technology capturing voice has continued to mature, spearheaded by tech firms like Google and Baidu.
We’re now close to the point of having artificial voices that are indistinguishable from real ones to the untrained ear.
So when an artificial voice calls you, should it declare its identity? And what would it mean for a brand or company to have its own voice communicated through a smart speaker?
The developments we see today, including voice technology, are likely to form just one part of a multi-modal future — where voice, gesture, text and other interfaces all interact.
B. AUGMENTED INTELLIGENCE
“Computers already see better, hear better, and are better at many other things,” says Mo Gawdat, the former head of Google X, Google’s so-called moonshot laboratory. He predicts that AI will help machine intelligence surpass human intelligence by 2029. But the future Gawdat anticipates isn’t as dystopian as sometimes depicted by science fiction and the media. It’s a world in which computers enhance human existence rather than seek to wipe it out.
A world in which algorithms and machine learning help humans make decisions faster and more powerfully, thanks to sheer processing power.
Spatial Intelligence: A key ingredient in developing the next generation of mixed-reality technology is enabling computers to reason about the spatial layout of the world and the way it works.
First, a computer needs to locate the planes in an environment. Once it knows that, say, a floor is a plane or a table is a plane, it can accurately navigate a space. For overlaying virtual objects into the physical world, however, a higher degree of precision is required.
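The plane-finding step described above can be sketched as a plane fit over 3-D points. This is a deliberately simplified stand-in, not the method any AR framework actually uses (real systems apply far more robust techniques, such as RANSAC over depth data); the points and noise level here are invented for illustration.

```python
import numpy as np

def fit_plane(points):
    """Least-squares fit of a plane z = a*x + b*y + c to 3-D points.

    Returns the unit normal of the fitted plane. A roughly vertical
    normal means the computer has found a horizontal surface such as
    a floor or a table top.
    """
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    a, b, _ = coeffs
    normal = np.array([-a, -b, 1.0])
    return normal / np.linalg.norm(normal)

# Points scattered across a slightly noisy horizontal floor at z ≈ 0.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(200, 2))
z = 0.01 * rng.normal(size=200)
floor = np.c_[xy, z]

normal = fit_plane(floor)
print(normal)  # ≈ [0, 0, 1]: a horizontal plane has been detected
```

Once planes like this are identified, virtual objects can be anchored to them; the higher precision mentioned above comes from tracking many such surfaces continuously as the camera moves.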
Visual Intelligence: As cameras become an intrinsic part of our everyday lives, AI is being trained to analyse moving images, and entirely new products and opportunities are emerging as a result.
In the near future, the visual intelligence software in our phones will be able to recognise items and put them in our virtual shopping bags…
Emotional Intelligence: The aim of teaching computers to understand our emotions, and to reflect them back to us when we interact, is to improve human-computer interaction.
Many universities, such as MIT, are investigating ‘affective computing’: teaching AI how to monitor facial expressions, voices and writing, interpret feelings and reply sympathetically.
C. BRAND MOVEMENTS
As technology becomes ever more intertwined with everyday life, tech-enabled objects will increasingly sit alongside everyday objects. Ugly devices are out; well-designed, seemingly handcrafted gadgets are in.
Focus on design: From the Apple Watch to Snapchat’s Spectacles to HoloLens, these devices are no longer judged solely on their purpose and utility, but also on how they look.
Bespoke experiences: “Personalisation is what’s happening,” explains Lara Marrero, strategy director and retail practice leader at design firm Gensler.
Netflix, for example, emails suggestions of what to watch next, based on your recent viewing history, while other companies use targeted marketing to follow you around social media.
Companies are using the power of big data, collected and sifted through at great speed thanks to the immense computing power of artificial intelligence.
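The Netflix example above boils down to matching a viewing history against a catalogue. The following sketch uses invented titles and a crude genre-overlap score; production recommenders rely on large-scale collaborative filtering over big data, not anything this simple.

```python
# Hypothetical catalogue: title -> set of genre tags (all invented).
CATALOGUE = {
    "Space Docs": {"science", "documentary"},
    "Robot Wars": {"science", "action"},
    "Baking Hour": {"food", "reality"},
}

def recommend(history_genres: set) -> str:
    """Suggest the title whose genres best overlap the viewer's history.

    A stand-in for personalisation: score every candidate against what
    the viewer has already watched and surface the closest match.
    """
    return max(CATALOGUE, key=lambda t: len(CATALOGUE[t] & history_genres))

print(recommend({"science", "action"}))  # → "Robot Wars"
```

Even this toy version shows why the data matters: the quality of the suggestion depends entirely on how much of the viewer’s behaviour has been captured and tagged.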
D. WHERE ARE WE GOING?
In recent years, screens have shrunk and the barriers between humans and computers have dissolved, with interfaces becoming more intuitive and natural. If we continue in the same direction, witnessing the demise of the device and the rise of the accessory, we will discover a world in which screens and interfaces are both increasingly closer to our bodies (thanks to wearables and other interfaces) and all around us (thanks to increasingly immersive and augmented environments).
For one thing, there’s a good chance that we’ll eventually stop carrying around our phones. Instead we will have screens embedded in our glasses or, even farther out, contact lenses, and one day we may even have brain interfaces.
Physical objects will increasingly be imbued with digital properties, blurring the relationship between form and function.
Technological speed will increase in lockstep with the amount of information we call upon, and we won’t even think of it as remarkable. The term ‘AI-driven’ will disappear once everything relies on artificial intelligence.
Questions that arise: Will intuitive AR be limited to the privileged few who have the means to pay for it? And will augmenting the human body further exacerbate the divides between the haves and have-nots, and between the global north and south?
As long as we start thinking about these sticky issues today, the future is full of promise. And as we reach the end of the smartphone era and anticipate what comes next, we do so with hope, promise and unlimited potential.