Media & Technology

Google Soli - putting people at the heart of a revolutionary new product

Google’s Advanced Technology and Projects (ATAP) team partnered with AllofUs to explore and prototype use cases for Google Soli, its micro-radar technology.

We worked with Google to understand how its new micro-radar technology, Soli, might be used across a range of scenarios. Our user research informed the prototypes and proofs of concept we went on to develop. We also developed the concept of Virtual Tools - simple micro-gestures, such as rubbing fingertips together to operate a virtual slider or dial, that could be applied to a wide range of purposes. Soli was one of the stars of the show at the 2016 Google I/O conference, and it continues to be developed by the Google ATAP team.
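The Virtual Tools idea is easiest to grasp with a concrete example. The sketch below shows one way a fingertip-rubbing gesture could drive a virtual dial: per-frame fingertip displacement is accumulated and clamped to the dial's range. It is purely illustrative - the real Soli pipeline and SDK are not shown, and `RadarFrame`, its `fingertip_delta` field, and the sensitivity constant are assumptions invented for this sketch.

```python
# Illustrative sketch only: Soli's actual SDK and signal pipeline are not shown.
# Assumes a hypothetical stream of per-frame radar features exposing a
# fingertip_delta value (relative fingertip displacement, in millimetres).

from dataclasses import dataclass


@dataclass
class RadarFrame:
    fingertip_delta: float  # hypothetical per-frame fingertip displacement (mm)


class VirtualDial:
    """Maps a fingertip-rubbing micro-gesture onto a bounded dial value."""

    def __init__(self, sensitivity: float = 0.05, lo: float = 0.0, hi: float = 1.0):
        self.value = lo
        self.sensitivity = sensitivity  # dial units per mm of fingertip travel
        self.lo, self.hi = lo, hi

    def update(self, frame: RadarFrame) -> float:
        # Accumulate displacement, then clamp to the dial's range.
        self.value += frame.fingertip_delta * self.sensitivity
        self.value = min(self.hi, max(self.lo, self.value))
        return self.value


# Usage: feed frames from a (hypothetical) radar pipeline into the dial.
dial = VirtualDial()
for delta in [1.2, 0.8, -0.5]:  # stand-in values for a short rubbing gesture
    print(f"dial value: {dial.update(RadarFrame(delta)):.3f}")
```

The same accumulate-and-clamp pattern would apply to a virtual slider; only the mapping from displacement to output changes.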

Our Approach

Though the task in hand was to prototype applications and use cases for Google Soli, we were conscious that technologies like Soli don't sit in silos. So we started the project with a deep dive into the language of gestural communication and ergonomic principles, looking across a range of interaction modes from voice to gesture and touch. This helped us identify opportunities and white space for Soli's radar technology, which in turn helped us define a set of standard interaction models for the Google ATAP team to test against the hardware. We then worked with Google's engineers on a series of hardware tests and experiments to turn these models into workable, usable interactions.

Radar visualisation loop: Google Soli

Our brief was to envision a future in which the human hand becomes a universal input device for interacting with technology.

Building the Use Cases

To envision possible use cases, we facilitated a series of co-creation workshops that explored connections and relationships between user archetypes such as Teens, the Differently-Abled, and the Elderly across a variety of contexts including Education, Health, Fitness, and Home life. This helped us define the primary interaction models for Google Soli, culminating in a set of four characteristics that gave Google a clear view of potential areas for further investigation and future development.

Soli gesture sketches

Make, Test, Learn

We took the work from the research phase and built it into a range of prototypes and working proofs of concept, using an iterative make, test, learn approach to validate the use cases we'd identified. The most successful went forward for further development with the Google engineers.

Results

Ultimately, our work identified three broad strategic implementations for Google Soli:

- As an embedded component within a device - fixed in a product

- As a ubiquitous presence - fixed in an environment, e.g. a room

- As a “precious” device that was always on and reacted to your context - radar everywhere, e.g. a wearable that knew you were driving and offered the right tools for that occasion

Each implementation was packaged with its own set of interaction principles and gestural language.

Google Soli is now in full commercial development within Google and is available as an alpha dev kit. Evidence of our gestural language and principles has started to appear in a range of consumer devices, from wearables to domestic products.

Our Role:

- Innovation
- Product Design
- Technology Consultancy
- Design Research