Desktop 2.0

Information Organization + Augmented Reality

A six-week concept project where I explored using Augmented Reality to extend the desktop computer experience. How can AR supplement, rather than replace, the rich tactile experience of computing?

Introduction

On our computers we have an incredible amount of information, and for the most part we use numerous files, folders, and applications to organize this "stuff". Ironically, we are really quite terrible at naming, sorting, and organizing said "stuff". This can lead to many problems down the road, namely not being able to find things!

Our computers have handy-dandy search functions to help us locate our files, but they rely critically on human memory and our capacity to name things in ways we can remember... which, as we covered, most people are very bad at doing.

Desktop 2.0: An Introduction

So what if our computers didn't rely so heavily on our capacity to name things well and search for them? What if our computers were sensitive to the kinds of activities we use our devices for, and adaptively showed us related files we might find useful? Desktop 2.0 imagines an interface where your virtual desktop space adapts to the state of your current activity.

Product Initiatives

The high-level goals of Desktop 2.0.

Understand how stuff relates.

Using machine learning to connect digital content in ways that mean something to you, the user. Core to the experience is technology that can understand how our files relate.
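
To make this a little more concrete, here's a minimal sketch of one way "how stuff relates" could be computed, using TF-IDF text similarity as a stand-in for the richer machine learning the concept imagines. The function and its parameters are illustrative, not a real implementation:

```python
# A toy sketch of "understanding how stuff relates": rank files by how
# similar their text content is. TF-IDF is a simple stand-in here; the
# real concept imagines richer signals like usage patterns and context.
from pathlib import Path

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def related_files(paths: list[Path], query_index: int, top_k: int = 5):
    """Return the top_k files whose contents most resemble paths[query_index]."""
    texts = [p.read_text(errors="ignore") for p in paths]
    vectors = TfidfVectorizer(stop_words="english").fit_transform(texts)
    scores = cosine_similarity(vectors[query_index], vectors).ravel()
    ranked = sorted(range(len(paths)), key=lambda i: scores[i], reverse=True)
    return [(paths[i], scores[i]) for i in ranked if i != query_index][:top_k]
```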

Show how our stuff relates.

An interface that naturally communicates why our digital property is sorted the way it is. Bringing user-centered data visualization to the desktop experience.

Give effortless access to relevant stuff.

A product that serves the user what they want through rich situational awareness.

Concept Demo Video

A quick introduction to Desktop 2.0.

Design Process

In the very early stages of the project I was unsure how I wanted to develop and test an AR experience. At that point I had plenty of sketches of the kinds of experiences I wanted to experiment with...

Flipping through some of my sketchbook

I have some experience developing in Unity, so I decided to make my first prototype in VR with Google Daydream. I would pretend that virtual reality was REAL reality so I could play with augmented reality properly. I thought I was really clever when I came up with this, only to find out that this is how most folks prototype AR 😅 😂 🤣

User Testing Session 1: Mistakes and Lessons Learned

I spent a lot of time setting up my first user testing session. My goal was to run participants through a short series of tasks around finding files and navigating the augmented desktop experience. I ended up wasting a lot of time on the mundane mechanics of finding files when I should have focused more on initial reactions and impressions.

My First Unity Prototype

The study didn't end up being a total bust, though, because through it I gained a bit more insight into the physical interactions that come with an AR system supplementing the traditional computing experience.

Mainly: people don't want to use a mouse and keyboard AND a game controller.

I had to make a key decision here: would I support interactions with virtual objects through gestures, or by some other means...

Final Prototype: More Mistakes and Lessons Learned

I decided to animate my final demo in Maya and present it as a video. Google Daydream was great for testing and getting a sense of scale, but it worked poorly as a long-term way to share the project. Teaching myself animation software just for this project ended up being a huge mistake for the project itself, but a fantastic learning experience overall.

The fidelity of the final prototype didn't end up nearly as clean or smooth as I wanted (in fact, looking back, it doesn't even read as an AR project), so I took extra time crafting a script that would support the visuals with storytelling.

Ambient Awareness

Reduce reliance on search and digging through folders: the most relevant files are right in front of you. Three capabilities drive this, and the sketch after this list shows one way they could combine.

  • Computer Vision
  • File Parsing
  • Contextual Tracking
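
Here's one way those three signals might blend into a single relevance score per file. Everything here (the data shape, the weights, the idea that computer vision supplies `screen_terms` from the active window) is an illustrative assumption rather than a real implementation:

```python
# A hypothetical sketch of ambient awareness: blend a few context signals
# into a single relevance score per file. The signal names, data shape,
# and weights below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class FileContext:
    path: str
    opened_with: set[str]     # apps this file has been opened in (file parsing)
    keywords: set[str]        # terms extracted from the file's contents
    minutes_since_use: float  # recency signal (contextual tracking)

def relevance(f: FileContext, active_app: str, screen_terms: set[str]) -> float:
    """Higher score = more likely to surface on the augmented desktop."""
    app_match = 1.0 if active_app in f.opened_with else 0.0
    term_overlap = len(f.keywords & screen_terms) / max(len(screen_terms), 1)
    recency = 1.0 / (1.0 + f.minutes_since_use / 60.0)
    # Weights are guesses; a shipped system would presumably learn these.
    return 0.4 * app_match + 0.4 * term_overlap + 0.2 * recency
```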

Interested in knowing more about this idea? I've explained it in much more detail in my project Recollection.

Check it out here.

What does search look like?

Even though Desktop 2.0 claims a kind of magical ambient awareness of what you might be looking for, it was also really important to look at what the search experience could be. I'd like to make a case now for voice search on the desktop. I know, most people find the Siris, Cortanas, and Assistants of the world kind of annoying, but particularly for the purpose of finding files, they could be our next best friend.

Why do I think this? Well, voice search affords a very interactive form of search that veers from traditional search in one very distinct way: it's conversational.

"Well, duh, Claire" is what you might be thinking right now. The amazing thing about voice search is that it affords much more nuanced results than traditional search can deliver from a single input. It's like being able to tell Google how she's doing as she populates your search page.

Compared to traditional search, conversational voice search has many more opportunities to understand which results to show the user, with minimal typing of long queries.

If we try to do this kind of complex search via typing, it tends to look a little like this...
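
By contrast, here's a toy sketch of what a conversational refinement loop could look like: each spoken follow-up narrows the previous result set instead of starting a fresh query. The filter vocabulary and file records are made-up assumptions for illustration:

```python
# A toy sketch of conversational search: every utterance refines the
# current results rather than replacing them. The recognized phrases
# below are illustrative, not a real voice-assistant grammar.
def refine(results: list[dict], utterance: str) -> list[dict]:
    """Apply one conversational refinement to the current results."""
    u = utterance.lower()
    if "last week" in u:
        results = [f for f in results if f["age_days"] <= 7]
    if "pdf" in u:
        results = [f for f in results if f["name"].endswith(".pdf")]
    if "not the" in u:  # e.g. "no, not the budget one"
        excluded = u.split("not the ")[-1].split()[0]
        results = [f for f in results if excluded not in f["name"].lower()]
    return results

# "Show me my tax files" -> "just the PDFs" -> "from last week"
session = [{"name": "taxes_2019.pdf", "age_days": 3},
           {"name": "taxes_notes.txt", "age_days": 3},
           {"name": "taxes_2018.pdf", "age_days": 400}]
session = refine(session, "just the PDFs")
session = refine(session, "from last week")
print(session)  # [{'name': 'taxes_2019.pdf', 'age_days': 3}]
```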

Streamlining the Physical Experience

Let pre-existing user inputs, the computer trackpad and sight, be the modes of input for interacting with virtual objects. My hypothesis is that, using a combination of trackpad gestures and eye-tracking technology (included in the AR headset), the user will be able to seamlessly transition between interacting with things in their tangible and intangible workspaces.
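
As a rough sketch of that hypothesis: the eyes pick the target, the trackpad supplies the action. The `Gesture` names and the methods on `gaze_target` are hypothetical stand-ins for whatever an AR headset SDK would actually expose:

```python
# A minimal sketch of the gaze + trackpad idea: a familiar trackpad
# gesture is routed to whichever virtual object the user is looking at.
from enum import Enum, auto

class Gesture(Enum):
    TAP = auto()
    SWIPE_UP = auto()
    PINCH = auto()

def on_trackpad_gesture(gesture: Gesture, gaze_target) -> None:
    """Route a trackpad gesture to the currently gazed-at virtual object."""
    if gaze_target is None:
        return  # eyes are on the physical screen, so let the OS handle it
    if gesture is Gesture.TAP:
        gaze_target.select()
    elif gesture is Gesture.SWIPE_UP:
        gaze_target.move_to_physical_desktop()  # pull a virtual file on-screen
    elif gesture is Gesture.PINCH:
        gaze_target.dismiss()
```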

You can check out a few more details in my presentation deck.