Touch-First 3D Modeling

December 31, 2011

In late 2011, with the consumer market shifting towards mobile platforms, there was a push in DevDiv to bridge the gap between where apps were developed and where they were used. The design team explored a variety of next-generation developer experiences in a mobile-first, cloud-first world. Visual Studio 2012 had just shipped with DIGIT, a primitive 3D viewer/editor for game developers who work with models, textures, and shaders. I was tasked with re-imagining the next version of DIGIT.

The Goal

The market was not short of 3D modeling applications. On one end of the spectrum, professional modeling tools (Maya, Blender) were enormously complex and required extensive training just to get started. On the other end, consumer-friendly apps offered a catalog of shapes but did not allow for much creative freedom. Most mobile versions of 3D apps (AutoCAD, SketchUp) offered only viewing features. Some attempted to port traditional desktop modeling UI to the tablet (Verto Studio), but they were far from touch-friendly. Finger-sculpting apps (Mudbox, 123D Sculpt) were much more intuitive, but they could not build anything precise.

This project was targeting what we called prosumers. Typical scenarios:

I was aiming to design something that was absolutely touch-first, had a low learning curve, and allowed for a decent amount of creativity as well as precise control.

The Story

The core concept of the tool is called a smart object: a 3D component that is scripted to generate surfaces from certain parameters. The parameters are exposed to the user as control points that can be dragged and tweaked intuitively. This drawing shows a cylinder with 3 control points: two 3D points that define the spine, and a 1D point that defines the radius.

Smart Object And Control Points
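
To make the concept concrete, here is a minimal sketch of what a scripted smart object could look like, in JavaScript like the prototypes described later; the property names (controlPoints, generateSurface) and the cross-section representation are my own illustration, not the actual design.

```javascript
// A hypothetical smart object: a cylinder whose surface is generated
// from three control points (two 3D spine endpoints, one 1D radius).
const cylinder = {
  controlPoints: {
    spineStart: { type: '3d', value: [0, 0, 0] },
    spineEnd:   { type: '3d', value: [0, 5, 0] },
    radius:     { type: '1d', value: 1.5 },
  },
  // Produce the surface as circular cross-sections along the spine;
  // a renderer would turn these into a triangle mesh.
  generateSurface({ spineStart, spineEnd, radius }) {
    return [
      { t: 0, center: spineStart.value, radius: radius.value },
      { t: 1, center: spineEnd.value,   radius: radius.value },
    ];
  },
};

// Dragging a control point only updates its value and regenerates the
// surface; the geometry itself is never edited directly.
cylinder.controlPoints.radius.value = 2.0;
const surface = cylinder.generateSurface(cylinder.controlPoints);
```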

On top of a primitive shape, modifiers that I call transforms can be applied to make it more complex. Here I apply a transform to the cylinder to add a section, which introduces 2 more control points: a 1D point that defines the position of the section along the spine, and another 1D point that scales the section.

Transform
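
Continuing the sketch above, a transform could be modeled as another object that exposes its own control points and rewrites whatever the previous step produced; again, the names and the evaluate() pipeline are assumptions for illustration.

```javascript
// A hypothetical "add section" transform: like the primitive it exposes
// control points, but instead of generating a surface it rewrites the
// cross-sections produced by the previous step.
const addSection = {
  controlPoints: {
    position: { type: '1d', value: 0.5 },  // where along the spine (0..1)
    scale:    { type: '1d', value: 1.4 },  // radius multiplier for the new section
  },
  apply(sections, { position, scale }) {
    const lerp = (a, b, t) => a + (b - a) * t;
    const first = sections[0];
    const last = sections[sections.length - 1];
    const inserted = {
      t: position.value,
      center: first.center.map((v, i) => lerp(v, last.center[i], position.value)),
      radius: lerp(first.radius, last.radius, position.value) * scale.value,
    };
    return [...sections, inserted].sort((a, b) => a.t - b.t);
  },
};

// A model is then just a primitive plus an ordered list of transforms.
function evaluate(primitive, transforms) {
  let surface = primitive.generateSurface(primitive.controlPoints);
  for (const tf of transforms) {
    surface = tf.apply(surface, tf.controlPoints);
  }
  return surface;
}
```

With this structure, evaluate(cylinder, [addSection]) yields three cross-sections, and dragging any control point simply re-runs the same pipeline.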

A list of core primitives and transforms:

Primitives And Transforms

This paradigm allows users to quickly create or tweak a 3D model without training. Developers and designers can script custom components and behaviors, package them as smart objects, and share them via an online library.

Sample Workflow

Controls And Proof-Of-Concept Prototypes

To move a point in 3D space by a precise amount, I introduced an on-demand fine-grained control system. When a point is pressed and held, a color-coded axis lock appears, allowing movement to be constrained to a single direction. When the finger drags over the distance label, the label expands into a scrollable ruler from which an exact number can be picked.

Fine Grain Control

I implemented the system in JavaScript to validate the usability and fine-tune the responsiveness of the UI.
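
The two pieces of that system boil down to a couple of pure functions. The sketch below is only illustrative: the snapping increment, pixel scale, and function names are invented rather than taken from the original prototype.

```javascript
// Constrain a 3D drag delta to a single locked axis ('x' | 'y' | 'z'),
// or pass it through unchanged when no axis is locked.
function constrainDelta(delta, lockedAxis) {
  if (!lockedAxis) return delta;
  const axes = { x: 0, y: 1, z: 2 };
  return delta.map((v, i) => (i === axes[lockedAxis] ? v : 0));
}

// Map a finger offset on the expanded ruler to a snapped distance,
// e.g. 20 screen pixels per unit, snapped to increments of 0.1.
function pickFromRuler(fingerOffsetPx, pxPerUnit = 20, snap = 0.1) {
  const raw = fingerOffsetPx / pxPerUnit;
  return Math.round(raw / snap) * snap;
}

// Example: scrolling 88px on a Y-locked ruler moves the point 4.4 units up.
const move = constrainDelta([0, pickFromRuler(88), 0], 'y');
```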

The menu holds most of the power the app has to offer. To avoid flooding the screen with buttons, I adopted a touch-slide-release model: the menu stays hidden most of the time, yet any button remains easily accessible through muscle memory.

Contextual Menu
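
A rough sketch of how the touch-slide-release behavior might be driven, assuming a radial layout purely for illustration (the original does not specify the menu's geometry, and the item names are placeholders):

```javascript
// Touch-slide-release: the menu appears under the finger on touch,
// the item under the sliding finger is highlighted, and lifting the
// finger fires the highlighted item and hides the menu again.
const menu = {
  items: ['Extrude', 'Add Section', 'Mirror', 'Delete'],
  visible: false,
  highlighted: null,

  touchDown(x, y) {
    this.origin = { x, y };
    this.visible = true;
  },
  touchMove(x, y) {
    const angle = Math.atan2(y - this.origin.y, x - this.origin.x);
    const slice = (2 * Math.PI) / this.items.length;
    const normalized = (angle + 2 * Math.PI) % (2 * Math.PI);
    this.highlighted = this.items[Math.floor(normalized / slice)];
  },
  touchUp() {
    const chosen = this.highlighted;
    this.visible = false;
    this.highlighted = null;
    return chosen;               // caller dispatches the chosen command
  },
};
```

Because the same slide direction always lands on the same item, repeated use builds the muscle memory mentioned above.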

Selecting multiple objects is assisted by a “touch modifier” button. While one hand holds it down, the other hand can tap or swipe through the scene to add objects to the selection.

Multiple Selection

My prototype that tests the selection modifier together with multi-touch navigation:
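
The interesting part of that prototype is simply tracking which pointer is pinning the modifier button. A minimal sketch, with invented names and without the actual hit-testing:

```javascript
// One finger holds the "touch modifier" button; any other pointer that
// lands on (or swipes over) an object adds it to the selection.
const selection = new Set();
let modifierPointerId = null;

function onPointerDown(pointerId, target) {
  if (target === 'modifier-button') {
    modifierPointerId = pointerId;       // one hand pins the modifier...
  } else if (modifierPointerId !== null && target) {
    selection.add(target);               // ...the other adds objects
  } else if (target) {
    selection.clear();                   // a plain tap replaces the selection
    selection.add(target);
  }
}

function onPointerUp(pointerId) {
  if (pointerId === modifierPointerId) modifierPointerId = null;
}

// Example: hold the modifier with one finger, tap two objects with another.
onPointerDown(1, 'modifier-button');
onPointerDown(2, 'cube');
onPointerUp(2);
onPointerDown(3, 'sphere');   // a swipe would produce a stream of such hits
onPointerUp(1);
```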

I also refer to the transform technique as “vector-based sculpting”. It opens the door to going back in history and changing or removing a single step without purging everything that came after it.

Undo
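
Building on the earlier evaluate() sketch, history editing falls out almost for free: since the model is a primitive plus a list of transforms, any step can be removed or tweaked and the surface re-evaluated. The mirror and bend placeholders below are invented just to make the example run.

```javascript
// Placeholder transforms; a real mirror/bend would rewrite the
// cross-sections the way addSection does.
const mirror = { controlPoints: {}, apply: (surface) => surface };
const bend   = { controlPoints: {}, apply: (surface) => surface };

let history = [addSection, mirror, bend];          // ordered sculpting steps

// Remove just the middle step without discarding the work after it.
history = history.filter((step) => step !== mirror);
let result = evaluate(cylinder, history);

// Or reach back, tweak an earlier step's parameter, and re-run.
addSection.controlPoints.scale.value = 1.1;
result = evaluate(cylinder, history);
```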

My prototype that tests the transform concept. It is implemented in 2D for convenience, but the approach generalizes easily to 3D space:

Last, I explored the possibility of multi-modal interaction: leveraging speech to quickly adjust the camera, align objects, and so on. To teach the user, I designed a visual tip system that automatically suggests the next word to say.

Voice Commanding
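
One way to drive such a tip system is to model the command set as a small grammar and suggest whatever words can legally come next. The sketch below uses JavaScript for consistency with the other sketches, and the command vocabulary is invented:

```javascript
// Voice commands as a tiny grammar tree; each key is a word, each
// nested object lists the words that may follow it.
const grammar = {
  align:  { left: {}, right: {}, center: {} },
  camera: { front: {}, top: {}, perspective: {} },
  zoom:   { in: {}, out: {} },
};

// Given the words heard so far, return what the tip UI should suggest next.
function suggestNext(wordsSoFar) {
  let node = grammar;
  for (const word of wordsSoFar) {
    node = node[word];
    if (!node) return [];            // off-grammar: nothing to suggest
  }
  return Object.keys(node);
}

suggestNext([]);          // ['align', 'camera', 'zoom']   (first word to say)
suggestNext(['camera']);  // ['front', 'top', 'perspective']
```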

I made a prototype for this with C# and WPF. It proved handy in certain scenarios, but constantly making a conscious decision between using the mouse and using voice to accomplish a task was quite tiring. Here’s a demo:

The above ideas generated several patents. Unfortunately, the project was cut due to a change of priorities.

user experience touch prototyping multi-modal interaction speech 3d JavaScript C#