Process: User experience testing 🔬

As I said in my previous post on process, it’s important to test your work early and often. This means getting in front of your users and collecting feedback, aka UX testing.

When you’re starting out, user testing can be quite intimidating, but with a few tips and a bit of preparation it can be very valuable. There are a few basic things to consider when carrying out tests.

Asking the right questions

Just rolling up to your user and asking questions may provide some value, but it may also introduce bias or produce low-value results. To avoid this, it pays to prepare your questions in advance of the interview. When formulating the questions themselves, make sure you avoid both leading and dead-end questions.

Leading questions

An example of a leading question is “do you find the mood of the scene magical?”. By asking a question in this way you’re influencing your user to think of the scene in a magical context. This may affect their answers, reducing the value of the feedback. A better question to ask would be “can you describe the mood of the scene?”. It’s more open-ended, leaving the user to describe the mood without bias.

Dead-end questions

Likewise, dead-end questions provide little value. Dead-end questions are questions that can be answered with a simple yes or no. An example would be “did you enjoy the experience?”. A better option would be “tell me about the experience”.

If need be, you can always ask follow-up questions to clarify anything you feel isn’t clearly communicated in the user’s answers.

Sample size

On the surface, you’d think you’d need dozens and dozens of users to test on, and in the past many UX practitioners have done just that. As it turns out, that really isn’t needed. In fact, according to a study by the Nielsen Norman Group, you’re wasting your time with big sample sizes. You’re far better served by performing multiple small tests on 3–5 people. Not only is it cheaper and easier to do, it’s also more effective.

Working with GameObjects in Unity 👾

Given I’m currently learning to develop in Unity, I thought I’d share a few tips I’ve picked up along the way to becoming proficient with it.

When creating a scene in Unity you’ll work with loads of different GameObjects. Your virtual world is built out of them (floors, walls, trees, everything) and you’ll be manipulating them all the time.

One thing you’ll notice is that the more GameObjects you have, the more complex it becomes to place everything correctly. Trying to line everything up, preventing different objects from overlapping, and just generally making things look as they should can consume a lot of time.

Thankfully there are a few things you can do to make life a whole lot simpler: first, learning to navigate around the environment; second, duplicating objects; and third, vertex snapping.

Navigating the environment

The odds are good that if you’re doing VR development in Unity you’ve played at least some first- or third-person shooter on a PC or console. If so, you’re in luck, because navigating a 3D Unity scene can work exactly like that, with a few extras. Click and hold the right mouse button and you’ll be able to look around the environment with your mouse and use the WASD keys to move forward (W), backward (S), strafe left (A), and right (D). A little extra something for moving quickly: select a GameObject and press the F key to jump straight to it for manipulation.

Duplicating objects

When creating an environment it’s essential to be able to duplicate GameObjects or groups of objects. Doing this is very easy: select the GameObject and, just as you would with text in a text editor, copy and paste it (Ctrl+D, or Cmd+D on a Mac, does it in a single step). Easy as that, you’ll have a duplicate item to move around and position in the scene.
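The editor handles all of this for you, but the same idea carries over to scripting: if you ever need to spawn copies of a GameObject at runtime, Instantiate does the equivalent of copy and paste. Here’s a minimal sketch; the WallBuilder class name, the wallSegment field, and the spacing values are my own illustrative choices, not anything Unity ships with.

    using UnityEngine;

    // Illustrative sketch: lay out copies of a wall segment at runtime.
    public class WallBuilder : MonoBehaviour
    {
        public GameObject wallSegment; // the object to copy (assign in the Inspector)
        public int copies = 5;         // how many duplicates to create
        public float spacing = 2.0f;   // distance between copies, in metres

        void Start()
        {
            for (int i = 1; i <= copies; i++)
            {
                // Instantiate clones the GameObject, much like copy and paste in the editor.
                Vector3 position = wallSegment.transform.position + Vector3.right * (spacing * i);
                Instantiate(wallSegment, position, wallSegment.transform.rotation);
            }
        }
    }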

Vertex snapping

For the first few months I was learning I didn’t know about this one, and boy does it save some time and frustration. Let’s say you’ve got a section of wall and you’ve duplicated it to make the wall bigger. You move your newly duplicated wall GameObject to line it up with the original: you slide it along just a little, eyeball it, and there’s a small gap. You move it back a little and now it overlaps. This sort of thing can go on and on. Vertex snapping will save you from hours of repetition by allowing you to snap two objects together cleanly.

To do this, select the move tool then select the GameObject you want to move. Now hold down the V key, move your cursor to a vertex, then click and drag it to the vertex of the GameObject you want to snap to. Now release the mouse button and boom, just like that, you’ll have a perfectly placed GameObject in line with the first.

Now to be completely honest it can be difficult to get your head around vertex snapping from reading a text description of it. Thankfully a number of people have made excellent YouTube videos showing just how it’s done. Here’s one I found useful:

For more information on vertex snapping and a whole lot more, see the Positioning GameObjects section of the Unity manual.

AR/MR/VR/XR WTF? 🤔

One of the problems with discussions of emerging technologies is terminology, or more importantly, the lack of clearly defined terminology. At the moment this space has a big issue with overlapping, confusing terms that consumers will likely never fully understand. It doesn’t help that the industry itself isn’t exactly using the terms consistently either.

The terms by definition

According to Wikipedia, the main three terms are defined as:

  • Augmented reality (AR) – a live direct or indirect view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as sound, video, graphics or GPS data.
  • Mixed reality (MR) – sometimes referred to as hybrid reality, is the merging of real and virtual worlds to produce new environments and visualisations where physical and digital objects co-exist and interact in real time.
  • Virtual reality (VR) – a computer technology that uses virtual reality headsets, sometimes in combination with physical spaces or multi-projected environments, to generate realistic images, sounds and other sensations that simulate a user’s physical presence in a virtual or imaginary environment.

My interpretation of this is that there’s a spectrum here. So to explain it simply:

  • AR puts stuff on top of the real world, like a heads-up display
  • MR puts stuff in the real world, like a digital character dancing on a real-world stage
  • VR replaces the real world: everything you see and hear is replaced by a computer-generated environment

Unfortunately the industry is choosing to misuse these terms all over the place, which is super confusing.

The terms according to the big players

The use of these terms amongst the big players is mixed, but there does seem to be some consistency emerging. Typically MR isn’t really sticking; instead AR and MR are just being glued together into AR. Apple has ARKit, Facebook has AR Studio, and Google’s Tango devices refer to AR. It makes sense really, as from a consumer perspective the finer detail of the difference between the two seems like a pointless distinction. VR is also widely agreed to be the replace-everything arrangement we know and love. Of course there are always exceptions, and shocking exactly no one, Microsoft has decided to put all its chips down on MR. That’s right: AR, MR, and VR, all as one term.

What I’m calling this stuff

I’m not a particularly big fan of Microsoft’s naming conventions over the years, but I have to give it to them: I think a single catch-all term is better in this case. Personally I prefer XR, as it sounds cooler (it has an x, after all) and x is always a good placeholder for a range of things.

Another thing a “catch-all” has going for it: as I said earlier, I’m not sure users need the detail of two or three different terms. Something like XR says “cool tech that puts stuff in front of your eyes”, and then it’s just a question of how much stuff. It just seems more straightforward, and I know it would certainly make this stuff easier to write about.

Just to keep things simple, here I’m going to refer to anything on the spectrum between AR/MR/VR as XR. It’s just easier to write and read than all three terms individually. If the industry itself settles on a different term I’ll switch, but until then, as far as this site’s concerned, XR = AR/MR/VR.

More than games 🎮

When thinking of VR/AR/MR it’s easy for your mind to leap to games. Games are the obvious low-hanging fruit of the industry, especially given all this stuff tends to be built in game engines like Unity and Unreal. But there’s far more potential in VR than just games.

There’s an increasing number of non-game apps surfacing. Many of these are a result of Apple’s ARKit being available in beta to developers. The one that jumped out at me recently was an augmented reality tape measure. It’s the sort of demo that just makes you smile; it’s so clever and obvious. In 12 months this sort of thing will be commonplace, the norm even.

Another example is Speech Center VR. This interesting-looking app helps people face fears of public speaking or social anxiety. The developer, Cerevrum, has a number of VR apps that relate to some form of personal development using VR. VR is fantastic at taking you to places and putting you in situations you normally can’t be in, and Cerevrum is making the most of that with its work.

The Body VR is another company crafting interesting educational experiences. They currently have three apps focused on various parts of the human body and target hospitals and medical training providers. It’s easy to see how experiencing the human body in this way would make it easier to recall the finer details.

This industry is still in its infancy, but already there’s just an explosion of fantastic content being produced. Content that has the potential to change every aspect of day to day life. Exciting times.

Space 👨‍🚀

In VR, as in real life, personal space matters. Personal space is “the physical space immediately surrounding someone, into which encroachment can feel threatening or uncomfortable”¹. The amount of space that any individual considers “theirs” can vary considerably. When I lived in the Netherlands I found the Dutch had a considerably smaller personal bubble than I was used to as a New Zealander. I suspect this was due to the wildly different population densities between the two countries.

As a general rule of thumb, 0.5m is a good minimum distance to start with. This sort of space between the user and any GameObject will make players feel comfortable that nothing is “in their face”. Unless you want it to be, of course. As always, developing an understanding of your audience as part of your process helps inform what is and isn’t important in your VR experience.
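If you want to bake that rule of thumb into a scene, a small script can keep an object out of the user’s bubble. This is only a minimal sketch under my own assumptions: the PersonalSpaceClamp name and the playerHead field are illustrative placeholders, and you’d typically point playerHead at your VR camera’s transform.

    using UnityEngine;

    // Illustrative sketch: keep this GameObject at least minDistance away from the player's head.
    public class PersonalSpaceClamp : MonoBehaviour
    {
        public Transform playerHead;      // usually the VR camera's transform
        public float minDistance = 0.5f;  // comfortable minimum, in metres

        void LateUpdate()
        {
            Vector3 offset = transform.position - playerHead.position;

            // Degenerate case: if the object sits exactly on the head, pick a direction to push it.
            if (offset.sqrMagnitude < 0.0001f)
            {
                offset = playerHead.forward;
            }

            // If the object is inside the personal-space bubble, push it back out.
            if (offset.magnitude < minDistance)
            {
                transform.position = playerHead.position + offset.normalized * minDistance;
            }
        }
    }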


  1. For more information on personal space see Proxemics on Wikipedia

Process 📝

The making of a “thing” happens in loads of different ways, but I’ve always found it’s best to have some sort of process to get a good result. It doesn’t have to be onerous, just something repeatable that considers a few things consistently.

To avoid an overly long post I plan to write a series on this subject. Here are a few tips to get you started:

Start with people – It’s always tempting to just start building stuff, but you should always start with your target audience. That is, the audience you intend to use your app. It’s easy to think it’s “everyone”, but is that really true? What experience do they have with VR/AR? Also consider that this is a physical medium, so you may be making greater physical demands of your audience than with other digital experiences. Jumping may require users to physically jump, their height may be a factor, and so on. Simply writing a few sentences describing who your user is and what you expect them to do can be very powerful. For example:

The end users for ProjectX are likely to be people who are new to VR, but who have already experienced various games in their life. They are probably going to be 25-35 and own a smartphone. They'll be expected to stand, reach, jump and turn with relative ease.

Develop personas – I’ve always been a fan of personas. They are simple to put together, and make it easy to conceptualise your audience. Something to be aware of though is that personas by their nature are a generalisation of an audience. They will never cover all of the details, so you should expect to continue to refine them over time. With this in mind start with something simple you can build upon. Something like this:

Example persona:

  • Name: Anna
  • Age: 36
  • Role: Scientist/Marketer
  • VR experience: Little to none
  • Quote (that sums up their attitude): “VR looks interesting and I’d like to give it a go.”
  • About this person: Anna is a highly educated working mother of two. She has a busy lifestyle but is always interested in new and interesting things. She prefers to try things before jumping in head first and enjoys things she can share with her family.

Statement of purpose – A good statement helps us understand what we are building by defining a simple project scope. It should be concise, ideally no more than one or two sentences long. It’s something you should keep visible to constantly remind you of what it is you’re building, and to limit the temptation to go too far beyond that.

Here’s an example purpose statement:

ProjectX is a mobile VR app which gives new VR users a quick taste of VR via their existing smartphone. The entire experience should take no more than a few minutes.

Make disposable concepts and iterate – In the beginning, everything you do should be simple, quick, and disposable. Use materials, tools, and techniques that enable this. Pen and paper are a surprisingly useful and versatile tool set. Expect to be wrong; it helps you keep an open mind about other options and stops you becoming too attached to any one thing. Iterate, iterate, then iterate some more. Eventually you’ll find something you’re happy to move on with. Steadily increase the complexity and sophistication of your work as the idea solidifies.

Share your work early and often – You know who your audience is, so show them what you’re making. Their feedback will be invaluable. You’ll also be able to get in front of complexity before things become difficult to change. While you’re with your audience, take the opportunity to grow your understanding of them and update your personas to match. Note: be careful to listen for what your user wants, less so how they want it.

Conclusion – Using these simple steps and methods is a great way to start any project, but don’t be afraid to change things up until you find a process that works best for you. Ultimately you need to find what makes your product better for your users and discard the rest.

Mixed reality capture or: how to share your VR work 🎥

You’ve likely all seen the videos showing VR users actually immersed in a VR world, like the demo at WWDC. The question is, how can you do the same with your work?

Well, aside from requiring a few extra bits of gear on top of your normal VR setup, it actually seems fairly straightforward. Ars Technica has put together a nice tutorial covering what you’ll need and how it’s done.

A handy skill to add to your repertoire.

VR design – Google Street View for iOS 🗺

Part of my Udacity VR course involves doing a quick/light review of a Google Cardboard app. So I figured I might as well post that here too.

VR App review – Google Street View – iOS

To call Google Street View for iOS a VR app feels like a bit of a stretch. Upon opening the app it’s immediately obvious the app isn’t designed from the ground up for a VR experience. Instead, it’s optimised for its primary audience of phone users.

On the home screen, there’s nothing to indicate there’s a VR interface option at all. The search bar at the top of the page obviously requires keyboard input; the main map, and even the tiles at the bottom of the screen, all require touch; and there’s no way of switching into a VR mode from this view.

Google Street View UI
A mobile-first UI

Those tiles I mentioned are the easiest way to get to the VR side of things. Scrolling down the page shows a seemingly endless list of Google Street View locations that can be viewed using a Cardboard setup.

Selecting a tile presents a Street View-style UI (still not in VR mode yet). This allows the user to tap the screen and move around the environment. You’d be forgiven for missing the small icon in the top right of the interface for switching to Cardboard mode; it’s the one that looks like a little cardboard headset.

IMG_4296
Not exactly obvious how to get to VR mode

If you’re lucky enough to make it this far, you’ll be presented with a standard Google UI telling you to turn your device on its side and put it into your viewer.

Google VR viewer UI
Standard Google viewer info screen

Once in Cardboard mode, you’ll experience an interface familiar from many other Cardboard VR apps. One difference here is that they’ve mashed VR together with a desktop-browser-style Street View experience. Obviously, to look around you just… well… look around 😉, but navigation is done via a point-and-click teleport interface. This uses the “on ground” style direction button similar to Street View’s desktop UI. It works well and it’s fairly intuitive, given the navigation follows the user’s gaze. Basically, wherever you look, if you can go there, the arrow will point the way (a rough sketch of the idea follows below). I prefer a more obvious waypoint-style interface as it makes it clearer where you can go, but that’s just a personal preference.

Google street view VR mode
Explore Machu Picchu in VR
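For what it’s worth, the gaze-follow idea itself is simple to sketch in Unity. To be clear, this isn’t Google’s code, just a rough illustration of the general pattern: cast a ray along the camera’s forward direction and, if it lands on geometry, park a ground marker (the arrow) at that point. The GazeWaypoint name and the groundMarker field are my own placeholders.

    using UnityEngine;

    // Rough illustration of a gaze-driven "on ground" marker, not Google's implementation.
    public class GazeWaypoint : MonoBehaviour
    {
        public Camera vrCamera;        // the camera whose forward direction is the user's gaze
        public Transform groundMarker; // arrow/marker shown where the gaze lands
        public float maxDistance = 20f;

        void Update()
        {
            Ray gaze = new Ray(vrCamera.transform.position, vrCamera.transform.forward);

            // Cast the gaze into the scene; if it hits something, show the marker there.
            if (Physics.Raycast(gaze, out RaycastHit hit, maxDistance))
            {
                groundMarker.position = hit.point;
                groundMarker.gameObject.SetActive(true);
            }
            else
            {
                groundMarker.gameObject.SetActive(false);
            }
        }
    }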

As I said at the beginning, it’s obvious this isn’t a VR-centric app, but then again it’s clear that isn’t the intent either. Given that Cardboard is a mobile-based platform, there’s an awful lot of sense in adding VR as a feature rather than making it the primary UI.

Although mobile users are the primary audience, Google still does a great job on the UI for VR users, and that’s to be commended. Really, we shouldn’t expect less from an industry heavyweight like Google.

Of course, there’s always room for improvement, and this app is no exception. Street View being what it is doesn’t include any sound; it is just static 360 images, after all. That being said, it would be great to increase the immersion of the VR mode by including ambient environmental audio. It’s always amazing how well good 3D sound can put you in a place.

Conclusion

Ultimately, Google Street View did a good job of adding a VR mode to the mobile app. More could be done to improve the experience, but frankly, this is an impressive example of 3D content captured for one interface being repurposed for another. It’s not exactly a virtual world tour vacation, but it might just encourage you to take one.