I seriously doubt the stock-footage silliness in this promo video is even remotely connected to reality… but a $99 USD AR headset? Sign me up!
You know what’s really fun? Being able to move around and interact in a virtual environment. You know what isn’t? Barfing all over your new Allbirds.
That’s exactly what’s at stake when designing a good VR experience, particularly one that includes movement. When developing for VR it’s important to build a good understanding of “simulator sickness” and what can be done to eliminate it, or at the very least minimise it.
Just like with other forms of motion sickness, simulator sickness affects different people in different ways. As it’s still early days for mainstream VR, there’s a lot of research yet to be done into causes and solutions. Having said that, there’s also a lot of good material out there that can help you provide a comfortable experience for users.
Some general rules of thumb to follow include:
- Tightly control the user’s speed and rate of acceleration. Slower speeds are generally more comfortable for users and, counter-intuitively, so are near-instant speed changes. Gradual acceleration in VR can trigger simulator sickness, so it’s best to keep speed transitions short and infrequent.
- Leave your user in control of their vision. In other words, don’t disconnect what the user sees from their head tracking. If you need the user to look in a certain direction, use other techniques such as sound or lighting to draw their gaze.
- Make sure your experience is performant by maintaining a suitable frame rate. 90 FPS is optimal.
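The first rule above is easy to get wrong in code, because the natural instinct is to smooth everything. Here’s a minimal Unity C# sketch of the idea — the class and field names are my own invention, not from any particular project — that moves at a constant speed and switches speeds instantly rather than easing between them:

```csharp
using UnityEngine;

// Illustrative sketch only. The point: pick speeds instantly rather than
// easing between them (no Lerp/SmoothDamp on speed), since gradual
// acceleration is a common simulator-sickness trigger.
public class ComfortMover : MonoBehaviour
{
    public float walkSpeed = 1.5f;  // metres per second
    public float dashSpeed = 3.0f;

    void Update()
    {
        // GetAxisRaw avoids Unity's built-in input smoothing, which would
        // reintroduce gradual acceleration behind your back.
        float forward = Input.GetAxisRaw("Vertical");

        // Speed changes instantly between two fixed values.
        float speed = Input.GetButton("Fire1") ? dashSpeed : walkSpeed;

        transform.Translate(Vector3.forward * forward * speed * Time.deltaTime);
    }
}
```

Constant velocity is the most comfortable case of all; when you do need to change speed, a short hard cut like this tends to sit better than a smooth ramp.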
The best way to stay on top of user comfort is to test on real users as early in development as possible. Ask them questions about their comfort levels and always let them know to remove the headset right away if they feel any discomfort. After all, the last thing you want is to push a tester to the point of ruining their shoes.
One of the most powerful things about VR is its ability to have a shared experience at scale.
From an education perspective this is an extremely exciting thing. The TED talk below is a fantastic example of how VR can bring multi-million-dollar facilities to every student without the cost of traditional real-world labs.
Not only that, but the evidence in the video suggests that combining teachers and VR in this way results in a 2x improvement in learning outcomes. Just awesome!
Given I’m currently learning to develop in Unity, I thought I’d share a few tips I’ve picked up along the way to becoming proficient with it.
When creating a scene in Unity you’ll work with loads of different GameObjects. Your virtual world is built out of them (floors, walls, trees, everything) and you’ll be manipulating them all the time.
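For reference, everything you place by hand in the editor is the same kind of GameObject you can create from script. A minimal sketch — the class name and object layout are my own, purely for illustration:

```csharp
using UnityEngine;

// Hypothetical example: builds a floor and a placeholder "tree" at startup.
// Every primitive is just another GameObject with a Transform you can move,
// rotate, and scale — same as the ones you drag around in the editor.
public class SceneBuilder : MonoBehaviour
{
    void Start()
    {
        GameObject floor = GameObject.CreatePrimitive(PrimitiveType.Plane);
        floor.transform.position = Vector3.zero;
        floor.transform.localScale = new Vector3(10f, 1f, 10f);

        GameObject tree = GameObject.CreatePrimitive(PrimitiveType.Cylinder);
        tree.name = "Tree";
        tree.transform.position = new Vector3(2f, 1f, 3f);
    }
}
```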
One thing you’ll notice is that the more GameObjects you have, the more complex it becomes to place everything correctly. Trying to line things up, preventing objects from overlapping, and generally making the scene look as it should can consume a lot of time.
Thankfully there are a few things you can do to make life a whole lot simpler: first, learning to navigate the environment; second, duplicating objects; third, vertex snapping.
Navigating the environment
The odds are good that if you’re doing VR development in Unity you’ve played at least one 3D shooter on a PC or console. If so, you’re in luck, because navigating a 3D Unity scene works much the same way, with a few extras. Click and hold the right mouse button and you’ll be able to look around the environment with the mouse and use the WASD keys to move forward (W), backward (S), and strafe left (A) and right (D). As a little extra for moving around quickly, select a GameObject and press the F key to jump straight to it for manipulation.
Duplicating GameObjects

When creating an environment you’ll constantly need to duplicate GameObjects or groups of objects. Doing so is easy: select the GameObject and, just like text in a text editor, copy and paste it (or use the one-step duplicate shortcut, Ctrl+D on Windows or Cmd+D on macOS). Just like that you’ll have a duplicate to move around and position in the scene.
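Copy and paste covers you in the editor; once you need many copies, the scripting equivalent is Instantiate. A minimal sketch — the class, field names, and layout are my own, purely illustrative:

```csharp
using UnityEngine;

// Hypothetical example: duplicates a wall segment along the X axis —
// the scripted equivalent of copy/pasting it in the editor.
public class WallBuilder : MonoBehaviour
{
    public GameObject wallSegment;  // assign the original segment in the Inspector
    public int copies = 5;
    public float spacing = 2f;      // width of one segment

    void Start()
    {
        for (int i = 1; i <= copies; i++)
        {
            Vector3 offset = Vector3.right * spacing * i;
            Instantiate(wallSegment,
                        wallSegment.transform.position + offset,
                        wallSegment.transform.rotation);
        }
    }
}
```

Note that spacing by exact segment width sidesteps the gap/overlap problem entirely, which is the same problem vertex snapping solves for hand-placed objects.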
Vertex snapping

For the first few months I was learning I didn’t know about this one, and boy does it save time and frustration. Let’s say you’ve got a section of wall and you’ve duplicated it to make the wall bigger. You move the newly duplicated wall GameObject to line it up with the original: you slide it along a little, eyeball it, and there’s a small gap. You move it back a little and now they overlap. This sort of thing can go on and on. Vertex snapping saves you from hours of this repetition by letting you snap two objects together cleanly.
To do this, select the move tool and then the GameObject you want to move. Hold down the V key, move your cursor over a vertex, then click and drag it onto a vertex of the GameObject you want to snap to. Release the mouse button and boom: just like that you have a perfectly placed GameObject lined up with the first.
Now, to be completely honest, vertex snapping can be difficult to get your head around from a text description alone. Thankfully a number of people have made excellent YouTube videos showing just how it’s done. Here’s one I found useful:
For more information on vertex snapping and a whole lot more, see the Positioning GameObjects section of the Unity manual.
One of the problems with any discussion of emerging technologies is terminology, or more precisely the lack of clearly defined terminology. At the moment this space has a big issue with overlapping, confusing terms that consumers will likely never fully understand. It doesn’t help that the industry itself isn’t exactly using the terms consistently either.
The terms by definition
According to Wikipedia, the three main terms are defined as:
- Augmented reality (AR) – a live direct or indirect view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as sound, video, graphics or GPS data.
- Mixed reality (MR) – sometimes referred to as hybrid reality, is the merging of real and virtual worlds to produce new environments and visualisations where physical and digital objects co-exist and interact in real time.
- Virtual reality (VR) – a computer technology that uses virtual reality headsets, sometimes in combination with physical spaces or multi-projected environments, to generate realistic images, sounds and other sensations that simulate a user’s physical presence in a virtual or imaginary environment.
My interpretation of this is that there’s a spectrum here. So to explain it simply:
- AR puts stuff on top of the real world, like a heads up display
- MR puts stuff in the real world, like a digital character dancing on a real world stage
- VR is replacing the real world, as in everything you see and hear is replaced by a computer generated environment
Unfortunately the industry is choosing to misuse these terms all over the place, which is super confusing.
The terms according to the big players
The use of these terms amongst the big players is mixed, but some consistency does seem to be emerging. MR isn’t really sticking; instead, AR and MR are just being glued together into AR. Apple have ARKit, Facebook have AR Studio, and Google’s Tango devices refer to AR. It makes sense really, as from a consumer perspective the finer detail of the difference between the two seems like a pointless distinction. VR is also widely agreed to be the replace-everything arrangement we know and love. Of course there are always exceptions and, shocking exactly no one, Microsoft has decided to put all its chips down on MR. That’s right: AR, MR, and VR all as one term.
What I’m calling this stuff
I’m not a particularly big fan of Microsoft’s naming conventions over the years, but I have to give it to them: I think a single catch-all term is better in this case. Personally I prefer XR, as it sounds cooler (it has an X, after all) and X is always a good placeholder for a range of things.
Another thing a catch-all term has going for it: as I said earlier, I’m not sure users need the detail of two or three different terms. Something like XR says “cool tech that puts stuff in front of your eyes”, and then it’s just a question of how much stuff. It just seems more straightforward, and I know it would sure make this stuff easier to write about.
Just to keep things simple here, I’m going to refer to anything on the spectrum between AR/MR/VR as XR. It’s just easier to write and read than all three terms individually. If the industry settles on a different term I’ll switch, but until then, as far as this site’s concerned, XR = AR/MR/VR.
When thinking of VR/AR/MR it’s easy for your mind to leap to games. Games are the obvious low-hanging fruit of the industry, especially given all this stuff tends to be built in game engines like Unity and Unreal. But there’s far more potential in VR and friends than just games.
There’s an increasing number of non-game apps surfacing. Many of these are a result of Apple’s ARKit being available to developers in beta. The one that jumped out at me recently was an augmented reality tape measure. It’s the sort of demo that just makes you smile. It’s so clever and obvious. In 12 months this sort of thing will be commonplace, the norm even.
Another example is Speech Center VR. This interesting-looking app helps people face fears of public speaking or social anxiety. The developer, Cerevrum, have a number of VR apps aimed at some form of personal development. VR is fantastic at taking you to places and putting you in situations you normally couldn’t be in, and Cerevrum are making the most of that with their work.
The Body VR is another company crafting interesting educational experiences. They currently have three apps focused on various parts of the human body and target hospitals and medical training providers. It’s easy to see how experiencing the human body in this way would make it easier to recall the finer details.
This industry is still in its infancy, but already there’s just an explosion of fantastic content being produced. Content that has the potential to change every aspect of day to day life. Exciting times.