The future of the 3D internet
3D technology has changed how we interact with devices. AR is one example of how UI design has developed more natural ways of accessing data. The 3D internet, though, goes beyond HTML pages coded to display 3D images; that has been done for a long time.
White Void is a great example of a portfolio applying 3D to a usually boring menu. The Dasai Creative site offers a sphere of navigable options that you explore by rotating the shape. Swell is a 3D site that renders anaglyphic three-dimensional shapes and needs a pair of those retro red-and-blue glasses to be properly viewed.
The most effective approach is debatable. White Void and Dasai Creative use a big chunk of Flash. 2D websites aren’t as beautiful, granted, but they load a lot faster, and their relative simplicity makes them easier to navigate.
The future of the 3D internet lies in AR. It’s still viewed as a bit of a novelty by a lot of people, but it deserves to be taken more seriously. AR apps use a device’s built-in camera and sensors to work out your exact location, then change – augment – what you see on screen with relevant web data. Put more simply, it’s a crossover between the internet and the real world.
AR apps on smartphones take data from real locations, often mined from an existing database, and use it to display information of value to you based on what you’re doing. Star Walk, for example, will show you the constellations in the sky above you. Word Lens will use your camera to capture text and, drawing on web data, translate it into a language you do know, right there on your screen.
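The core of a location-aware AR app is surprisingly small: given your GPS position and compass heading, work out the bearing to a point of interest and check whether it sits inside the camera’s field of view. A minimal sketch in Python (the function names and the 60° field of view are my own assumptions, not any app’s actual API):

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing from point 1 to point 2, in degrees (0 = north)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(x, y)) + 360) % 360

def on_screen(poi_bearing, heading, fov=60):
    """True if a point of interest falls inside the camera's horizontal field of view."""
    diff = (poi_bearing - heading + 180) % 360 - 180  # signed angle, -180..180
    return abs(diff) <= fov / 2
```

Everything else – the database of places, the camera feed, the on-screen drawing – is plumbing around these two calculations.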
AR isn’t magic, though. It’s little more than a UI. Take the Word Lens example: it does nothing you couldn’t do with a dictionary in your hand, but it does it faster, making it effortless for you.
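To make the point concrete, here is that “dictionary in your hand” reduced to a few lines of Python – a toy word-for-word substitution, with a made-up two-entry dictionary standing in for the real translation data the app draws on:

```python
# Toy lookup table; a real app would use a full translation dataset.
TOY_DICT = {"hola": "hello", "mundo": "world"}

def translate(text, dictionary=TOY_DICT):
    # Keep any word we have no entry for, rather than failing.
    return " ".join(dictionary.get(word.lower(), word) for word in text.split())
```

The value of the app isn’t the lookup itself; it’s performing the lookup live, on top of the camera image.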
If you pointed your smartphone at the high street whilst using the Acrossair app, it would highlight nearby restaurants and show you which have the better reviews. You don’t need to be a tech genius to see where this is going. You’ll soon be able to get more granular and see whether a particular shop stocks an item you want, or whether your friends are in that pub over the road.
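That “nearby restaurants, best first” step is, at heart, a distance filter plus a sort. A hypothetical sketch (the data layout and the 300 m radius are my own assumptions, not Acrossair’s):

```python
import math

def nearby_best(places, here, radius_m=300):
    """Places within radius_m of `here`, best-reviewed first.
    Each place is (name, lat, lon, rating); distance uses an
    equirectangular approximation, fine over a few hundred metres."""
    lat0, lon0 = here
    def dist_m(lat, lon):
        dx = math.radians(lon - lon0) * math.cos(math.radians(lat0)) * 6371000
        dy = math.radians(lat - lat0) * 6371000
        return math.hypot(dx, dy)
    hits = [p for p in places if dist_m(p[1], p[2]) <= radius_m]
    return sorted(hits, key=lambda p: p[3], reverse=True)
```

Checking whether a shop stocks an item, or whether your friends are in the pub, is the same query shape with a different database behind it.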
Google threw its whole weight into developing AR with Google Goggles and its shopping apps. Take a photo of an item and Google Goggles will try to work out what it is and where you can get it; Google Shopper can then give you a price comparison.
The tech itself is temperamental. The process has a tendency to take longer than simply running a normal Google search. It can be useful for some things, but it’s rather literal in its search capabilities and you’re not likely to find something if you’re trying to be creative. There are some concepts it won’t understand. How would you go about searching for a concept? What about an idea? How would you use AR to search for AR?
So, things like this are used for visual searching. It’s an alternative to, rather than a replacement of, traditional search. What it is doing is changing our interactions with places and objects in the real world.
AR is essentially an interface that makes some tasks easier. The future of AR and VR lies in tighter integration between the real world and the information people need.
YouTube is doing an amusing AR experiment. You can point your smartphone at an ad and the image will come to life.
The future of AR technology
The tech needed to create an AR app has been on the scene for a while now, embedded in most smartphones. Phones will gain faster dual-core processors and more sensitive sensors, AR apps will make use of them, and we’ll see more powerful apps of better quality.
3D Computer Controls
How we interact with computers has not changed much in 30 years. Not for lack of trying, though. We’ve had 3D mice, VR headsets, trackballs and balls you can, ahem, squeeze. We even have headsets that work off your brain waves. Very few of these inventions have come close to challenging the keyboard and the traditional mouse or pad, both of which suit the internet’s 2D interface perfectly.
Things are evolving, though. Touchscreen technology on mobiles paved the way for new user interfaces that are invisible to the untrained eye. A traditional PC asks you to move the mouse from A to B; with a touchscreen, you reach out and touch the thing you want to see or interact with.
Laptops and even desktop computers now ship with touchscreen displays, and Windows 7 supports the technology as standard. Microsoft also offers a multi-touch experience with the Surface, which is essentially a really expensive, but cool, coffee table.
Beyond touchscreens, we can look to the game consoles, which have evolved to build 3D interfaces into their controllers. PlayStation offers the EyeToy and Move, Microsoft has the Kinect and, of course, Nintendo has the Wii. Millions have been introduced to spatial and gestural controls, and they’ve proven to be really popular.
The Wii, in particular, offers a benefit that doesn’t need much explanation. You swing your arm as if it were a tennis racket, or stab with an invisible sword. It’s not complicated, and the interface itself is invisible. Almost – the controls can be a little pernickety when you’re navigating menus or inputting data.
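A crude version of that tennis swing is just a threshold on accelerometer readings. A sketch, assuming samples arrive as (x, y, z) values in g – the 2.5 g cut-off is an assumed figure, and a real controller tunes this per gesture:

```python
import math

def detect_swing(samples, threshold=2.5):
    """Flag a swing when any acceleration sample exceeds the threshold (in g)."""
    return any(math.sqrt(x * x + y * y + z * z) > threshold
               for x, y, z in samples)
```

A resting controller reads about 1 g (gravity alone), so anything well above that suggests a deliberate swing.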
Hands up if you remember the film Minority Report? Tom Cruise had a big fat budget for its gesture-controlled special effects, and hackers have since recreated much the same interface with little more than a modified Kinect camera.
Gesture-based control seems like a plausible replacement for the mouse. The keyboard, though, would need some kind of projection, and really, it’s hard to see how we could beat the keyboard, or why we’d currently want to.
3D computer interfaces
2D interfaces have more than proven their worth to us. 2D is fast-loading and generally still very effective. That’s not a reason to stay put, though: 3D development has produced some seriously cool user interfaces.
Ditching the desktop
Decision trees and file systems are great organisational structures, and we’re now tied to the idea that a desktop is something that should be visually well organised. We treat it as a structured environment, although we all know someone whose desktop is a total mess. That’s because we treat it exactly as we would our physical office desk.
As such, we’re familiar with clutter. As a reaction to it came BumpTop, a desktop where files could be piled and stacked on top of each other in a 3D scene. Google bought the technology in 2010, reportedly to feed its ideas into Android.
Did we need this? What for? Could we have achieved something similar with our much-loved 2D interface? Of course not. There’s a very definite argument for 3D tech making our lives more convenient.
Large-scale data can be organised into 3D shapes based on geometric principles. This could help us to visualise and therefore understand the human body or solar system better. A huge win for 3D tech.
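One standard geometric trick for this is the golden-angle (Fibonacci) spiral, which spreads any number of items almost evenly over a sphere – the same idea behind sphere-shaped navigation like Dasai Creative’s. A sketch in Python:

```python
import math

def fibonacci_sphere(n):
    """Spread n items roughly evenly over a unit sphere (golden-angle spiral)."""
    golden_angle = math.pi * (3 - math.sqrt(5))
    points = []
    for i in range(n):
        y = 1 - 2 * (i + 0.5) / n      # latitude coordinate, from +1 down to -1
        r = math.sqrt(1 - y * y)        # radius of the circle at that latitude
        theta = golden_angle * i        # rotate each successive point
        points.append((r * math.cos(theta), y, r * math.sin(theta)))
    return points
```

Map each data item – an organ, a planet, a menu entry – to one point and you have a navigable 3D layout that doesn’t bunch up at the poles.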
As computers become more powerful, they will be able to render 3D much quicker. Windows 8 is rumoured to be among the first to embed 3D tech, via an optional graphics interface known as Wind. It might have Kinect support for gesture tracking, and you should be able to log in with facial recognition too, much as on later iPhones.
Getting to this point in UI development has very much been trial and error. Back in the early 90s, 3D was everywhere and all people could think about. They soon learned that replicating the real world in 3D inside a computer wasn’t so easy. It’s more about utilising our cognition to create a real-world experience that feels natural and compelling.
All this combined will make for some very exciting experiences on the 3D internet.