How to Create Augmented Reality Apps for Android – Explained
How to create augmented reality apps for Android, including tools, cloud vs local storage, costs, platforms, tracking, support and more. It’s likely you already know a little about augmented reality; it’s one of the biggest trends in mobile right now. We’re going to cover what AR means and the ways you can harness this tech in Android apps.
What is Augmented Reality?
Before we dive into how to create an AR Android app, let’s start with the basics: we need a definition.
Augmented reality is the term used to describe a direct or indirect view of a real-world (that is to say, physical) environment where parts are augmented by computer-generated sensory input such as sound or graphics. It’s connected to an idea termed “mediated reality”, where a view of reality is modified by a computer. The result changes one’s existing perception of reality through technology.
Mobile app developers have recently begun to harness AR by overlaying images on the camera input as it’s received in real time. This produces an image where we see a real-world scene with a virtual object added into the live camera output. This definition of AR isn’t definitive, though. Some say Google Sky Map qualifies as AR, yet it doesn’t use the camera at all; it does everything else you would expect an AR app to do.
If you’re interested in developing your own AR app, don’t be afraid to bend the definition. Try to spot opportunities where your app might overlay metadata onto live data. Take, for example, eBay’s recent announcement that it will use AR in its fashion section, letting users virtually try an item on before they buy.
We’re going to discuss how AR can be applied to Android. We’re not going to go into detail about the actual application code; this is to explain just enough to get you started. If the idea of starting from scratch overwhelms you, consider the feasibility of incorporating some lightweight augmented reality support into your existing apps instead. And if pulling it all together still seems out of scope for your project, you can turn to existing AR services like Layar, which works with both Android and iOS. Layar lets anyone add data that is then displayed to users viewing that specific layer.
The basics of creating your own bespoke augmented reality implementation
Now you know what AR is by definition, it’s time to talk about how it fits together, including the components from Android you might want to harness in a common AR app. A typical augmented reality implementation has two main components. The first is the ‘live’ data element, that which you’re augmenting; this is usually the input from a rear-facing camera, together with the direction that camera is pointing and the device’s location. The second is the metadata this live data gets referenced against.
Say you wanted to flag the locations of a specific restaurant chain: your AR service should hold the necessary augmentation information, including longitude and latitude, for every venue in that chain. When the camera is pointed at something in close enough proximity to one of those locations, the overlay kicks in and displays, for example, the chain’s logo above the venue.
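As a rough sketch of that proximity check, a few lines of plain Java could decide when to trigger the overlay. The class and method names, and the 150 m threshold in the usage note, are illustrative assumptions, not part of any real service:

```java
public class ProximityCheck {
    static final double EARTH_RADIUS_M = 6_371_000.0;

    // Haversine great-circle distance between two lat/lon points, in metres.
    static double distanceMetres(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
    }

    // Show the chain's logo only when the venue is within the threshold distance.
    static boolean shouldShowOverlay(double deviceLat, double deviceLon,
                                     double poiLat, double poiLon, double thresholdM) {
        return distanceMetres(deviceLat, deviceLon, poiLat, poiLon) <= thresholdM;
    }
}
```

With a 150 m threshold, a venue across the road would pass the check and get its logo drawn, while one a few streets away would not.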
That augmentation information is typically sourced from a pre-existing database that can be loaded or a search service that has local places of interest. The remainder of the work revolves around APIs, namely camera, graphical and sensors to trigger the overlay on the real-time data to bring to life the augmented experience. Let’s look at the key components a little closer.
The real-time feed from an Android camera is the ‘reality’ element in AR: whatever the camera sees is the reality part. The data from the camera is made available via the APIs in the android.hardware.Camera class.
If your app doesn’t require analysis of the frame data, you can start a preview by passing a SurfaceHolder to the camera’s setPreviewDisplay() method. This will show whatever the camera records on screen. If you do need the frame data, register a valid Camera.PreviewCallback object as well and the frames will be handed to your app.
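A minimal sketch of that setup, using the older android.hardware.Camera API (deprecated in modern Android in favour of camera2/CameraX), might look like the following. It only runs on a real device with camera permission granted, so treat it as an outline rather than working code:

```java
// Outline only: assumes a SurfaceView is already on screen and
// camera permission has been granted.
Camera camera = Camera.open();            // opens the default (rear-facing) camera
camera.setPreviewDisplay(surfaceHolder);  // SurfaceHolder from your SurfaceView; throws IOException
camera.startPreview();                    // frames now render straight to the Surface

// Only needed if you DO want the frame data for analysis:
camera.setPreviewCallback((byte[] data, Camera cam) -> {
    // 'data' holds the raw preview frame (NV21 format by default)
});
```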
Augmented reality apps usually rely on more than the camera. You’ll also need location information for the device, which tells us where the user is. Whether you need coarse or fine-grained location, it’s usually gathered through the android.location package’s LocationManager class. Your app uses this to listen for location-based events and, from them, to learn about real-world items that might interest the user based on their location (specifically, that of the device).
If your AR app analyses the camera feed using what is called ‘computer vision’ (extracting the data it requires from the images received by the camera), then you don’t necessarily have to know the location of the device at all. Computer vision gets a lot of coverage in the dev world, and a lot of solutions make use of libraries like OpenCV.
When location information is not used, a tag or marker is often used in its place. This is an easily identifiable object whose scale and orientation are known, so a virtual object can quickly be drawn over it once it’s recognised. AndAR, for example, uses simple markers to draw cubes as a demonstration of its AR capabilities.
Orientation is key to augmented reality implementations. If you know the rough orientation of a phone, you can synchronise your data with its camera-input feed. To get the orientation of a specific Android device you utilise the APIs available in the android.hardware.SensorManager class. The sensors you’ll likely need are those that let the user move their device about and see the changes on their screen; this engages the user by immersing them in the experience.
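To make that synchronisation concrete: given the device’s compass azimuth (derived from the sensor APIs) and the bearing from the device to a point of interest, a little plain Java can decide whether that point should currently be on screen. The names and the 60° field-of-view figure here are assumptions for illustration:

```java
public class ViewCheck {
    // Smallest signed angular difference between two headings, in degrees, in (-180, 180].
    static double angleDiff(double a, double b) {
        double d = (a - b) % 360.0;
        if (d > 180.0) d -= 360.0;
        if (d <= -180.0) d += 360.0;
        return d;
    }

    // True if a POI at bearingToPoi should be visible when the camera points
    // at deviceAzimuth with the given horizontal field of view.
    static boolean inView(double deviceAzimuth, double bearingToPoi, double fovDegrees) {
        return Math.abs(angleDiff(bearingToPoi, deviceAzimuth)) <= fovDegrees / 2.0;
    }
}
```

As the user turns, fresh azimuth readings from the sensors feed back into a check like this, and overlays appear or disappear accordingly.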
Some apps are reliant on live camera input. There are, however, others like Google Sky Map or Google Street View that utilise pre-recorded images and their data. This is an older technique but it’s nonetheless still valued and used by developers today.
At the heart of AR is layering something on top of the camera input feed that enhances what the user sees in real time. This is conceptually quite simple: you just draw something on top of the camera feed. How you achieve this is entirely your choice.
You might want to process each individual frame from the camera input feed and put the overlay on it, drawing directly on the frame being shown on screen. A Bitmap might be useful here, and the Camera.PreviewCallback class will let your app receive the images frame by frame. Alternatively, you can use a regular SurfaceHolder with the camera object and draw over the Surface where needed. How and what you intend to draw will depend on your bespoke app requirements. Android’s readily available 2D and 3D graphics APIs, android.graphics and OpenGL ES, are two of the best options.
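Once you know a point of interest is in view, you still need to decide where on screen to draw it. A simplified linear mapping from angular offset to horizontal pixel position (ignoring lens distortion) might look like this; all names and figures here are illustrative assumptions:

```java
public class OverlayMapper {
    // Map a POI's angular offset from the camera's heading to a horizontal
    // pixel position, assuming a simple linear projection across the FOV.
    // Returns 0 at the left edge and screenWidthPx at the right edge.
    static int screenX(double deviceAzimuth, double bearingToPoi,
                       double fovDegrees, int screenWidthPx) {
        // Signed offset in (-180, 180]: negative means left of centre.
        double offset = (bearingToPoi - deviceAzimuth + 540.0) % 360.0 - 180.0;
        double fraction = 0.5 + offset / fovDegrees;
        return (int) Math.round(fraction * screenWidthPx);
    }
}
```

The resulting x-coordinate is where you would draw the logo or label on the overlay surface each frame.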
Where and how to store your AR data
You might be wondering where this augmentation information comes from. You’ll likely get the data from a database you own, stored locally on the device or on your own servers, or perhaps from an online database offered by a cloud service provider.
If you’re using pre-loaded augmentation information directly on the device, you’ll want a SQLite database, which gives you quick and efficient lookups. You can find the APIs you need in the android.database.sqlite package. For data that’s web-based you’ll need a cloud service reachable via the standard methods, i.e. HTTP, with XML parsing of the results. You can use a simple java.net.URL for the request, with an XML parser like the XmlPullParser class for your results parsing.
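Android’s XmlPullParser is specific to the platform, but the pull model it uses can be sketched in self-contained desktop Java with the javax.xml.stream API, which works the same way: advance through events and react to the elements you care about. The `<poi>`/`<name>` schema below is invented for illustration:

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;

public class PoiParser {
    // Pull every <name> value out of a hypothetical POI response.
    static List<String> parseNames(String xml) {
        List<String> names = new ArrayList<>();
        try {
            XMLStreamReader r = XMLInputFactory.newInstance()
                    .createXMLStreamReader(new StringReader(xml));
            while (r.hasNext()) {
                // React only to opening <name> tags; getElementText() reads
                // the text content and advances past the closing tag.
                if (r.next() == XMLStreamConstants.START_ELEMENT
                        && "name".equals(r.getLocalName())) {
                    names.add(r.getElementText());
                }
            }
        } catch (XMLStreamException e) {
            throw new IllegalArgumentException("Malformed XML", e);
        }
        return names;
    }
}
```

On Android, the same loop translates almost line for line to XmlPullParser’s next()/getName()/nextText() calls.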
How to create augmented reality apps Android: Conclusion
AR is a vast subject that flirts with a lot of development aspects and APIs within the Android world. Hopefully, you’ve learned exactly what AR is, first and foremost, and how many APIs you can utilise. You’re now able to combine this information with your existing knowledge of the Android SDK to enhance any existing apps you have, or to build a new app from scratch.
What does the future look like for augmented reality?
The potential AR holds is huge. There are many examples being deployed in everyday life, mostly in commercial settings, ranging from manufacturing to retail, travel and health.
As for the future of AR app development, the biggest thing to happen will be when a user no longer needs their screen. What do we mean by that? Currently, most AR apps rely on a camera to enable the image overlay on top of the real-world view in real time. Using a smartphone is currently a very convenient way for a user to experience AR, but it also acts as a blocker in some ways. If we can free ourselves of all 2D constraints, who knows what we’ll see! Keep an eye out!