The Future of AR is in 5G, with Deutsche Telekom’s Terry Schussler
XR for Business
July 08, 2019 | 00:46:39

Show Notes

The current generation of 4G devices is great if you want to chat faster, share photos, or stream a movie on the go. But for real-time spatial computing technologies — like XR — that just won’t cut it, especially when it could mean life or death. Terry Schussler is the director of Immersive Technology at Deutsche Telekom, and he’s working to bring 5G into the XR domain and expand the capabilities of mixed reality technologies.

Alan: Today’s guest is Terry Schussler, “entreprenerd,” technology architect, passionate software designer, writer, speaker, trainer, and all-around awesome guy. As a software innovator, Terry’s focus has been making software smarter for users, while leveraging technology to enable new forms of communication. During the development of over 200 commercial software products, reaching over 50 million users on desktop, mobile, and tablet devices, Terry has delivered numerous technology innovations: artificial intelligence in consumer products, multimedia, hybrid online/offline CD-ROMs (what’s a CD-ROM?), interactive multimedia on the internet, real-time character animations, and factory-to-consumer personalized plush toy design, just to name a few. A number of his products have been category creators, opening up new markets with long-tail monetization opportunities. If you want to learn more about the company Terry works for, Deutsche Telekom is at www.telekom.com.

It is with great honor that I welcome the Director of Immersive Technology at Deutsche Telekom, and founding member of the Open AR Cloud, Mr. Terry Schussler. Welcome to the show, Terry.

Terry: Thanks, Alan. Nice to have the opportunity.

Alan: Thanks so much. It’s really a pleasure and honor to have you on the show. And I’m just going to dive right in here because I think the people listening really want to get an understanding of how this technology can be used for them. So to start it off, what is one of the best XR experiences that you’ve ever had?

Terry: One of the best experiences I ever had was actually realizing that the technology can be used not just to make people money, or to provide education, but to actually save lives — that it can really be transformative. So a company which unfortunately is no longer in business, ODG, created an oxygen mask for pilots, which allowed the pilots to operate a plane using augmented reality when the cockpit was full of smoke. Seeing that product developed and come to fruition really got me thinking differently about the importance of utilizing these types of technologies to increase human safety and save lives, as well as provide all of the obvious benefits that we’re used to.

Alan: Wow. That is… how do you even… that’s a show-stopper. I had Mark Sage on the show, and he was talking about how firefighters are using this technology for heads-up displays, and how the military is using it to see in the dark and create that visibility layer. Can you maybe talk a bit more about this mask that can help pilots in a distressed situation like that? Because there are so many ways this technology can be used to save lives. I think we should dig into that.

Terry: So, ODG co-developed this with… I think with FedEx. FedEx, I think, had two flights which had crashed due to smoke-filled cockpit conditions that prevented the pilots from being able to properly control the plane. They made a decision to look at how they could utilize a heads-up display technology, using AR, to give pilots the visual controls they need to continue flying the plane, even if such a situation happened. And they actually had a live demonstration unit at the Augmented World Exposition last year in Santa Clara, where you could try the mask on and actually see what the experience would be like.

It really got me thinking differently about some of the goals that I want to achieve personally with XR devices, and how I’d like to utilize them. And about the importance of the general technical work I do in terms of creating low-latency experiences, and how those can combine to create better human safety conditions.

Alan: So, you talked about low-latency experiences in creating this human connection, or human tools that we’re going to be able [to use] to save lives. You work for a telco, and all of the telecommunications companies now are really pushing this 5G wave. So maybe you can speak to how 5G is going to benefit augmented reality, mixed reality, and what are some of these low-latency experiences that businesses will start to tap into?

Terry: Sure. If you were to ask the average person what 5G means to them, it’s really about bandwidth; the speed at which things can be downloaded. And that certainly is a huge value proposition. But one of the most important things that 5G also does is provide a higher reliability of service, a more consistent availability of service. Because of the technology, it’s able to support 100 times more people in the same area having consistent access to the Internet, in a mobile context or in a fixed context. And that’s really important.

Alan: It’s interesting, and we should punctuate that right now: you’ve cut out a couple of times during this podcast.

Terry: [laughs]

Alan: Technology, we can’t wait! 5G, come faster!

Terry: That can happen. Of course, when you go to environments where there are a lot of people in one place — a coliseum sporting event, or a shopping mall, or–

Alan: Coachella. 

Terry: Yeah, exactly. Coachella. Then connectivity becomes even more problematic. And certainly, if everybody’s trying to uplink a live video stream of something that’s going on, then it becomes even more overwhelming. Those are contexts in which 5G can provide value with certain technologies that we’re utilizing for mixed reality, where we’re taking the camera feed, feeding it back, and processing that camera feed to make decisions that provide spatial mapping information, things like that. And this consistent, higher-bandwidth connection is important.

Alan: For people who are listening, let’s take it back to basics. Why would a company need some sort of augmented reality where it’s capturing the world’s data? A lot of people think about AR as, “I can hold out my phone and see a Pokémon,” or “I can see a piece of information overlaid as a digital layer.” But most people don’t realize that the camera is capturing as much or more data about the world around it to create this contextualized data. And so, capturing data and uploading it to the cloud is probably as important — or more important — than the data being driven back to the device.

Terry: Yeah. And there are technical limitations today with on-device sensors that can be enhanced or addressed through cloud-based technologies. For example, one company in our incubator, hubraum, is a company called 1000 Realities, and they’ve developed a purely cloud-based SLAM approach, where they take the video stream directly off a device, run it to the cloud — to our edge compute infrastructure — and then process that video to create a feature point cloud, or to localize a user against one. That allows devices which don’t have the compute capabilities today — lightweight AR headset devices — to have the kind of capability that a higher-end device like a Hololens 2 or Magic Leap might have, or even better in some cases.
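To make the offload loop Terry describes concrete, here is a minimal sketch of what the client side of cloud-based SLAM could look like. It is an illustration only: the endpoint URL, response format, and latency budget are hypothetical, not 1000 Realities’ actual protocol.

```python
# Minimal sketch of the client side of a cloud-offloaded SLAM loop.
# The endpoint URL, response shape, and latency budget below are
# hypothetical, for illustration only.
import cv2
import requests

EDGE_SLAM_URL = "https://edge.example.com/slam/localize"  # hypothetical endpoint

def render_overlay(pose: dict) -> None:
    # Placeholder for the AR rendering step on the lightweight headset.
    print("localized pose:", pose)

def stream_frames_to_edge(camera_index: int = 0) -> None:
    cap = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # Compress each frame so the uplink stays small; the consistent,
            # high-bandwidth 5G uplink is what makes this loop viable.
            _, jpeg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 70])
            resp = requests.post(
                EDGE_SLAM_URL,
                data=jpeg.tobytes(),
                headers={"Content-Type": "image/jpeg"},
                timeout=0.2,  # tight budget: edge compute keeps round-trips short
            )
            # Hypothetical response: a pose estimated against the feature
            # point cloud held on the edge server.
            render_overlay(resp.json()["pose"])
    finally:
        cap.release()
```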

Alan: The 1000 Realities team is doing some amazing stuff. I think they’re using the Vuzix Blade or something… they’re using different hardware devices. But let’s dig into an actual example of why you would need to map out the world in real-time like that. Can you think of any real business use cases for that?

Terry: Sure. I mean, positionally, things change within an environment. You need to be able to track objects in real-time. So, having the ability to perform a non-preprogrammed or dynamically-applied information overlay on top of real-world objects is super important, in tons and tons of environments: factories, outdoor logistics, and so on. Think of retail, for example, where everything’s moving around. If you’re trying to look at a planogram section of a retail space, you need to be able to dynamically compute what’s there, what’s not there, whether things are in the right place. That’s going to be a changing context over and over again. To be able to map that information in real-time, process it, and then get dynamic overlays on top of it is very valuable for those kinds of contexts.

Alan: It’s interesting you mentioned retail, because that’s something that we focus very heavily on — retail and e-commerce, and general marketing as well. You know, something that happened on the weekend — we mentioned Coachella quickly there —  Coachella had AR navigation at the festival this past weekend, and next weekend as well. They also had an AR stage, where you can hold up your phone and see an AR activation. You can see the NASA space shuttle flying through the Sahara tent, which is pretty awesome. 

But they always have these bandwidth issues where, if you get a few hundred people on the system running it, it starts to slow down. And if you get ten thousand people, it grinds to a halt. I think people don’t fully realize the limitations of 4G. They think, “oh, I can watch a movie on my phone. It’s fine.” But then you get into these real-time computing scenarios — firefighters, police, paramedics — where they need real-time data, and it can’t crash or lag just because there happen to be 10,000 people in the place.

Terry: That’s right. Reliable quality of service is super important in those kind of contexts, and that’s a big value proposition that 5G puts on the table. It not only provides the higher bandwidth, but it ensures more spectrum density. So, more people in one place aren’t going to create this cascading failure problem. 

Alan: So, spectrum density is a thing.

Terry: Another really key area that ties to both of those is the idea of precise positioning. Today we mostly use GPS systems, which are highly imprecise in certain contexts, and useless in others. GPS is supposed to be accurate to roughly 4.9 meters, but it’s not. For example, if you’re in downtown San Francisco and you’re calling an Uber, you’re going to have to manually place where you are on the map almost every time, because it’s going to be off by half a block or more from your real location. This kind of problem is going to be exacerbated when we try to get highly-accurate augmented reality content overlaid on the real world.

We need to know more accurately where we are, outdoors and indoors, and we can’t afford the inconsistency. The quality is in the consistency of positioning. This is an area we’re actively researching: how we can layer precise positioning on top of 5G infrastructure, so that we can get positioning accurate to at least a meter, indoors and out — and ideally a lot better than that.
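As a toy illustration of “layering” a precise source on top of coarse GPS, here is the textbook inverse-variance fusion of two noisy position estimates. This is not Deutsche Telekom’s method, just the standard intuition behind combining a roughly 5-meter GPS fix with a sub-meter network or visual fix.

```python
# Toy inverse-variance fusion of two (x, y, sigma) position estimates,
# e.g. a coarse GPS fix and a more precise network/visual fix.
def fuse_positions(est_a, est_b):
    (xa, ya, sa), (xb, yb, sb) = est_a, est_b
    wa, wb = 1.0 / sa**2, 1.0 / sb**2  # weight by inverse variance
    x = (wa * xa + wb * xb) / (wa + wb)
    y = (wa * ya + wb * yb) / (wa + wb)
    sigma = (1.0 / (wa + wb)) ** 0.5   # fused uncertainty shrinks
    return x, y, sigma

# GPS good to ~5 m versus a hypothetical precise fix good to ~0.5 m:
print(fuse_positions((10.0, 20.0, 5.0), (12.0, 21.0, 0.5)))
# The fused estimate lands near the precise source, with sigma just under 0.5 m.
```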

Alan: I’ve seen some startups that are working on exactly this, trying to get that down to centimeter accuracy. It can be done with Bluetooth beacons and that sort of thing, but it’s not really practical in large facilities and large public spaces. But what I’ve seen is working fairly well now, is using landmarks. Using the visual camera to lock into so within five meters, you know where you are. Based on that, plus the visual marker, you can really narrow that down. Is that something you’ve seen executed well?

Terry: Yeah. There’s a company, for example, called Sturfee, which has developed an approach using satellite imagery. You use your head-worn AR glasses to take a quick picture from the camera, combine that image with your lat/long information, and send it off to their server. Then they process it, searching a radius around where you say you are based on the GPS information — I think they search around 30 meters — and they’re able to figure out where you are using the image.

They’re also able to figure out your elevation and your azimuth. They can then calculate a three-dimensional mesh along the surface of the objects you’re looking at in the real world — the sides of buildings. The street, of course, is the easy one, but doing that against the geometry of the buildings is pretty interesting. Then you can very quickly do real-world, outdoor, world-scale AR kinds of things: putting signage and maps and so on onto the surface of the infrastructure around you.
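A sketch of the request/response shapes a visual positioning service of this kind might use, based only on the flow Terry describes (one frame plus a coarse GPS fix in, a refined pose out). All field names are hypothetical, not Sturfee’s actual API.

```python
# Hypothetical request/response shapes for a VPS-style localization call,
# mirroring the flow described above. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class VpsQuery:
    jpeg_bytes: bytes              # one camera frame from the glasses
    lat: float                     # coarse GPS latitude
    lon: float                     # coarse GPS longitude
    search_radius_m: float = 30.0  # per the episode: roughly 30 m search

@dataclass
class VpsFix:
    lat: float          # refined latitude
    lon: float          # refined longitude
    elevation_m: float  # recovered from the imagery
    azimuth_deg: float  # heading, so overlays can be world-aligned
```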

Alan: That’s interesting. Google, about a month ago, announced their AR navigation system. It does not have that kind of accuracy. So is this something that maybe would be a Google acquisition, to build into their Google Maps?

Terry: Yeah, I think everything is a potential Google acquisition these days [both laugh]. I mean, you see a lot of roll-up in the industry right now — technology roll-up between all the players: Niantic, Facebook, you name it. So, absolutely, I think the visual positioning approach is an innovative technical solution, but it only works in the outdoor context. Different technical problems, different technical solutions. What’s very interesting to me is that for real-world business use cases, I don’t see a very good, cohesive outdoor/indoor solution quite yet. I’m a believer that cloud-based solutions like that of 1000 Realities will get us closer, where we can have one consistent technological approach that we can use to navigate you in world-scale AR to a business, bring you into the business, and then give you internal navigation and spatial mapping at the same time. Right now, that requires two separate solutions.

Alan: We actually built an AR navigation tool, and it works great outdoors and indoors. It works well, but the way we did it was specific to a location — theme parks and malls and stuff like that. What we were using — and what we are using — is beacons. When you’re outside, it uses GPS; when you’re inside, it knows roughly where you are from GPS, and then uses the beacons to triangulate that millimeter-accurate precision.
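For the indoor half, the beacon idea boils down to trilateration: three known beacon positions and three measured ranges pin down where you are. A toy 2D version follows (real deployments filter noisy Bluetooth ranges and use more beacons):

```python
# Toy 2D trilateration: three beacons with known positions and measured
# ranges pin down the user. Real systems filter noisy Bluetooth ranges.
def trilaterate(p1, r1, p2, r2, p3, r3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting the circle equations pairwise yields two linear equations.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1
    return (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det

# Beacons at known spots; these ranges place the user at (2.0, 3.0):
print(trilaterate((0, 0), 13**0.5, (10, 0), 73**0.5, (0, 10), 53**0.5))
```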

Terry: Right.

Alan: But I haven’t seen anything that has really been able to say, “hey, this is the ultimate solution for this.”

Terry: Yeah. Right now there are vendors like 6D.ai which have tremendously awesome spatial mapping tools that require a lot of on-device horsepower to perform their task. You see the envelope of what could be in the future. As equipment and devices get more performant, that kind of technical capability will become more commonly available. But you also see devices like the Magic Leap, which are technically really awesome, but don’t work in an outdoor context — not just because of the display components, the optics package, but because of the sensors. You can’t scan at scale outdoors with those devices.

Alan: I wonder if you could do a combination of putting ARCore and ARKit capabilities, mixed with GPS, mixed with the Magic Leap, to give it all in one. But now you’re throwing in a ton of junk. So really, what it comes down to is 5G and cloud computing.

Terry: Right.

Alan: None of these things can really run without it.

Terry: Yeah. A lot of the work I’m doing at Deutsche Telekom is looking at how we can enable the Holy Grail device to exist: this really light, consumer-fashion-friendly, all-day device. The key is that we have to reduce battery drain and move all the compute off the device that we possibly can. To do that, we can either tether it to a phone, or we can put the compute on the Internet. Or we can do both.

I really think that we’re moving to a mesh computing world, where we use all the compute cores, if you will, that are around us in our personal area network of devices. You’ve got your watch with some compute. You’ve got your phone with compute. Maybe you have compute on the headset, and then you’ve got compute on the network, both on the edge and in the backhaul. If you mesh all that together, we can start to shift the burden off the top of your head, and onto the network more and more. That allows these devices to get smaller and lighter, and still be very functional. You know, not to have all the tradeoffs. 
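One way to picture that mesh: a tiny, hypothetical scheduler that places each task on the cheapest compute site (for headset battery) that still meets the task’s latency budget. The numbers are illustrative, not measured.

```python
# Toy scheduler for the mesh-computing idea: run each task on the
# cheapest site (for headset battery) that meets its latency budget.
# All numbers are illustrative, not measurements.
from dataclasses import dataclass

@dataclass
class ComputeSite:
    name: str
    rtt_ms: float        # round-trip time to reach this site
    battery_cost: float  # relative drain on the headset, 0..1

SITES = [
    ComputeSite("headset", rtt_ms=0.0,  battery_cost=1.00),
    ComputeSite("phone",   rtt_ms=5.0,  battery_cost=0.30),
    ComputeSite("edge",    rtt_ms=15.0, battery_cost=0.05),
    ComputeSite("cloud",   rtt_ms=60.0, battery_cost=0.05),
]

def place_task(latency_budget_ms: float) -> ComputeSite:
    feasible = [s for s in SITES if s.rtt_ms <= latency_budget_ms]
    return min(feasible, key=lambda s: s.battery_cost)

print(place_task(20.0).name)  # -> "edge": offload when the budget allows
print(place_task(2.0).name)   # -> "headset": tight loops stay on-device
```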

You mentioned, for example, the Vuzix Blade, which looks like a stereographic device, but it’s really just for one eye — the right eye. It’s a great device, and it’s lightweight, and it’s relatively inexpensive. But because it makes a lot of tradeoffs in terms of its compute capabilities, certain business use cases may not be as viable on it as they are with other devices. When you start to integrate the use of cloud compute and technologies that run on the cloud — especially on the edge — then you can start to offset some of the compute reduction on the device with the network, and the device becomes more and more capable. Like I said, in some cases, more capable than devices with built-in sensor arrays and inside-out tracking.

Alan: One of the showstoppers that I saw at CES this year was the NReal glasses, from a former Magic Leap developer who left to start his own company. He basically took the basics of delivering three-dimensional AR with one camera — using ARCore, I guess — and ran the compute in the equivalent of a cell phone pack running Android. I thought that was a really unique way to get some of the weight off the headset.

But what you’re saying is that having the compute power on the glasses, and then maybe the phone, then the cloud, and then basically edge computing — all of it together — is going to need some sort of open framework and collaboration. One of the things that you’re involved in is the Open AR Cloud. Do you want to talk about that, and what it means for businesses?

Terry: Yeah. So, it’s hard to overstate how important having a digital representation of the real world, spatially, is going to be. What we call the AR cloud is going to be the foundation on top of which we build tremendous amounts of spatial computing applications. We need to know the details of the geometry of the real world, so that we can position things. But we also need to understand it semantically as well — what it is, not just where it is and what size it is.

The Open AR Cloud Foundation is focused on looking at different categories of use cases, and creating open standards that all the industry players can engage with, so that we have a consistent way that we can utilize these different technical approaches to solving some of these problems. 

I mentioned, for example, the fact that currently you really have to use different hybridizations of technologies to work indoors and outdoors with spatial mapping. What we need, though, is a consistent — what we refer to as a single index — method to be able to say, “I need a map for where I am now,” or for wherever I need the map. Different vendors use different approaches today to provide that indexing into the maps they generate. And it creates a lot of havoc for people designing applications to not have a consistent approach, a consistent method for indexing. It’s like the Dewey decimal system for libraries: it lets you find a book. If every library had its own indexing methodology, it would be really hard for people to go from one library to another and locate books.
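To illustrate the single-index idea, here is a toy quadtree key: encode a location into one string so that “give me the map for where I am” becomes a prefix lookup any vendor could serve. Real candidates (geohash, S2, H3) are more sophisticated; this is just the Dewey-decimal intuition in code.

```python
# Toy quadtree key: one shared way to index spatial maps by location,
# so "give me the map for where I am" becomes a prefix lookup.
# Real proposals (geohash, S2, H3) are more sophisticated.
def quad_key(lat: float, lon: float, depth: int = 20) -> str:
    lat_lo, lat_hi, lon_lo, lon_hi = -90.0, 90.0, -180.0, 180.0
    key = []
    for _ in range(depth):
        lat_mid, lon_mid = (lat_lo + lat_hi) / 2, (lon_lo + lon_hi) / 2
        quadrant = 0
        if lat >= lat_mid:
            quadrant |= 2
            lat_lo = lat_mid
        else:
            lat_hi = lat_mid
        if lon >= lon_mid:
            quadrant |= 1
            lon_lo = lon_mid
        else:
            lon_hi = lon_mid
        key.append(str(quadrant))
    return "".join(key)

# Nearby points share a long common prefix, so any vendor's tiles can be
# fetched with the same key scheme:
print(quad_key(52.5200, 13.4050))
print(quad_key(52.5201, 13.4051))
```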

That’s the goal of the foundation, I think, in many ways: create these sort of standards. And there’s a nice integration of other standardization groups which are also, in their own right, trying to create some uniformity with development. For example, if we look at the AR headset market, there’s already silos, right? We’ve got the Hololens silo, we’ve got the Magic Leap silo. We’ve got the Android-based AR headset silos. Apple, when it comes out with its product, will create another silo. But maybe as developers, we want to be able to build applications that run across these devices, and not have to develop them over and over and over again. 

Multi-platform deployment of business logic and code and graphics has been a passion of mine since the 80s. Without tools that can do that — like Unity — we wouldn’t see as much proliferation of solutions for business or consumer today. It’s very, very important, as we move forward, to have that.

Alan: I couldn’t agree more. You mentioned Unity, and Tony Parisi has been working in the WebXR space forever, really pushing it forward. He’s actually going to be a guest on the show as well. So, this is not an overnight fix, and it’s not something that is going to happen overnight. But from a business standpoint, if I’m a business owner, and we’re talking about the open AR cloud and edge computing — what does this mean to a typical business? Because, as we’re doing this interview, it keeps cutting out a little bit. Let’s just unpack this: if we can’t figure out how to make a podcast record smoothly, why are we even trying to make glasses that compute in three dimensions? This is what people are asking me: “why would I get into AR, when I’m just starting to embrace mobile apps?” Maybe you can speak to the transformative power of these technologies as a business person.

Terry: Well, I mean, there are two parts to this, right? We’ve been in the early days with AR devices, and the challenge has been that if you’re tied to a particular vendor and that vendor’s ecosystem, it becomes really, really hard to take all of the investment you’ve made in building business use cases and move it to devices which might be better suited to the use case.

For example, the Hololens. When people started building applications for the Hololens — things like remote maintenance or remote support — those same applications became very difficult to move to a lighter-weight device that’s perhaps more durable and better-suited to an industrial environment — like, say, a RealWear HMT-1, or something lighter that fits better and is more comfortable to wear on the head for an extended period of time. It’s super important that we have standards that allow us to take the core business logic and the content and move those projects across to different target devices, so that businesses can adapt and utilize that. Not only for the use case they’re building — so it’s better for that use case — but also from a CapEx perspective. The Hololens is a great product, but it’s $3,500 USD, and companies can’t necessarily afford to give each and every employee one — even with the great ROI it might provide. They may not be able to capitalize that much expenditure. So it becomes very valuable to be able to move your software ecosystem to a device that might be slightly less functional, but also a lot more affordable and deployable across a wider range of people.

Alan: It’s interesting that you touched on that, because about a year ago now, I think, Microsoft moved the Hololens out of their devices division and into their Azure — or cloud-based — computing division. And I thought that was a really smart move, because once they realized the power of this technology, being able to synchronize that with the business systems that are already in place is vital. So, being able to say, “okay, you’re using a Hololens, and that’s great for these really high-end jobs. But, that same information that you’re using can be used with a smartphone now, and with another pair of glasses.” I think creating that standard, where it can be used across anything, is absolutely essential. 

Our company, Metavrse, has always taken a completely agnostic approach to everything, where we say, “okay, it doesn’t matter what headset or what technology it is; what is the problem we’re solving? How do we take the technology that is right for you and deploy it, in a way that also future-proofs what you’re working on?” Because let’s be honest, this stuff is changing weekly.

And that’s another question that keeps coming up from companies: how do I even get started? What would your recommendation be to a company that’s looking at this — maybe it’s an enterprise company going, “we have a factory, and I see these case studies from Boeing, where they’re seeing a 36 percent increase in efficiency.” What’s the first step, in your opinion, for these companies to get into it?

Terry: Today, we see a lot of companies doing pilots and proving the ROI on enterprise business-to-business solutions. Remote assistance is a really common one — it’s almost the common-denominator use case now, because the return on investment is very clearly measurable; you know that you can save money by not having to fly an expert from one physical location to another. It’s the same reason we use Skype or WebEx or Zoom, but we extend that communication with augmented content, presentation, annotation, and information displays. The over-the-shoulder support feels like they are literally over your shoulder. Those kinds of use cases are very straightforward ways for businesses that are in the service industry, or that have technically-complex products that need to be serviced, to invest in XR.

I think that most of the devices that are in the market today are capable of delivering really good value in that. But there’ll always be a need for use case-specific implementations on the device, depending on your context. If you’re in a hazardous environment, if you’re sitting at a desk with large, complex machinery right next to you… different needs. Sometimes you need something that can work in different lighting conditions; very bright light versus very low light conditions. Maybe you need a headset that can have a flashlight turned on, so that you can illuminate the area as you’re looking at objects in a basement or underground location.

Alan: Or night vision for the U.S. military.

Terry: Exactly. Look at the Hololens 2 in its military adaptation. And there are a number of other companies that have made a good living building XR head-worn devices specifically for the military. I mentioned the flight mask that ODG had built for FedEx. I don’t think, for business, there’s going to be a single device that serves all the business use cases. You have to look at the ones that are… what I tell a lot of businesses to do is evaluate the environments in which the devices need to be used before you start building the software, because the software is going to be a lot easier to design than it is to pick the right hardware.

For example, do you need to build or use the device indoors and outdoors? Do you need to be able to use it in different lighting conditions? Does a device need to be shared by multiple people, because you can’t afford to put one device into the hand of each individual user? What kind of mobile device management requirements do you have? What kind of security requirements do you have? All of these have to be thought about before you start picking hardware. 
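One hedged way to operationalize that checklist is to write the requirements down before shopping: encode each constraint and filter candidate devices against it. The requirement keys, device names, and capability flags below are placeholders, not vendor claims.

```python
# A toy requirements-first device screen. Requirement keys and device
# capability flags are placeholders, not real vendor specifications.
REQUIREMENTS = {
    "indoor_and_outdoor": True,
    "multi_user_sharing": True,
    "mdm_support": True,       # mobile device management
    "bright_light_ok": False,  # not needed for this hypothetical use case
}

DEVICES = {
    "headset_a": {"indoor_and_outdoor": False, "multi_user_sharing": True,
                  "mdm_support": True, "bright_light_ok": False},
    "headset_b": {"indoor_and_outdoor": True, "multi_user_sharing": True,
                  "mdm_support": True, "bright_light_ok": True},
}

# Keep only devices that satisfy every requirement that matters:
viable = [
    name for name, caps in DEVICES.items()
    if all(caps.get(req, False) for req, needed in REQUIREMENTS.items() if needed)
]
print(viable)  # -> ['headset_b']: the environment picked the hardware
```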

Unfortunately, a lot of times I see people pick the device first without really thinking all of those through, and then they find that they’re kind of in a rut, because now they’re stuck with a specific software development platform, building for a specific device’s capabilities, and they can’t really build the solution they need. It’s not practical. I mean, all of these devices have great value — I work with all of them. But I wouldn’t take the Magic Leap and build a solution for it that requires being used in a large, open factory space, because it’s just not designed for that.

Alan: Why is that?

Terry: Because it’s a combination of the way it displays things — the lighting conditions under which it can work — and also the sensor arrays that are on board. It can only scan a certain distance in front of you. So, if you’re in a giant, cavernous room, it won’t build the walls that are in front of you, because they’re too far away. You’ll have to walk around the area and map it manually, and it’ll take a really long time to do that. And the environment matters too: if you have a lot of reflective white surfaces, or glass windows, or things with fewer features to differentiate, the real world around you is really hard to map. Magic Leap isn’t going to be that great in some of those environments, at least not currently.

Alan: All of these devices have their pros and cons. The first Hololens, for me — after about five minutes of wearing it, I got a headache, just because of its weight distribution. With the second one they’ve addressed that, but they all have their limitations.

I think one of the things that’s across all the conversations that I’ve been having is security.

Terry: Yeah.

Alan: You mentioned device management, security, devices shared by multiple people. One of the things that came up in one of those conversations is that eye tracking is going to be more and more prevalent in these headsets. And once we have really accurate eye tracking and head motion tracking — because the device is on our head — you’ll be able to use things like gait, retinal scanning, and heart rate. These are different biometric markers to enable security at a different level.

Terry: Yeah, absolutely. Magic Leap has some great advances in that, as does the Hololens 2. And you’re going to see more and more use of that as we move forward, to enable contextualized and personalized information display for the user, automatically recognized by a mixture of biometrics, like you said. It’s really about reducing the friction in the user experience: just pop the headset on, and maybe even have a general profile stored in the cloud someplace that can be brought down to that particular device — so it’s not just this exact device they have to use, but any device of that type.

Alan: Yeah, you’re absolutely right. One of the things we’re working on — I can’t talk about the details — is using a headset as a medical diagnostic device. Being able to send this device out to remote areas where it’s either very expensive or simply not possible to get physicians, capture the medical data from it, and then either transmit it through the cloud or send the device back. But how do you secure that data? How does it transfer? How do you verify the individual using it? Like you said, onboarding is a hard thing. If I put on a Hololens and it doesn’t have Wi-Fi, I’ve got 10 minutes of messing around just to get the Wi-Fi working. Being able to take out those onboarding challenges is really key.

I think we’ve only scratched the surface, as an industry, of what eye tracking can even do — foveated rendering, being able to identify and approve people. It’s incredible, what this technology is going to be able to do.

Terry: For businesses — maybe even more so than for consumers — it’s going to be important that the learning curve and the friction points involved in getting somebody to put the glasses on and start making practical use of them keep getting smaller. What we learn doing this for businesses will be readily applicable to the consumer-grade devices that will come to market over the next 12-18 months.

Today, the most powerful devices are the all-in-one devices, like the Hololens 2 and the Magic Leap One. But over the next 12-18 months, you’re going to see more and more tethered devices, which leverage a smartphone — its compute capabilities and its sensors — with lightweight glasses. And the flow of those kinds of experiences needs to be super, super easy. Ideally, I just want to be able to put the glasses on and have them start doing things for me, without having to be trained in a special class.

Right now, it’s not there yet. We’re still finding that we have to educate people on the use of the device in a generic way. Then we have to educate them on the applications. And every application takes a completely different approach to the way it spatially displays things. So, every time, it’s a massive learning curve. That’s going to go away in the future, but it’s going to take us a few years. It could take us longer to solve the inversion-of-the-ecosystem issue, where we go from an app-first mentality to a people-first mentality in the way we design software. Then it’s going to take us time to actually get to a consumer-grade device.

Alan: So, in your opinion… I mean, you’re right in the thick of this; you’ve created 200 different products, you work for one of the world’s largest telcos. You mentioned 12-18 months — when do you think consumer AR is going to start? And when I say start, I mean hit the market where people are actually buying — not Magic Leap being sold at AT&T stores, because, you know, that’s great, but I bet they’ve sold about 10 of them. When do you think this is going to kick off? The big question mark is Apple, and what they’re going to do. Can you speak to your timeline for consumer adoption, and where enterprise versus consumer is headed in the next five or ten years?

Terry: I think, in terms of devices and use cases, as I said, the transition now is going to go from all-in-one devices to tethered devices. There are a few companies — like DreamWorld, NReal, ROKiT — coming to market with really high-quality tethered solutions, where you plug your phone in and leverage its compute. That will allow the price point for the headsets to drop significantly from where it is today, to under a thousand dollars — maybe well under a thousand dollars — while still having a lot of the functionality needed for business use cases.

That will be the adoption boom. It won’t be for consumer use cases. It will still be for business — or, as I like to say, time-durative B to B to C type use cases. Like, I’m going to go to a sporting event; I want to put the glasses on, I want to wear them for two hours. I’m going to go walk around a city center on vacation; I’m gonna wear the glasses for a few hours. I’m going to go to a museum; I’m going to wear the glasses for a couple hours. So these time-durative use cases will become very valuable in driving adoption rates on the devices themselves. 

In the end, I see that happening in the next 6-12 months. I project that by the end of next year, Apple will make its entry into the ecosystem, and that will be much more of a straight-up consumer play. I don’t think they’re going to look at it as a solution for businesses at all. I think they’re going to go to the opposite end of what Microsoft has done. Microsoft said, “we’re going to own the enterprise space.” Apple, I think, is saying, “we’re going to own the consumer space, and we’ll let everybody else play in the middle.”

Alan: Well, if it’s anything like the iPad — they came out with a consumer device that had far-reaching capabilities in enterprise and business.

Terry: I absolutely think so. There’s a difference between what you can use it for and what they position it for initially, right? So, absolutely, there’ll be a lot of envelope-pushing in terms of the categories of use cases that will be built for the device when it comes out. I just think that right now we’re in this transition from all-in-one to tethered, and Apple will be playing in the tethered space. From my perspective, it’s an ecosystem of devices that work together: you’ve got your Apple Watch, your next-generation AirPods, your 5G-capable iPhone, and then you’ve got their glasses. I think that’s the ecosystem we’ll see when they bring the glasses to market. My gut tells me, as a developer who’s been working with Apple since ’84, that they’re going to launch at WWDC, to get the developer community ramped up and building use cases and applications on top of it. I just have a hard time imagining them waiting until 2021 to do it.

Alan: Really? Because my original prediction was that they’d announce in mid-2021 and launch in 2022. So you’ve kind of hyper-accelerated my… AH! Oh, crap! Anybody who’s listening: you better hustle now, because once that hits, the world gets crazy.

Terry: I’m an optimist, and I have to admit that, within my group of colleagues, I’m about as optimistic as it comes when it comes to devices. And I know I’m pushing the envelope a little bit. There’s also some convergence going on with the rollout of 5G, starting now. Granted, it’s early days. If you stand in the right corner of the park in Chicago, you’ll get 5G radio on Verizon. So it’s still very limited. You have to be in the Mall of America in Minneapolis to get 5G — I think it’s in the Verizon store itself, actually. So there’s limited access right now. But as we move forward, and all the telcos get to play, you’ll see it become more prevalent. And I think Apple’s waiting because they’re not in a hurry. They’ve got to wait till there’s a market.

Alan: They did a really good job with the acquisition of Metaio, turning that into ARKit. I often say in my talks that ARKit is like the training wheels of spatial computing: “here’s the device in everybody’s pocket, it’s fully AR-enabled — go out and build something cool. Maybe it’s a game, maybe it’s an experience, maybe it’s a marketing thing. But you have the power of the future technology in your hands. Start programming for it, so when the glasses come, you’re already able to program for them.”

Terry: That’s right. That’s exactly correct. We’re moving beyond the preponderance of measuring apps built on top of ARKit to a real breadth of use cases, with a lot more integration of AI into camera vision processing. We’re starting to get not just spatial mapping, but semantic mapping of the real world. And that’s going to be really exciting, because that really changes the game. That’s why the work we’re doing at the Open AR Cloud Foundation is so important: it gives you standardized resources that you can tap into. No matter how big the companies are, no one company is going to be able to own the creation of all maps. So, there’s a need to share, collaborate, and work together to deploy a layer of services on top of what these maps have been built to contain. And absolutely, on top of that, we need to understand what it is that we’ve mapped — the details, the nuances of things, like the type of material things are made out of: this is concrete, this is plastic, this is wood, this is rubber. That’s going to become more tangibly important to the types of applications that we see built.
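A sketch of what one semantically-annotated surface record might carry beyond raw geometry: the “what it is,” not just the “where.” The fields are hypothetical, not from any published Open AR Cloud spec.

```python
# Sketch of a semantically-annotated surface record: geometry plus labels.
# Field names are hypothetical, not from any published Open AR Cloud spec.
from dataclasses import dataclass

@dataclass
class SemanticSurface:
    mesh_id: str       # link into the geometric (spatial) map
    label: str         # e.g. "wall", "floor", "shelf"
    material: str      # e.g. "concrete", "plastic", "wood", "rubber"
    confidence: float  # vision-model confidence in the labels

wall = SemanticSurface(mesh_id="tile_0231/mesh_17", label="wall",
                       material="concrete", confidence=0.92)
print(wall)
```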

Alan: It’s crazy, the stuff that’s going to come through the camera. I want to ask you one final question: what do you see as the future of VR/AR/MR/XR, as it pertains to business?

Terry: I think that, transformatively, all the software we’re used to operating on a desktop computer will have to get redesigned for spatial computing, where we’re going to think differently. The vision that Meta had of getting rid of the desktop display and turning to virtualized space is going to be part of the world we move to. And so, software will have to be rethought, redesigned, and reengineered for that kind of paradigm.
