Everybody, thanks for joining me
today for my talk about the AR Cloud,
and why the future of MR is
not a single device, it's all of them.
I'm Jesse McCulloch. I've been doing
HoloLens development for about three years now,
pretty much since the beginning.
I got here by watching
Alex Kipman get up on stage one
day and unveil the HoloLens,
and my mind was blown.
I couldn't believe it. I watched
the whole presentation thinking
there's no way this thing's real.
Then I got online, and a bunch
of the tech bloggers that I follow
had gotten to go down and try it,
and they convinced me that it was real, and I was in.
I set up a bunch of alerts on my news feed,
started reading anything and everything about
mixed reality that was coming out
on Windows Holographic at that time.
And then they put out a call for
developers who were interested in Dev Kits,
so I went to fill out the application.
There were about five questions on there,
all with the gist of, what are
some things you're going to build with
the HoloLens? And I had no idea.
This was all new to me, I had never done
any 3-D or spatial stuff.
So, I pretty much said "I don't know"
to all five questions,
and pretty much thought
there's no way I'm ever getting one of these devices.
Then, lo and behold, in February,
I get an email that says,
"Congratulations, you're a Wave-1 developer,
we're going to get you on the list right away."
And I went "Holy crap,
now I have to find $3,000."
I decided that I was going to try
and come up with an idea for
building apps for the HoloLens,
racked my brain, and thought,
I don't have any good ideas for building on the HoloLens,
so I'm not going to get one.
Talked to a couple of people and came to the realization,
I didn't have to have cool ideas,
I just had to learn how to build for it,
and then I could build
other people's cool ideas and still make a job out of it.
So, I bought a HoloLens and started
doing development just in my spare time,
and realized that there was not
a community out there for
HoloLens development at the time,
so I went in and created one.
I had been a part of a few other Slack Communities
and decided that that's where I'd start.
So I created a Slack Community,
got a cool little auto register Website and
started throwing it out
there on Twitter, and all of a sudden,
Alex Kipman found it,
retweeted me and I went from having about 10 members of
my group to about 200 in
a matter of a week, which is awesome.
And now, we are up to a little
over 1,700 members as of this week.
So, very active community,
all developer driven,
helping each other out and it's really awesome.
About February of this last year,
I decided that I was going to
become a freelance HoloLens developer.
So, I went part time at my job,
and then in June I went full time doing that.
And then about October,
I got approached by my current boss,
his name is Michael Reed with Practical VR,
and we are building the AR Cloud.
So that's kind of what I'm up to now,
we're going to talk about that in a little bit.
I also host and put on a Mixed Reality Developer Summit,
we had our first one in February.
I had about 30 developers out
to Microsoft's Redmond campus,
and it was super successful,
and we're doing another one in August,
so if anybody's interested in that,
definitely reach out to me,
we've got tickets available.
I'm also doing some planning with
the Mixed Reality team at Microsoft for Build 2018,
putting together some really good Mixed Reality content.
So, more on that to come as well.
You can find me via
any of my social media or Email there,
I'm pretty prevalent on all of them, I'm really active.
Yeah. The AR Cloud
or MR Cloud or XR Cloud or
whichever acronym we're using today,
it's going to change, there's going to be convergence.
But there are a lot of people
who haven't really put any thought into what
the AR Cloud is or even know what the AR Cloud is.
And so, I want to talk about that a little bit.
Right now, AR is a very interesting experience.
There are some really good things about it;
it's amazing technology. The HoloLens is
the top-tier device in AR right now,
and it puts digital content in
your real world so you can interact with it.
It knows about all the different stuff in
your world and it's a lot of
fun, super amazing technology.
Some of the good things about it,
there are no experts in it yet.
We've only been doing it a couple of years,
there's only a few thousand
of us that are super actively doing anything with it,
and so it is still
a great time to get into this
if you haven't yet and you're interested.
There is nobody in this corner of the market,
there are still a ton of ideas that haven't been
explored, and there's a lot of room for new people in it.
So, that's some of the good things.
Some of the bad things about AR at
the moment: varying hardware levels.
Again, the HoloLens is top tier,
but the developer device is $3,000,
which keeps a lot of people from doing it.
And then you can go all the way down to phone AR,
where you're holding up your cell phone.
ARKit, ARCore, super accessible.
Everybody's got a phone these days,
a lot of them are capable of doing it,
but a lot of the capabilities that are
built into the HoloLens here aren't available.
And so as a developer, you have to think about that,
you have to think about what am I building it for,
what can it handle graphics-wise and
processing-wise, and how do I
build a great app experience around it?
It's definitely not mainstream.
Again there's a few thousand people
maybe doing HoloLens development,
so it's hard to find other people to bounce ideas off of.
If you're an enterprise .NET developer,
you have thousands upon thousands upon thousands
of resources out there.
If you're a HoloLens developer,
you have dozens and dozens of resources out there.
That's something that we're actively working
on changing as more and more people get
involved, especially with the advent
of the immersive headsets, which let you get into it a lot cheaper;
development is roughly the same for those and
the HoloLens, with a few considerations.
The bad, same as the good: there are no experts.
So, you can come across
a problem and be
the only one that's come across that problem,
and there are not very many people
who can help you out with it.
So, that's kind of a double edged sword there,
there's not a lot of experts that you're competing
with and there's not a lot of
experts that can help you out.
We get to the ugly, the ugly
about AR is it's a very lonely experience.
I put on my HoloLens at
home and I'm going through a game or an experience,
and my girlfriend walks in,
all she sees is me walking around going like this.
She has no idea what I'm doing,
so there's no way for her to take part in it.
There are very few experiences that
let the two of you take part together,
and then you've got a $6,000
investment instead of a $3,000 investment.
So, it's not very
easy for a lot of people to get involved.
The ugly is what we want to change.
We want AR to be an
experience that anybody can get a hold of,
anybody can take part in,
and so what we have to do is bridge
the gap between devices and experiences.
So, how do we get there?
What are the next steps?
The next step that my company, Practical VR, is taking
is that we want to spatially map the world.
We want people to help us do it.
So, what we're doing is we're building an
experience that developers can put in front of their app.
Now, they don't have to build a mapping experience.
You let us handle that.
So, you put it in front of your app,
you put your app together and publish it.
Now, when the user puts it on,
they go through our mapping experience.
We guide them around the room,
make sure they're getting a good clean map.
And then after they've got a good clean map,
it gets uploaded to our server.
Now, if there are 12 developers who have apps that all use
our mapping experience and somebody has already mapped
the space that you're in with one of those other apps,
we can detect that and we can say, "Hey.
You don't have to map this room.
We've already got it. Here you go."
And that's kind of
the incentive we have for people to map for
us using our service:
as more and more of these maps get into our service,
fewer and fewer people will have to do mapping.
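To make that flow a little more concrete, here's a minimal sketch of what the check-for-an-existing-map-then-upload loop could look like on the developer side. Everything in it is an assumption for illustration, including the service URL, the endpoints, and the get_or_create_map name; it's not Practical VR's actual API.

```python
import requests

MAPPING_SERVICE = "https://example-mapping-service.invalid/api"  # placeholder URL

def get_or_create_map(space_fingerprint: str, run_guided_mapping) -> dict:
    """Return an existing shared map for this space, or create and upload one.

    space_fingerprint: an identifier for the physical space the user is in
        (e.g. derived from coarse location or visual features).
    run_guided_mapping: callback that runs the guided mapping experience on
        the device and returns the raw map data it produced.
    """
    # 1. Ask the service whether any app has already mapped this space.
    resp = requests.get(f"{MAPPING_SERVICE}/maps", params={"space": space_fingerprint})
    if resp.ok and resp.json().get("maps"):
        # Someone already mapped this room with another app: reuse their map.
        return resp.json()["maps"][0]

    # 2. Otherwise, run the guided mapping experience and upload the result.
    raw_map = run_guided_mapping()
    upload = requests.post(f"{MAPPING_SERVICE}/maps",
                           json={"space": space_fingerprint, "data": raw_map})
    upload.raise_for_status()
    return upload.json()
```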
Why are maps helpful? Maps are helpful
because they allow digital objects
to interact with your physical space.
And they also help us make AR
more accessible to other devices.
So, if I've mapped with the HoloLens and have
a common knowledge of the world space, I
can give that to an ARKit phone, and it can
understand the world the same way the HoloLens does.
Now, I can say, "Put a hologram on that bench."
And the ARKit phone can say,
"I know where that bench is."
So, I can see that hologram there too.
Now, it's less of a lonely world for AR.
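Here's a rough sketch of that cross-device idea, assuming both devices have localized against the same shared map. The DevicePose class, the translation-only transform, and the bench coordinates are all illustrative simplifications; a real pose transform would include rotation as well.

```python
from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class DevicePose:
    """Where this device thinks the shared map's origin sits, in its own local frame."""
    map_origin_in_local: Vec3

    def map_to_local(self, p: Vec3) -> Vec3:
        # Translation-only for simplicity; a real pose would also apply rotation.
        ox, oy, oz = self.map_origin_in_local
        return (p[0] + ox, p[1] + oy, p[2] + oz)

# The HoloLens pins a hologram to the bench, expressed in shared-map coordinates.
bench_anchor: Vec3 = (2.0, 0.0, 3.5)

# An ARKit phone that has localized against the same map resolves the same spot.
phone_pose = DevicePose(map_origin_in_local=(-1.0, 0.0, 0.5))
print(phone_pose.map_to_local(bench_anchor))  # hologram position in the phone's frame
```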
Once we get past maps,
we can start figuring
out what objects are, and this is off in the future.
We're not quite there with the technology yet,
but with machine learning and better sensors,
we'll be able to do things like say, "Oh,
I understand that this is a bench,
or I understand that this is a table."
The nice thing about that is that,
when you look at a space,
you can look at it and say, "Okay.
Some of these structures are permanent structures,
and some of these things are
temporary or can be moved around."
A table can be easily moved around,
a wall, unless you're doing construction,
it's not so easily moved.
Once we can start categorizing that kind of stuff,
it gets a lot more
interesting in terms of what you can do, because you can
say, place something on
any table that you can find in the space,
or on any bench,
have a character in my game sit down on
it, and not have to
actually pre-map and figure that stuff out.
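As a sketch of what that could look like once surfaces carry labels, an app might simply query for any surface of a given kind instead of hard-coding positions. The LabeledSurface structure and find_placement helper below are hypothetical, not an existing API.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class LabeledSurface:
    label: str                           # e.g. "table", "bench", "wall"
    movable: bool                        # tables can move; walls generally can't
    center: Tuple[float, float, float]   # position in the shared map's frame

def find_placement(surfaces: List[LabeledSurface], wanted: str) -> Optional[Tuple[float, float, float]]:
    """Return a spot to place content: the first surface with the wanted label."""
    for s in surfaces:
        if s.label == wanted:
            return s.center
    return None

room = [
    LabeledSurface("wall", movable=False, center=(0.0, 1.5, 5.0)),
    LabeledSurface("table", movable=True, center=(1.2, 0.8, 2.0)),
]
print(find_placement(room, "table"))  # seat the game character here, if a table exists
```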
Now, beyond maps and
objects, we start getting into actions.
This is way down the road,
but once we have an understanding of
our physical world in
a digital sense and the objects in it,
we can start inferring actions.
And you get into some interesting concepts
of being able to reward people for
doing stuff based on
what they've got going on in their physical world.
So, say the city of San Francisco decides that they're
going to make an initiative to help clean up their parks.
And at this point, instead of a big HoloLens,
everybody's got AR glasses
that they're wearing around all the time.
Now that we know the physical world,
we know where you're at, you're in the park.
And we know that there's objects like a trash can
and we can see that you reach down pick
something up off the ground and put it in the trash can.
The city of San Francisco can go, "Hey,
we're going to reward you with
a quarter for every piece of trash you pick up."
Now, you've got incentive to
start picking up trash as you walk by it,
because it doesn't take
any real extra effort,
and you can start getting rewarded for
your actions based on the technology
knowing what you're interacting with in the world.
So, that's kind of where we're
going with this technology.
Again, it's a lot of fun; the use cases really
start to stand out for themselves
once you start thinking about this.
A store could go in and digitally
map their space and allow you
to put your shopping list into their app.
When you get to the store, you fire it up,
and they show you a trail on the ground,
the most efficient way to go get all your groceries.
And it will be custom for everybody,
based on their shopping list and what the store knows
about their item locations and everything. Or take a Starbucks app.
Go into Starbucks, they could display
their digital content in there and it doesn't
matter if you're wearing a HoloLens or you've got
an ARKit phone or any other device.
The same digital objects will
show up in the same physical space regardless.
So, the possibilities are really,
really interesting as you start thinking about this.
So, with our service,
once you fire it
up and you start mapping,
we are creating a point cloud through
that mapping and storing that map in our cloud.
And this is kind of what those look like.
So, those first two
are a coffeehouse in Santa Clara called Hannah House.
These two are my hotel room
that I mapped last night, actually.
And then this was the airport
in Portland when I was waiting for my plane.
We can actually see the windows looking out
over the tarmac and everybody waiting
because our flight was delayed by two hours.
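For a rough idea of what storing a map could involve, here's a toy sketch that packs scanned points into a compressed payload ready to upload. The point format and the pack_point_cloud helper are assumptions for illustration, not our actual wire format.

```python
import gzip
import json

def pack_point_cloud(points):
    """points: iterable of (x, y, z) samples gathered while the user maps a space.
    Returns a compressed payload small enough to upload from the device."""
    payload = json.dumps({"points": [list(p) for p in points]}).encode("utf-8")
    return gzip.compress(payload)

# Example: a handful of samples from a scan (a real scan has many thousands).
scan = [(0.0, 0.0, 1.2), (0.1, 0.0, 1.3), (0.1, 0.1, 1.3)]
blob = pack_point_cloud(scan)
print(f"{len(scan)} points packed into {len(blob)} bytes")
```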
So, we have a mapping experience. Again, that's guided.
So, when people put it on,
little tokens show up and we say,
"Hey, look at this token."
It disappears as you collect it.
And then another one spawns, and we have
a little trail that leads you around. It's pretty awesome.
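A guided loop like that might look roughly like the sketch below: spawn a token, wait for the user to collect it, then spawn the next one until the room is covered. The guided_mapping function and its parameters are purely illustrative assumptions.

```python
import random

def guided_mapping(room_bounds, tokens_needed=10, collect=None):
    """room_bounds: ((min_x, max_x), (min_y, max_y), (min_z, max_z)).
    collect: callback that blocks until the user has gazed at / collected the
    token; here we simply assume every token gets collected."""
    visited = []
    for _ in range(tokens_needed):
        # Spawn the next token at a random spot within the room bounds.
        token = tuple(random.uniform(lo, hi) for lo, hi in room_bounds)
        if collect:
            collect(token)        # wait for the user to look at / reach it
        visited.append(token)     # each token pulls the user's gaze to a new spot
    return visited

path = guided_mapping(((-2.0, 2.0), (0.0, 2.0), (-2.0, 2.0)))
print(f"user was led to {len(path)} spots while the device scanned the room")
```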
We're going to be launching here
in the next couple of weeks
a contest for a mapping experience.
So, if you've got a HoloLens and you
want to be involved, make sure and reach out to me.
We're going to be
rewarding people based on the number of maps.
We have a leaderboard and everything.
It's going to be a lot of fun.
And as we collect more and more of this map data,
we can start analyzing it
and running some machine learning stuff on it,
and it's going to be a bright future.
I was talking to the guys
over at Channel 9 earlier, and they were asking
about what I see as the future of
this stuff and how long it's going to take.
You can look at it and say,
"This is a 10-year technology,
it's going to be 10 years
out," and you can wait for it to happen.
We decided to look at it and say
we're going to start building it today,
and hopefully, in two or three years we
get to where most people think it will be in 10 years.
And in five years,
we'll be the leader in the market.
So, anybody have any questions?
So, right now, we are limiting it to
HoloLens only because it's
got the best sensors right now.
As other devices come to market,
we'll be working on those.
And then ARKit and ARCore are in our plans as well,
once we're able to figure out how to turn their SLAM data into
a better point cloud. Great. Thank you, guys.