>> Welcome back. We're in
the Microsoft booth live at GDC.
I am Adam Tuliper,
I'm a Tech Evangelist and
Software Engineer with Microsoft,
living in sunny Southern California.
And I'm here interviewing Jesse McCulloch.
Jesse has been instrumental
in the Mixed Reality Community.
Jesse, can you tell us a little bit about yourself?
>> Yes. I've been doing
Mixed Reality development ever since HoloLens came out.
I was a Wave one developer,
and jumped right into it.
And I've kind of gone headfirst into it,
you could say, and stirred up a community around it.
It's been super successful
and having a lot of fun with it.
>> So, what wave were you
when you got your first HoloLens?
>> I was Wave one.
>> Wave one.
>> Yes.
>> So, you're hardcore into it, ready to-
>> Actually, that's kind
of a really interesting story.
when I filled out my stuff for the developer edition,
there was like a bunch of questions on that.
Like, "What are you going to build with this?
What kind of cool technology?"
I'm like, "I have no
idea what I'm going to do with this thing."
And that's literally what I wrote down.
Like there were five questions, and I wrote,
"I don't know. I have no idea."
And I'm like, "I'm going to be
the last person on earth to get one of these things."
Then all of a sudden, I get this email that says,
"Hey, you're Wave one."
And I was like, "Really?"
>> Wow. That's pretty cool.
I think I got mine at Wave four or so. That's interesting.
So, you had never done development before then,
or you had never done
Mixed Reality Development before then?
>> I'd never done Mixed Reality Development.
I was working for an enterprise company doing JavaScript.
I've always been kind of a fan of Microsoft Technologies,
but I hadn't really dove into any 3D stuff.
>> Now, if I recall,
were you a bit newer to development technologies?
In other words, how long have you been a developer for?
>> I've been a developer now for eight years.
>> Eight years, okay.
>> Yes.
>> Interesting. So, Mixed Reality came out,
you're like, "I have no idea what this is
all about. Let me get into this."
>> Yes. When I jumped into it,
I had to learn Unity.
I had a pretty good baseline on UWP,
but never in
a professional sense, just kind of it as a hobby.
But, yes. All the 3D stuff was brand new to me.
>> That's really cool.
>> It's a whole different paradigm to think in that way.
>> Is that your Wave one device?
>> It is.
>> Very cool. So, you got
the device and you started up a community.
You started the Slack Channel.
>> Yes.
>> And that's really grown to be,
I think something that you spend a lot of
time helping the community grow.
And so, from Microsoft, thank you for that.
>> Yes.
>> I think that's awesome you've done that.
But developers often wonder:
the device has been on
the market for a while now, it's established.
And so the developer community has grown,
and it's really instrumental to have
folks like you helping that community.
And even at times when we're quieter on future details,
the community is still extremely
strong and stepping in to help.
Can you talk about what goes
on in that developer community and what that's like?
>> Yes. So, it's interesting because you say
that this is kind of an established device,
but it's still such a new market
that nobody's really cornered it yet.
So, if you're out there
and you're interested in it, don't wait.
This market is not cornered at all.
So, in the Slack Community,
we get people who come in
and have never done HoloLens Development,
we get people who come in who have been
doing it since the early days of it.
And we've got all kinds of channels in there.
We've got channels about Unity.
So, if you have questions about doing
Unity development in relationship to HoloLens,
or the Mixed Reality immersive headset.
>> Cool.
>> We've got the help channel where you're like,
"I'm struggling with this, I need some help."
We've got specific channels
about the different conferences.
We've got Build channels for
the Build conference coming up here in May.
So, kind of a little bit everything.
>> That's interesting.
>> And we try and just keep the news as open as we can.
>> So, do you find that
developers help each other on that channel?
>> Oh, yes.
>> Right.
>> It's almost entirely developer led.
>> Okay. "I've got an issue with this.
Can you guys help me figure
out how to do Spatial Mapping
and Spatial Understanding?" And that's phenomenal.
>> Yes, and people will drop code in there,
and say, "Oh, this is how I solved it."
Or, "That's not really possible yet.
We're hoping that it'll come
through in the next build update," and whatnot.
>> And so, when I mentioned an established device,
what we're talking about is that the device
has been out for a bit now, the API is established.
We're growing over time, and
the Mixed Reality Toolkit has grown as well.
Initially it was the HoloToolkit,
now it's the Mixed Reality Toolkit,
and we've been adding features over time.
You have been a contributor
to the Mixed Reality Toolkit as well.
>> Yes. I sit in on the team
that does the weekly ship room calls for that.
So, we look at the issues that are open out there,
what we have time to work on,
kind of guiding new features.
>> Okay.
>> And then trying to pull more people into being
contributors because it really is a community effort.
>> It's kind of interesting because folks will say, "Hey,
you're not running on a Windows phone or something,
aren't you all
Microsoft-only, kind of a Windows ecosystem?"
And it's like, "No. We try to enable developers everywhere."
And so a lot of that has been the
open source movement that we have.
>> Yes.
>> And so, the Mixed Reality Toolkit
being an open source community project,
you would probably know best.
There's a lot of community effort
that goes into maintaining that tool. But yes,
we have folks that work at Microsoft that do that,
but we absolutely rely on
community support and folks like yourself, right?
>> Yes. I'd say at this point,
there are as many community developers
putting time in as there are Microsoft people.
>> Wow.
>> Which is really awesome.
We've got a couple of people who are
really super strong in there,
and they're making a large number of the changes.
But a lot of people will say, "Hey, I built this."
And we're like, "Hey, could we put it in there?"
>> Sounds good. Let's put it in the toolkit, that's neat.
>> Yes.
>> That's neat. So, you have
a talk that you're going to do, is it today?
>> Yes.
>> What time, four o'clock?
>> Four o'clock, yes.
Over here in the Azure Club Theatre.
>> Tell us about that talk.
>> So, I'm going to be talking about the AR Cloud.
So if you're not aware of what the AR Cloud is,
it's this idea that
our physical world needs a digital representation.
And once we have a digital representation in that world,
we can start sharing digital content.
>> Now, when you say, "Our physical world needs a
digital representation," are you talking purely maps?
>> To start with, yes.
>> Okay.
>> So, maps of the area. Maps get kind of interesting
because you've got features in
mapping that are permanent features,
like the columns in this building, never going to change.
And then you've got
features of objects that
are not necessarily so permanent,
tables and some stuff that can move around.
So, that's one of the challenges,
is being able to differentiate
things that are going to stay where
they're at versus things that could get moved
around and change the environment that you're working in.
>> Would things that are going to get
moved around just not be in the maps,
not included in this representation?
>> Right, they're not in the maps, or we
need a way of identifying them.
>> Okay.
>> Right now,
it's really hard, just because the
technology's not there yet as far as the hardware,
to be able to look at something and say,
"I recognize that this is probably a table because
it's got a flat surface at a certain height,
and it's got legs that go down to the ground.
So, I can reason that this is a table,
and at some point it could probably move."
>> Right.
>> But we'll get there.
>> We'll get there. Right. We've seen the advent of
modern GPUs really enable AI to take off,
and we have cognitive services.
You can look at a picture of a table,
and know it's a table in that picture,
but you might not know what that is in space, right?
>> Right.
>> And on the flip side, we have
the HoloLens which does its
Spatial Understanding and Mapping.
And it doesn't know what a table is
unless you define a surface, right?
A half meter off the ground, and two square meters.
>> Right.
>> It doesn't know it's a table, it knows there's
a mesh and a surface there.
>> Right. And we can reason that it's something sittable,
or what we call sittable.
Which means it's probably a chair,
or a couch, or some surface
that somebody could sit on.
>> So by sittable, you mean,
I've defined something to the API,
I've said it's a half meter off the ground,
it has a flat surface area, and we'll call that sittable?
>> Right.
>> And therefore a 3D character could come
walking up and sit down on it, or something like that?
>> Yes, exactly.
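The heuristic being described here, labeling a flat surface "sittable" or "table" purely from its height off the ground and its area, can be sketched in a few lines. This is an illustrative Python sketch with invented thresholds, not the actual HoloLens Spatial Understanding API, which reasons over the scanned mesh itself:

```python
# Hypothetical sketch of classifying a detected horizontal surface from
# simple geometry. The height/area thresholds are invented for illustration.

def classify_surface(height_m: float, area_m2: float) -> str:
    """Guess what a flat horizontal surface might be from its geometry."""
    if 0.35 <= height_m <= 0.6 and area_m2 >= 0.2:
        return "sittable"      # chair/couch height with room to sit
    if 0.6 < height_m <= 1.1 and area_m2 >= 0.5:
        return "table"         # roughly table height with a usable top
    if height_m < 0.05:
        return "floor"
    return "unknown"

print(classify_surface(0.5, 0.3))   # a half-meter-high flat patch: "sittable"
print(classify_surface(0.75, 2.0))  # table height, two square meters: "table"
```

A real system would derive the height and area from the spatial mesh rather than receive them directly, but the reasoning step is the same.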
>> Those are kind of the two ends of the spectrum.
Computer vision can understand it,
the HoloLens can map it,
and we kind of need to meet in the middle there.
>> Yes.
>> And so, once that happens,
when you talk about the AR Cloud, what does this mean?
Like what can we do with that then?
>> So, once we can do that,
developers could put out,
say, an application that puts digital content into
the real world, either through HoloLens
or through an ARKit phone or whatnot.
>> Okay.
>> Once it has a shared understanding of the world,
it can place that digital content in the world,
and know where it is in
relation to the physical world.
And now you could have somebody with
a different device, say a HoloLens,
or ARKit, or whatever come in,
and look at it through their device.
And you guys can look at the same content
in the same world space.
And the reason this is important is because right
now AR is a very lonely experience.
If I'm wearing my HoloLens and my girlfriend comes home,
I could be wandering around
doing this all over the house,
and she has no idea what
I'm doing because she can't see it,
because it's only a personal experience for me.
And there are shared experiences where you can have
multiple HoloLenses looking at the same stuff,
but we're not really at a point where that's common yet.
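The shared-experience idea above boils down to devices agreeing on one coordinate frame. A minimal sketch, assuming each device knows where its own origin sits in a shared frame (the function names and the flat tuple math are invented for illustration; real platforms handle this through anchor APIs):

```python
# Minimal sketch of a shared coordinate frame. Each device tracks positions
# relative to its own origin; if every device knows where that origin sits
# in a common "world" frame, content placed by one device can be resolved
# by another.

def to_shared(local_pos, origin_in_shared):
    """Convert a device-local position into the shared frame."""
    return tuple(l + o for l, o in zip(local_pos, origin_in_shared))

def to_local(shared_pos, origin_in_shared):
    """Convert a shared-frame position into a device's local frame."""
    return tuple(s - o for s, o in zip(shared_pos, origin_in_shared))

# A HoloLens (origin at (0, 0, 0) in the shared frame) places a hologram.
hologram_shared = to_shared((1.0, 0.0, 2.0), (0.0, 0.0, 0.0))

# An ARKit phone whose origin sits at (3, 0, 1) in the shared frame
# resolves the same hologram in its own coordinates.
print(to_local(hologram_shared, (3.0, 0.0, 1.0)))   # prints (-2.0, 0.0, 1.0)
```

Rotation between frames is omitted here to keep the idea visible; a full solution would use a pose (position plus orientation) per device.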
>> It's not generalized, right?
We have third parties that make sharing services.
We have open sourced collaboration features.
But you're right, it
has to be baked into the application.
>> Yes.
>> So, do you see this becoming like
a standard API where folks across
any AR type experience can just pull in
this information and then it gets rendered on the device,
and they have some understanding of it?
>> Yes. The company I work for is called PracticalVR,
and one of the things we're working on
is crowdsourcing the data gathering.
So, having people out there who are
doing these different applications
implement our mapping experience.
So what they do is, they
put our mapping experience at the front end,
and when somebody maps a space,
it gets uploaded to our cloud service.
The idea being
that if somebody else comes in later
and pulls up an app that uses our mapping service,
we can say, "We recognize this space."
Rather than making you map it,
we'll just download it to your device.
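The map-once, download-later flow Jesse describes could look something like this. The class and method names are invented for illustration and are not the PracticalVR service's actual API:

```python
# A minimal sketch of a crowd-sourced mapping flow: the first user to scan
# a space uploads the map; later users are handed the stored map instead of
# re-scanning. Everything here is an assumed, simplified stand-in.

class MapCloud:
    def __init__(self):
        self._maps = {}           # space_id -> stored map data

    def upload(self, space_id: str, map_data: bytes) -> None:
        """First visitor scans the space and uploads the result."""
        self._maps[space_id] = map_data

    def lookup(self, space_id: str):
        """Return the stored map if the space is recognized, else None."""
        return self._maps.get(space_id)

cloud = MapCloud()
cloud.upload("office-lobby", b"<point-cloud-bytes>")

# A later visitor is recognized and skips scanning entirely.
cached = cloud.lookup("office-lobby")
print("download" if cached else "scan")   # prints "download"
```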
>> So then you build out this database of
understanding of all these spaces
inside and outside as well, right?
>> Yes.
>> And somebody else comes to the space.
Indoors, how would this work?
How would somebody know that
this space has already been mapped, for example?
>> So what they do is,
they launch whatever application it is they're going to use.
And if that application implements our mapping service,
then when they come in and they've started it up,
our mapping service starts
at the front end of their app.
And once we get enough reference points,
we can say, "Oh,
I recognize this based on
the Wi-Fi signature that you're connected to."
And then some initial point cloud data
that we start gathering.
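One way to sketch that recognition step is a similarity score over the set of visible Wi-Fi access points. Jaccard similarity is an assumed approach here, the interview doesn't specify the actual matching method:

```python
# Illustrative sketch: guess which stored space the user is in from the
# Wi-Fi "signature" (which access points are visible). A real service
# would combine this with early point-cloud data, as described above.

def wifi_similarity(seen: set, stored: set) -> float:
    """Jaccard similarity between two sets of visible access points."""
    if not seen and not stored:
        return 0.0
    return len(seen & stored) / len(seen | stored)

def recognize_space(seen_aps: set, known_spaces: dict, threshold: float = 0.5):
    """Return the best-matching space id, or None if nothing is close."""
    best_id, best_score = None, 0.0
    for space_id, aps in known_spaces.items():
        score = wifi_similarity(seen_aps, aps)
        if score > best_score:
            best_id, best_score = space_id, score
    return best_id if best_score >= threshold else None

known = {
    "conference-a": {"ap1", "ap2", "ap3"},
    "lobby":        {"ap7", "ap8"},
}
print(recognize_space({"ap1", "ap2", "ap4"}, known))  # prints "conference-a"
```

The threshold is where the identical-conference-rooms problem discussed next shows up: two rooms on the same floor may share a signature, so Wi-Fi alone can't always disambiguate.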
>> I got you. So there's
different methods to understand the space.
I look around the hall that's here,
and the chairs and the features in here.
So you look at the Wi-Fi fingerprint of the area,
and Spatial Mapping features.
There's walls and certain areas,
so the algorithms detect that. That's very interesting.
Do you suspect that places are unique enough to do that,
or will there be some confusion initially?
>> I think we're going to have a mixture of both.
There are some places that you're going to go
into that are absolutely
unique, and there are some places,
like when you go into a conference room,
where every conference room in that office building is exactly
the same and it's going to be really
hard to differentiate them.
>> Folks say, "Hey, so HoloLens can do Spatial Mapping?
What happens if I go halfway round
the world and I
put it in a room that looks just like the room I came from?"
And it's like, well, it's going to think you're in
the same room halfway round the world.
That is, I guess, a common thing,
and we might not have
the information to be able to get around that,
especially if there's no Wi-Fi signals
around, or some other difficult
issues to solve on that front.
How do you think we'll deal
with different device capabilities?
For example, I'm on a HoloLens and I'm rendering
something at whatever poly level
and maybe a hundred thousand polygons.
Then, whereas some other device that's maybe a little
bit older doesn't have that same rendering capability,
how do you think that would share across?
>> That's going to be
a matter of developers building their experience
with totally different levels of
detail similar to what you can do now.
You can build a game that works on
PC and Xbox all the way down to a mobile phone.
You just have to make sure that it's set up right so
that it looks at the GPU and the CPU and says, "Hey-"
>> This is what I'm looking at.
>> This is what I'm capable of.
This is what I'm going to sacrifice,
basically, for a better experience.
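That capability check can be sketched as a simple tier lookup. The scores and polygon budgets here are invented for illustration; a real engine exposes richer device queries:

```python
# Illustrative sketch of level-of-detail selection: inspect a coarse device
# capability score and pick a rendering budget, the way a cross-platform
# game scales from PC/Xbox down to a phone.

def pick_detail_level(gpu_score: int) -> dict:
    """Map a rough GPU benchmark score to a rendering budget."""
    if gpu_score >= 80:
        return {"tier": "high",   "max_polygons": 100_000, "shadows": True}
    if gpu_score >= 40:
        return {"tier": "medium", "max_polygons": 30_000,  "shadows": True}
    return {"tier": "low",        "max_polygons": 5_000,   "shadows": False}

# A capable headset gets the full model; an older phone gets a light one.
print(pick_detail_level(90)["tier"])   # prints "high"
print(pick_detail_level(20)["tier"])   # prints "low"
```

Both devices can still share the same anchored content; only the asset each one renders against that anchor differs.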
>> For the platform- The platform
that you guys are working on then is really going
to be understanding with
the spatial mapping around us, understanding space.
Me, as a developer, I would go in and say,
'Okay, I understand where I am.
I understand somebody else has
instantiated a gremlin here.'
My other application is going to
query that API and say, 'Hey, there's a gremlin.'
It's up to the application to supply that 3D model.
You already know about it. If somebody
else is running a different application
in the same space,
there is a separation there, right?
>> Interesting. What are some interesting use cases
that you foresee this type of technology being used in?
>> The further you get into the future,
the more interesting the use cases get, right?
The technology gets better,
and gets smaller, more lightweight, with better battery life.
Eventually, this is going to
be a pair of glasses that just,
you wear all the time, right?
Once we have a reasonable understanding of
the world and the objects in the world,
you can actually start to infer actions.
Maybe the city of San Francisco is on
a kick for cleaning up their parks,
and they can infer that
you reached down and grabbed something off the ground,
and you put it in a receptacle
that they know is a trash can.
Now, they can offer some sort of reward.
>> Fascinating.
>> To you.
>> Interesting.
>> Once we get past maps and objects,
we can start inferring actions.
>> That's fascinating. We can use AI in
the future to understand user actions, incentivize them,
reward them with this
opt-in behavior. That's fascinating.
I really like the idea that we
can share experiences right now or that
we'll be able to have a better way
to share experiences between
mobile devices and the HoloLens.
Back to the HoloLens here, because
you've been so active in the community,
do you have a favorite kind of application,
application of a device that you've seen so far?
>> It's interesting,
because I've been playing with it for so
long and I use the word
'play' because sometimes that's what it feels like.
It's such amazing technology,
but I also forget how amazing it is,
because I use it every day.
One of my favorite things to do is put
it on people who've never used it before.
All the time, I'm in coffee shops, working or whatnot,
and people- it used to be, 'Hey, what's that?'
Now it's, 'Hey,
is that a HoloLens?' We're getting further.
>> People recognize the device now. Sure.
>> People recognize it for what it is.
I love to put it on people and just show
them, play some holograms.
Whenever I go into a Starbucks and I start working,
the first thing I do is I fire it up,
and I go into the holograms app.
I start putting holograms around,
because eventually somebody's going
to come up and want to see it.
>> It's all set there.
>> It's already set up.
>> You've set the trap.
>> Exactly. HoloTour is
a great experience to draw people into.
If you haven't done it,
it basically is a 360 video that you're in,
of either Rome or Machu Picchu.
It's stunning, what they've done with it.
Fragments, if you've got a good long time to
go play it, it's such a fun game.
I remember the first time I put it on.
If you're not familiar with it,
it's kind of a detective game where you're
the detective and you have
virtual counterparts that kind
of beam into your living space.
At one point,
one of the virtual characters- I have a bar in
my house- put
their hands up, hiked themselves
up, and sat on the bar.
When they did that, I was like,
this is the most amazing technology in the world.
>> My mind has been blown.
>> Right.
>> I tell folks,
Fragments is hands down,
my favorite experience, because it really integrates.
It overlays a new floor.
It's raining out, you see rats running around,
and these characters integrate with your environment.
We were talking before about,
how do you understand space?
Well, we can define a sittable surface to
the HoloLens and understand
that we can take a 3D character and sit it down there.
The fact that that is here now,
that we can have 3D characters that
integrate with our real world, is phenomenal.
I mean, it changes how we can
interact with applications entirely.
For me, going forward, there are bots.
We talk about conversation as a platform as
this new way for apps to communicate
with users, but imagine when we have
these 3D assistants that
can be shared even across spaces, too,
that are context-aware and can
hang out in our living rooms and talk.
I think it's going to be a whole new frontier
on there, and it's super exciting.
>> One of my favorite videos that Microsoft put out
is this experience where
a woman is trying to design a shoe store.
She's using [inaudible] to do it, and she's
got a little bot that hangs out with her.
She brings her other co-workers in from remote locations,
then they all collaborate on designing the space.
It's interesting to watch, because all
that technology is there in its own individual form.
We just haven't tied it all together yet and once we do,
it's going to be amazing.
>> We were talking about that excitement
when you put a device on
somebody's head for the first time.
It is, hands down, one of my favorite things to do.
A couple of weeks ago, I had the opportunity
to put a device on
a 98-year-old World War II vet
who was one of the remaining Tuskegee Airmen.
We were in an aviation museum
and he put a device on his head.
He starts walking around looking at
holograms like, 'This is amazing.
This just made my year, sir.'
>> Right.
>> It's phenomenal.
It is one of the best things about it.
Well, Jesse, what kind of
timeframe do you think that we're looking
at going forward in this AR Cloud?
>> PracticalVR is getting started with it now.
We're about ready to release
our SDK a little bit later this year.
The sooner we start building it,
the sooner it will be available in a wider form.
It's one of those technologies you can look at.
You can say, 'This is 10 years down
the road' or you can look at it and say,
'We're going to start on it right now and
just do what we can.'
Iterate on it and make it better.
That's what we've decided we're going to do.
>> How do folks find more information about the SDK?
>> Go to our website, experience.practicalvr.com.
You can sign up for the mailing list.
We've got a little bit about the company there.
We also do analytics for HoloLens,
so you can find out about that.
Follow me on Twitter, @jbmcculloch, M-C-C-U-L-L-O-C-H.
I'm more than happy to
answer questions and get people involved.
>> Very cool. You've been incredibly helpful in
the HoloLens developer community.
How do people find that?
>> The website's really long,
message me on Twitter and I'll send it to you.
>> Okay.
>> Very good. Well, thank you very much.
It's been a pleasure having you today
and I'm sure we'll talk more mixed reality.
>> Of course, thank you.
>> Thank you very much.