Tuesday, September 5, 2017

Watching daily Sep 5 2017

We are here for

the afternoon session of the Tiny Machine Learning Workshop.

Our first speaker is Manik Varma from Microsoft Research.

He'll talk about The Edge of Machine Learning.

>> Cheers, thank you Venkatesh.

Right, so I'm Manik Varma from Microsoft Research India.

And I'll be talking about the edge of machine learning.

Where I'll be focusing on developing

machine learning models that can be trained in the cloud.

But can then make predictions locally on tiny IoT edge and

endpoint devices.

Which might have as little as two kilobytes of RAM.

So the models and algorithms that I will be presenting today

have been developed by Prateek, Praneeth, Naga and myself.

Though in reality all the work was done by our excellent

undergraduate research fellows Ashish, Chirag, Yeshwanth and

Bhargavi.

We also have a bunch of computer architecture systems and

programming languages researchers on our team.

Including Harsha, Vivek, Rahul and Raghavendra.

And they have been helping us with very

efficient implementations

of our algorithms on these tiny IoT devices.

Along with Harsha, our developer Suresh has been

writing production-level code for an open source release.

So you can download our algorithms and

play with our code as part of the edge machine learning

library that we have released on GitHub today.

And you can hopefully also find our algorithms soon

in the Microsoft embedded learning library.

Which is a specialized machine learning compiler that we're

creating that can take these machine learning algorithms in

our library.

And compile them onto these heterogeneous IoT devices.

So that's also released on GitHub and hopefully you'll be

able to find our algorithms soon over there, okay?

So before I get started, I thought that some of you might

not be familiar with the Internet of Things.

So let me start by giving some context and

introduce these IoT devices that I've been talking about.

And just specify the hardware that we will be targeting.

So here is an ARM Cortex M0.

It has only 2 KB RAM, 32 KB of read only Flash,

and no support in hardware for floating point operations.

And because of this, you can see it's really, really tiny.

In fact, it's smaller than a grain of rice.

So that's a golf ball in the back, just for

a size comparison.

Here's an Arduino Uno board.

It's built around an 8 bit ATmega328P microcontroller

that's running at 16 MHz.

And it also has just 2 KB of RAM, 32 KB of read only Flash.

And again, no support in hardware for

floating point operations.

So the way that these IoT devices work is that once you've

trained your machine learning model in the cloud.

You take the trained model, the prediction code,

the feature extraction code and any associated data and

parameters that you might have.

And then you burn them onto the flash of the microcontroller

along with the boot loader, the device driver, the libraries and

any other application code that you'll have.

And then you deploy the device in the field,

at which point the flash becomes read-only and the only

writable memory that you'll have access to is the 2 KB of RAM.

Now, billions of these devices have already been

deployed in the world today.

And there's an Internet of Things wave that is all set to

revolutionize our society.

However, it's still early days.

And so nobody's really quite sure of which application will

take off and what will be the next big thing.

So people are trying out lots of different applications in

connected cars, industrial IoT, predictive maintenance.

Fitbits, fitness wearables, smart cities, smart housing,

smart appliances, and so on.

But the one thing that is common to all these applications

is the assumption that the IoT device is dumb.

It just senses its environment and

then transmits the sensor readings to the Cloud.

Where all the machine learning algorithms reside and

where all the decision making happens.

However, we feel that there are a number of critical scenarios

where it is not possible to transmit the data to the Cloud.

And where the device itself needs to be made intelligent in

order for decision making to happen locally.

So these scenarios typically arise due to concerns around

latency, bandwidth, privacy and security and energy.

And range from the low latency brain implants for

predicting seizures, so that patients can call for

help as quickly as possible.

So for example, if they're driving a car they could pull

over, or if they're standing up they could sit down, etc.

To precision agriculture on disconnected farms,

to privacy preserving smart glasses.

And to the incredibly energy efficient, but really hilarious

smart fork that starts shouting at you if you eat too much or

you eat too quickly.

>> [LAUGH] >> So

a bunch of us at MSR Redmond and

MSR India have come together to address some of these concerns.

And our objective is to build a library of machine learning

algorithms that can run on these tiny IoT devices.

So let me be absolutely clear about this scenario.

Our models are going to be trained in the Cloud.

Or on your laptop, where we assume there are infinite

resources available for training.

But then once the model has been trained, then it has to be

squeezed onto the flash of this tiny microcontroller.

Where it's then expected to make predictions in milliseconds,

fit in a few kilobytes and

ensure that batteries last forever and ever.

So the way I have structured this talk is that I'll start by

discussing or presenting Bonsai.

Which is a completely new tree model which we've designed for

these low memory regimes.

And then after that I'll briefly present ProtoNN,

which is a new compressed k-nearest neighbor classifier that was

also presented at ICML a couple of days ago.

And then finally, I'll conclude by presenting results and

comparing Bonsai and ProtoNN to the state of the art.

And showing what happens when we deploy them on an Arduino Uno.

Okay, so let's start with trees.

Now trees are very beautiful, but

they have a number of other advantages which make

them an essential component of any machine learning library.

So they are general and can be used for classification,

regression, ranking, anomaly detection, etc.

And since we expect to tackle a diverse range of machine

learning applications in the IoT space.

It would be good to have them be part of our tool kit as well,

in the machine learning library.

Even more importantly, balanced trees have the great property

that their predictions can be made in time that is logarithmic

in the total number of training points.

Which is a rough measure of the complexity of the learning task.

They're therefore ubiquitous in many time-critical applications,

including Bing, the Kinect, the HoloLens, etc.

And this also makes them ideally suited to our IoT scenario

because they can make predictions very efficiently.

Both in terms of time and in terms of energy.

However, we can't take a standard tree algorithm,

such as a decision tree, and simply hope

to deploy it out of the box on an ATmega328P, right?

And that's because,

if you have only a few kilobytes of flash memory available,

we can only hope to learn a shallow decision tree.

Which might not be very accurate in making predictions.

As you can see in this binary classification task over here,

where the objective

is to separate the red points from the blue.

So the problem is that at each internal node in a decision

tree, we learn a horizontal or

vertical cut to partition the data.

And we keep repeating this procedure until we end up in

a leaf node, where we make a very weak constant prediction

based on the majority vote.

And this leads to poor accuracies because there is no

way you can separate the red points from the blue

based on the small number of axis-aligned cuts for

this particular dataset, okay?

So you might hope to address this problem by learning deeper

trees but then you'll run out of RAM.

And it might turn out to be the case that these trees

are still not accurate enough.

And the problem is even more severe,

because in the real world, in order to get high precision,

high accuracy predictions from most real world applications.

We almost invariably have to learn a large tree ensemble,

all right, which will take a huge amount of memory.

And where I believe we still haven't addressed,

or fundamentally addressed, the issue of accuracy.

Because we're still restricted to generating these piecewise

linear decision boundaries made up of these horizontal and

vertical cuts.

Standard decision tree ensembles cannot generate truly curved

decision boundaries, which is what's really required in

this case in order to get high accuracy with a compact model.

So rather than starting from a huge tree ensemble and

then pruning it very aggressively to get it to fit

within a few kilobytes of RAM and

suffer a large loss in accuracy as a result.

Ashish Kumar, Saurabh Goyal and

I decided to come up with a completely new tree model,

called Bonsai, based on the following three key ideas.

First we're going to design Bonsai to be a single, shallow,

sparse tree.

Which will make it incredibly compact, but

then we'll also make each node in Bonsai much more powerful

than a node in a standard decision tree.

In order for Bonsai to accurately learn these non-linear decision

boundaries.

Second, in order to further reduce model size.

We are going to take the training data and

project it into a low dimensional space.

And then learn all of Bonsai's parameters in that space,

in order to get Bonsai to fit within a few kilobytes.

And finally, we're going to learn this sparse projection

matrix and all the nodes in the tree jointly.

This ensures that the matrix is learned to

maximize Bonsai's prediction accuracy,

while the tree is learned to make the best use of the memory budget.

So let me flesh out each of these ideas in a little bit

more detail.

So we'll start by replacing the horizontal and

vertical cut learned in a decision tree by oblique cuts.

So we're going to, at each internal node in Bonsai,

we're going to learn a full hyperplane with normal theta.

So that we can branch points left or

right when an input comes in with feature vector x.

We can determine whether to branch it left or

right by evaluating the sign of theta transpose x.

Now this will allow us to learn a slightly shallower tree.

So it'll reduce the depth of the learned tree somewhat.

Because now each node in Bonsai is much more powerful than

a node in a standard decision tree.

And will also allow us to generate these oblique

decision boundaries that you see over here.
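
As a concrete sketch of the difference (in Python, with names of my own choosing; this is an illustration, not the released code), an oblique node branches on a full hyperplane instead of a single feature:

    import numpy as np

    def axis_aligned_branch(x, feature_index, threshold):
        # Standard decision tree node: compare one feature to a threshold.
        return 'left' if x[feature_index] <= threshold else 'right'

    def oblique_branch(x, theta):
        # Bonsai-style node: branch on the sign of the hyperplane theta^T x.
        return 'left' if np.dot(theta, x) <= 0 else 'right'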

Now this is a very common idea.

And you might have seen it many times in the past.

So think about decision jungles from MSR Cambridge,

as well as variants of the perceptron tree.

But unfortunately, it doesn't quite cut it in our case.

Because these learned trees are still too deep to fit with a few

kilobytes of flash.

And at the same time,

we still can't generate curved decision boundaries.

And we're still restricted to making constant predictions in

the leaf nodes.

So this is the point in the past where everybody had abandoned

this model and started exploring other models.

But what we realized is that if you were to take each node and

make it even more powerful.

In fact, allow each node to make a nonlinear prediction,

then the model starts to pan out.

So what we're going to do is that at each node in Bonsai,

we're going to learn two vectors, v and w.

So that each node can make a nonlinear prediction by

computing w transpose x into tan hyperbolic v transpose x.

And this is what will allow Bonsai to learn these curved

nonlinear decision boundaries.

Now, the particular form of this nonlinearity is

not very relevant.

You can choose whatever nonlinearity you like.

We chose this particular form because it worked well for

us empirically.

But if you think that some other nonlinearity

will work better for you in your domain,

then you should just feel free to go ahead and do that.

Everything else in Bonsai will still hold.

So Bonsai's overall prediction is the sum of the individual

node predictions made along the path traversed by a point

from the root to the leaf.
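
Putting the pieces together, here is a minimal sketch of that path-based prediction (my own illustration: a complete binary tree stored in breadth-first order, with the sparse projection described a little later included so the whole pipeline is visible; all names are assumptions):

    import numpy as np

    def bonsai_predict(x, Z, nodes, depth):
        # Z: sparse projection matrix (low_dim x input_dim).
        # nodes[k]: dict with low-dim vectors 'theta', 'v', 'w';
        # the children of node k are nodes 2k+1 and 2k+2.
        z = Z @ x                      # project into the low-dimensional space
        score, k = 0.0, 0              # start at the root
        for _ in range(depth + 1):     # one node per level of the path
            n = nodes[k]
            # Each node contributes w^T z * tanh(v^T z) to the final score.
            score += np.dot(n['w'], z) * np.tanh(np.dot(n['v'], z))
            # Branch on the sign of theta^T z.
            k = 2 * k + (1 if np.dot(n['theta'], z) > 0 else 2)
        return score                   # sign(score) gives the binary label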

Now, I should also point out that in trees,

there is a unique correspondence between paths and leaves.

So rather than targeting each node to make a prediction,

I could have forced all the nonlinearities

down to the leaf nodes.

And I would have gotten an equivalent model, where only

the leaf node classifiers are making these nonlinear predictions.

And so this corresponds to taking a standard decision tree

and now having much more powerful leaf node classifiers.

And again, this is an idea that has

been tried out many times in the past.

Cho-Jui, Si Si and Inderjit have these great DC-Pred++ trees,

where each leaf node is a kernelized SVM, RBF-SVM.

And people have put neural nets and other classifiers as well.

But the point is that if you were to put a powerful classifier

in each leaf node and

then let all of them be completely independent,

you'd run out of memory very quickly.

So, for instance, over here,

if you were to force all the nonlinearities to the bottom,

we would have 3 v's and 3 w's in each leaf node.

And there are 4 leaf nodes.

So that would give us a total of 12 v's and

12 w's if we were to let them be all independent.

Which is far more than what we have over here.

So the budget would get exhausted.

So path based prediction allows a lot of

parameter sharing in Bonsai.

Which is what really allows us to learn this really,

really compact model while still making nonlinear predictions.

Okay, so to recap, our first contribution,

which differentiates Bonsai from standard decision trees is

the fact that we're going to learn these three vectors,

theta, v and w.

Where theta transpose x is going to control the branching.

And w transpose x into tan hyperbolic v transpose x is

the nonlinearity that's predicted by each node.

So all this is well and good.

But for the fact that theta, v, and

w have the same dimensionality as x.

So if the input feature vector becomes very high dimensional,

then each node in Bonsai will become very bulky and

will again run out of RAM, out of flash memory.

So in order to address this limitation, we add a second

ingredient, which further differentiates Bonsai from

standard decision trees, which is a Sparse Projection matrix Z.

That takes the input and

projects it into a very low dimensional feature space.

In fact, on many of our IoT experiments,

we kept only a five-dimensional feature space.

And then all of Bonsai's parameters are now learned

in this low five-dimensional space.

So what this implies is that at each node in Bonsai,

we have to store at most 15 numbers.

Five for theta, five for v and five for w,

which hardly takes any space whatsoever.

And this is what allows Bonsai to finally fit in a few

kilobytes of RAM.

Make predictions very efficiently in milliseconds

even on these slow microcontrollers.

And make sure that batteries last longer as compared to any

other algorithm out there.

While at the same time delivering classification

accuracies that are significantly higher than

the state of the art.

So one more thing that I should mention over here,

an important implementation detail, is the fact that

this sparse projection can be computed in a streaming fashion.

So this allows us to tackle lots of IoT applications

where the feature vector for

even a single point will not fit into two kilobytes of RAM.

And this is very important for

trees because they otherwise won't work with streaming data.
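
The reason this works is that z = Zx is just a sum of columns of Z scaled by individual features, so it can be accumulated as features arrive one at a time. A sketch of the idea (mine; the sparse-column storage is an assumption about one reasonable layout):

    import numpy as np

    def streaming_projection(feature_stream, Z_columns, low_dim):
        # feature_stream yields (j, x_j) pairs one feature at a time, so the
        # full high-dimensional feature vector never has to sit in RAM.
        z = np.zeros(low_dim)
        for j, x_j in feature_stream:
            z += Z_columns[j] * x_j    # add the j-th (sparse) column of Z, scaled
        return z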

Okay, so I was focusing on the binary classification case all

this while, but you can also extend Bonsai fairly

straightforwardly to handle multiclass classification,

regression, ranking, other tasks in supervised machine learning.

So for instance, if you wanted to extend

Bonsai to handle multiclass classification,

all we would need to do is to replace the vectors V and

W at each node by matrices.

And now rather than predicting a scalar for

binary classification, Bonsai will start predicting a vector

of class scores which you can use for multiclass classification.
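
In code, the change is just the shapes; a sketch, with matrices W and V replacing the per-node vectors (illustrative names and shapes only):

    import numpy as np

    def node_scores_multiclass(z, W, V):
        # W, V: (num_classes x low_dim) matrices; each node now emits a vector
        # of class scores, summed along the path and argmax'ed at the end.
        return (W @ z) * np.tanh(V @ z)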

Okay, so that's it for the model.

Let me also quickly discuss how we can train the parameters in

the model from data.

So remember that we have two groups of parameters that we

need to train.

The first group is the sparse projection matrix Z.

And I'll refer to the other group which is all the tree

parameters put together in this matrix capital Theta.

So we obtained that by taking all the nodes and

then taking the theta, V and W in each of the nodes and just

stacking them all together into one big, huge fat matrix, okay?

So here is Bonsai's objective function.

It's really, really, simple.

It has just three terms.

The first two terms are L2 regularizers on the tree

parameters and the sparse projection matrix.

And the third term is any suitable loss function that you

might have chosen for classification, regression,

ranking and so on, right?

So for binary classification,

you could have used the hinge loss; for ranking,

you could have used NDCG gradients, alpha gradients,

sorry, lambda gradients, etc.

And then in addition to the objective function,

we place explicit memory constraints on

the projection matrix Z and the tree parameters theta.

And that'll depend on how much budget you have on your flash

of your device, right?
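
Written out, the objective as described is, in my own notation (n training points, regularization weights lambda, memory budgets B):

    \min_{Z,\Theta}\;
      \frac{\lambda_\Theta}{2}\lVert\Theta\rVert_F^2
      + \frac{\lambda_Z}{2}\lVert Z\rVert_F^2
      + \frac{1}{n}\sum_{i=1}^{n} L(x_i, y_i; Z, \Theta)
    \quad \text{s.t.}\quad
      \lVert Z\rVert_0 \le B_Z,\;\;
      \lVert\Theta\rVert_0 \le B_\Theta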

So notice that this is a hard, non-convex, non-smooth problem.

And it's incredibly difficult to optimize.

But if you're a guy who learns trees every day, you can say,

well what's the big deal?

And trees have always been non-convex and non-smooth.

So I'm just going to learn Bonsai in the standard greedy, node-by-node fashion.

We are first going to learn the root node,

then I'll learn the two children, and then the

four grandchildren after that, and so on.

So, you could do that but

then the problem with that is that you won't get a very

optimal allocation of your memory budget.

So, if you are going to learn the root node at the very

beginning, you have to decide how much budget should

you allocate to the root node.

Are you going to allocate an entire kilobyte to that or

just a little bit?

I mean, it could be the root node is the most important

because all the data flows to it.

Or you could assign an equal budget to each of your nodes.

So, rather than using a heuristic to determine how much

budget to give to each node, we thought it would be very good

if the optimal allocation of memory budget could happen

automatically as part of the optimization algorithm, right?

So in order for that to happen,

we have to learn all the nodes in the tree jointly.

I know there are many ways of doing that but

we thought we'd try and do this based on gradient descent, okay?

So trees are discontinuous.

So we have to first smooth them in order to get gradient

descent to work.

And the reason trees are discontinuous is because if you

take any point,

it follows a discrete path through the tree, all right?

So this particular point,

it might go and end up in the leftmost leaf node.

And now if you were to change the splitting function at the root,

you change the theta a little bit.

The point could flip over to the other side.

Go to the other extreme, and

now you'll make a very different prediction.

So there's a huge jump over there, right?

So in order to address this issue,

we use a very standard trick which is to smooth the tree.

Points can go both left and

right at each node with some probability.

And when we start off, we can let this probability be uniform.

So points are now going to travel to all the nodes in

the tree which is great, which makes the tree differentiable,

but then you lose all your efficient prediction properties

because now you have to visit every node.

So what we can do is that as we start training,

as our optimization progresses,

we can sharpen this probability distribution.

So that by the time we are nearing convergence,

this distribution has gone back to being an indicator function.

And points will take only a single path through the tree.

So you get the differentiability as well as

efficiency in this particular case.
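
A sketch of the smoothing, assuming a sigmoid parameterization; the exact form and the annealing schedule are my assumptions, not necessarily the paper's:

    import numpy as np

    def prob_branch_right(theta, z, sharpness):
        # sharpness near 0: probability ~0.5 everywhere, a fully smoothed,
        # differentiable tree. sharpness -> infinity: an indicator function,
        # i.e. the original hard tree with single-path, efficient prediction.
        return 1.0 / (1.0 + np.exp(-sharpness * np.dot(theta, z)))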

Okay, so here is our algorithm for

training Bonsai based on gradient descent.

It has just two steps in each iteration.

And these are the two steps.

And we'll keep repeating these two steps until we converge, and

typically we converge very quickly, in about 300 or

400 iterations,

to a good quality solution.

So let me just outline the two steps for you.

The first step, we start off with the feasible initialization

of the budget for the parameters, and then,

we freeze the budget allocation to the various nodes, and

then we apply K steps of mini batch gradient descent.

So in this we are constantly improving the model,

we're constantly lowering the loss, but

the budget is fixed across nodes, right?

>> [INAUDIBLE] >> The memory budget, right?

So you can see the root node has, I don't know, you tell me

how many red squares, that's the budget that was allocated to it,

and then the leaf nodes have a slightly lower budget, etc.

>> The number of support.

>> Yeah, so the support is fixed, and the number of non-zeroes

that you will allow in each of these thetas, V's and W's.

So we take K steps of gradient descent with this fixed support,

with the fixed budget for the nodes.

And then in order to find a better allocation of the memory

budget, we take a dense gradient step in the next iteration and

this will give you a completely dense solution.

But then you can project back onto the feasible set using

iterative hard thresholding.

So you get a better allocation of the budget.

So, for example,

some budget might move from the root node to the left child.

Another piece of the budget might move from the left most

leaf to the next child, etc.

So we keep applying these iterations, and

we find that empirically, on many datasets, we converge

within 300 iterations to quite a good quality solution, okay?
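
A minimal sketch of this alternating scheme (mine, simplified to a single stacked parameter with a single budget; grad_loss is a hypothetical mini-batch gradient oracle):

    import numpy as np

    def hard_threshold(param, budget):
        # Project onto the feasible set: keep the `budget` largest-magnitude
        # entries and zero out the rest (iterative hard thresholding step).
        flat = param.ravel().copy()
        keep = np.argpartition(np.abs(flat), -budget)[-budget:]
        out = np.zeros_like(flat)
        out[keep] = flat[keep]
        return out.reshape(param.shape)

    def train(param, budget, grad_loss, lr, iters, K):
        param = hard_threshold(param, budget)      # feasible initialization
        support = (param != 0)                     # frozen budget allocation
        for t in range(1, iters + 1):
            if t % K == 0:
                # Dense gradient step, then re-project: the support, i.e. the
                # memory allocation across nodes, is allowed to change here.
                param = hard_threshold(param - lr * grad_loss(param), budget)
                support = (param != 0)
            else:
                # Mini-batch step restricted to the current fixed support.
                param = param - lr * grad_loss(param) * support
        return param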

Let me also briefly present ProtoNN,

which is a compressed k-nearest neighbor algorithm

that has been developed by Prateek and his team.

I won't go into the details of nearest neighbor classification.

All of you know that already, but suffice to say that

k-nearest neighbor algorithms are very general, they're easy

to understand and debug, and they can work very well in

scenarios when we have little amounts of training data, right?

So for typical maker or creator scenarios where it's hard to

go out and gather more data and

label it, you might prefer using k-nearest neighbor algorithm.

But just as was the case with decision trees, you can't

take a standard nearest neighbor algorithm and then hope to

deploy it on an ARM Cortex M0 or on an Atmel chip, right?

And that's because you have to lug around the entire training

set for prediction, right?

So in order to make even a single prediction,

you need the entire training set around,

which means your prediction costs are going to be really,

really high.

And at the same time,

your predictions might not be very accurate, because we don't

know what is a good distance metric in feature space.

So in order to address these issues,

ProtoNN follows a very similar approach to Bonsai.

We start by learning a sparse projection matrix Z.

And in this case, the matrix not only projects the data from

a high dimensional space to this low five dimensional space, but

it also learns the distance metric for us according to

the given task, right?

Then in order to get further compression,

we start learning exemplars for the training set, right?

So rather than using the entire training set to serve as

exemplars for nearest neighbor, or even choosing a subset of

the training set to be our exemplars, we're going to learn

completely new points, which are not part of the training set,

as the exemplars for nearest neighbor classification.

So that buys us a lot of compression, but

then, in addition to that,

what we also do is share the exemplars between classes.

So let's say, if you were to learn m prototypes, the first

prototype could say, I'm an exemplar for both class one and

class ten, and the second prototype could say,

I am gonna be an exemplar for, let's say, class one and

class two, and the third prototype could be,

again, class two and class ten, and so on.

So in terms of notation,

I'm going to be learning m prototypes, b1 through bm, and

these are going to be five dimensional vectors,

if my projection space is going to be five.

And at the same time, I'm going to learn the votes with which

each prototype is going to vote for the various classes.

These are going to be w1 to wm, and these are going to be l

dimensional vectors for an l class problem, okay?

So when a new point comes in,

in order to make a prediction on a device with ProtoNN, we

are first going to again project it using the sparse matrix z,

go down to the five dimensional space.

And then we're going to get each prototype to vote for

the particular point using its class scores, right, and

then we'll take a weighted majority vote, right?

So if you have,

let's say the fifth prototype, it's going to vote for

the various classes with w5, but then it's going to be weighted

by the distance of b5 to the projected point, right?

So it's a weighted majority vote.

That's how we'll make predictions.
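
A minimal sketch of that weighted vote (mine; the Gaussian similarity kernel is an assumption about one natural choice, and gamma controls its width):

    import numpy as np

    def protonn_predict(x, Z, B, W, gamma):
        # B: (m x low_dim) learned prototypes b_1..b_m (not training points).
        # W: (m x num_classes) votes w_1..w_m, with exemplars shared across classes.
        z = Z @ x                                            # sparse projection
        sim = np.exp(-gamma * np.sum((B - z) ** 2, axis=1))  # closer => heavier vote
        scores = sim @ W                                     # weighted majority vote
        return int(np.argmax(scores))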

The training procedure for

ProtoNN is very similar to that of Bonsai.

The objective function is also very similar.

We don't have the regularization terms but

the loss function is the same.

It could be any general loss that's suitable for

classification, regression, ranking, etc.

And we have the same set of budget constraints that we had

explicitly l0 constraints on the memory budget for the W,

B and Z.

And we're again going to learn this using accelerated

proximal gradient descent with iterative hard thresholding.

An additional feature with ProtoNN is that all

the algorithms we've seen today so far in this workshop,

very few of them have any theoretical guarantees.

But Prateek has managed to prove some sorts of theorems for

ProtoNN, which say that it doesn't do badly in good settings,

right?

So in particular, if you had data drawn from, let's say,

2 Gaussian distributions coming from different classes which

were well separated, then you can prove that ProtoNN will

learn these cluster means, which are the truly

representative points, as its prototypes over here,

very accurately in polynomial,

in fact in linear, time in many cases.

So that's something nice about ProtoNN.

Okay, so let's finally move on to some results.

We evaluated ProtoNN and Bonsai on a bunch of benchmark machine

learning and IoT datasets for binary classification,

multi-class classification, and ranking.

And I'm going to take the number of classes in the dataset and

append it to the dataset name, so as to distinguish between

the binary and the multi-class version, okay?

So in our first experiment, we compared Bonsai and ProtoNN's

prediction accuracies using really tiny models to state of

the art uncompressed methods that are running in the cloud

with infinite resources available for prediction.

So these baselines effectively have infinite memory,

prediction time, energy consumption, etc.

And these methods include gradient boosted decision tree

ensembles, or RBF-SVMs, k-nearest neighbor classifiers

as well as neural nets with a single hidden layer.

Now there are many numbers to report over here.

So in order to avoid confusion, I'll first always mention

Bonsai's number in red and then ProtoNN's number in blue and

then all the algorithms afterwards.

Okay, so the interesting thing to note in this experiment

is that there are some cases where Bonsai and ProtoNN can

actually have higher prediction accuracies as compared

to all these uncompressed methods over here, okay?

So many people before me have mentioned that, hey,

it would be great that if algorithms could be good for

not just these devices but also on the cloud.

So here is a situation where that actually does happen.

And the same is also true for the other two datasets.

Let's say, first, you take the first dataset, right?

So that's, I think, the task of right whale detection.

Over there,

the best cloud-based model is gradient boosted decision trees.

But Bonsai has a 5% higher prediction accuracy as compared

to the tree ensemble.

And ProtoNN has a 3% higher prediction accuracy.

While Bonsai's model is about 900 times smaller, and

ProtoNN's is about 300 times smaller.

So the trends are very similar on the character recognition

and the whale datasets, for both the binary and

the multi-class version.

But I think results like these we'll see only once in a blue moon.

I think in the real IoT scenario, what we'd

typically expect to see are the results that you see for

the Berkeley wearable activity recognition dataset.

So over here, again, it turns out that by some luck, the best

cloud based model is gradient boosted decision tree ensembles.

And in this case, Bonsai's performance is about 1% lower in

terms of prediction accuracy, ProtoNN's about 2% lower.

But both Bonsai and

ProtoNN's models are 75 times smaller than the tree ensemble.

So, they can be incredibly efficient over here.

In our second experiment, we actually implemented some of

these algorithms on the Arduino Uno.

And compared the prediction costs when the model

was restricted to being no larger than two kilobytes.

So now you can see over here that Bonsai and

ProtoNN have much higher prediction accuracies.

Ignore the Cloud-GBDT bar for a moment,

we'll get to that experiment in a second.

But apart from that bar, Bonsai and

ProtoNN have much higher prediction accuracies now

while having very low prediction costs.

Just a few milliseconds per test point on a 16 MHz

microcontroller.

And they take only a few millijoules of energy per

prediction, okay?

Actually, as I had mentioned,

we have very efficient implementations of Bonsai and

ProtoNN, which we call Bonsai-opt and ProtoNN-opt.

And this allows us to bring down our prediction cost

even further.

And now you can see over here that Bonsai and

ProtoNN can actually sometimes have prediction cost that

are even lower than that of an optimized linear classifier.

So now you have all the benefits of full blown non-linear

classification while paying even less than linear costs

in some cases.

A final experiment that we did in this particular slide is we

asked what would happen if we actually had Cloud connectivity

and we could transmit the feature vector to the Cloud, and

then run a gradient boosted decision tree ensemble

over there, right?

So as you can see from the numbers over here,

just the cost of transmitting the feature vector could be 200

to 1,000 times more in terms of the prediction energy and

the prediction time.

So it's much more efficient to use Bonsai and ProtoNN and

to predict on device than it is to transmit data to the Cloud or

use any of the other algorithms.

In our third experiment, we compared the performance

of Bonsai and ProtoNN to state of the art algorithms for

resource efficient machine learning.

Including gradient boosted decision tree ensembles and

the best technique for pruning such large ensembles,

which came out of [INAUDIBLE] group.

We also compared Bonsai and ProtoNN to Decision Jungle

from MSR Cambridge, as well as Feature Budgeted Random Forest

and Pruned Random Forest from Venkatesh's group.

We also compared Bonsai and

ProtoNN to Stochastic Neighborhood Compression,

SNC, which is from Kilian Weinberger's team.

It's one of the best methods for

compressing Nearest Neighbor models.

And also to Local Deep Kernel Learning and

neural network pruning, or Deep Compression from Song Han and

Bill Dally, okay?

So over here, I'm plotting graphs of

prediction accuracy versus model size.

And you can see that the gap between Bonsai and ProtoNN and

the second best method is as much as 6% for

the binary classification dataset, and

in fact more than 30% for the multi-class dataset.

And it turns out that some of these methods actually don't

even register on the graph over here because their prediction

accuracies are lower than the Y axis limit for

this model size range.

And the results are representative, in fact,

the trends are almost identical on all the other datasets.

Bonsai and ProtoNN's curves dominate all the other

algorithms for this entire model range,

indicating that perhaps they are the best models for

these low-memory regimes, okay?

Then finally, I also wanted to demonstrate that our algorithms

generalize beyond the IoT setting

to other resource-constrained scenarios.

So the L3 ranker in Bing has to operate under very tight service

level agreements.

And what I'm showing you over here is the performance of one

of the classifiers that has been developed for

this task by Chris Burges, called FastRank.

So I'm showing you its performance as a function

of the model size.

And the interesting thing to note over here is that

with Bonsai and ProtoNN, we can get models that are smaller by

about 700 times but with almost no loss in ranking accuracy.

So as I was saying earlier,

even if you are not interested in IoT,

if you're just a cloud provider, you might consider using some of

these algorithms to bring down your operating costs.

Okay, so to conclude,

I have only a few take home messages for you.

I think that machine learning for the Internet of Things

will provide many high-impact opportunities for

transforming our society.

Based on this observation, our teams at MSR Redmond and

MSR India are creating a library of machine learning algorithms

that can run on these tiny IoT devices,

as well as a specialized machine learning cross compiler

to compile these algorithms onto different types of IoT devices.

As part of this Edge machine learning library,

which you can find on GitHub, we are releasing the code for

Bonsai and ProtoNN today.

It's completely open source.

You can download it, play with it, do whatever you like.

And these algorithms are incredibly fast, accurate,

compact, and energy efficient.

So just to show you a teaser of results, here are some of

the results that are coming out of the Redmond lab.

So this is a deep neural network running on a Raspberry Pi live.

That's from Matthai Philipose.

I don't know, can you see the two frames moving?

I can't over here.

>> It's frozen.

>> It's frozen?

So maybe I have to click on it to get it to run.

>> There you go, it's running now.

>> Yeah.

So this is similar to the Apple demo we saw earlier in the day

but this is actually running on a Raspberry Pi.

So a lower class device than the smartphone,

than the Apple iPhone.

Okay, so that's it from my side.

I'm very happy to take questions.

>> [APPLAUSE] >> Question.

>> Thanks man, fascinating stuff.

One thing that I was not sure, maybe I missed,

you said in the beginning that some of these devices do not

have floating point units.

>> Yeah. >> But the models seem to

require floating point computations still.

Can you comment on that a little bit?

>> Yeah, I'm sorry, I should have mentioned that explicitly.

We convert everything to a fixed point.

>> Okay, so are you doing rounding of the weights, or

you're just quantizing?

>> Yeah, you just quantize to 8 bits, or

however many bits your architecture supports.
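
As a concrete sketch of that kind of post-training quantization (a symmetric int8 scheme of my own; the exact scheme the library uses may differ):

    import numpy as np

    def quantize_int8(w):
        # Map float weights onto [-127, 127] with one scale factor, so that
        # prediction can run in integer arithmetic on FPU-less chips, with the
        # scale multiplied back in once at the end: w^T x ~ scale * (q^T x_int).
        scale = np.max(np.abs(w)) / 127.0
        q = np.round(w / scale).astype(np.int8)
        return q, scale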

>> I see.

Okay, I was also curious.

So in a different context, on interpretability,

one work that came out maybe about six months back or so

that we found interesting was by Sharad Goel from Stanford and

Dan Goldstein from MSR New York,

and some others.

They took some large collection of datasets, took a bunch of,

again, linear, non-linear classifiers.

And instead, they just compared to what a linear classifier,

constrained to have weights in a few discrete

values on every coordinate, would do.

And they were finding that actually, they're getting close

to the best accuracies across the board.

And yeah, so

I was curious if you've taken a look at that sort of models?

>> Yeah, so actually, from all these data sets,

the y-axis was set to be the best accuracy we could get for

a linear classifier.

So it's not just an arbitrary axis.

We did L1 regularization, a full linear classifier.

We tried lots of different things, and

that's how we chose this.

So in the kind of data sets we've looked at,

going non-linear actually helps a lot.

But I agree, if in your case

it's better to go with a linear classifier,

you should absolutely do that because it'll take less memory,

less time, less energy.

That should be the first thing you should try for

a particular task.

>> All right, cool, yeah, thanks.

>> Yeah.

>> Any other questions?

>> So I had a quick question on the other graph that you showed.

So what's the variability of the results?

So when you're repeating the train-

>> Yeah, so over here,

I'm just plotting the mean rather than mean and

standard deviation.

We've calculated those as well.

So their standard deviations, in most cases, are very small.

These results are statistically significant.

Otherwise, there's too much clutter on the graphs.

>> I was wondering if you could comment a little bit on your

size constraints as a form of regularization.

I see in several of your plots that actually, your methods

decrease in accuracy as the size goes up, which suggests to me,

there's some kind of regularization going on.

Do you have any more intuition on that, or?

>> Yeah, so in a sense, so

if you compare us to a standard decision tree, right, we have to

learn this extra projection matrix as well and stuff.

So I guess, if you don't have a lot of data,

maybe we could actually decrease in terms of accuracy.

Because I mean, if you told us a priori, which of the matrix

elements were non-zero, that would be fine.

But if you have to learn all of that from data and

you don't have enough data,

you could decrease in accuracy in some cases, I guess.

Apart from that, I've not really thought of this in terms

of providing regularization for our classifiers.

I mean, all the classifiers,

I think, tend to follow the same kind of trend.

They will reach a peak and then they will start dropping down.

I don't know whether ours will do that earlier

than others.

I mean, certainly,

it looks like that we hit our peak accuracy very quickly.

But yeah, I haven't thought about that.

So I need to think about that in more detail, [LAUGH] yeah.

>> Do you have any data [INAUDIBLE]?

>> Yeah, so the Microsoft Embedded Learning Library,

I think it's github/ELL.

So github.com/Microsoft/ELL.

And the edge machine learning library,

we'll put up a link to that.

If you just search for

edge machine learning library, you'll get it.

Or you can get it from my website as well,

though the link isn't working yet.

The link on GitHub is working.

You can just get it off GitHub.

Yeah, sorry about that, yeah.

>> Yeah, I had a question, Manik.

>> Yeah. >> The ProtoNN,

>> Yes.

>> Is it some sort of sparse dimensionality

reduced SVM-type thing it resembles, right?

>> Yeah, so, Ampadi would be the best guy to answer that

question, but absolutely.

So if you look at this thing over here, right?

So this is very much like a kernel,

this expression you have for- >> Yeah.

>> It's very much like a kernel, so yes, you can.

>> So yes, there is a code, I guess, also available.

I guess this also has, like, exemplars,

that may be one difference.

>> Right, so now instead of the support vectors coming from

a subset of your training data, you're going to

learn the position of these support vectors,

and you're going to restrict yourself to this kind of kernel.

So all of that is there.

And you can also interpret this as a neural network,

as you can with Bonsai.

You can also think of Bonsai as an SVM, if you wanted to, or

a neural net if you wanted to.

You can derive the kernel map for it as well.

>> But what I meant was, so

there are sparse SVMs, did you compare it with a sparse SVM,

in some sense getting sparsity with dimensionality reduction?

I think there is, I don't know about the exemplar.

>> Yeah, local deep kernel learning is,

we compare to lots of SVM methods as well.

Yes, and again, as I mentioned, there are these DC-Pred++ trees.

So again, the SVM compression methods come in different

varieties.

Some speed up prediction time,

some reduce the model size,

and others tend to do both.

But again, compared to them, we are much better.

>> Yes, you had a question? >> Yes, so you mentioned that in

one example with deep learning, there's only one [INAUDIBLE].

>> That's right.

>> So my question is [INAUDIBLE].

>> Yeah, I think, Toby, we did those experiments as well.

Bonsai and ProtoNN are still the state-of-the-art.

We did those experiments after the ICML paper went into

publication.

But I think we'll put them in an appendix somewhere and

upload them.

So Bonsai and ProtoNN are still very much the state-of-the-art.

In some cases, I think the accuracy comes down a little

bit so the gap will come down.

So rather than being 30% it will now be 20%, 10% etc.

So these are primarily like deep learning networks that are all

fully connected.

I think if you add convolution on top of that then

the accuracy gains will come down even further.

But in a sense then these graphs are no longer fair, right?

Because convolutions they take up almost no space in memory.

They take very little RAM.

But then they're extremely expensive in terms of

computation and battery.

So this is probably not the right way to compare them.

But in terms of just absolute accuracy, yes,

there would be a fall in the gap.

But in terms of the budget, Bonsai and

ProtoNN would still be more accurate.

>> So a really dumb question. How do I decide whether I want

to use Bonsai, or I wanna use ProtoNN?

>> You try both, then you use both, and you pay us for

[LAUGH] both.

[LAUGH] It's a general question.

When do you use trees,

when do you use Nearest Neighbors?

You choose.

>> [INAUDIBLE] smaller amount of data.

>> In some situations, yes.

>> [INAUDIBLE] more data [INAUDIBLE].

>> No, of course.

But everything will be Bayes optimal when you have

lots of data.

Yeah, but with lots of data,

Nearest Neighbors can hardly hold a candle to deep learning.

So where I've seen Nearest Neighbors to be really effective

and Naive Bayes to be really effective is in very

small data regimes.

There they tend to do better than discriminative methods.

Again, just as with Naive Bayes, right?

If all your dimensions are really independent,

you'd probably need only 10 data points to learn a 1,000-dimensional

classifier.

But with a discriminative method you'd need 1,000 times 10.

So at least in my experience,

Nearest Neighbors has worked well over that.

Then if you want to add metric learning on top of that,

sure, you'll need a lot of data for that.

>> Okay, thank you.

>> Cool, thanks guys.

>> [APPLAUSE]

For more information >> The Edge of Machine Learning: Resource efficient ML in 2 KB RAM for the Internet of Things - Duration: 44:03.

-------------------------------------------


Real Madrid to return for Man Utd star David De Gea next summer - Duration: 1:48.

Real Madrid to return for Man Utd star David De Gea next summer

Real Madrid will make another move for Man Utd star David De Gea next summer.

The Spain international is a long-term target for the La Liga champions. De Gea nearly joined Real in 2015 but a late fax ended any hope of his dream return to Spain.

And Spanish transfer outlet Don Balon say Real will make another bid for De Gea, 26, next summer.

They claim Real club president Florentino Perez has his heart set on landing the United star at all costs.

Don Balon add that Keylor Navas could be sold to PSG to make room for De Gea. De Gea has been vocal about his Red Devils future, with the stopper claiming this week it's a dream to play for United.

Of course, you feel really proud when people think this about you, De Gea told MUTV. It's really good, but I like to keep my focus, keep working hard and doing my best.

To be fair, when you are really young, you don't think about the future too much, you just want to play with your friends.

When you get older, you start to dream about being there, about being at a top team. So of course it's a dream to be at a team like Manchester United..

For more infomation >> Real Madrid to return for Man Utd star David De Gea next summer - Duration: 1:48.

-------------------------------------------

SUPER SMASH BROS For Nintendo Switch News And Update - SWITCH SUPER SMASH BROTHERS - Duration: 3:53.

Hello guys, and welcome back to gamer242.com, the best place for gaming news and updates.

Today's video is an update on Super Smash Bros for Nintendo Switch. Super Smash Bros has become a staple of Nintendo platforms ever since the Nintendo 64. The latest offering on Wii U was arguably the best we've seen yet, bringing with it a truly staggering roster of fighters. It's since been supported by downloadable content to keep hardened fans welded to their controllers.

Whether it ends up being an enhanced port of the existing version or an all-new experience, Super Smash Bros feels like an inevitable part of the Nintendo Switch library. With Splatoon 2, Super Mario Odyssey and Xenoblade Chronicles 2 rounding out the Switch's 2017 offerings, we might not be seeing anything from Smash Bros until 2018. And if the Nintendo Switch version of Smash Bros ends up being an enhanced port of the existing version, here are five things that could make the game better.

Number one: give us one huge bundle. If Super Smash Bros for Switch is simply a fancy port of the previous iteration, we'd love to have all the downloadable content bundled in for no extra cost. Bayonetta, Lucas, Ryu and even Final Fantasy VII's Cloud have been incorporated since release, and they're all brilliant. In many ways this could be a Game of the Year treatment for Smash Bros that we simply don't see from the company. Nintendo has the opportunity to refine some of the rough edges while providing us with excellent value for money.

Number two: the return of story mode. Super Smash Bros Brawl's adventure mode, otherwise known as the Subspace Emissary, was kind of awesome. It thrust all of our favorite Nintendo characters into an interwoven narrative that felt utterly bonkers in so many ways. Despite combining an endless array of different locations and characters, it rarely broke away from the fast-paced action that made the series so appealing.

Number three: new fighters. During the last console generation, Nintendo gave fans the opportunity to vote for the characters they'd love to see debut in Smash Bros. Bayonetta and Cloud were the evident result of this, showing that the tried-and-true gaming giant is more than willing to take a few unorthodox risks. We want to see more outside-the-box fighters debut on Switch, lending the roster an aura of unpredictability alongside its already bustling variety. Nintendo's plethora of universes is growing constantly, so there's no telling what heroes and villains could take to the ring next.

Number four: more stages. Super Smash Bros for Wii U and 3DS had a lot of stages. The number of established and niche properties represented throughout Nintendo's brawler was a sight to behold, but as with its roster, there are still some environments we've yet to duke it out in.

Number five: improved challenges. This wish is definitely more for completionists than anyone else, but you can never have enough challenges so long as they provide satisfying rewards. Smash Bros has done just that in the past, so why stop now?

What's on your wish list for Super Smash Bros on Nintendo Switch? Let me know in the comments, and don't forget to subscribe to keep yourself updated.


#nodiadaprova do kids & toys, 123abc kids toy, 123abc kids toy tv, 1916 electric kids toy oven, 90s kids toy, 90s kids toy commercials, best kids toy ever, bubble pop kids toy egg, bubble pop kids toy opening, cartooning 4 kids toy story, cutting kids toy open, eu quero toy kids, galerinha do toy toy kids, how kids toys are made, how many toys do kids need, japanese kids toy, japanese kids toy review, just dance kids toy story, kidkraft kids toy kitchen, kids can do toy review, kids fighting over toy car, kids movies toy story 1, kids movies toy story 2, kids movies toy story 3, kids playing with toy kitchen, kids playing with toy nerf guns, kids toy, kids toy 2016, kids toy 2017, kids toy abc, kids toy ads, kids toy adverts 2016, kids toy airplane, kids toy ambulance, kids toy and adventures, kids toy and joy, kids toy animals, kids toy army, kids toy atm, kids toy baby, kids toy band, kids toy barbie, kids toy bike, kids toy bin, kids toy boat, kids toy boss, kids toy box, kids toy box diy, kids toy bus, kids toy cars, kids toy cartoon, kids toy challenge, kids toy channel, kids toy channel 2015, kids toy channel 2016, kids toy channel 2017, kids toy channel new, kids toy collection, kids toy collector, kids toy commercials, kids toy commercials 2015, kids toy commercials 2016, kids toy cooking, kids toy corner, kids toy corner zoo, kids toy dance, kids toy demos, kids toy diggers, kids toy dirt bikes, kids toy diy, kids toy doctor, kids toy dog, kids toy dolls, kids toy drawing, kids toy drum set, kids toy egg surprise, kids toy eggs, kids toy electric car, kids toy elsa, kids toy engines, kids toy excavator, kids toy factory, kids toy fail, kids toy family, kids toy fan review, kids toy fire truck videos, kids toy fire trucks, kids toy fishing, kids toy food, kids toy freddy, kids toy fridge, kids toy games, kids toy garage, kids toy genie, kids toy goes into fire, kids toy grill, kids toy guitar, kids toy gun, kids toy gun collection, kids toy gun videos, kids toy gun war, kids toy hacks, kids toy haul, kids toy hello kitty, kids toy home, kids toy horse, kids toy hospital, kids toy house, kids toy house tour, kids toy hub, kids toy hunt, kids toy hut, kids toy ice cream, kids toy ice cream maker, kids toy ice cream truck, kids toy ideas, kids toy images, kids toy instruments, kids toy invention, kids toy iphone, kids toy iron, kids toy japan, kids toy jcb, kids toy jeep, kids toy jelly, kids toy jelly beans, kids toy kids toy, kids toy kinder joy, kids toy kitchen, kids toy kitchen set, kids toy knife, kids toy lab, kids toy lab tv, kids toy land, kids toy laptop, kids toy lawn mower, kids toy learning, kids toy lego, kids toy light switch, kids toy lol, kids toy loom bands, kids toy machine, kids toy maker, kids toy makeup, kids toy making, kids toy market, kids toy media, kids toy mermaid, kids toy mobile, kids toy motorcycle, kids toy movies, kids toy music, kids toy names, kids toy nerf, kids toy nerf guns, kids toy new, kids toy new episode, kids toy office, kids toy opening, kids toy orbeez, kids toy organizer, kids toy organizing ideas, kids toy oven, kids toy park, kids toy paw patrol, kids toy phone, kids toy piano, kids toy plane, kids toy planet, kids toy play, kids toy play doh, kids toy playhouse, kids toy police car, kids toy r us, kids toy repair, kids toy reveal, kids toy review channels, kids toy reviewer, kids toy rides, kids toy robot, kids toy room, kids toy room organization, kids toy room tour, kids toy shopping, kids toy show, kids toy slime tv, kids toy songs, 
kids toy sports, kids toy storage, kids toy store, kids toy story, kids toy style, kids toy surprise, kids toy testers challenges mermaid, kids toy to see, kids toy tools, kids toy tractors, kids toy train, kids toy train video, kids toy transformers, kids toy trucks, kids toy tutorials, kids toy tv, kids toy unboxing, kids toy unboxing videos, kids toy utub brazil, kids toy washer and dryer, kids toy washing machine, kids toy watch, kids toy water snake, kids toy weed eater, kids toy weed trimmer, kids toy weights, kids toy wholesale, kids toy workbench, kids toy world, kids toy youtube, kids toy zoo, kids toys 2017, kids toys channel play do, kids toys play, kids using toy guns, nail art kids toy, new kids toy review, new sky kids toy kitchen, quero assistir toy toy kids, toy 4 kids, toy and funny kids surprise eggs, toy cutting kids pink kitchen, toy elephants for kids, toy kids 3, toy kids life 4 u, toy r us kids, toy reviews for kids 2016, toy reviews for kids yummy nummies, toy story 2 kids, toy story 3 kids, toy story 3 kiss scene, toy story 4 kids, toy toy kids barbie leticia 1, toy zombies for kids, toys et fun 4 kids, turminha do toy toy kids, videos do kids toys fan, videos do toy toy kids, vídeos do toy toy kids, what's inside kids toys, why do kids toys, why kids toys were banned, youtube kids toy reviews

#nodiadaprova do kids & toys, 123abc kids toy, 123abc kids toy tv, 1916 electric kids toy oven, 90s kids toy, 90s kids toy commercials, best kids toy ever, bubble pop kids toy egg, bubble pop kids toy opening, cartooning 4 kids toy story, cutting kids toy open, eu quero toy kids, galerinha do toy toy kids, how kids toys are made, how many toys do kids need, japanese kids toy, japanese kids toy review, just dance kids toy story, kidkraft kids toy kitchen, kids can do toy review, kids fighting over toy car, kids movies toy story 1, kids movies toy story 2, kids movies toy story 3, kids playing with toy kitchen, kids playing with toy nerf guns, kids toy, kids toy 2016, kids toy 2017, kids toy abc, kids toy ads, kids toy adverts 2016, kids toy airplane, kids toy ambulance, kids toy and adventures, kids toy and joy, kids toy animals, kids toy army, kids toy atm, kids toy baby, kids toy band, kids toy barbie, kids toy bike, kids toy bin, kids toy boat, kids toy boss, kids toy box, kids toy box diy, kids toy bus, kids toy cars, kids toy cartoon, kids toy challenge, kids toy channel, kids toy channel 2015, kids toy channel 2016, kids toy channel 2017, kids toy channel new, kids toy collection, kids toy collector, kids toy commercials, kids toy commercials 2015, kids toy commercials 2016, kids toy cooking, kids toy corner, kids toy corner zoo, kids toy dance, kids toy demos, kids toy diggers, kids toy dirt bikes, kids toy diy, kids toy doctor, kids toy dog, kids toy dolls, kids toy drawing, kids toy drum set, kids toy egg surprise, kids toy eggs, kids toy electric car, kids toy elsa, kids toy engines, kids toy excavator, kids toy factory, kids toy fail, kids toy family, kids toy fan review, kids toy fire truck videos, kids toy fire trucks, kids toy fishing, kids toy food, kids toy freddy, kids toy fridge, kids toy games, kids toy garage, kids toy genie, kids toy goes into fire, kids toy grill, kids toy guitar, kids toy gun, kids toy gun collection, kids toy gun videos, kids toy gun war, kids toy hacks, kids toy haul, kids toy hello kitty, kids toy home, kids toy horse, kids toy hospital, kids toy house, kids toy house tour, kids toy hub, kids toy hunt, kids toy hut, kids toy ice cream, kids toy ice cream maker, kids toy ice cream truck, kids toy ideas, kids toy images, kids toy instruments, kids toy invention, kids toy iphone, kids toy iron, kids toy japan, kids toy jcb, kids toy jeep, kids toy jelly, kids toy jelly beans, kids toy kids toy, kids toy kinder joy, kids toy kitchen, kids toy kitchen set, kids toy knife, kids toy lab, kids toy lab tv, kids toy land, kids toy laptop, kids toy lawn mower, kids toy learning, kids toy lego, kids toy light switch, kids toy lol, kids toy loom bands, kids toy machine, kids toy maker, kids toy makeup, kids toy making, kids toy market, kids toy media, kids toy mermaid, kids toy mobile, kids toy motorcycle, kids toy movies, kids toy music, kids toy names, kids toy nerf, kids toy nerf guns, kids toy new, kids toy new episode, kids toy office, kids toy opening, kids toy orbeez, kids toy organizer, kids toy organizing ideas, kids toy oven, kids toy park, kids toy paw patrol, kids toy phone, kids toy piano, kids toy plane, kids toy planet, kids toy play, kids toy play doh, kids toy playhouse, kids toy police car, kids toy r us, kids toy repair, kids toy reveal, kids toy review channels, kids toy reviewer, kids toy rides, kids toy robot, kids toy room, kids toy room organization, kids toy room tour, kids toy shopping, kids toy show, kids toy slime tv, kids toy songs, 
kids toy sports, kids toy storage, kids toy store, kids toy story, kids toy style, kids toy surprise, kids toy testers challenges mermaid, kids toy to see, kids toy tools, kids toy tractors, kids toy train, kids toy train video, kids toy transformers, kids toy trucks, kids toy tutorials, kids toy tv, kids toy unboxing, kids toy unboxing videos, kids toy utub brazil, kids toy washer and dryer, kids toy washing machine, kids toy watch, kids toy water snake, kids toy weed eater, kids toy weed trimmer, kids toy weights, kids toy wholesale, kids toy workbench, kids toy world, kids toy youtube, kids toy zoo, kids toys 2017, kids toys channel play do, kids toys play, kids using toy guns, nail art kids toy, new kids toy review, new sky kids toy kitchen, quero assistir toy toy kids, toy 4 kids, toy and funny kids surprise eggs, toy cutting kids pink kitchen, toy elephants for kids, toy kids 3, toy kids life 4 u, toy r us kids, toy reviews for kids 2016, toy reviews for kids yummy nummies, toy story 2 kids, toy story 3 kids, toy story 3 kiss scene, toy story 4 kids, toy toy kids barbie leticia 1, toy zombies for kids, toys et fun 4 kids, turminha do toy toy kids, videos do kids toys fan, videos do toy toy kids, vídeos do toy toy kids, what's inside kids toys, why do kids toys, why kids toys were banned, youtube kids toy reviews
