Thursday, April 26, 2018


Incentivizing Societal Contributions for and via Machine Learning - Duration: 57:06.

>> Great. Let's get started. I'm very happy to introduce Yang Liu today.

He's a postdoctoral fellow at Harvard, working with Yiling Chen and David Parkes there. He got his PhD at Michigan.

He works on many things related to what we work on here, including crowdsourcing information. He's interested in incentivizing exploration, and he has a paper on algorithmic fairness in a multi-armed bandit setting.

So his interests cross over with ours, and he's actually visiting here for two weeks; this is the second week of his visit.

Yang is working with a few of us here who are advisors on an IARPA program on Hybrid Forecasting that combines human and machine intelligence. Our academic partners are David and Yang, who are working on that at Harvard, so he's here for that reason.

Right now, he will be giving a talk on incentivized data acquisition.

>> Thank you, Dave. Can everyone hear me? Good.

Since January first, I have given many job talks already. I didn't plan this as one, but thank you for coming.

>> We weren't talking about you.

>> Thank you. Just a disclaimer: I didn't prepare these slides as a postdoc job talk, so the technical details may stay at a relatively high level.

But I'll be here for another day and a half, so if you're interested in the details, come find me; I'm staying for the next two days.

So, I'm going to be talking about several Machine Learning things, but all in the context of an incentive design question.

It sounds like an econometric question, but the goal today is to convince you that this question is very well connected to a lot of questions in Machine Learning, and it can be very well solved using Machine Learning techniques.

To give a broader picture, the motivation of my research is really this connection between technology, like Machine Learning, and people.

These days, Machine Learning is everywhere; it is being applied to more and more societally relevant applications.

When I was making my job talk slides, it was not hard for me to find applications: in healthcare, Machine Learning has been helping doctors diagnose diseases.

More recently, and probably more controversially, Machine Learning has been applied in the domain of criminal justice decision-making, helping judges make bail decisions and also sentencing decisions.

The other direction is also true: the fast development of Machine Learning also relies on a lot of contributions, a lot of input, from people.

Here, I'm not just talking about the people in this room; I'm talking about the general population, not Machine Learning researchers.

Here's a very fresh example from Facebook, which announced that it was worried about the amount of fake articles.

Obviously, moderating fake news is hard for Machine Learning techniques, because it requires cognitive reasoning at the human level, which is missing in many of these algorithmic approaches.

So, as a first step, Facebook is going to poll data from users to help detect whether an article is likely to be fake or not. This is one type of user input to a Machine Learning system.

Here is another example; some people here at Microsoft talk about prediction markets, where you're appealing to forecasters or predictors to predict political or other events. A concrete first step is to poll people's opinions from a general crowd.

So, over the past many years, probably more than 20 years, we've been spending a lot of time and effort developing more and more accurate Machine Learning algorithms.

We aim for 99 percent accuracy, and many times 99 is not accurate enough: we add another nine, because obviously another nine is better.

But Machine Learning is really a full pipeline: you've got data collection, which flows into the algorithm, and then you apply the outcome.

With more connections to people, I would say there are unique challenges arising at the front end and the back end.

When you're applying outcomes to people, let's not talk about movie recommendation; let's talk about healthcare. How do I make sure the application is taking good care of its biases, in either the data or the model? You don't want people to be treated unfairly.

When you're collecting data from people, this is not as easy as collecting data from a physical process, because people can be strategic, people can manipulate, and people can be sloppy.

So, my research in the past few years has focused on addressing several challenges arising at these two ends.

On the outcome part, I study Algorithmic Fairness: when we are applying Machine Learning outcomes in applications related to people, how do we protect their fair chance of getting a good decision?

I have also studied data security: when we're pulling data from people, how do we better protect their privacy, so people feel safe participating in the system?

More recently, I have studied incentive questions in Machine Learning. This is a very active project we've been building at Harvard: how do we use Machine Learning techniques to build an incentive framework that quantifies the value of information, so we can incentivize people to give us more and more accurate data?

Besides that, I also study the post-processing procedure: after you've collected the data, what is the best way to do aggregation, so the Machine Learning system benefits the most from it?

I can talk about the other part very briefly at the end,

but today's focus is really on the incentive part.

The outline is quite clear. I am going to introduce what this question is, and how to formalize it as incentivized Machine Learning. It's multiple papers, but really just one topic: how to use Machine Learning techniques to design incentives, so we can acquire data from strategic agents.

After that, I'll briefly talk about how to do this sequentially, in a more dynamic setting, and I'll conclude the talk with future work.

So, what is the incentive issue in Machine Learning?

Suppose I'm using Facebook. One day I wake up and there's news pushed to my timeline saying Google just bought Apple. Is that fake news or not?

I didn't make this up; this news actually went viral on Twitter several months ago. People were irritated about it.

Suppose this shows up on my dad's, my mom's, and my own timelines, and we read the article. We have different observations, we have different educational backgrounds, and we can form different opinions. Does it sound right, or is it fake? We have different opinions.

My mom and dad have more time than me at the moment, and probably they're nicer people. So, they are going to read the article, form their opinions, and tell Facebook: I think it's fake, or I think it's not fake.

But my data may be missing because

helping Facebook detect fake news is

not one of my incentives to use Facebook.

I don't have a reason to do it.

Even worse, suppose Facebook tells me: you have to tell us whether it's fake or not fake, otherwise we're going to disable your Facebook account for fifteen minutes. The easiest thing for me to do is just randomly click on fake or not fake without even reading the article.

But as Facebook, now you can't tell the difference between these two kinds of data. Is this label coming from a thoughtful procedure, or is this data coming from a random spamming procedure?

Now, this phenomenon is pretty common; it's not rare at all. In Machine Learning systems, especially when we are getting training labels, training data, from Amazon Mechanical Turk, you lose control of how much effort people put in and how accurate people's labels are.

Do you guys rate Uber? Uber is using rider rating data to do route recommendation, trip recommendation; but when was the last time you gave a thoughtful review to Uber? I cannot remember the last time I gave a thoughtful review, because how accurate my reviews are does not affect my future business with Uber. I don't have incentives to do it.

To summarize, all these applications have one thing in common. If you lose control of the quality of the training data, if data coming from different sources has different amounts of noise embedded in it, then the model you train may be highly unpredictable or highly unreliable.

Therefore, even though the generalization error is very small and you achieve 99 percent accuracy, whether you're generalizing to a representative set of data or not is hard to say.

Our goal with this project is to incentivize people to report higher quality data, so we will have less bias in training a Machine Learning system.

Can we build a scoring system? We're going to build a scoring system that quantifies the value of reported data, and the scores can be used in many different ways: you can use the scores to pay people, or to build a reputation system that drives people to participate and contribute higher quality data.

This can help us align the incentives for participation, giving us better quality data and also more representative data.

Again, here is another very timely article I saw when I was traveling, saying that our data is now very crucial for many AI applications; shall we be paid for our data?

In many parts of my research, I want to strongly advocate that the answer is yes, we should be paid. But the question is how. How do we pay people for their data?

We want people to give us thoughtful data, we want them to truthfully report, and we want to attach a price tag to each data point, so people will be motivated to give us good data versus bad data, because it creates a difference in the price they get.

So let's do this. How do you formalize this question?

Suppose you have N data points to be labeled. You can imagine articles, each coming with a feature vector; the feature vector is the source information, and you can apply NLP to extract keywords from the articles, how many times they got retweeted, and so on.

Each has a true label, drawn according to some prior distribution, of either yes or no: fake or not fake. This true label we don't know, and we want to label the data.

So I use colors for the different layers. If you're lost in the notation: everything I color in red is something we don't know, we don't observe; everything in black is something we know.

You send the data to people. Conditional on the ground truth, which we don't know, people have different opinions; even if I put in a lot of effort, I still may not agree with the ground truth. People have different subjective beliefs.

So this probability, 60 percent or 40 percent, captures the inherent noise of people, and I call this people's data.

Again, it's hidden; we don't know it. What we know is the reported data, the data collected from people.

And the goal is very simple: we want the reported data to equal the true data in people's minds. We want them to truthfully report their beliefs in some way.

>> I have a question.

What's the distinction between

people's data and collected label?

>> People's data is, you can imagine, your true belief. Like, I read this article, I form my belief, I think it's fake or not fake.

Reported data is anything I tell you. It could come from a thoughtful procedure, but it could also be just, I'd rather not check: fake or not fake. It doesn't need to be equal to the true data in your mind.

>> So, maybe I'm getting confused by what red means.

Yes is in blue and red is no. [inaudible].

>> Oh, no, sorry. This is bad notation. These are the ground truths, and the red color doesn't apply to the no; it's regardless of yes or no.

>> Where is that 60 percent

versus 40 percent noise coming in?

Is it between label and people's data or data [inaudible]?

>> So this is between the ground truth and people's data. If the ground truth is yes, 40 percent of the time people will make a mistake. If the ground truth is no, 80 percent of the time people will.

>> [inaudible] model is homogeneous across the population?

>> It's heterogeneous; different people have different error rates.

>> Okay. Yeah.

>> But I do make the assumption that it's homogeneous across tasks: if I give you 10 articles, you roughly make a similar amount of noise on each.

people roughly make similar model noise.

>> Thumbs down is supposed to mean not fake? Is that correct?

>> Down means fake.

>> Down means fake?

>> Right. Up means not fake.

>> But if it's not fake, then most people would say no, meaning not fake, right?

>> Right.

>> Then could most people say it is fake when they [inaudible]?

>> It could be possible, yeah. It could be that the ground truth is in the minority of people's heads; I don't assume that the majority of people get the ground truth. There could be some cases that are extremely hard, where the ground truth is no but most people get it wrong. So it's possible. Okay, good.

>> Are you assuming a distribution over yes and no? Is that what 60 percent, 40 percent means?

>> Yeah, there's a prior.

>> You're kind of operating in a fully Bayesian setting?

>> Bayesian setting. Exactly. [inaudible] structures.

Questions? All right.
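The setting pinned down by this Q&A (a Bayesian prior over the true label, plus per-worker error rates that are heterogeneous across people but homogeneous across tasks) can be sketched in a small simulation. This is my illustration, not code from the talk; the worker error rates and the prior are made-up numbers:

```python
import random

def simulate_reports(n_tasks, prior_yes, workers, rng):
    """Draw hidden ground truths from the prior, then noisy 'people's data'.

    Each worker is a (fn_rate, fp_rate) pair: P(report no | truth yes)
    and P(report yes | truth no). Rates differ across workers but are
    the same across tasks for a given worker.
    """
    truths, reports = [], []
    for _ in range(n_tasks):
        y = rng.random() < prior_yes            # hidden ground truth (red in the slides)
        truths.append(y)
        row = []
        for fn_rate, fp_rate in workers:
            flip = fn_rate if y else fp_rate    # this worker's chance of erring here
            row.append((not y) if rng.random() < flip else y)
        reports.append(row)                     # observed data (black in the slides)
    return truths, reports

rng = random.Random(0)
workers = [(0.4, 0.2), (0.1, 0.3)]             # two workers with different error rates
truths, reports = simulate_reports(10000, 0.6, workers, rng)
```

With many tasks, worker 0's empirical false negative rate settles near the configured 0.4; estimating exactly that quantity without seeing `truths` is what the later part of the talk addresses.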

This question has been studied in a setting called Information Elicitation with ground truth. It is about how to incentivize people, or how to price their data, when you do observe the ground truth information: you know the true label, and the question is how to price people's data, how to evaluate people's predictions.

if I know it's fake or not fake,

why do I still care about people's data.

But for some other cases,

this is a question I've been carrying a lot in the past,

fair amounts like whether Boston is going to snow or

not in the next three days.

This help people to make travel plans.

This thing, this type of questions,

you're gonna know the ground truths after three days,

so you can't come back,

price people based on these ground truths.

The solution is called Strictly Proper Scoring Rules, which have been studied for many years, probably more than 60 or 70 years.

The idea is quite elegant and straightforward: I try to find a scoring function S that evaluates people's reports using the ground truth Y, such that if I truthfully report my belief, I get a higher score than if I misreport.

Now, if we have this score, it gives the incentive property. We can use the score to pay people: if I know that truthfully reporting gives me a higher score, I have an incentive to report my data rather than sending spam.

There exist many, many scoring functions that can do this.

For binary signals, asking yes or no, this one-over-prior scoring function basically checks whether a person's report matches the ground truth or not, normalized by the popularity of the answer. This will do it.

If you're asking for more, for probabilities, how likely the article is fake, a continuous number, this Brier score will do it. Essentially, it checks how much your report missed the ground truth; it's a quadratic loss in the report. This is actually quite similar to the quadratic loss function in Machine Learning, which is a connection I'll talk about later.
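As a concrete sketch of the two rules just described, the one-over-prior rule for a yes/no report and the quadratic (Brier) score for a probabilistic report, here is an illustration; the function names and the numbers are mine, not from the talk:

```python
def one_over_prior_score(report, truth, prior):
    """Binary rule: reward a match with the ground truth, normalized by
    how popular (a priori likely) the correct answer is."""
    return 1.0 / prior[truth] if report == truth else 0.0

def brier_score(prob_yes, truth_is_yes):
    """Quadratic rule for a reported probability of 'yes'; higher is better."""
    outcome = 1.0 if truth_is_yes else 0.0
    return 1.0 - (prob_yes - outcome) ** 2

def expected_brier(prob_report, belief):
    """Expected quadratic score when the truth is drawn from `belief`."""
    return (belief * brier_score(prob_report, True)
            + (1.0 - belief) * brier_score(prob_report, False))
```

Strict properness shows up numerically: with a true belief of 0.7, reporting 0.7 beats reporting either 0.5 or 0.9 in expectation.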

>> This notation means that they are equal? Like Y_i report equals Y?

>> This is an indicator function.

>> What about the second line, is that a product?

>> No, that's over probabilities; the report is a vector of how much probability is allocated to each of the outcomes.

>> [inaudible] question, but ground truth is known here, and you take into account the fact that people just might be wrong; it might just be a [inaudible].

>> Right. Ground truth is known here.

>> The reason you have that ambiguity is because it might just be generally hard for people, is that the intuition? That just looking at whether they match is not enough, because it depends on the question that you're asking them.

>> Right. That's why you normalize by the prior. That's the popularity: if the question is extremely hard, meaning the prior is very low, you reward people more.

>> Yeah, the assumption is the world is not colluding against you, somehow.

>> Right. Yeah, that's about right.

All right.

Good. Well, the challenge is that this doesn't apply to our setting, because it's not possible to verify the ground truth.

In the labeling case, you don't know the ground truth label; for peer grading or peer review, you don't know the true quality of the article or of the homework. And if you ask questions like, will people land on Mars by 2030, you just won't know the ground truth answer until 12 years later.

Noticing this challenge, a line of research jointly called peer prediction was proposed. It's a family of mechanisms that can truthfully incentivize people to report the private signal in their mind at equilibrium. So it's a game-theoretic solution: if I believe everybody else is truthfully reporting their data, it is also in my best interest to truthfully report.

The idea is very simple. You don't have ground truth to verify the data, so you assign each data point to at least two people and use their answers to cross-validate, to cross-check each other. You reward or score people based on how each report correlates with another's. Then, under some conditions, you can prove truthful reporting is an equilibrium.
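A minimal sketch of this cross-checking idea, in the spirit of an output-agreement style mechanism (a generic illustration, not the exact scoring rule of any specific peer prediction paper): score a report by whether it matches a peer's report on the same item, normalized so that agreeing on a rare answer pays more.

```python
def peer_score(my_report, peer_report, answer_frequency):
    """Reward agreement with a randomly matched peer, weighted by
    1 / (empirical frequency of the agreed-upon answer)."""
    if my_report != peer_report:
        return 0.0
    return 1.0 / answer_frequency[my_report]

# Illustrative frequencies: 'fake' is the rarer report overall.
freq = {"fake": 0.2, "not fake": 0.8}
```

Agreeing on "fake" here pays 5.0, agreeing on "not fake" pays 1.25, and disagreement pays nothing; as the talk goes on to argue, such purely correlation-based rewards are exactly what cheap signals can exploit.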

Don't get me wrong, I really like peer prediction research, and many of its authors are my heroes. I could talk for hours about peer prediction, but since we only have half an hour left, I'm going to tell you what the caveats are and how we want to fix them.

So first off, this solution concept is built on equilibrium, meaning I need to assume everybody is rational. But in practice, when we implement this mechanism, people can easily ask: do I trust other people to follow the Nash equilibrium or not? In practice, the answer is often no.

We just heard at the lunch talk that many times people can be strategic, and they don't really follow the equilibrium strategy.

Even worse, in peer prediction there exist multiple equilibria. This creates a lot of room for strategic manipulation: in practice, people can ask which equilibrium the other people follow, and this is not easy to coordinate among a large crowd.

Instead, we want dominant truthfulness: regardless of how other people play, we want a scoring system under which it is always in your best interest to report your data truthfully, so as to reduce this cognitive reasoning load.

The second caveat is that peer prediction checks correlation; it doesn't really calibrate the value of the information. This is the reason peer prediction fails for peer grading or peer review cases.

Suppose I need to review an article by tomorrow, but I'm running out of time. Instead of reading the proofs, I can easily say: okay, it's a long article with a lot of mathematics, so the chance of it being a good article is good. I say it's a good article.

I'm not suggesting this is the right way to do it, but people can form this signal. I call it a cheap signal, because it's so cheap that I believe everybody else would observe it too. This type of signal has low quality: it doesn't reflect the quality of the article, but it has higher correlation than the real signal.

Say I read the article, I read the proof, and the proof is non-trivial. In that case, I need to believe every other reviewer also read the article, read the proof, and agrees with me that the article is non-trivial. Reporting the cheap signal is always a better strategy: it costs less but gives you higher correlation. We don't want that.

We want scoring systems that calibrate the exact amount of information in the report. There are some other caveats, but keep these two things in mind; they are the main things we want to solve in the next 10 minutes.

In the past two years, we built this machine-learning framework. This question can probably only be solved using machine-learning techniques; we don't know another way to solve it. And we built a framework that, to some degree, we proved is able to do it. We decided to take a different viewpoint compared to peer prediction, which checks the correlation.

My idea was inspired by the single equation I showed you earlier. The strictly proper scoring rule is a very beautiful equation that gives you the incentive property, but we don't have the ground truth, so it's not applicable here.

I was talking with [inaudible] and we asked: instead of knowing the ground truth, can we predict the ground truth? Can we take one step back? It's not super trivial, but the answer is partially yes.

You can use machine learning; a big part of machine learning is prediction. We know the feature vector, so suppose you can construct, can find, a classifier that maps the feature vector to a prediction of the ground truth.

The question is: can we just plug this predicted ground truth from some Machine Learning algorithm into the strictly proper scoring rule and make it happen? The answer is no, because it's not the ground truth. The predictor introduces bias, unless it's a perfect predictor, which is unlikely in practice.

The next question is: can we remove the bias? This sounds like another machine learning question; removing bias in the data is interesting to a lot of Machine Learning researchers. Can we do that?

Instead of using the strictly proper scoring rule S directly, can we find a bias removal procedure that removes this bias, so that you can roughly score as if you knew the ground truth? I can explain later, but the idea is that we want to remove the bias.

>> What's the difference between the two sides [inaudible] equality?

>> Yes. The right side is the case where you don't report truthfully.

>> So if the Machine Learning prediction is unbiased, can it just replace the ground truth? [inaudible].

>> Right. If the Machine Learning outcome is unbiased, you can just plug it in; it's very simple to show that all the results hold. But in general it will have bias.

>> So you said previously that the noise model between Y and people's Y can differ from person to person?

>> Right.

>> But then isn't it allowed that the [inaudible] people's Y could be completely unrelated?

>> Oh, good point. The question is: can people's data be completely unrelated to the ground truth?

>> Right.

>> That's the only corner case: if people's data is entirely independent of the ground truth, we cannot learn anything.

>> Right.

>> So in that case.

>> Nothing can be done.

>> Nothing you can do. That's called stochastic relevance in this setting: we have to assume the signals are at least informative in that way.

>> I see. So that comes in as an assumption.

>> Right.

Yeah.

So the goal is to have a function R that takes people's input, such that if people truthfully report, they get a higher score. But since we don't have the ground truth, we're going to use Machine Learning, whose input is whatever data you collected from other people. We don't assume people are truthfully reporting, so there's no requirement on the reporting procedure; it's not an equilibrium argument.

So I'm going to tell you how to do the Machine Learning prediction. Notice that the input is noisy; we only have information from people. How do you learn a good classifier? Then I'm going to tell you how the scoring function combines the two things.

Almost every single time I tell people about this, they say: you need to know how much noise is in people's reports, but if you don't know the ground truth, how do you learn that? I'm going to tell you how to learn the amount of error in people's reports, even though you don't know the ground truth.

For the Machine Learning prediction: for each user i, you have a set of training data, which I denote as (X_j, Y_j^report). This could be very noisy, because it's simply not the ground truth; conditional on the ground truth, people's reports can be very different from it. Let's characterize this with two parameters, which I call e_plus and e_minus: the false negative rate and the false positive rate.

Doing risk minimization directly does not work: if you just run a regression or classification training procedure over this data, it's not correct, because Y^report is simply not Y; it's a noisy copy of the ground truth. You're minimizing, you're generalizing over, the wrong loss function.

So the idea is to define an unbiased surrogate loss that removes the noise. This is an idea that has been around among Machine Learning researchers for some time.

Instead of the loss L, define a surrogate loss function that takes the noisy report as input, such that when you take the expectation, it equals the true loss; it's unbiased in that way. Even though you don't know the ground truth label, you have an unbiased estimate of the loss.

Then, instead of doing risk minimization, you do surrogate risk minimization, and by the law of large numbers, the empirical sum converges to the expectation, which is equal to the true loss, okay?

It's very simple. Now, I'm showing you a lengthy equation for a few purposes; you don't need to read the equation.

First, it's doable: this is just one example, and there exist several other surrogate loss functions that can remove this bias.

Secondly, the definition of the surrogate loss depends on the error rates. You need to know the error rates. Makes sense? You are removing the noise, so you have to understand how much noise is in the data; otherwise it sounds like too much magic.

Sorry, a third point; I said two, but there's one more, and it goes back to Nan's question: what if the data is entirely irrelevant? That corresponds to the condition e_plus + e_minus = 1, where this surrogate is not well-defined. So in that corner case you couldn't learn anything.
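The "lengthy equation" is, in spirit, the unbiased surrogate loss for class-conditional label noise (the construction popularized by Natarajan et al.); the sketch below is my reconstruction under that assumption, with e_plus and e_minus as the two flip rates and labels in {+1, -1}:

```python
def surrogate_loss(loss, pred, noisy_label, e_plus, e_minus):
    """Unbiased surrogate: averaging over the label noise recovers
    loss(pred, true_label).

    e_plus  = P(noisy label = -1 | true label = +1)   (false negative rate)
    e_minus = P(noisy label = +1 | true label = -1)   (false positive rate)
    Requires e_plus + e_minus != 1 (the uninformative corner case).
    """
    denom = 1.0 - e_plus - e_minus
    if noisy_label == +1:
        return ((1.0 - e_minus) * loss(pred, +1) - e_plus * loss(pred, -1)) / denom
    return ((1.0 - e_plus) * loss(pred, -1) - e_minus * loss(pred, +1)) / denom

def sq_loss(pred, label):
    return (pred - label) ** 2

def expected_surrogate(pred, true_label, e_plus, e_minus):
    """Expectation of the surrogate over the noise on the label."""
    if true_label == +1:
        return ((1.0 - e_plus) * surrogate_loss(sq_loss, pred, +1, e_plus, e_minus)
                + e_plus * surrogate_loss(sq_loss, pred, -1, e_plus, e_minus))
    return ((1.0 - e_minus) * surrogate_loss(sq_loss, pred, -1, e_plus, e_minus)
            + e_minus * surrogate_loss(sq_loss, pred, +1, e_plus, e_minus))
```

A quick check confirms unbiasedness: for any prediction, the expected surrogate under the noise equals the clean squared loss, even with an error rate above one half, as long as e_plus + e_minus differs from 1.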

>> [inaudible] is it even possible to do this?

>> Right. So why is it even possible? The idea is.

>> He's asking why it is possible.

>> Why it's possible to do.

>> In the non-corner case.

>> So I guess the question is why it is possible to remove the bias.

>> I don't know if it's right, but it sounds like you reduced the problem to estimating some bias and some noise, which itself seems like it could be as hard as the general problem, right?

>> Yeah.

>> So I just want to know why it somehow got easier.

>> So why can we estimate the noise?

>> Yeah.

>> Right, I can tell you; that's the last part of this section. Can you bear with me for now, and suppose you can learn the error rates?

>> Okay, as long as it's a lot easier than the original thing you are trying to do.

>> Yeah, great. So suppose for now that we know the error rates; you need to know the error rates in order to do this learning. Go ahead.

>> Is there an assumption that the error rates are less than a half?

>> Good point. So, is there an assumption that the error rate is less than half? It's on the next slide; it doesn't need to be. We need to know the error rates, but to make life easier, we first show that you don't need to know the exact error rates; you only need a noisy estimate of them. Makes sense? You just do a sensitivity analysis. Secondly, even if the error rate is more than half, you can still learn it.

>> You just [inaudible]

>> Very intuitively. So if you trust me, I'm going to tell you how to learn the error rates.

Now, you have a report and you have a prediction. With a lot of data, the prediction converges to the optimal classifier, which we assume is informative in some way. So the next question is how to combine the two numbers.

I'm going to give you a high-level intuition of what we are doing and why we think this result is important. There are two areas of study: information elicitation using scoring rules, and Machine Learning.

The left-hand side, information elicitation, has been studied for a longer period of time, and it maybe has a richer literature. Machine Learning's recent growth is much faster, but in terms of time horizon, the elicitation question was studied much, much earlier.

People have been wondering about the connection between these two literatures, because both study, to some extent, the evaluation of information: a scoring function evaluates information from people, while Machine Learning evaluates the information from a classifier, how well the classifier performs.

In the ideal world where we know the ground truth, this connection has been established: to a large degree, scoring functions are equivalent to loss functions. If you remember the Brier score, the quadratic score I showed you earlier, it corresponds exactly to the quadratic loss in Machine Learning.

This is good because now, whatever scoring function you have in that literature can be transformed into a loss function in Machine Learning; that's why this result was exciting for people.

But in the cloudy world, we don't know the ground truth. Is there still a connection? We have a scoring function that scores people without the ground truth; we have a loss function that does Machine Learning without a perfect label. Do the two connect with each other?

We didn't solve everything, but we gave preliminary evidence showing that there is also a connection. Formally, we proved that a surrogate loss function can serve as a scoring function, giving [inaudible].

You have a scoring function that takes as input the report and the classifier; you have a surrogate loss function that takes as input the classifier and the noisy label, since you don't have the perfect label. The classifier stays the same, and the report can be reinterpreted as a noisy ground truth. We actually prove that maximizing the reward equals minimizing the loss; the only thing you need to play with is these two parameters.

Truthful reporting gives you the optimal loss, because the bias is removed and the classifier converges to the optimal classifier. Any deviation gives you a suboptimal loss, which is something you can prove, and that finishes the proof.

>> So you are particularly [inaudible]

>> The question is what [inaudible] I'm looking for; it's very general. It just needs to be a classifier [inaudible]. It doesn't need to be [inaudible]

>> Excuse me, what's the expectation?

>> The expectation is taken over people's subjective belief of the noise in this noisy label.

>> Marginalizing S or conditional?

>> Marginalizing [inaudible]? Good point.

So when those current row has been studied.

There are many different families scoring

rules to elicit different type of

information in the ideal world when you have

the ground-truth but again the cloudy world,

we don't have the grand-truth there's

relatively less results to be known and [inaudible] scoring

rule really gives you a bridge that

shapes every single scoring function

here to the noisy world.

It doesn't depend on the specific form of the scoring function.

How to do it is very simple. For example, find a noisy ground-truth: either ask people or use Machine Learning. In the lengthy equation I showed you earlier, replace the loss function with a strictly proper scoring function.

It removes the bias, and now the two properties we were aiming for can both be achieved, because we know that under a strictly proper scoring rule it is always best to report truthfully. This is true here, and also reporting a low-quality signal is not good. The proof is very intuitive: a strictly proper scoring rule rewards accuracy, so the more accurate you are, the higher the score you get, but it's all based on this single equation.
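Sketching that construction in my own notation: plug a strictly proper scoring rule (here the negative Brier score, so higher is better) into the same bias-removal weights. Because the corrected score is unbiased, the report that maximizes a forecaster's expected score is exactly their true belief, even though only a noisy reference label is used:

```python
def neg_brier(report, y):
    """Strictly proper scoring rule: negative Brier score (higher is better)."""
    return -(report - y) ** 2  # y is the 0/1 outcome

def surrogate_score(report, noisy_y, e_plus, e_minus):
    """Score a report against a noisy reference label, removing the bias
    with the flip rates e_plus (true 1 flipped to 0) and
    e_minus (true 0 flipped to 1)."""
    if noisy_y == 1:
        num = (1 - e_minus) * neg_brier(report, 1) - e_plus * neg_brier(report, 0)
    else:
        num = (1 - e_plus) * neg_brier(report, 0) - e_minus * neg_brier(report, 1)
    return num / (1 - e_plus - e_minus)

def expected_score(report, belief, e_plus, e_minus):
    """Expected surrogate score when the event happens with probability
    `belief` and the reference label is flipped at the given rates."""
    p_noisy_1 = belief * (1 - e_plus) + (1 - belief) * e_minus
    return (p_noisy_1 * surrogate_score(report, 1, e_plus, e_minus)
            + (1 - p_noisy_1) * surrogate_score(report, 0, e_plus, e_minus))

# Truthful reporting is optimal: with belief 0.7, a grid search over
# reports is maximized exactly at 0.7.
best = max((r / 100 for r in range(101)),
           key=lambda r: expected_score(r, 0.7, 0.2, 0.1))
assert abs(best - 0.7) < 1e-9
```

This is only a toy instance of the idea; the talk's point is that the same recipe works for any strictly proper scoring rule, not just the Brier score.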

>> Sir, in the denominator you still have P

plus one minus one? [inaudible]

>> It doesn't depend on that; this depends on the outcome, like E plus and E minus, so both of them are there.

Instead of proving all the properties of it, I can assure you, you will like the project, a very exciting project to me. Dave mentioned earlier, this is the Hybrid Forecasting Competition.

We got like point seven k participants,

we are hoping to build

a platform that will be incentivizing people to come,

make a forecast about

some geopolitical events like whether

Bashar will cease to be the president of

Syria before January 2019.

So this is the system we built; it's very rough, and it's more complicated than this, but the idea is we want to have a platform where people come. We raise a question, they form their belief about how likely this is going to happen.

They tell us the forecast, we do aggregation, and this aggregated prediction will be validated. The Machine Learning part helps us do the aggregation; we don't want to just add up people's predictions.

But there's also another Machine Learning block,

after we evaluate people's answers

we want to do recommendation; we want to say which people are better at which questions. So, like how collaborative filtering works, we want to push the most relevant question to the most relevant people.

So first of all you need to incentivize people to come, so the surrogate scoring rule can be applied here to score people over time.

They don't need to wait until the last day to see

their scores, and because the surrogate score is unbiased in expectation, it can actually serve as calibrated weights to do aggregation.

Over time you're going to know whose data is more informative than others, so you can use this as weights to do the aggregation. This feedback loop was not possible without our surrogate scoring rule, because the collection of [inaudible] data is really slow; you need to wait until 2019 to see the performance of people. But now the scores are unbiased, so you can give some unbiased feedback before the last day, to make it happen.

Right, so now I'm going to tell you how to learn the error rates in order to do the scoring.

>> So you may answer

this later but there is something I don't

understand like the example you

gave where like you want to review a paper,

it's really long or you actually read

the proofs so suppose

the Machine Learning classifier that

I'm using is like overly simple

it's like a decision stump or something and

the only feature it gets to look at is

the length of the articles so then it

seems like you're going to basically

really reward people who agree with the [inaudible].

>> Right.

>> Class.

>> Good, that's a good question.

So the question is like what if

that Machine Learning classifier is simply

rewarding people according to the length of the article,

in that case the scoring system will

fail so you got to like make sure that

Machine Learning classifiers are

complicated enough so people don't really know which feature is playing what role in this classifier.

If I know like the exact weights in

the classifier then I can just

report according to classifier,

I don't need to do my work. That's a great question.

So I need to make it so everything's [inaudible] meaning like,

[inaudible] I don't have a specific belief about the classifier.

So I just take, I only know

the classifier's performance in general, but I don't know the specific dimensions of this classifier.

All right, so how do I learn error rates like E plus and E minus in people's reports even though you don't know the ground-truth?

Again, a disclaimer: we thought this was new, but after [inaudible] attempts at this target, some statisticians said it's actually quite similar to some higher-order moment-generating-function type of work, but let me still talk about it anyway.

So people give me yes or no; you don't know how many times they make a mistake, but there's one quantity you can know: how often they give you a plus, which you can observe.

This number can be written as a function of E plus and E minus. The first term, p, is the prior: among all the articles, how many of them are fake, like 50 percent are fake or 50 percent are not. With probability p, the article is a plus, and with one minus E plus you got it correctly, so you report plus; with probability one minus p it is a minus, and with E minus you got it wrong, so you still report plus. We have this equation.

You can stack up another equation, saying now I'm asking two people: how often they agree with each other on the plus label. This is also observable, and the first term is simply p times both people getting it correctly, plus one minus p times both people getting it wrongly. You have a second-order equation; now I have two equations and two [inaudible] variables, and can potentially solve it.

>> It has the same E plus and E minus?

>> Great, they do have the same E plus and E minus, as averages of the crowd. That's why it's an approximation: if you draw two people, the average differs by one person, but if you have n people, the difference is only one over n. So, I'm going to show you some concentration bounds.

But I have two solutions, because there are two roots for this equation. I thought that we couldn't tell the day from the night, but it turns out if we just add how often three people agree with each other, this gives you a third matching equation, and that will do it. Meaning the three matching equations uniquely define the error rates, even though the majority of people got it wrong. In this example, say 75 percent of the time people got it wrong; we can still learn that the majority of people are wrong with these equations.
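A back-of-the-envelope sketch of that matching idea in my own notation (not code from the talk). Let m1, m2, m3 be the observed rates at which one, two, and three independent reports say plus; each satisfies m_k = p*a**k + (1-p)*b**k with a = 1 - E plus and b = E minus, and the three moments pin down all three unknowns:

```python
import math

def recover_rates(m1, m2, m3):
    """Recover (p, e_plus, e_minus) from the first three 'agreement
    moments'  m_k = p*a**k + (1 - p)*b**k,  where a = 1 - e_plus is
    the accuracy on true-plus items and b = e_minus the false-plus
    rate.  The moments satisfy the linear recurrence
        m2 = S*m1 - P    and    m3 = S*m2 - P*m1,
    with S = a + b and P = a*b, so a and b are the two roots of
    x**2 - S*x + P = 0.  Here we label the larger root as a, i.e.
    we assume reports are positively correlated with the truth."""
    S = (m3 - m1 * m2) / (m2 - m1 ** 2)
    P = S * m1 - m2
    disc = math.sqrt(S * S - 4 * P)
    a, b = (S + disc) / 2, (S - disc) / 2
    p = (m1 - b) / (a - b)
    return p, 1 - a, b

# Ground truth p = 0.4, e_plus = 0.3, e_minus = 0.2 gives moments
# m1 = 0.4, m2 = 0.22, m3 = 0.142, which the solver inverts.
p, e_plus, e_minus = recover_rates(0.4, 0.22, 0.142)
assert abs(p - 0.4) < 1e-6
assert abs(e_plus - 0.3) < 1e-6
assert abs(e_minus - 0.2) < 1e-6
```

The labeling of the two roots is a modeling choice on my part; the talk's own solution may resolve the two-solution ambiguity differently.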

You can imagine that when you can learn more and more accurately, the truthfulness I showed you earlier for the clean case is well preserved in the noisy case.

We recently have some results showing that by combining the three equations with Bayesian Inference you can learn the rates a lot faster, in a more complicated way.

We also had some real experiments on Amazon Mechanical Turk, but I guess I'll skip that. The idea is that the surrogate score is unbiased in expectation, but people may be interested in how fast it converges to the truth, how much the variance affects the scoring. Roughly, with more than 500 data points it converges to a kind of safe region of the scores.

So the blue curve is the true score, as if we were using the ground-truth, and the red curve is the surrogate score, which is using the noisy ground-truth.

So, running out of time, I'm going to briefly mention a sequential data setting, because in the one I talked about earlier you have a lot of data, you train the classifier, you score all the people at once. But what if the data arrives sequentially? This breaks the assumption that you have enough data to get a good classifier, so how do you do this?

The idea is that instead of scoring, we can use another dimension of incentive, which is called dynamic assignment. Instead of paying people the money they deserve for exerting effort, we can keep a fixed payment, but your future assignment, whether you're going to get the job or not, depends on your past work quality.

If your quality is higher,

you'll be more likely to be given

the task which gives another dimension of the incentive.

I'm going to omit all the details, but the goal is that you have data to label, it arrives sequentially over time, and we want to incentivize people to give good effort.

Time is discretized like t equal to one to

capital T. The goal is to maintain

a reputation score for each of the workers, so we know who is roughly doing

better than others and each time we're going to

select a worker or

some workers according to the reputation score.

So this will give the incentive.

If I know my reputation score reflects my qualities,

if my reputation score is higher,

I will be more likely to be selected.

I found this question very similar to a question that is called the Multi-armed Bandit: you have options, you don't know their qualities, and you want to select the best one over time. This is the classical question of exploration versus exploitation.

There exist many solutions, but this one looks exactly like what we want, because this UCB solution says you give each option an index that consists of two parts. The first part is the empirical reward from selecting this option, which is exactly the reputation score if you imagine the option is a person, and the second term is a confidence bound evaluating how confident you are in this estimation.

So this looks just like a reputation score. We know it has (log t) regret. We have lots of experts in this room, so I'm not going into details.
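For readers outside the room, here is a textbook UCB1 sketch (my own toy example, not the paper's scoring-rule variant): each arm's index is its empirical mean reward, the "reputation," plus a confidence bonus that shrinks as the arm is sampled more, which is what yields the logarithmic regret just mentioned.

```python
import math
import random

def ucb1_select(t, counts, means):
    """Pick the arm maximizing  empirical mean + sqrt(2 ln t / n_i).
    Unsampled arms get an infinite index so every arm is tried once."""
    def index(i):
        if counts[i] == 0:
            return float("inf")
        return means[i] + math.sqrt(2 * math.log(t) / counts[i])
    return max(range(len(counts)), key=index)

def simulate(success_probs, rounds, seed=0):
    """Run UCB1 against Bernoulli arms and return per-arm pull counts."""
    rng = random.Random(seed)
    counts = [0] * len(success_probs)
    means = [0.0] * len(success_probs)
    for t in range(1, rounds + 1):
        i = ucb1_select(t, counts, means)
        reward = 1.0 if rng.random() < success_probs[i] else 0.0
        counts[i] += 1
        means[i] += (reward - means[i]) / counts[i]  # running average
    return counts

# The better arm (0.8 vs 0.2) ends up pulled far more often.
pulls = simulate([0.2, 0.8], rounds=2000, seed=1)
assert pulls[1] > pulls[0]
```

The scoring-rule-based variant discussed next replaces the raw reward with a surrogate score computed from other workers' data, which is what couples the arms together.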

However, if we want to do this, there's a challenge. If we take each person as an arm, we want to select multiple of them, and there's the question of how you evaluate the quality of the people, because we don't have the ground-truth label, and how much effort people put in is hidden; you don't know. You only have partial observations, you don't know the ground-truth, and the qualities are strategically decided.

There are multiple challenges in applying the bandit setting, but I told you how to evaluate the data when you don't know the ground-truth.

So we study this scoring rule-based UCB.

It's very simple: just replace the reward using the scoring function R I showed you earlier. Now, each person's data is going to be evaluated by the scoring function with another machine learning model. In the paper, we studied a regression model, so in this case it's just a regression parameter we learn from other people's data.

Was there a question?

Okay. The idea is straightforward, but there's a technical challenge, because now each person's index depends on other people's data, since you are using other arms' data to make the evaluations. So there are multiple learning processes coupled with each other, which gave us a technical challenge, and each arm's observation is going to affect other arms, so there's another level of challenge. Last but not least, I don't have time to go through the proof, but agents play the optimal effort levels that you have in mind.

It's going to be an approximate Bayesian Nash Equilibrium, meaning they can manipulate this long-term system, but they will not gain too much, because the gains are at most bounded by square root of log T over T, which goes to zero as T grows large.

In some measure, we extend this bandit setting into a partial observation setting with strategic people; that is a contribution we found to be interesting.

We have non-linear regression Machine Learning algorithms, simple machine learning models that will be covered by these results too, and we also have a privacy-preserving setting, because, talking to some people, this could be useful in terms of building a reputation market. People are saying, but you are using other people's data to score each of the agents.

People may infer how other people are doing, like information about other people, but we can do a privacy-preserving setting for that.

So I have done

some other works like algorithmic

fairness I mentioned earlier.

This work is again in a sequential decision-making setting. People come, like me: I come with my resume looking for a job, and how do you treat these people fairly in terms of their true qualities?

We are also studying, in this project at Harvard, when we're training a linear classifier, how to add fairness constraints to the training procedure.

We built a system to forecast data breaches; it's actually running, like a credit score system. It's a measurement study: we are measuring the risk behavior of different organizations and trying to build a scoring system.

This is quite interesting to me: we have the wisdom of the crowd, but can you get a better wisdom of the crowd, like from the big crowd, learn a smarter but smaller crowd from this data?

It's my PhD adviser at Michigan.

I know myself, it's not my decor.

I want to build

a computational framework with Society-in-The-loop.

I want to quantify,

I am interested in Machine Learning but

I'm also super interested in human intelligence.

At the same time,

I want to address societal issues like incentives, fairness, and security; these are all relevant.

As for my particular interest, I study a lot of sequential work, so I care about consequences instead of just outcomes.

I mean I care about outcome too

but I'm more interested in how the outcome

today will affect the evolution

of the system or the consequences tomorrow.

I can use two minutes to go through some typical topics. One I am really excited about right now is to use human intelligence, especially where the wisdom of the crowd can fail: how to use the power of people's data with Machine Learning to detect fake news.

There's a recent article called "The Science of Fake News." I am quoting a sentence saying, "People need to be incentivized to give you informative answers." I see one of the authors in the room, so there is an article; let's do that.

Also, I will study behavioral models, as I was arguing earlier.

Nobody's really fully rational, and nobody really believes everybody else is fully rational, so what would be a good behavioral model to start with, to revise managerial decision-making systems?

I don't have time to talk about the figure, but some experiments I did many years ago show that people don't really react to money in the rational way, so I want to study how people react to different actions.

There's another project we are doing,

we call it the Fairness Machine, with the Moral Machine project at MIT.

We're gonna study, since there are 21 definitions of fairness in Machine Learning, which fairness definition is more fair.

I want to understand the social norm.

We are crowdsourcing this task. We are translating the definitions into different problem settings and asking people to pick which one is more fair compared with the others, and we're going to do aggregation to show what the social norm is in some sense.

I want to study safe exploration for autonomous systems. Suppose you are building a robot using Reinforcement Learning. We don't want a robot that learns, when delivering a package, that the best way is to throw the package and claim it delivered; that would be too bad for us. Also, in a self-driving car, you don't want the AI to say, "Okay, in an emergency, I should save myself rather than the driver and claim I tried to brake but it didn't work." So how do I be careful in terms of putting a safe objective function in the Reinforcement Learning setting, or any autonomous learning setting?

All right, I just want to acknowledge my advisers and coauthors. That'll be all.

Again, I will be here a day and a half so feel free to

ask me questions. Thank you.

For more information >> Incentivizing Societal Contributions for and via Machine Learning - Duration: 57:06.

-------------------------------------------

Kids Suing Gov. Rick Scott For Inaction On Climate Change - Duration: 11:42.

Last week, a group of children and young adults, represented by a group called Our Children's

Trust, filed a lawsuit against Florida governor Rick Scott.

The lawsuit claims that the governor's inaction on the issue of climate change has left huge

areas of the state, including some of the most heavily populated areas, vulnerable to

rising sea levels that are already threatening areas like Miami and Key West.

This lawsuit represents the latest round of litigation aimed at forcing politicians from

the state to the federal level to finally take action on the issue of climate change.

And a good portion of these lawsuits are being brought on behalf of the youngest generation

of American citizens.

Our Children's Trust is also representing young Americans in lawsuits in the Western

part of the United States, and areas where the rising sea level will present a massive

challenge to survival for this upcoming generation.

But it isn't just coastal areas that are joining the legal fight against climate change.

Inland states are also feeling the effects of endless fossil fuel emissions, and they

too have decided to take some of the largest polluters in this country to court.

From Colorado to Florida, Oregon to New York, and everywhere in between, climate change

is a threat to our very existence.

The science has told us this, the internal documents uncovered from oil companies have

told us this, and the majority of people in this country, regardless of party affiliation,

all agree on this.

It seems the only people who want the US to continue doing nothing about the issue are

the people who are personally profiting from it, be it financially or politically, from

doing nothing about the issue of climate change.

Joining me now to discuss this issue is Holland Cooke, host of the Big Picture, right here

on RT America.

So Holland, let's start with the state of Florida here, what can you tell us about the

lawsuits against our governor, Rick Scott?

Well, I think it's dramatic that teenagers are doing this, particularly on the heels

of this Parkland shooting thing, where the kids have carried the message forward as dramatically

as they have.

And somebody's got to, if the grownups can't fix this, the kids are going to have to, because

it's their future we're dealing with.

And you hit a nerve when you talk about the coastal areas, because you've tracked me down

where I live, in Rhode Island, the ocean state, which is also home to the USA's first off

shore wind farm.

We take this coastal area stuff very seriously here, and less any of your viewers think that

this is alarmist rhetoric from treehuggers, I refer you to the Wall Street Journal, no

lefty rag, and the headline says "The Rising Seas Are Reshaping The Miami Home Market."

So, few areas are going to be impacted less than where you are in Florida.

These suits being filed in the state of Colorado, these are slightly different.

Both a different argument, a different group bringing the suits.

So what can you tell us about the issues in Colorado?

They're not dealing with rising sea level, so what is their grievance here against these

oil companies and politicians?

Yeah, there's two wrinkles to the Colorado story that are different from Florida, and

I've been following this fascinated because of what you said, it's not coastal, it's a

mile high.

But climate change affects them too, ecosystems, agriculture, certainly tourism, the ski industry

is huge in Colorado.

And in the next few decades it's been estimated, from what I've read, that the economic impact

could be like $100 million or so.

And the other difference between the Florida action and the suits in Colorado is in Colorado

it's communities, municipalities, filing suits against the fossil fuel companies about the

impact of climate change.

City and county of Boulder, county of San Miguel are proceeding against ExxonMobil and Suncor.

So this is the first climate liability suit that has been filed outside a coastal area.

So these are both fascinating developments to watch.

You know, earlier this week I saw that Fox News tweeted out a story talking about the

fact that oh, if we allow these lawsuits to go through, it's going to be damaging for

the oil companies.

But recently, documents were released showing that the Shell oil company knew about the

dangers of climate change as well their role in it since at least the 1980's, which is

actually similar to the documents from a few years ago that show that Exxon was also well

aware of the threat of climate change long before the public understood the risk.

So, do you think that these new documents that we've got could help the courts force

these companies pay up for the damage they've caused?

Hey, let's hope so.

Remember the suits from the cigarette companies said "We don't cause cancer."

What do you expect to hear from the oil companies?

Don't go throwing any dinners for them, because they are taking plenty of money from Uncle

Sam.

What they should be investing in, instead of oil, is energy.

This is the fastest growing sector in our entire employment landscape, renewable energy.

And there's a lot to be done there, rather than leaving our sustenance to the whim of

geopolitics and weather and all the market manipulation that the oil companies themselves

are suffering.

So, I think it's a wall of noise, and that these lawsuits, particularly because they

are being brought by children, do have a hope of attracting attention.

And what's really interesting about these lawsuits too is in the past I've had a lot

of people ask me "Why can't we just sue these people for causing climate change?

Why can't we do this?"

Well, it's because you've got to have standing.

You've got to be able to show that the actions of company or person X affected person Y with

result of Z.

But these children, because the science has evolved so much in the last few years, they're

able to do that.

They're the ones who are going to be affected the most by this, and they absolutely do have

standing.

But, because of the inaction in DC, under both democrats and republicans here, are these

lawsuits the last best hope that we have to enact any kind of meaningful climate policy

here in the US?

I hope not last best, but you put your finger on something when you talk about the students.

As we've seen after the Parkland shootings, they attract coverage, and we've got a message

that has to come out.

So I think that that is fundamental to the effectiveness of this effort, is that we're

finally listening to reason.

You know, when you talk about the children here, a lot of these lawsuits are being brought

by children, Our Children's Trust is representing them.

We're seeing the same group take the lead on the issue of gun control, as you pointed

out.

So seeing how energized and mobilized this young generation has become, should we be

hopeful about the future?

Is there going to be real change on any issue here in this country any time soon because

of what these children are doing now?

Yes, soon, for fairly undramatic reasons.

As a consultant, I've learned to use the word demographics carefully, because people start

yawning, but all these kids are now turning voting age.

So they have the ultimate power, and in the mean time, as you know, when these lawsuits

go forward, we have discovery.

We have request for production of documents.

We have depositions.

So the oil companies can't hide forever.

This stuff is going to be on the record.

And it's really great too that on the environment issue, on the gun control issue, it's not

just going out there and saying "All right we're going to sue this group," or "We're

going to have one march."

These groups, and on several other issues, have become active, they've stayed active,

and they're not going away.

They're not letting go of these fights, because they understand on every one of these issues

that they're fighting for, whether it's an increase in the minimum wage, environment

protection, gun control.

These are issues that affect them for the rest of their lives.

They can't let this go, because this is their future.

They're trying to build a better country than what we're handing to them as a new generation

of voters.

And you really can't talk though about environment policy today, without bringing up the EPA

administrator, Scott Pruitt.

Last week we had more than 100 members of congress come out and call on Scott Pruitt

to resign, but not one single republican actually joined in those calls for him to resign last

week.

What's your thoughts on this entire Scott Pruitt thing?

I mean this is getting to the level of just being absurd at this point.

Sword of Damocles.

And Trump plays this like a Stradivarius.

When it's handy for him to divert attention, he's already got something on Ben Carson.

When it's handy for him to divert attention again, he's already got something on Scott

Pruitt.

He nominates somebody to run the VA, turns out there's some issues with him.

It's all about watch the other hand, because Donald Trump is a genius at changing the subject.

So if I'm Scott Pruitt I am not, as they say, buying any green bananas.

So, I think part of the issue with the Scott Pruitt, all of his spending scandals, everything

like that, the republicans, and the Trump administration, they don't want to get rid

of him, they don't want to do anything about all of these spending scandals, even though

it's the party of the fiscal responsibility people.

They don't want to do anything about it, because Scott Pruitt is still enacting their agenda.

This is the Koch agenda, the Exxon agenda, the Chevron agenda.

As long as he does that, it doesn't matter what kind of scandals he's involved in, right?

They're going to go along with it because he's doing the things they've always wanted

to do.

Until he is politically disabled, or it is opportune for the president to, as I say,

divert attention and make a change.

The swamp is deep there, and when it's handy, he'll be gone.

What's really interesting about this is that the only swamp that these republicans in power

today seem to want to protect is the one that Trump himself is creating in the White House.

There was no drain the swamp that actually happened, you pointed out the fact that Dr.

Ronny Jackson, we're now finding out from the senate committee, trying to look into

his background, that oh, he liked to get a little tipsy while at work and overmedicate

people, over prescribe drugs.

Yeah, the pills.

And that's what we've come to expect from DC today.

It is a caricature of itself, and unfortunately I don't see this ending any time soon.

But anyway, Holland Cooke, host of the Big Picture here on RT, thank you very much for

joining us today.

For more information >> Kids Suing Gov. Rick Scott For Inaction On Climate Change - Duration: 11:42.

-------------------------------------------

DA Steele Applauds Andrea Constand For Her Courage In Cosby Retrial - Duration: 1:44.

For more information >> DA Steele Applauds Andrea Constand For Her Courage In Cosby Retrial - Duration: 1:44.

-------------------------------------------

Erasa XEP 30 Rejuvenation Serum for Line Lifting and Crow's Feet - Duration: 1:07.

Erasa XEP 30 Rejuvenation Serum for Line Lifting and Crow's Feet

✔ ERASE ALL VISIBLE SIGNS OF AGING: Clinically proven to erase visible signs of aging and reduce wrinkles, frown lines, expression lines, crows feet, age spots and dark circles.

✔ IMPROVE SKIN TONE, TEXTURE, ELASTICITY AND BRIGHTNESS: Developed by the most distinguished scientific team in the industry and backed by independent clinical testing.

✔ ERASA SCIENCE: Scientifically designed to mimic snail cone venom to safely and effectively smooth facial, forehead, eye and smile lines, creases and wrinkles.

✔ SIGNIFICANTLY REDUCE WRINKLES: 64% Average Wrinkle Reduction | 90% Wrinkle Reduction On Some Subjects | 64% Reduction Of Under-Eye Front Wrinkles | 86% Pore Size Reduction | 99% Shine Elimination.

✔ RESULTS AFTER ONE USE: Unlike creams and serums, the Erasa concentrate works after just one application. Your skin will appear firmer, more uniform and brighter while wrinkles, fine lines, age-spots and dark circles will begin to disappear.

For more information >> Erasa XEP 30 Rejuvenation Serum for Line Lifting and Crow's Feet - Duration: 1:07.

-------------------------------------------

Setti Warren drops out of race for Massachusetts governor - Duration: 0:22.

For more information >> Setti Warren drops out of race for Massachusetts governor - Duration: 0:22.

-------------------------------------------

DA: Kevin Steele: 'Andrea Constand Came To Norristown For Justice' - Duration: 0:50.

For more information >> DA: Kevin Steele: 'Andrea Constand Came To Norristown For Justice' - Duration: 0:50.

-------------------------------------------

Jeep Wrangler Eibach Pro-Truck Sport Heavy Duty Rear Shock for 0-2" Lift (2007-2018 JK) Review - Duration: 4:16.

The Eibach Pro-Truck Sport Heavy Duty Rear Shock is for those of you that have a 2007

and up JK with zero to two inches of lift, that are looking for a very high quality shock

absorber that's going to give you a pretty comfortable ride.

Now shocks in general can appear to be a little bit of a magical black box.

Not a lot of people know exactly how they work on the inside, how to purchase one over

the other, how long a shock you need, and a lot of those things are taken care of for

you when you purchase a shock from a high-quality company like Eibach.

They've been around for a long time, they've made a lot of different suspension components,

for a lot of different vehicles and this is going to be a shock that will definitely fit

if you have zero to two inches of lift, and is going to give you a comfortable ride because

it is designed for the JK.

This is something that will install very easily on your Jeep.

Definitely a one out of three wrenches.

You really just have unbolt the bottom and the top of your factory shock, remove it and

install the new one.

You may have to pop the tire off to give you a little bit more access to that area, but

still overall very easy install, and we'll talk a little bit more about that in just

a second.

There are a couple of factors you're going to wanna look for when purchasing a shock,

and one of them is definitely going to be the build quality, the construction of the

shock.

A shock absorber will take a beating, especially if you're somebody who's doing some off-roading.

Maybe you're doing a little bit of off-roading at a higher speed, or of course, just driving

down the road and hitting those potholes, your shocks will take a beating.

And if you wanna make sure you're not gonna end up with one that's blown out or bent,

you want something that's going to be a high-build quality, and that's what you're going to get

from Eibach.

On top of that, you want something that's going to ride comfortably, and that's what

I mentioned before.

This is spec'd from Eibach for the weight of the Jeep. The sprung and the unsprung weight and the spring rate and all of those things are taken into account when a company

sells a shock for a specific application.

So, you know that this is going to be where will you need it to be from a valving perspective

in order to give you a nice comfortable ride.

Now, there are really two main categories of shocks when you're shopping for shocks

that you'll find.

One is a hydraulic shock and the other is a nitrogen-charged shock.

Now this is a nitro-shock.

And what it means is that it has a nitrogen charge inside of it and that nitrogen charge

is there to eliminate any foaming, bubbling, cavitation that can occur inside of the shock

fluid.

When you get all of those things you can end up with shock fade, especially when you're

working the shocks very hard.

And the nitrogen charge inside the shock is there to help eliminate some of that.

In general, a nitro-shock will ride a little bit stiffer than a hydro-shock but at the

end of the day, the valving of the shock has the most to do with the ride quality and the

comfort that you're going to get.

And again, from Eibach you can expect a nice, comfortable ride.

So to get this installed is definitely a one out of three wrench install.

Only about an hour or so to get a pair of these installed, but as with anything on your

Jeep, or really any vehicle, that will depend on rust.

If you have some rust, it's gonna take you a little bit longer.

So, go ahead and spray all of the nuts and bolts that are associated with this install

well ahead of time with a good penetrating oil, and that will speed things along.

So at the bottom you're going to have one bolt, up at the top you're going to have two.

Once you remove those three bolts you can remove your factory shock completely and install

your new Eibach shock.

This shock's gonna run you right around $100.

And that is going to be in the middle to upper end of shocks.

You can find shocks that are going to be significantly less expensive, but again, you're not going

to get the same build quality, you're also not going to get the same ride comfort.

And you can find shocks that are significantly more expensive.

A lot of those are going to have additional features.

They may have adjustable valving, they may have a reservoir so that they can have some

more fluid in there, a lot of different options that you can get when you get into some of

those higher-end shocks that are going to be significantly more expensive.

Overall, for the quality of the shock and for the ride quality you can expect from this

Eibach shock, I think $100 is a very fair price.

So if you're looking for a quality shock at a fair price, I definitely recommend taking

a look at this one from Eibach, and you can find it right here at extremeterrain.com.

For more information >> Jeep Wrangler Eibach Pro-Truck Sport Heavy Duty Rear Shock for 0-2" Lift (2007-2018 JK) Review - Duration: 4:16.

-------------------------------------------

Five Little Ducks Funny Song For Kids by SingingKittens - Duration: 3:35.

Five Little Ducks Went Out One Day Over The Hills and Far Away

Mother Duck Said Quack Quack Quack But only four little Ducks Came Back

For more information >> Five Little Ducks Funny Song For Kids by SingingKittens - Duration: 3:35.

-------------------------------------------

Macron and Trump: An inspiration for all of us? - Duration: 2:46.

Emmanuel Macron was visiting Donald Trump in Washington and all we are talking about is the body language.

For example how they are cleaning each other's shoulder.

Donald Trump does it with the index finger and he appears a little bit dominant.

Why is he appearing dominant?

That reminds us of primates. If the alpha gorilla is supported by a lower standing gorilla during a fight,

the alpha gorilla shows gratefulness by grooming his new friend.

And grooming means cleaning the other gorilla. That is a signal of dominance.

Because mostly it is done by the higher standing for the lower standing.

That is why mommy is cleaning the mouth of the child. Not the other way around.

And when we see how they hug each other, we must not forget that also Emmanuel Macron is quite dominant.

Because he set the rules and he set the benchmark.

From the first moment of their encounter he was looking for physical contact.

That is a French culture thing, because in France physical contact is more the rule, an everyday occurrence.

He is used to that. That is why he looks much more elegant when touching Donald Trump.

And when we see how he is grabbing the shoulder of Donald Trump in that scene,

we might think that is a signal of dominance.

But to be honest, that is also the only way a man can grab the body of another man.

Of course he also could have grabbed the hips of Donald Trump but that would make a completely different impression.

So when we see the high frequency of physical contact, we see that that makes a huge impression on us.

And you know what?

Too often we see politicians standing side-by-side with a body language like this.

And now we saw two politicians hugging each other, looking for physical contact, even grooming each other.

Could that be an inspiration also for us?

For more information >> Macron and Trump: An inspiration for all of us? - Duration: 2:46.

-------------------------------------------

DA Steele: 'For All Of Us It Was Just About Doing Justice' - Duration: 1:26.

For more information >> DA Steele: 'For All Of Us It Was Just About Doing Justice' - Duration: 1:26.

-------------------------------------------

Battle For Azeroth Beta: Balance Druid Class Changes! - Duration: 2:24.

Hey Everybody

It's Solarkitty, and I'm here to talk about Balance druid class changes on the Battle for Azeroth Beta.

The BFA Balance Druid talents are very similar to how they were in Legion, with a few exceptions.

Here are the main talent changes for Balance druid as seen in the beta:

Displacer beast has been removed from the game, and replaced with Tiger Dash.

The balance druid artifact ability from legion, new moon, has been added as a talent in the level 90 tier,

and astral communion has been removed.

New moon, half moon, and full moon function the same way as they did in legion.

Fury of Elune has changed.

In Legion, you had to place this spell on the ground, which made it impractical.

In BFA, fury of elune follows the target and causes massive AOE damage, but costs 80 astral power.

Hibernate is an old balance druid spell that is returning in BFA!

It gives druids the ability to put beasts and dragonkin to sleep, and is usable in PvP.

For more information >> Battle For Azeroth Beta: Balance Druid Class Changes! - Duration: 2:24.

-------------------------------------------

Gaming for Everyone at GDC 2018 - Duration: 2:17.

>>When you talk about Gaming for Everyone

you really have to make the effort to extend a hand to everyone.

And I think that

Xbox is doing that.

>>You can easily think of Microsoft as this big

non-approachable entity,

and these type of events

it makes it really clear that that's not the case.

>>I don't know, it's nice going into a place and feeling immediately

welcome, you know?

just like friendliness and camaraderie and sense of like solidarity.

You can mention an experience and someone will be like,

"totally."

>>I don't know, it was a really cool feeling seeing all these women here

cause it was like, you know, I'd just come from all my classes that were like

I was the only girl in my Advanced Game Projects class right now,

I was one of 3 girls in my Game AI class, and it was just kind of like

oh, there are these girls out here doing this,

making careers out of this,

so it was really awesome to see that for me.

>>Inclusion is the future and I think that

developers seeing us and talking to us and

knowing that we exist and knowing our experience is like the key to that.

>>There is a space for Latin American development in the world, and

I only see that because of gatherings like Latinx

that you can really see the power and the unity and

the knowledge that's over there in that part of the world.

>>I would love just to expand this.

I just want this to be bigger, I want there to be more people,

I want there to be more games, I would love to see this get bigger and bigger.

>>You as a company, you're working towards the diversity dream

and I hope it will be opened up

for even more people to come in the future years.

For more information >> Gaming for Everyone at GDC 2018 - Duration: 2:17.

-------------------------------------------

Israel Scraps Plan for Mass Deportation of African Migrants - Duration: 0:54.

For more information >> Israel Scraps Plan for Mass Deportation of African Migrants - Duration: 0:54.

-------------------------------------------

Arbeits 2 Bits: Internship Advice for New Interns - Duration: 0:24.

Hi guys today's my last day with Arbeit. So before I leave I'd like to give you

guys some tips on how to succeed in your first internship. So the first one is

always ask questions. Don't be afraid to ask questions. The second one is take notes

during your meetings with the boss, and the third one is immerse yourself in

the company culture. It's a great way to connect with employees in

the future too.

For more information >> Arbeits 2 Bits: Internship Advice for New Interns - Duration: 0:24.

-------------------------------------------

Students at Southfield special education school excited for prom - Duration: 3:08.

For more information >> Students at Southfield special education school excited for prom - Duration: 3:08.

-------------------------------------------

Help Me Hank gets answers for grieving families of funeral home shut down in Detroit - Duration: 1:01.

For more information >> Help Me Hank gets answers for grieving families of funeral home shut down in Detroit - Duration: 1:01.

-------------------------------------------

How you could buy Lil Kim's $3 million New Jersey mansion for $100 - Duration: 1:34.

How you could buy Lil Kim's $3 million New Jersey mansion for $100

$100 is not a typo.

The opening bid for Lil Kim's New Jersey mansion will be a paltry $100 when it hits the auction

block on May 11, 2018.

That's a steal compared to the $2.3 million the rapper paid for the 6,026-square-foot

home in 2002.

With an estimated net worth of $18 million, Lil Kim is worth more than her home, though

she struggled to pay her mortgage for years.

This prompted HSBC Bank to initiate foreclosure in 2010, according to NJ.com.

Lil Kim (birth name Kimberly Jones) and the bank were in mediation in 2015, and a judge

ruled she was in default last May.

While the $100 bid by the Bergen County Prosecutor's Office is low, the 1989 mansion will realistically

sell in the millions.

The two-acre property is located in Alpine, a posh neighborhood just minutes from Manhattan,

which is one of the wealthiest zip codes in America.

The median home value in Alpine is $2.9 million, with notable homeowners including comedian

Chris Rock and President Donald Trump's adviser Kellyanne Conway.

39 Timberline Drive, soon to be Lil Kim's former address, is estimated to be worth $3

million.
