So here we are on New Year's Eve, getting ready to head into 2019, the 50th anniversary
of the Moon landings, but it also happens to be the 35th anniversary of an article Isaac
Asimov wrote on December 31st, 1983, for The Toronto Star, which they recently reprinted,
predicting the world of 2019.
We already did our year-end episode, and for that matter our own predictions for 31 years
from now, the year 2050, a couple weeks back. But since Asimov is still the Grandmaster
of science fiction, even 26 years after his death, and my namesake, and since we're
skipping this month's livestream, it seemed worth taking a bit of time to look at the
world he saw for tomorrow.
A link to the article is in the video description.
On the eve of 1984 he starts off by noting that he picked 35 years ahead because George
Orwell's novel 1984 had just celebrated its 35th anniversary.
Interestingly it's also the year my favorite film, Blade Runner, is set in, which had come
out the year before.
Asimov definitely paints a more optimistic future than that film.
He begins by noting 3 concepts that had come to dominate our thinking at the time: Nuclear
War, Computerization, and Space Utilization.
He chooses to assume we won't have had a nuclear war by 2019, not because he thought
it unlikely but because he didn't see any point in examining a future where that happened.
Similarly he notes the possibility of a modern Luddite rebellion smashing up all our technology,
but focuses on a future where that does not occur.
His second prognostication, computerization, is unsurprising coming from the man who coined
the term robotics, and it was indeed a dominant factor in the decades that followed.
It seems an obvious one nowadays, and in truth it was hardly a shocking one then either.
Personal computers had started hitting the market a few years before, and Asimov himself
was the spokesman for Tandy/Radio Shack's TRS-80 computer, which had an amazing 4 Kilobytes
of RAM.
But he had been predicting that computerization would have a big impact on the future long
before 1983.
In addition to the term Robotics, he also coined the term Psychohistory, the notion
we'd predict large-population human behavior with computers.
It's true that we're nowhere near the accuracy Hari Seldon's work in the Foundation
Series would have required, and as we discussed in our episode on psychohistory, probably
can't be that accurate, but that series takes place tens of thousands of years in
the future, and we already do use computers to predict major trends in human behavior.
Asimov's third prognostication for 2019, space utilization, was, like most predictions
made about space travel shortly after the Apollo Missions, not wrong but a little optimistic
about the schedule.
He puts quite a focus on mining the moon and building power satellites, something we talked
about on the show quite a lot this year--and like Asimov then, we still talk about it as
something that might happen in a generation or two.
Perhaps they'll say the same in 2050, but dwelling too much on timelines slipping would
be overly cynical.
We do have a lot of tech that futurists of the past predicted.
Sometimes we get it sooner, sometimes later, sometimes never because we got something better
instead.
Sometimes it's just more complicated; after all, space utilization, fusion power, and robotics
are all areas where we've made huge strides.
Maybe Asimov was so optimistic about the 'when' because he assumed that if we somehow didn't
obliterate ourselves in a nuclear war by 2019, it'd be because we'd managed to link arms
and start creating the future together.
Not many writers or futurists of that era imagined we'd neither destroy ourselves
nor unite in peace but keep grinding along just on the brink of wars for decades.
Call it the Fulcrum Fallacy, a tendency to assume the future will either be utopian or
dystopian, rather than somewhere in between but probably a little better overall than
before.
What I really want to focus on today are his predictions about what would become known
as the Digital Divide, between those who have computers, technology, and the skills to use
them, and those who do not.
I also want to look at the importance he places on continuing education for new jobs, ones
requiring more skills and involving less tedious, menial work.
Few of us would disagree with that, but I wanted to call my namesake out on what I see
as his one big error here--albeit one that I think most folks today still make.
Asimov states that he expects a lot of problems in transitioning to this more automated and
educated society, but that the transition should be over by today.
It clearly is not over, and I think he missed the boat there by assuming it was a one-time
problem rather than one we expect to have constantly for quite some time to come.
He notes we've done it before, with the switch in focus from agriculture to industry a century
prior, but I think that's the source of his error here: it wasn't a single event,
it's something we've been tackling as long as we've been around.
I note though that reaction and adaptation are hardly specific to improvements in technology,
so we screw up our thinking a bit by trying to look at change and adaptation as technology-specific.
A farmer or herder needed to think and adapt to more than just new plow designs or crops,
they had to adapt to localized climate changes, shifts in taxes or leadership, shifts in trade
or competition, supply and demand for what they produced or needed for that production,
and so on.
This is the human condition, not something technology-specific; technology is just a big factor,
and a bigger focus, nowadays.
Twin to that, while the digital divide is real enough, it's just another in a long
line of specific factors that vary by time and place.
A blacksmith working in a seaport was going to fare a lot better if he had some knowledge
of ships, what damage seawater does to metals, and what techniques help minimize it,
while one working inland had better know about plows and hoes and horseshoes, and both needed
to know what their town required and what their neighbors tended to need or produce already.
A hunter-gatherer needs to be constantly adapting to changes in migration and terrain, to survival
in that terrain, to what kind of arrowhead works better on a deer than on a wolf or a marauder,
and to which fruits, nuts, and roots grow where, whether they can be eaten, and how they can be
stored and prepared.
We are clever apes, we react, we tinker, we engage in abstract thought and we adapt, and
that's what made us dominant.
One other small mistake he made, another that many folks still make, is to say that while
new technology often eliminates many jobs, it always makes new ones.
Of course many feel the error there is the claim that it always makes new jobs, which we'll get
to in a moment, but he's confusing jobs with tasks, and I think a lot of folks do
that.
I mentioned earlier that he was the spokesman for some early computers, and he talked elsewhere
of being rather intimidated by them and of taking quite some time to adjust to using a word
processor over a typewriter; indeed, he never entirely did so.
That wasn't his job though; he was a writer, and it's really only when a very specific
task makes up the entirety of a job that technology can sweep that job away.
We don't have many typesetters anymore for instance.
Back in 1957, Asimov wrote a short story called Galley Slave, predicting the use of robots
to proofread books, a near miss on the prediction of spellcheck in word processors, though he
made up for that with his prediction of some of the frustration caused by speech-to-text
software in his Second Foundation novel.
Even a job synonymous with a single task that seems to have disappeared often hasn't really gone away.
Take the milkman, supposedly eliminated by cheap refrigerators, cars, and grocery stores.
Only the job wasn't eliminated: every grocery store still has milkmen, it's just that they have a
broader job, since they have to stock many products rather than just milk, and on the back end
there are certainly plenty of folks still employed tending and milking cows and transporting
and processing that milk to get it to the store.
Modern methods might be eliminated down the road too, as the "milkman" may come back
in the form of delivery drones specialized to handle food items.
I would not be surprised if future houses start featuring a drone door or window for
securely and automatically dropping off goods unaided, or if cows are put out of business
by vat grown synthetic milk.
Grocery stores and retail in general, after exploding in size during the last century,
might wither during the next, as we discussed in the Santa Claus Machine a couple weeks
ago.
But the basic truth remains: we need food, and it must be created, moved, stored, and prepared.
The individual tasks involved can vary a lot and change often, and sometimes jobs focused
on a specific task simply end, but more often they are merged into a broader job or come to
require fewer humans.
Where 100 people once did a specific task full time, now only 10 do, or 100 still do but handle
a dozen other tasks too.
The jobs don't really get eliminated much, because those core needs still exist and humans are
supremely flexible.
We don't have very many blacksmiths anymore but we produce far more -- and more sophisticated
-- metalwork.
So this goes to the notion that we'd eventually just have fewer jobs, not enough for everyone.
We can't automatically rule out that one day, maybe even soon, our automation will
get so good that only specific intellectual and creative tasks will remain, and only the
best of the best in those areas will have work, or even that creative jobs might be taken
over by AI. But as long as we have things we need and want, we will simply expand into
providing those.
Offices, factories, farms, etc all change their output and improve their product, creating
new tasks to fulfill expanded demand for a product or improvements to that product, or
new varieties.
We don't make phonographs or records much these days, but those were never the true
product; easy and quality access to music was.
Yet that looming possibility of no jobs tends to imply something a bit sinister: the notion
that the Digital Divide or wealth stratification will reach a point where an elite few have
everything and the rest have nothing, either no work or make-work created because the aristocrat
would rather have someone scrubbing their floors than machines that do it better
and cheaper.
Ignoring the very cynical attitude implied there, which I tend to think says more about
the speaker's mind and behavior than their targets, it doesn't really hold up to inspection.
Asimov, like many folks of his period, also expressed concerns about overpopulation,
and we see in a lot of science fiction this dystopian culture born out of technology and
fear.
He tended to be more optimistic than many others, but it's still there.
This notion that an elite few might have all the benefits of technology while others do
not doesn't hold up against what has usually happened historically: the elite few get it first,
and as they use those prototypes, the tech gets better and cheaper, while the benefits of that
tech and others make people more prosperous on the whole, so more can easily afford that
now-cheaper widget.
If you've got a machine that can do all or most of the tasks we need people for now,
including building other machines, it's true that you don't need many or even any
workers anymore, but what that scenario overlooks is that all it takes is one person,
one single time, telling that machine to build a copy of itself and donate it to someone,
with instructions to pay it forward.
An unemployed society is not in itself a bad thing, but a starving society is, and you
can only get an unemployed society if you've eliminated a need for workers, which can only
happen if you don't need anyone to work to fulfill anyone's needs.
All the indications are that people will always need people to do the newest and most varied
jobs, so perhaps the real fear is simply the fear of change.
Now that level of automation has its own potential problems, like a hedonistic civilization,
something we discussed in the Post-Scarcity Civilizations series this year and which Asimov
discusses in many of his robot stories.
But looking at his predictions for 2019, and his stories in general, something like 500
books as the man could write like a demon, he is often criticized for being too optimistic
about technology and the human future.
I'd go the other way.
If the aptly-titled Grandmaster of Science Fiction had one fault, I'd say it's that
he was a bit too pessimistic about people in general.
I, of course, often get accused of the reverse, but I go into this New Year as optimistic
as in previous ones, exactly because I tend to think humans on average are rather smart,
adaptable, forward-thinking, and have good intentions.
There's always some new problem over the horizon, but we've done well at tackling
those and making things better for everyone.
Admittedly not always as well as we could have, but I think that we will continue to improve.
On space and New Horizons in general, we're still not quite done for the year. I'll
be making a short appearance on Launch Pad Astronomy's 24-hour livestream from APL,
hosted by astronomer Christian Ready, as the New Horizons space probe makes its rendezvous
with Ultima Thule.
Streaming will run from noon on New Year's Eve to noon on New Year's Day, Eastern US
time, and will feature a lot of guests, including our good friends Fraser Cain and
John Michael Godier.
Times are still a bit tentative as I write this, but I should be on at 3:45 pm Eastern,
December 31st, and it should be an exciting event.
We will also of course have our usual Thursday episode on January 3rd, on Virtual Worlds,
to begin our fifth year here at SFIA, and I hope to see you then,
and until then Happy New Year!