For those of you who’d like to look at something shorter before watching my hour-long contribution to Talks at Google (or better yet, before reading Rage), here’s a six-minute excerpt that gives you a feel for the material.
I was lucky enough to be invited to give an hour-long talk on Rage at Google recently, and I’m glad to say it’s now online. Those at the talk seemed to really enjoy it, and the organiser called it “fantastic” and “an ML talk unlike any other ML talk.”
I’ll be excerpting it a bit later for those who want to get a shorter feel for what’s in the book via video.
There’s a subtle connection here to an idea in my book, Rage Inside the Machine. Around 56 seconds into the minimate, you’ll see a sketch of The Fates. In Rage, I talk about how the idea of these fateful figures, who actually worked above the Gods and would respond to no plea or prayer, was abandoned as a concept with the widespread adoption of a single omniscient God, and the invention of probability theory in the Renaissance.
That development leads directly to the idea, held by many AI technologists, that if we just calculate the right set of statistics from big data, we can make the highest-merit decisions about the future. But this overlooks the fact that probability theory does not model the complexity of the real world, and never can.
So the abandonment of the concept of The Fates leads not only to the tyranny of meritocracy that Michael Sandel talks about, but also a “decision meritocracy” based on faith in big data algorithmics. This new faith overlooks the real-world uncertainty that non-probabilistic, human decision-making has been adapted specifically to address.
So it’s not just that we have a false idea of social merit that’s connected to our divisive world, it’s the idea that we can make decisions of inherently high merit based on algorithms and big data. We need to re-centre humanity in our decision-making and be more humane in accepting that much of our success and failure lies in the hands of Fate.
And I also recommend Sandel’s fantastic book What Money Can’t Buy: The Moral Limits of Markets.
Alan Turing created computer science while simultaneously saving the entire world from fascism through his critical role in winning WWII, only to later be chemically castrated and driven to suicide as punishment for the “crime” of homosexuality. He is finally to be honoured by appearing on British currency, in particular the new 50 pound note.
My book, Rage, retells some stories about Turing, including a re-appraisal of his eponymous “test” (aka The Imitation Game), a casting of new light on how Universal Computation relates to human thought, and Turing’s little known role in evolutionary algorithms.
But all that’s by-the-by. Turing was a great hero, and bestowing this public honour on him is a small start at redressing the historic injustice he faced. For my money, he should be on the far-more-used 20, but nonetheless, this is a triumph. Hurrah.
I’m happy to say I have an article in this month’s AMBITION, the magazine of AMBA, The Association of MBAs. The piece is entitled “Do Algorithms Have Their Own Business Ethics” (and you can read it online by clicking through on the title). Those of you who know me, or have read Rage, won’t be surprised to know that I think they do, but you may find this slightly different take on the subject interesting.
This week’s Big Issue has the fabulous David Lynch on the cover, but inside you’ll find a full page article from me! The article is about face recognition and deep fake video technology, and its relationship to the dubious history of quantitative human identification in criminal justice, which spans back to the 19th century. But I don’t want to give too much away about the article, because you really should buy The Big Issue.
And not just because I have an article in it. The Big Issue is a publication that is trying to improve the lot of the homeless, by giving them the chance to sell the magazine, as well as supporting a charitable foundation that aims to end the poverty giving rise to homelessness.
So buy this issue, and read my article, but don’t stop there. Buy it every week. It’s not only a bargain (it’s a surprisingly good magazine) it’s an opportunity to change someone’s life, and possibly change the world.
I can’t even say how excited I am that Cory Doctorow has posted a rather glowing review of the book on BoingBoing, which is arguably the most popular English-language blog in the world. He calls the book:
a vital addition to the debate on algorithmic decision-making, machine learning, and late-stage platform capitalism, and it's got important things to say about what makes us human, what our computers can do to enhance our lives, and how to have a critical discourse about algorithms that does not subordinate human ethics to statistical convenience.
You can read the full review here on BoingBoing, and I’ve copied it over to the Rage website here.
Thank you Mr. Doctorow, and I’m so glad you enjoyed it!
Well, today’s the day, and the book is now shipping (internationally) from amazon.co.uk. Those of you who pre-ordered in the UK should have copies next Tuesday, and others should have it soon after!
The book features a lot of examination of how algorithms create feedback loops that influence human society. Ironically, this is true for the algorithms that promote books at Amazon, so, if you are interested in the message in Rage getting propagated, one way to help is to go to the Rage page at Amazon and give the book a few stars. It’d be a real help to me, but more importantly, I hope it’ll help people get a more informed perspective on algorithms, so we can start changing their effects on everyone’s lives.
Salon is an ongoing series of events in London that focus on permutations of the theme “science, art and psychology,” and I’ve been fortunate enough to speak at a couple of their gigs, and attend a few more.
Also is, in my opinion, the best ideas festival in the UK. It’s sometimes called “TED talks in a field,” but frankly, I think that’s an undersell. If you like hearing from interesting people, and fancy a quirky, bucolic setting, there is nothing finer than Also.
And at the core of these events is Helen, who (even in this interview) is far too modest about her talents. She’s a born curator, and that’s a skill I really appreciate. One thing I talk about in Rage is that creativity comes from juxtaposition of ideas, and Helen’s events prove she’s a real master at that.
Her breadth of knowledge and insight (plus the fact that she must read 100 books a year) is one of the reasons I’m so glad she’s spoken kindly about Rage. You can read her comments here.
It’s very gratifying to have someone read Rage and say complimentary things about it. For instance, Chris Kutarna, co-author of Age of Discovery: Navigating the Storms of Our Second Renaissance (an excellent book, which I’ve previously praised), said that my writing…
“…accomplishes what few people could attempt: to humanize the discourse on artificial intelligence.”
(You can read Chris’s complete review of the book here).
But what’s really gratifying is that in his recent newsletter (to which I strongly recommend you subscribe), Chris shared what he gleaned from the book, and I’ve got to say, he really got it. In particular, he very cleverly adapted a figure that I created. I, in turn, adapted that figure from another source, and I think it’s interesting to see the evolution, particularly since the figure is itself about evolution.
Dave Goldberg was my PhD supervisor, and in his excellent book The Design of Innovation: Lessons from and for Competent Genetic Algorithms, Dave figured out how selection (the process of deciding in such algorithms) has to be balanced with mixing (the process of creating new things by stirring them together) to get an effective genetic algorithm.
Here’s the original figure, which is actually made from running thousands of genetic algorithms on mathematical optimisation problems:
(By the way, those three “boundaries” are derived from maths Dave did, and the actual algorithms he ran confirmed that his models were correct.)
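For readers who’ve never seen one, here’s a minimal sketch of a genetic algorithm in Python. This is my own illustrative toy (not Goldberg’s actual experimental code), but it shows the two forces his figure trades off: selection pressure on the one hand, and mixing via crossover on the other.

```python
import random

random.seed(42)  # reproducible toy run


def fitness(bits):
    # Toy objective ("OneMax"): count the 1s in the bitstring
    return sum(bits)


def select(pop, k):
    # Selection: tournament of size k; larger k = stronger selection pressure
    return max(random.sample(pop, k), key=fitness)


def mix(a, b):
    # Mixing: one-point crossover splices two parents together
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]


def mutate(bits, rate=0.01):
    # Occasional bit-flips keep some variation in the population
    return [1 - b if random.random() < rate else b for b in bits]


def evolve(pop_size=50, length=32, generations=40, k=2):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop = [mutate(mix(select(pop, k), select(pop, k))) for _ in range(pop_size)]
    return max(fitness(ind) for ind in pop)


# Modest selection pressure (k=2) plus crossover mixing climbs well above
# the random-start average of 16 ones out of 32
print(evolve())
```

Turn the tournament size `k` up and selection dominates; turn it down (or raise the mutation rate) and mixing dominates. The “sweet spot” in Dave’s figure is the band where the two balance.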
Pretty obscure stuff, but as the main title of Dave’s book suggested, it has broader implications. In particular, what I observed in Rage is that the strict, quantitative “selection” of algorithms in our lives needs to be balanced with human beings mixing ideas, for us to have a society that is designed for effective social innovation. So I simplified Dave’s figure to create this:
Chris created the following figure:
And this is where he really got what I was talking about in Rage. This figure is about the perception that algorithms (AI in the figure) deliver something intrinsically more “optimal” than human choice (H in the figure). This is a false perception, predicated on the idea that in human problems, an “optimal choice” exists (which very often isn’t the case, for reasons I discuss in the book). That means that when algorithms decide for us, we are further out along the left-to-right “selection” axis in the figures created by Goldberg and me.
Chris further adapted those figures, first showing a simplified form of my adaptation of Dave’s chart:
His new labels are insightful: the lower part is “Fragile” because false “optimal” solutions are insufficiently adaptive in one way, and the upper part is “Chaotic” because it is insufficiently adaptive in another way. Then Chris introduces his final figure:
And that’s the point, all summed up in a figure. To get effective evolution of society, we need to have an algorithmic infrastructure that balances pressure towards so-called “optimal” solutions, and the mixing of human ideas to create new, innovative ideas.
Thanks for mixing up innovative illustrations of the ideas in Rage, Chris!
There’s an interesting article about the book, entitled “The dark heart of the algorithm” in this summer’s issue of Work magazine. Work is a strategic journal of the CIPD (Chartered Institute of Personnel and Development), which is the UK’s professional organisation for those working in HR and related fields.
In addition to the content alluded to in this article, I’m sure any CIPD readers of Rage will find the chapter entitled Value Instead of Values interesting. In that part of the book, I discuss the now-commonplace prediction that two-thirds of human jobs are at risk of computerisation. It turns out an algorithm generated this result, and it exemplifies precisely the simplifying characteristics of algorithms on which the book focuses.
For instance: guess what that algorithm said was the 33rd most computerisable job out of the 702 occupations it ranked?
Here’s a hint: Blue Steel.
Some may have noticed that the headline quote above the title of Rage is from Derren Brown, and you may have wondered: what is an award-winning TV and stage illusionist doing commenting on a book about algorithms and A.I.?
I don’t think it’s surprising. Illusionists are interested in people’s perceptions (and tricking them, but I don’t think that’s the whole point). Stage and TV magic focuses on the fact that people’s perceptions aren’t merely sensors tuned to reality, like those of a machine. Instead, human perception is integrated into a physical human being, with a psychology and a sociology, and those complex integrations are an inescapable part of perceived reality. Illusionists (be they magicians, mentalists, or even escape artists) exploit that complex reality to mislead and entertain.
But the best of them, those like Derren Brown, also use it to teach people about themselves. In that way, their magic is a kind of demonstrative philosophy: a way of making the philosophical examination of how we perceive reality practical and instructive. It’s therefore unsurprising that Derren recently wrote a best-selling book on how being “happy” is a philosophical concept that we can use in our own lives. In fact, if you look back at Mr Brown’s books, I think you’ll find they all have something philosophical to say about how people perceive, think about, and live their worlds.
That isn’t unlike the perspective I try to relate in Rage, because I think of A.I. as a branch of philosophy too. Historically, I see the pursuit of mechanical intelligence as a way of seeing the holes in machine perception, and what those holes can teach us about what’s uniquely human in our seeing and dealing with our complex and uncertain world. And I think they can teach us something about living better, with or without smart machines.
I imagine this is why Derren seemed to like the book so much, saying it is “beautiful, accessible and truly important” and that he “loved it.”
Derren has sent me a few probing philosophical questions since he read the book, and that the book stimulated those questions is even more gratifying to me than the enthusiasm of his review (which you can read in full here).
Thanks, Derren Brown, and I look forward to discussing Hume and Kant with you in the near future!
I’ve posted a short video where I introduce the book on the new Rage YouTube channel (where I intend to post more videos ASAP):
I haven't read it yet, but from interviews, it seems that unlike many writers, McEwan realises that A.I. isn’t something in the future, it’s with us all right now, and it already has dramatic real-world effects. In a Channel 4 News interview, referring to the tragedy of the recent Ethiopian Airlines crash, McEwan called the plane’s control system “a giant brain that decides the aeroplane is stalling,” and further commented:
“the brain thinks that it’s stalling, a child looking out the window can tell you it’s not.”
Whether a child could have told the plane was in trouble or not, the crash (along with an earlier and similar crash of a Lion Air 737 Max), and the subsequent grounding of all of Boeing’s 737 Max planes, is an illustration of how increasingly intelligent software is making critical decisions in human lives. While investigations into what caused the 737 Max crashes are ongoing, the situation certainly highlights the complex interdependencies that arise at the intersections of human and computer decision making, not just in the cockpit, but throughout the development of complex systems like aircraft.
The role of flight in human life has changed dramatically, and there is an ever-evolving market for air travel and aircraft. Consider that after WWII, Boeing numbered its products with 300s and 400s representing conventional prop planes, 500s turbine engines, 600s rockets, and 700s jet aeroplanes. While Boeing’s work on most of those product lines is less well known, it was Boeing’s marketing department that thought a number starting and ending with seven sounded better as a brand, and so the first Boeing jet, created in 1958, was called the 707. Thus, the 737, which Boeing started building in 1964, 55 years ago, is only the third jet aircraft the company made.
Of course, the airline industry changed massively between then and the first flight of the 737 Max in 2016, for many reasons. First, the cost of air travel has dropped dramatically (along with massive increases in the number of flights, and a corresponding drop in airline profit margins). A second impactful change is that the price of fuel has skyrocketed (along with concerns about the environmental impacts of its consumption). Finally, the competitive landscape for jet aircraft manufacture has radically transformed, from a large number of international companies making jumbo passenger jets to only two: Boeing and Airbus.
In 2006, Boeing was considering replacing the ageing design of the commercially lucrative 737 with a “clean sheet” design, following on from the high-tech developments in its 787 Dreamliner, which first rolled off the assembly line in 2007. However, in 2010, Airbus, Boeing’s only remaining competitor, launched the A320neo series, which used new, more fuel-efficient, higher-thrust engines to better fit the modern travel market. Boeing was keen not to lose business to this new aircraft, so it decided to move quickly, abandon an entirely redesigned 737, and simply modify the existing 737 design to take the new engines. To accommodate this change, the engines were moved upward and forward on the plane. The combined effect of newer, higher-thrust engines on the 737 created the expectation that Boeing’s plane would be 4 per cent more fuel efficient than the competing Airbus product.
However, as is always the case with the highly interdependent systems of aircraft, the re-engined design introduced a new problem: on its own, the 737 Max no longer had positive static aerodynamic stability. That’s a fancy way of saying that, under disturbances, the plane didn’t always return to a nice, level flight attitude. It isn’t uncommon for complicated planes to have this sort of instability.
Some say that the reason the Wright Brothers were the first men to fly was that they were bicycle mechanics, and they understood that a human pilot, through intelligent and continuous control actions, can make an unstable system stable. After all, a bike without a rider always falls over, but one with a rider can resist all manner of bumps in the road. Thus, the Wrights didn’t attempt to create a plane that was stable without a pilot, and this is one of the reasons their designs succeeded where so many others had failed, creating the era of manned flight.
The particular instability of the 737 Max was a direct consequence of its redesign. Because its higher thrust sat further forward and higher up, the 737 Max wanted to pitch its nose up, which created a danger of the plane stalling. This is where the “A.I.” comes in. Where the Wright Brothers’ planes were made stable by the pilot, Boeing fixed the 737 Max’s nosing up by installing a new kind of flight control software. MCAS (the Manoeuvring Characteristics Augmentation System) was created to sense when the plane was pitching up, and to autonomously act to pull the nose back down. Thus, new interdependencies were created between autonomous decision-making software (a simple form of A.I.) and the fundamental aerodynamic stability of the aircraft.
The web of interdependencies goes further than that. MCAS relied on airspeed and a single angle-of-attack sensor to decide whether to nose the aircraft downward. It remains unclear what role that (possibly faulty) sensor and MCAS played in the crashes of Lion Air Flight 610 and Ethiopian Airlines Flight 302 soon after take-off, but MCAS was intended to operate without the pilots being aware of its action to provide positive aerodynamic stability to the 737 Max. Boeing explicitly stated that "a pilot should never see the operation of MCAS" in normal flying conditions. MCAS was not described in the flight crew operations manual (FCOM), and it has been reported that Boeing was avoiding “inundating an average pilot with too much information” from additional systems like MCAS. It is possible that the pilots of the two crashed planes were insufficiently informed regarding the system, how it might fail, and what it was doing during their fatal crashes. It is reported that the Ethiopian Airlines crew followed appropriate procedures and turned MCAS off, but after being unable to regain control, they turned the system on again, and it put the plane into a dive from which it could not recover.
It will be some time before we understand what happened in the two 737 Max crashes. However, what is clear is that the plane existed in a system of interdependencies. Markets affected plane design, plane design affected aerodynamics, aerodynamics were affected by increasingly autonomous software, and these interdependencies, in turn, could have affected the human system of training and emergency reaction when things went catastrophically wrong.
Regardless of his fictional perspective on A.I., McEwan is right that 737 Max is a tragic encounter with autonomous decision software, which can be correctly called A.I. This is why many journalists are talking about the implications of the plane’s troubles for new technologies like self-driving cars, which will inevitably link the commercial, the mechanical, and the human together in complex and vital relationships.
I sincerely believe that the increasing interdependency between human systems (from markets to pilots executing emergency procedures) and algorithmic systems (from flight controls to more general-purpose A.I.) is only going to have increasingly vital effects on people. It’s one of the things I’m trying to reveal in Rage Inside the Machine, and it’s one of the reasons that my company BOXARR is so focused on helping people understand these complex interdependencies. In case anyone wondered, this is how I think these two activities of mine are linked: regardless of the promise of A.I., human insight will always be required to ensure that people are made safer and happier in the world of the future.
My wife, Paula Hardy, is a travel writer, and she recently penned an excellent piece for The Guardian, as a part of the "Europe Gets it Right" series, on the problems of overtourism in European cities, and how in Venice, one of the cities most afflicted, grassroots activists and local people are positively addressing the challenges involved.
Since few people pay for newspapers made of real paper anymore, good journalism has to support itself in ways other than print advertising and subscription fees. While many have opted for paywalls, The Guardian uses a unique combination of reader support, inventive strategies, and online ads to fund its good work.
Just like everyone else online, the ads served up with Guardian stories are selected by algorithms. And as I discuss in my upcoming book, those algorithms "read" the text, but they in no real sense of the word understand the text. So it's unsurprising that since Paula wrote the following words:
"In 2016 in Dubrovnik, residents were outraged when the mayor asked them to stay home to avoid the dangerous levels of crowds disembarking from multiple cruise ships. The new mayor, Mato Frankovic, has since capped the number of cruise ships that can dock in the city at two per day, cut souvenir stalls by 80% and cut restaurant seating in public spaces by 30%. But similar issues of overcrowding in Palma de Mallorca, San Sebastián, Prague and Salzburg have brought locals out into the streets in increasingly impassioned protests.
One of the most dramatic was Venice’s 2016 No Grandi Navi (“No Big Ships”) protest, when locals took to the Giudecca Canal in small fishing boats to block the passage of six colossal cruise ships. And, although plans have been announced this year to reroute the largest ships to a new dock in Marghera (still to be built), campaigners still argue for a dock outside the lagoon at the Lido, where heavy cargo ships historically unloaded."
An algorithm decided it was good to embed this advertising in the article:
You couldn't make this stuff up.
So I just read a piece from Medium entitled "I Built a Fake News Detector Using Natural Language Processing and Classification Models: Analyzing open source data from Subreddits r/TheOnion & r/nottheonion".
It pretty much does what it says on the tin: the data scientist who wrote the article used standard machine learning to look at example text that was "fake news" (in this case, text from the classic satirical news site The Onion, or more specifically, text from the part of Reddit that reposts Onion stories) and text that was "real," but absurd news (in this case, text from the part of Reddit called nottheonion).
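To make that concrete, here's a toy sketch of the general idea in pure Python. It's a simple log-odds word ranking, not the author's actual classification pipeline, and it runs on hypothetical headlines I've made up for illustration.

```python
import math
from collections import Counter


def indicative_words(corpus_a, corpus_b, top_n=3, smoothing=1.0):
    """Rank words by their smoothed log-odds of appearing in corpus_a vs corpus_b."""
    counts_a = Counter(w for doc in corpus_a for w in doc.lower().split())
    counts_b = Counter(w for doc in corpus_b for w in doc.lower().split())
    vocab = counts_a.keys() | counts_b.keys()
    total_a = sum(counts_a.values()) + smoothing * len(vocab)
    total_b = sum(counts_b.values()) + smoothing * len(vocab)

    def log_odds(word):
        # Laplace smoothing keeps words unseen in one corpus from blowing up
        p_a = (counts_a[word] + smoothing) / total_a
        p_b = (counts_b[word] + smoothing) / total_b
        return math.log(p_a / p_b)

    return sorted(vocab, key=log_odds, reverse=True)[:top_n]


# Hypothetical toy headlines, loosely echoing the article's findings;
# not the actual Reddit data the author used
satire = ["kavanaugh does incredible thing ftw",
          "incredible kavanaugh moment ftw"]
absurd = ["florida man arrested by cops",
          "cops arrest florida man again"]
print(indicative_words(satire, absurd))
```

A real classifier would weigh thousands of such word statistics at once, but the principle is the same: the model latches onto whatever vocabulary happens to separate the two corpora, which is exactly why it pays to inspect the specific words, as below.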
I've put quotes around "real" because a) I'm unsure about the reliability of the sub-Reddit "nottheonion," and b) sometimes I think there is nothing truer than satire like The Onion. For instance, one of my favourite quotes about the hype around DeepMind's AlphaGo program beating a real, human master of the game Go is the following from The Onion's frequent "man on the street" fake vox-pop American Voices:
“I’m sorry, but this AI stuff scares me to death. It’s only a matter of time until we wake up to find the world overrun with computers playing all sorts of board games.”
DENNIS KALEN • PART-TIME LABORER
In my opinion, truer words were never spoken, Dennis Kalen.
Anyway, the interesting thing in the Medium story is that the titular fake-news-detection algorithm was able to tell The Onion from r/nottheonion 90 per cent of the time!
An impressive number, but as with most algorithmic results, it pays to look at the specifics. In this case, the author included a sorted list of the words that the algorithm found most useful in distinguishing what was satirical from what was merely absurd in the news. Here's the graphic:
And that's right: the words most indicative of a story being from The Onion were "Kavanaugh," "Incredible," and "FTW" (the Internet acronym that means "For The Win"). I suspect that there may be some significant in-sample bias here (that is to say, I think the data may have come from the period when Brett Kavanaugh's confirmation hearings had gone from sad to disturbingly ridiculous).
But a much more amusing algorithmic outcome is the set of words most indicative of "true," but absurd news.
They are "Florida," "Cops," and "Arrested."
Ah, Florida Man, is there nothing you can't fuck up? Even A.I., apparently.
God speed, Florida Man, God speed.
Glad to say that tomorrow night (March 20, 2019) I'm going to join a range of interesting folks at The Marist School (@Marist_School) for a variety of interesting talks. Speakers include Johnny Mercer MP, former Army Officer and winner of 'Celebrity Hunted', Marianne Power, author of 'Help Me!', and Rachel Barker, Art Conservator at Tate London.
Since the school is for young women who will soon be entering the workplace, my talk is called #WomenOwnComputing, and it'll reveal the hidden history of how women have been vital to the field we now know as computing, since before we had computers as we now know them. This is something we all need to know, particularly since the percentage of computer science graduates who are female has crashed from around 40 per cent in the late 1980s, to only 18 per cent today!
My upcoming book Rage Inside the Machine: The Prejudice of Algorithms and How to Stop the Internet from Making Bigots of Us All, covers some of this material, but tomorrow night's talk is specific to something I think is vital for less prejudiced computing: getting women back into a field that they historically own!
I'm glad to announce that I'll be doing a talk entitled "You, Me, and 5G: How will this tech change us?” at the upcoming London Smart IoT conference. The event is on March 12th and 13th, 2019, at ExCeL London, and my contribution will be at 14:20 on the 13th (note that at the time of this writing, the site has the talk's time and speaker incorrectly listed). Here's the talk description from the conference site:
5G and Smart IoT have the potential to plunge people's day-to-day lives into an entirely new ecosystem of technologies. What will be the social, political, economic and psychological impacts of these changes? What can we learn from the past to make a more positive world in the hyper-connected future? This session will address those issues in a "fireside chat" between Anastasia Dedyukhina, founder of Consciously Digital and author of "Homo Distractus: Fight for Your Choices and Identity in the Digital Age," and Dr. Robert Elliott Smith Ph.D., an A.I. researcher and consultant with 30 years' experience, and author of the upcoming book "Rage Inside the Machine: The Prejudice of Algorithms, and How to Stop the Internet Making Bigots of Us All."
To register for the conference, they've made a "smart" registration link just for me, which you'll find here.