A.I.: Airplanes and Interdependencies by Rob Smith


Ian McEwan (award-winning author of such classics as Enduring Love and Atonement) has turned his hand to a sci-fi novel about A.I. in his new book Machines Like Me.

I haven't read it yet, but from interviews, it seems that unlike many writers, McEwan realises that A.I. isn’t something in the future, it’s with us all right now, and it already has dramatic real-world effects. In a Channel 4 News interview, referring to the tragedy of the recent Ethiopian Airlines crash, McEwan called the plane’s control system “a giant brain that decides the aeroplane is stalling,” and further commented:

“the brain thinks that it’s stalling, a child looking out the window can tell you it’s not.”

Whether a child could have told the plane was in trouble or not, the crash (along with an earlier and similar crash of a Lion Air 737 Max), and the subsequent grounding of all of Boeing’s 737 Max planes, is an illustration of how increasingly intelligent software is making critical decisions in human lives. While investigations into what caused the 737 Max crashes are ongoing, the situation certainly highlights the complex interdependencies that arise at the intersections of human and computer decision making, not just in the cockpit, but throughout the development of complex systems like aircraft.

The role of flight in human life has changed dramatically, and there is an ever-evolving market for air travel and aircraft. Consider that after WWII, Boeing numbered its products with 300s and 400s representing conventional prop planes, 500s turbine engines, 600s rockets, and 700s jet aeroplanes. While Boeing’s work on most of those product lines is less well known, its marketing department thought that a number starting and ending with seven sounded better as a brand, and so the first Boeing jet, created in 1958, was called the 707. Thus, the 737 is only the third jet aircraft that Boeing built, starting in 1964, 55 years ago.

Boeing family in the '70s

Of course, the airline industry changed massively between then and the first flight of the 737 Max in 2016, for many reasons. First, the cost of air travel has dropped dramatically (along with massive increases in the number of flights, and a corresponding drop in airline profit margins). Second, the price of fuel has skyrocketed (along with concerns about the environmental impacts of its consumption). Finally, the competitive landscape for jet aircraft manufacturing has radically transformed, from a large number of international companies making jumbo passenger jets to only two: Boeing and Airbus.

In 2006, Boeing was considering replacing the ageing design of the commercially lucrative 737 with a “clean sheet” design, following on from the high-tech developments in its 787 Dreamliner, which first rolled off the assembly line in 2005. However, in 2010, Airbus, Boeing’s only remaining competitor, launched the A320neo series, a new design that utilised higher-thrust engines with greater fuel efficiency to better fit the modern travel market. Boeing was keen not to lose business to this new aircraft, so it decided to move quickly, abandon an entirely redesigned 737, and simply modify the existing 737 design to take new engines. To accommodate this change, the engines were moved upward and forward on the plane. The combined effect of newer, higher-thrust engines on the 737 created the expectation that Boeing’s plane would be 4 per cent more fuel efficient than the competing Airbus product.


However, as is always the case with the highly interdependent systems of aircraft, the re-engined design introduced a new problem: on its own, the 737 Max no longer had positive static aerodynamic stability. That’s a fancy way of saying that, under disturbances, the plane didn’t always return to a nice, level flight attitude. It isn’t uncommon for complicated planes to have this sort of instability.


Some say that the reason the Wright Brothers were the first men to fly was that they were bicycle mechanics, and they understood that a human pilot, through intelligent and continuous control actions, can make an unstable system stable. After all, a bike without a rider always falls over, but one with a rider can resist all manner of bumps in the road. Thus, the Wrights didn’t attempt to create a plane that was stable without a pilot, and this is one of the reasons their designs succeeded where so many others had failed, creating the era of manned flight.

The particular instability of the 737 Max was a direct consequence of its redesign. Because of higher thrust further forward and higher up, the 737 Max wanted to pitch its nose up, which created a danger of the plane stalling. This is where the “A.I.” comes in. Where the Wright Brothers’ planes were made stable by the pilot, Boeing fixed the 737 Max’s nosing up by installing a new kind of flight control software. MCAS (the Manoeuvring Characteristics Augmentation System) was created to sense when the plane was pitching up, and autonomously act to pull the nose back down. Thus, new interdependencies were created between autonomous decision-making software (a simple form of A.I.) and the fundamental aerodynamic stability of the aircraft.


The web of interdependencies goes further than that. MCAS relied on airspeed and a single angle-of-attack sensor to decide whether to nose the aircraft downward. It remains unclear what role that (possibly faulty) sensor and MCAS played in the crashes of Lion Air Flight 610 and Ethiopian Airlines Flight 302 soon after take-off, but MCAS was intended to operate without the pilots being aware of its action to provide positive aerodynamic stability to the 737 Max. Boeing explicitly stated that "a pilot should never see the operation of MCAS" in normal flying conditions. MCAS was not described in the flight crew operations manual (FCOM), and it has been reported that Boeing was avoiding “inundating an average pilot with too much information” from additional systems like MCAS. It is possible that the pilots of the two crashed planes were insufficiently informed regarding the system, how it might fail, and what it was doing during their fatal crashes. It is reported that the Ethiopian Airlines crew followed appropriate procedures and turned MCAS off, but after being unable to regain control, they turned the system on again, and it put the plane into a dive from which it could not recover.
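To see why a single sensor matters so much, here is a toy sketch (entirely my own invention for illustration, not Boeing's actual logic, and with invented threshold and trim values) of the kind of closed-loop rule a system like MCAS embodies: if the sensed angle of attack looks too high, command nose-down trim. The point it illustrates is that a faulty sensor feeding such a loop triggers exactly the same action as a genuine stall threat.

```python
# A cartoon of a single-sensor, closed-loop trim rule. All numbers are
# hypothetical, chosen only to make the one-sensor dependency visible.
AOA_THRESHOLD_DEG = 12.0   # sensed angle of attack above this "looks like" a stall risk
TRIM_STEP = -0.5           # nose-down trim increment per control cycle (made-up units)

def trim_step(aoa_sensor_deg: float, current_trim: float) -> float:
    """One control cycle: trim nose-down when sensed AoA looks too high."""
    if aoa_sensor_deg > AOA_THRESHOLD_DEG:
        return current_trim + TRIM_STEP
    return current_trim

# The loop cannot distinguish a real stall threat from a broken sensor:
print(trim_step(25.0, 0.0))  # faulty high reading -> nose-down trim anyway
print(trim_step(5.0, 0.0))   # normal reading -> no action
```

The cartoon also shows why redundancy (e.g. cross-checking two sensors) matters: with only one input, the software's "belief" about the aircraft is only as good as that single instrument.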

It will be some time before we understand what happened in the two 737 Max crashes. However, what is clear is that the plane existed in a system of interdependencies. Markets affected plane design, plane design affected aerodynamics, aerodynamics were affected by increasingly autonomous software, and these interdependencies, in turn, could have affected the human system of training and emergency reaction when things went catastrophically wrong.

Regardless of his fictional perspective on A.I., McEwan is right that the 737 Max is a tragic encounter with autonomous decision software, which can be correctly called A.I. This is why many journalists are talking about the implications of the plane’s troubles for new technologies like self-driving cars, which will inevitably link the commercial, the mechanical, and the human together in complex and vital relationships.

I sincerely believe that the increasing interdependency between human systems (from markets to pilots executing emergency procedures) and algorithmic systems (from flight controls to more general-purpose A.I.) is only going to have increasingly vital effects on people. It’s one of the things I’m trying to reveal in Rage Inside the Machine, and it’s one of the reasons that my company BOXARR is so focused on helping people understand these complex interdependencies. In case anyone wondered, this is how I think these two activities of mine are linked: regardless of the promise of A.I., human insight will always be required to ensure that people are made safer and happier in the world of the future.

A.I.rony Strikes a Travel Article by Rob Smith


My wife, Paula Hardy, is a travel writer, and she recently penned an excellent piece for The Guardian, as a part of the "Europe Gets it Right" series, on the problems of overtourism in European cities, and how in Venice, one of the cities most afflicted, grassroots activists and local people are positively addressing the challenges involved.

Since few people pay for newspapers made of real paper anymore, good journalism has to support itself in ways other than print advertising and subscription fees. While many have opted for paywalls, The Guardian uses a unique combination of reader support, inventive strategies, and online ads to fund its good work.

Just like everywhere else online, the ads served up with Guardian stories are selected by algorithms. And as I discuss in my upcoming book, those algorithms "read" the text, but they in no real sense of the word understand it. So it's unsurprising that, after Paula wrote the following words:

"In 2016 in Dubrovnik, residents were outraged when the mayor asked them to stay home to avoid the dangerous levels of crowds disembarking from multiple cruise ships. The new mayor, Mato Frankovic, has since capped the number of cruise ships that can dock in the city at two per day, cut souvenir stalls by 80% and cut restaurant seating in public spaces by 30%. But similar issues of overcrowding in Palma de Mallorca, San Sebastián, Prague and Salzburg have brought locals out into the streets in increasingly impassioned protests.

One of the most dramatic was Venice’s 2016 No Grandi Navi (“No Big Ships”) protest, when locals took to the Giudecca Canal in small fishing boats to block the passage of six colossal cruise ships. And, although plans have been announced this year to reroute the largest ships to a new dock in Marghera (still to be built), campaigners still argue for a dock outside the lagoon at the Lido, where heavy cargo ships historically unloaded."

An algorithm decided it was good to embed this advertising in the article:


You couldn't make this stuff up.

Florida Man Defeats A.I. by Rob Smith

So I just read a piece from Medium entitled "I Built a Fake News Detector Using Natural Language Processing and Classification Models: Analyzing open source data from Subreddits r/TheOnion & r/nottheonion".

It pretty much does what it says on the tin: the data scientist who wrote the article used standard machine learning to look at example text that was "fake news" (in this case, text from the classic satirical news site The Onion, or more specifically, text from the part of Reddit that reposts Onion stories) and text that was "real," but absurd news (in this case, text from the part of Reddit called nottheonion).

I've put quotes around "real" because a) I'm unsure about the reliability of the sub-Reddit "nottheonion," and b) sometimes I think there is nothing truer than satire like The Onion. For instance, one of my favourite quotes about the hype around DeepMind's AlphaGo program beating a real, human master of the game Go is the following from The Onion's frequent "man on the street" fake vox-pop American Voices:

“I’m sorry, but this AI stuff scares me to death. It’s only a matter of time until we wake up to find the world overrun with computers playing all sorts of board games.”


In my opinion, truer words were never spoken, Dennis Kalen.

Anyway, the interesting thing in the Medium story is that the titular fake-news-detection algorithm was able to tell The Onion from r/nottheonion 90 per cent of the time!

An impressive number, but as with most algorithmic results, it pays to look at the specifics. In this case, the author included a sorted list of the words that the algorithm found most useful in distinguishing what was satirical from what was merely absurd in the news. Here's the graphic:

And that's right: the words most indicative of a story being from The Onion were "Kavanaugh," "Incredible," and "FTW" (the Internet acronym that means "For The Win"). I suspect that there may be some significant in-sample bias here (that is to say, I think the data may have come from the period when Brett Kavanaugh's confirmation hearings had gone from sad to disturbingly ridiculous).

But a much more amusing algorithmic outcome is the set of words most indicative of "true," but absurd news.

They are "Florida," "Cops," and "Arrested."

Ah, Florida Man, is there nothing you can't fuck up? Even A.I., apparently.

God speed, Florida Man, God speed.

Talk @Marist_School by Rob Smith

Glad to say that tomorrow night (March 20, 2019) I'm going to join a range of interesting folks at The Marist School (@Marist_School) for a variety of interesting talks. Speakers include Johnny Mercer MP, former Army Officer and winner of 'Celebrity Hunted'; Marianne Power, author of 'Help Me!'; and Rachel Barker, Art Conservator at Tate London.

Since the school is for young women who will soon be entering the workplace, my talk is called #WomenOwnComputing, and it'll reveal the hidden history of how women have been vital to the field we now know as computing, since before we had computers as we now know them. This is something we all need to know, particularly since the percentage of computer science graduates who are female has crashed from around 40 per cent in the late 1980s, to only 18 per cent today!

My upcoming book Rage Inside the Machine: The Prejudice of Algorithms and How to Stop the Internet from Making Bigots of Us All, covers some of this material, but tomorrow night's talk is specific to something I think is vital for less prejudiced computing: getting women back into a field that they historically own!

You, Me, & Smart IoT by Rob Smith


I'm glad to announce that I'll be doing a talk entitled "You, Me, and 5G: How will this tech change us?” at the upcoming London Smart IoT conference. The event is on March 12th and 13th, 2019, at ExCeL London, and my contribution will be at 14:20 on the 13th (note that at the time of this writing, the site has the talk's time and speaker incorrectly listed). Here's the talk description from the conference site:

5G and Smart IoT have the potential to plunge people's day-to-day lives into an entirely new ecosystem of technologies. What will be the social, political, economic and psychological impacts of these changes? What can we learn from the past to make a more positive world in the hyper-connected future? This session will address those issues in a "fireside chat" between Anastasia Dedyukhina, founder of Consciously Digital and author of "Homo Distractus: Fight for Your Choices and Identity in the Digital Age," and Dr. Robert Elliott Smith, an A.I. researcher and consultant with 30 years' experience, and author of the upcoming book "Rage Inside the Machine: The Prejudice of Algorithms, and How to Stop the Internet Making Bigots of Us All."

To register for the conference, they've made a "smart" registration link just for me, which you'll find here.

A.O-C.&A.I. by Rob Smith


I have to share this great video where Congresswoman Alexandria Ocasio-Cortez talks about the dangers of automating bias, in an interview celebrating Dr. Martin Luther King's legacy, at the MLK Now event sponsored by Blackout for Human Rights.

AOC said, “Algorithms are still made by human beings, and those algorithms are still pegged to basic human assumptions. They’re just automated assumptions. And if you don’t fix the bias, then you are just automating the bias.”

I couldn't agree with her more.

I'd really like to get a pre-release copy of my upcoming book "Rage Inside The Machine: The Prejudice of Algorithms, and How to Stop the Internet Making Bigots of Us All" into the Congresswoman's hands. In the book I try to show how the danger isn't just in automating existing human bias, it's in the inherent way that algorithms simplify and categorise people, and how those quantifications of people are historically linked to prejudice.

If any of you know a way that I might get AOC a copy, please get in touch!

Beware Trump's A.I. by Rob Smith


On February 11th, 2019, President Trump signed an executive order launching the "American A.I. Initiative" (carrying on from previous White House announcements on A.I.). A White House spokesperson has said the initiative outlines "bold, decisive actions to ensure that AI continues to be fuelled by American ingenuity, reflects American values and is applied for the benefit of the American people."

As you'll read in my upcoming book "Rage Inside the Machine: The Prejudice of Algorithms, and How to Stop the Internet Making Bigots of Us All," algorithms do indeed have values of their own. And we should all be very concerned about A.I.s adopting what the current US Administration thinks are "American Values."

This concern should come from the fact that algorithms are far too easily tied to prejudices against people. Pre-judging explicitly requires the reduction of people to categories and quantities, and as the book discusses, the scientific techniques that have traditionally been tied to simplifying people in this way are the direct ancestors of techniques in the algorithms (and thus A.I.s) of today.

With appropriate human caution, these techniques have been a valuable part of social science, but they have also often drifted into supporting prejudice, intolerance, and hate.

The book is about how appropriate human caution in the use of A.I. is now more critical than ever, and how that caution can take technical form, in creating algorithms that promote true diversity instead of polarization, and adaptability rather than rigid, simple-minded optimization.

The book is available for pre-order from Amazon now.

The Price of Love by Rob Smith

Last night I went to see The Ex-Boyfriend Yard Sale, a play by Haley McGee, currently showing at The Camden People's Theatre. I think it's an absolute triumph, and I recommend it to everyone in London. I think it's so good it may come to a venue near you, wherever you are, eventually, and when it does, you should see it.

The Ex-Boyfriend Yard Sale | Trailer from Haley McGee on Vimeo.

The play is semi-autobiographical, and begins with Haley as a struggling artist, massively in debt, and realising that the only things of value she has to sell to get herself back in the black are gifts from exes. But how valuable are these objects? 

Thus begins an exploration of the nature of value, which spans many theories of economic value and matters of the heart. 

Disclaimer: I'm one of the people Haley mentions in the play, because she interviewed me. I told her about a few theories of value, and a bit about conviction narrative theory, which I think fits nicely with how Haley's stories increase the value of her ex-boyfriends' gifts.

But the play is something far beyond theories. It's hilarious, moving, and thought provoking. Haley holds the audience in the palm of her hand, and takes them on a journey through her head and heart, and I think inspires them to think of their own thoughts and feelings about what's of value in life and love.

I hope lots of you are able to see this wonderful play and enjoy it as much as I did.

Do you still think you can escape, Number Six? by Rob Smith


I am tickled beyond all reasonable comprehension to say that I'm going to be giving a talk entitled "How Tech Works Against Our Better Nature" at the upcoming No. 6 Festival. The talk is a part of Salon No. 6, the contribution of the marvellous people at Salon London to the overall 4-day event (the lineup of which looks fantastic). The festival takes place from the 6th to the 9th of September "in the village" of Portmeirion, North Wales, which was famously the set for the 1960s classic TV series The Prisoner.


I was a huge fan of this show. In fact, while I was watching The Sci-Fi Channel (now known as SyFy) in its early days of existence (via a 6-foot satellite dish in my backyard in Alabama) they did an all-weekend marathon of all 17 episodes, back-to-back, and since I had an extraordinary stomach flu that weekend, I joyfully watched them (between all-too-frequent bathroom breaks). That's right, I myself was The Prisoner of No. 2!


Anyway, the talk should be interesting, as it'll be a bit of a preview of my upcoming book, due out in the Spring. I may even announce the official title there, you never know! 

If you want to know my position in the lineup, all I can say for now is: I am not a number, I am a free man!

Harpers (Bizarre!) by Rob Smith

I had no idea until I heard from friends on Facebook, but in this month's Harper's, there is a mention of the piece I posted back in 2014 on Google's A.I.s "changing history" by altering photos (in my case, to make me and my wife smile at the same time). The Harper's essay, entitled Known Unknowns, is by James Bridle, about ideas on A.I. that will appear in his book, New Dark Age, which is out this month. Looks like an interesting read. I'll be contributing to this conversation in a new book out in the Spring; watch this space for more info!

Save the World With the New Facebook T&Cs by Rob Smith

If you use Facebook, you're going to get an email asking you to accept new T&Cs. This is a chance not only to help yourself, but, I think, to help save the world.

When you click through, FB makes it obscure how to actually change the status quo. Your opportunity to deny them the personal data they use to categorize what information they serve to you is hidden behind some "Manage Settings" buttons, and a few pages giving examples of how nice it'll be to enable sharing of your religious views, your partner-gender preference, your political inclination, whether to make ads more "relevant" (read: targeted) to you, and whether to give FB the right to recognize your face in photos.

My recommendation is to say "no" to all of these. And it's not just because I think that's what most people want. There is some preliminary work that a student of mine is leading, which I can't discuss yet in detail, as it is unpublished, but the implication of one of the results is that the best way to fight against echo chambers and the (sometimes hateful) polarization of people online is to have more diverse information shared with everyone.

One way to contribute to making that happen is to prevent algorithms from serving you and others a programmatic palette of information. That causes a network effect of dividing what people see about the world, and thus how they see the world. This is something that I believe we need to work against. And the research seems to show that the most effective way to do that is to share more diverse information more broadly. Enabling FB to categorize and target you and others would seem to have the opposite effect, according to the mathematical models we're looking at.

So make the world a better place today: take the time and effort to overcome FB's hiding of your options, and deny them all the personal information that you can, via those options. It won't solve all the problems of profit-driven social media companies, but it's what we're able to do now.

With a Capital C that starts out Cortisol and stands for Clickthrough by Rob Smith

Here's a BBC video, entitled The design tricks that get you hooked on your phone. I think it's "neuro-bullshit". Let me explain...

I'd assert that all the talk about cortisol, dopamine, etc., in this piece is based on "science" in the loosest possible sense of that word. I imagine no neural studies of any kind were done in designing the app features discussed, at all. The reality is that our brains release these exceedingly common brain chemicals so continuously and in so many complicated ways and circumstances, that talking about parts of app design as if they are playing some sort of highly tuned game with neurochemicals, and with us, based on some deep understanding of these processes, is beyond exaggeration.

Think of it this way: seeing a frosted glass full of beer suggests to our brain a pending satisfaction, which probably gives us a little cortisol. The satisfaction of drinking the liquid certainly gives us a satisfying dopamine hit. However, would it be fair to imply that there has been clever "neurological" design behind these simple signals from a beer advert? Or behind newer versions of the same sorts of temptations (ads on TV, in-app rewards of pretty colors and sounds, etc.) that were created in the era of the brain scan, neurochemical language, and the like?

Of course not, as all we're really talking about is normal and obvious human behaviour, and the way that people can artfully tempt and satisfy other people. That's been going on in human interaction and in design of almost everything for millennia, with no knowledge of impressive sounding neurochemistry. In fact it's a part of *any* designed object or process, and always has been.

And like I said, I'm pretty sure that *no* neuro studies are involved in the design of any of these features. The article's assertions are based on metaphor and supposition, neurologically. Even if there are studies, they will likely be of such a child-like obviousness as to render this sort of hype moot. This just follows from the real status of brain understanding today: we understand things in such gross terms that this sort of discussion is almost certainly exaggerated window dressing.

The use of "neurochemical" language here is just an attempt to wrap a veil of science around relatively obvious things. And to help fund startup businesses like some of those name-dropped in this story, who are sure to have provided the journalist with press releases and/or interviews to help him formulate the story and get it on the BBC, so that he could tempt people like us into stressing about not knowing something that might be harming us or our loved ones (cortisol!), and then click through and watch the video, providing confirming satisfaction (dopamine!) to those who are worried about what technology is doing to us.

This whole thing prompted me to re-watch this scene from Music Man, which really is a work of insight and singing genius:

The problem (phone addiction) discussed in the BBC video isn't that science has figured us out (it really hasn't), it's that we are ceding figuring ourselves out to self-promoting pseudoscience. It's a distraction, to make some people a career and a living.

What's going on with people being addicted to their phones is no different from people's fears of addictions to all sorts of things in the past. What's troubling about this sort of thing is that it treats people as if they are machines that can be simply manipulated, but that's just not true.

The real problems in the world are much more about the lives we've set up for people in our society, which are self-serving and isolating, making a phone screen a vital and necessary source of (plain old normally tempting) stimulation, "likes", and "friends". We (including we scientists) don't have people figured out, but we have retreated from really figuring people out, choosing instead to increasingly construct a society where we are treated like automata, so we act like automata, and we are exploited like automata.

(If you want to hear me go on like this more, please join me at Focus Inside this week.)


I'm a Panel Member at Focus Inside by Rob Smith

Glad to announce that I'll be appearing as a panel member at the Focus Inside pop-up festival in my adopted hometown of London on April 17th (tickets here). The panel will be discussing "How is technology changing us and what can we do about it?", a subject that most people know I've got some pretty passionate opinions about.

I think it's going to be a pretty interesting chat with the audience, given that my fellow panel members are Henrietta Bowden-Jones (Founder and Director of the National Problem Gambling Clinic), Kwame Ferreira (CEO at Impossible.com), Anna Radchenko (award-winning artist of the Melancholy Rooms project), and the moderator for the panel, Anastasia Dedyukhina (founder of Consciously Digital, author of Homo Distracticus, TEDx speaker and Huffington Post blogger).

I hope to see lots of you there.

Economics of the Feelings We Share by Rob Smith


Work that some colleagues of mine and I did with the Bank of England is now available as a BoE working paper. It has some pretty substantial results showing that how we feel about narratives (the stories we tell ourselves and each other) affects economics (markets, productivity, employment, etc.) rather than the other way around.

This direction of causality may become even more important. As Nobel Laureate economist Robert Shiller has observed:

The history of speculative bubbles begins roughly with the advent of newspapers...Although the news media...present themselves as detached observers of market events, they are themselves an integral part of these events. Significant market events generally only occur if there is similar thinking among large groups of people, and the news media are essential vehicles for the spread of ideas.

Now that "the news media" is really social media, the speed at which the stories we tell ourselves spread is only getting faster, and the potentially volatile effects on economics may only be greater. Hopefully this paper can, in some small way, help us understand that more. 

Read. This. Book. "Age of Discovery (Revised Edition)" by Rob Smith

Age of Discovery: Navigating the Storms of our Second Renaissance by Ian Goldin and Chris Kutarna may be the definitive historical perspective on Trump, Brexit, and the entire era we live in. Yes, I feel that strongly about it. I gladly disclose that Chris Kutarna is a friend of mine, but when he asked me for a review, I was filled with some dread. What if I didn't like it, or disagreed with it in some substantial way?

Boy, was that fear misplaced.

The central comparison of the book, between the Renaissance and now, is made carefully and compellingly, backed with data that isn't just useful for that purpose, but useful for a general understanding of the world today. There are figures and charts that anyone arguing about the state of our world should have access to, all wonderfully contextualized in the comparison to the time of Columbus and Da Vinci. After this comparison of facts, the book goes on to compare then and now based on the commonality of flourishing genius and risk between the two eras.

Not to be missed in the revised edition is chapter 8, Prophets and Bonfires, where Savonarola is shown to have uncanny similarities to Trump. I'll not comment on the desirability of similar ends, but I will say that the book offers real insight into what we should expect, and what we should do, after Trump, Brexit, and the other great upheavals of the moment. Obviously, the resolution of these symptoms won't be the end of the problems we now face, and the book provides real insight on the why, and on what we can do in response. The book concludes with broader advice on how humanity can win The Contest for the Future.

I might disagree with the book on some points: I think it's more optimistic about technology than I am. As I see it, the real message of today's complex systems science (whether it be regarding economics, biology, or A.I.) is about the hard limits of human understanding. I could make a Renaissance comparison here: Gerolamo Cardano invented probability in 1564, which has led to our misplaced and arrogant belief that we can algorithmically characterize uncertainty (not to mention, by a largely unseen route that I hope to articulate in my own writing, eugenics, and eventually online algorithmic bias). Thus, I'd add "humility" to the list of virtues that the final chapter of "Age of Discovery" recommends promoting as we navigate the Second Renaissance. We need humility to realize that our "understanding" of the role of DNA, or A.I., etc., is deeply, inherently, and permanently limited, and to move to higher levels of perspective on these elements of highly complex systems. This is the real leap that the new Renaissance can contribute to the future, in my opinion: a new kind of human understanding. The book actually agrees with that point in spirit, I believe.

It really is an important read that I would recommend to absolutely everyone. Here's the book at Amazon USA and Amazon UK. Make sure to get the paperback, which is the Revised Edition. And you can always get it (at a 10% discount) direct from Bloomsbury.

Is Tech a Feminist Issue? by Rob Smith

I'm glad to announce that I'll be talking at another Salon London event at The Hospital Club on October 19th. This time the night's topic is Is Tech a Feminist Issue?, and I'm going to be speaking on how algorithms have hidden biases that reflect reductive, discriminatory thinking on gender (as well as race, religion, etc.), and that descend from the history of science in deep and largely hidden ways.

The bad news is that the ubiquity of algorithms is making such biases global in scale and lightspeed in effect, but the good news is that the obviousness of these biases may lead to their end. I'll talk about the why and how of this critical issue.

There will also be talks from Margaret Heffernan, author of Willful Blindness, and Nichi Hodgson, Founder of the Ethical Porn Partnership. It promises to be a fascinating evening, and I hope lots of you are able to come along!

Two Gigs at Blue Dot by Rob Smith

Glad to say I'll be doing two gigs at The Blue Dot Festival (July 7-9 at the marvelous Jodrell Bank Discovery Centre, home of the awesome Lovell Telescope). The first (on Saturday the 8th at 11:00AM) will be a second round of the panel discussion on how A.I. will impact our future that I participated in at Transmission London (sponsored by Salon London), with Prof Arthur I Miller and Prof Steve Fuller.

The second (on Sunday the 9th at 15:00) will be a little talk I call The Banality of A.I. (in reference to Hannah Arendt's phrase "the banality of evil").

Should be fun...hope to see you there.

RIP Robert Pirsig... by Rob Smith

It is with much sadness that I read today of the death of Robert Pirsig. I believe his philosophical novel Zen and the Art of Motorcycle Maintenance is the one book that's had the most influence on my life. Its examination of what is meant by quality touches every aspect of my perspective: on life, on my work, and yes, even on A.I. I can still pick the book up and feel instantly moved by randomly selected pages. I hope that everyone gets a chance to read it someday, and gets as much from it as I did.

Rest in peace, Mr. Pirsig.

UK Parliament considers what the A.I.s are up to... by Rob Smith

The UK House of Commons Science and Technology Committee has (very appropriately, in my opinion) launched an investigation into the use of algorithms in public decision making. The Committee asked prominent universities, including UCL, to provide opinions on the matter, and I was glad to put some comments in, which (along with comments from many other UCL authors) resulted in a Parliamentary Evidence document on the subject, which some of you may find of interest.

Listen to The Transmission... by Rob Smith

For those of you who might be interested, here's a SoundCloud recording of my recent appearance discussing A.I. at Transmission, the year-end event from Salon London, held on February 2, 2017 at The Hospital Club. As discussed in a previous post, I was onstage exploring the realities of A.I. with two great speakers: Prof Arthur I Miller, who believes that computers will be artists someday, and Prof Steve Fuller, who apparently wants to live forever. I was the nay-sayer, as I think A.I. may never be human-equivalent in the ways that matter, but that despite this, the machines are already our overlords.