Have you seen the billionaire drama yet? You know, the tweets where Elon Musk noted that he spoke with Mark Zuckerberg about artificial intelligence and came to the conclusion that his knowledge was “limited on the subject.” Well, in between those two extreme opinions on AI are lots of intermediaries. The vast majority of companies and individual people are using AI to enhance their world, and they’re doing it responsibly and with full regard for the future. We should approach the advent of AI carefully, but not fearfully. Despite the breaking news of Facebook shutting down its bots after they started speaking their own language, we know the future of AI will be balanced, especially if it’s open source. We’re creating AI to aid us, not to take over our world.
There’s too much to gain to forgo the opportunity out of fear.
As such, we’re starting this series of articles about the role of artificial intelligence. From our C-suite to the general public, you’ll find thought leadership on artificial intelligence delivered in a clear and balanced voice. Hear from leaders who are building AI from scratch, harnessing its power, and doing it transparently and responsibly.
To start the series, Steve Penrod, our CTO, details his opinions on the recent pop culture conversation about AI and its “dangers.” Our company saw the flurry of posts about Facebook having to shut down its bots. Most of our reactions mirrored one another’s: it was an error. They caught it. They learned from it. They will iterate. This sort of situation shouldn’t define an industry and its potential.
From Steve Penrod:
How I Learned to Stop Worrying and Love the AI Future
The other night I was driving and flipping through radio stations when I heard a caller to a talk show referring to the story of Facebook shutting down an AI experiment because the computers started “speaking in their own language that humans couldn’t understand.” She was glad they shut down the program because she didn’t want HAL to become a reality. It was obvious to her that the Facebook mishap was a step in that direction.
I know talk about “AI” is pervasive in the technology and business worlds now, but this was the first time I realized it’s becoming a serious pop culture discussion. And that discussion is stirring up fear.
I’ll be upfront that I’m an optimist. I expect the best of people and am honestly surprised for the better more often than for the worse. When it comes to AI, I really expect to be surprised for the better, for one simple reason: the times I’m usually disappointed in a person are when they act based on emotion. Acting out of anger, spite, jealousy, and fear leads to the worst troubles. At its core, a computer, even an AI, simply does not have emotions. It might mimic emotional responses, but that’s an act we programmed it to perform.
Moreover, there simply isn’t a reason for a machine to develop emotions that drive its decision making. Even in the wildest scenarios of runaway AI, mutating genetic algorithms, or machine learning gone wrong, there is no advantage to having them. So I don’t expect emotions to emerge in any future AI.
On top of that, in my four decades of working with computers I can’t recall a single time when the computer itself made a mistake. There were mistakes in programs, for sure, but I don’t recall a single time when an actual addition or multiplication, an AND or an XOR operation, was found to be incorrect.
That means every mistake I did see was the result of a “bug,” i.e., a detail missed by a human programmer. Usually those bugs occur more than once, but rarely enough that they slip through unnoticed. However, in the modern, networked world of “big data” and “machine learning,” we are creating the ability to catch anomalies in data incredibly efficiently. We certainly have a long way to go to get there, but self-monitoring systems will certainly become the norm.
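To make that idea concrete, here’s a minimal sketch of the kind of check a self-monitoring system might run. It’s illustrative only (not Mycroft code): establish what “normal” looks like in a stream of measurements, then flag anything that deviates too far from it.

```python
import statistics

def find_anomalies(values, threshold=2.0):
    """Flag values more than `threshold` standard deviations from the mean.

    A toy z-score check. Real self-monitoring systems use far richer
    models, but the principle is the same: measure what "normal" looks
    like, then automatically surface anything that deviates from it.
    """
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    if stdev == 0:
        return []  # all values identical; nothing stands out
    return [v for v in values if abs(v - mean) / stdev > threshold]

# Example: response times in milliseconds, with one obvious outlier
readings = [101, 98, 103, 99, 102, 100, 97, 500]
print(find_anomalies(readings))  # -> [500]
```

A human reviewing logs might miss one bad reading in a million; a check like this never gets tired of looking.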
So without emotions, the doomsday scenarios fall away. What is left is a world where the mundane tasks are handled by an assistant with incredible attention to detail: AIs that might make a mistake a few times, but will quickly recognize the error and then never make that same mistake again.
This is why I expect these future “AIs” will become the most reliable, untiring, unflappable, dedicated assistants humanity has ever known. You will have an assistant that will NEVER forget to do what you ask. An assistant that needs to be trained only once. That isn’t something to fear. It is going to free people from the tasks they really don’t like or want to do, allowing us to focus on what we really enjoy and excel at.
Sounds pretty good to me. I’m not worried.
Join us next week to hear from Joshua Montgomery, Mycroft’s CEO, on what AI will ultimately do for our future workforce.
Be sure not to miss it! Subscribe to our newsletter in the footer form below and we’ll send it straight to your inbox.
Alyx is a business analyst at Mycroft, using data to shape metrics and the broader marketing strategy. She also writes these blog posts.