In this episode of Foresight Radio Tom speaks with AI veteran and entrepreneur Jack Crawford about the dramatic changes that AI will bring to our businesses and our lives.
Jack Crawford has three decades of leadership experience in industry, entrepreneurship (he has founded three companies), and technology consulting. He currently leads a services and technology platform firm that delivers cost savings and business process improvements by applying AI solutions to current enterprise problems.
TK: Welcome to Foresight Radio. I’m your host Thomas Koulopoulos. On each episode, we explore the many trends that are shaping the way we will work, live, and play in the future. Our focus is on the disruptive and transformational trends that are changing the way the world works in ways that are often invisible. Our objective is simple: give you the knowledge and the insights that you need to better manage the future. Foresight Radio is sponsored by our good friends at Wasabi. You can learn more about them at wasabi.com.
In this episode, we’re going to be talking about the evolution of AI and what has to be one of the most widely discussed topics in technology and in business today. Our guest is Jack Crawford. Jack has been in the technology space for three decades and is the founder of three technology companies. His most recent was Datalog.ai, a company that focused on the development of AI-enabled chatbots. Jack was also a captain in the United States Air Force. My conversation with Jack looked at a broad range of AI topics from its evolution to the fear factor that often surrounds it to the impact it will have on nation states and the balance of power all the way down to the specific ways in which it can be used by your business to create new value and new opportunity. Here’s my conversation with Jack Crawford.
TK: So Jack, you’ve got a long history working with AI. Give us some perspective from your standpoint, having seen the evolution of AI, what is different today?
JC: Let me summarize it first and then maybe go into a little more detail on my viewpoint. We’ve moved from the age of rules to the age of machine learning, where we don’t really understand how the machines learn, which is closer to human learning than it is to traditional programming.
TK: So that’s a point that gets brought up a lot, this notion of AI doing things without us necessarily understanding why it does those things. I think that scares a lot of people, right? So when you look at the evolution of AI, is part of the challenge the fact that we’re just afraid of it or has it fundamentally been an absence of the right technology to make it happen?
JC: It’s a little bit of both as far as the technology being available. It wasn’t a lack of understanding of how to make computers learn; the computing power required to create something useful simply wasn’t available to the everyday business, or even to most large businesses. There were exceptions in government and in large organizations that had the capital and a high return for using algorithms trained with neural networks. This isn’t widely known, but you can see papers going back at least a decade and more about neural networks, a subcategory of machine learning where computers learn how to do a task, look for a pattern, or try to find a needle in the haystack within data. Many people talk about the game of Go and how Google was able to defeat the best human on the planet.
The reason why I don’t like that example is that it’s a very big problem and it involved a lot of computing resources. It dissuades businesses from trying something new. What’s really available today, with the advent of fast processors you can install in a personal computer, is that anyone can build a machine learning algorithm and seek to automate a task, or have something learn how to do a task and how to execute it. That’s what’s changed. It isn’t the idea of AI, it’s the ability to do it.
TK: I had a chance several months ago to listen to Garry Kasparov, who was the world’s reigning chess champion in the 1990s, talk about his match against IBM’s Deep Blue, the first computer to win a match against a reigning world champion, in 1997 I believe it was. He talked about the fact that Deep Blue really wasn’t AI. At best, it was machine learning, but really it was a matter of brute-force computing that could think ahead many more moves than its human opponent could.
TK: Here’s Garry Kasparov at a recent global retail marketing association meeting talking about Deep Blue, the IBM computer that he lost to in 1997 and also commenting on whether Deep Blue was in fact intelligent. Here’s Garry Kasparov.
GK: Deep Blue was phenomenally fast. Even by today’s measurements, it could reach a speed of 200 million positions per second. Again, machine doesn’t solve the game. All it has to do is just to make sure that it will not be making the last mistake. I can tell you Deep Blue was not intelligent at all. It was as intelligent as your alarm clock. A very expensive one, about $10 million worth, but still an alarm clock.
TK: I think for a long time, what has been passed off as AI has been more and more powerful computers. So now we’re being told, wait a minute, there’s a brute-force approach, there’s a machine learning approach, and there’s an AI approach. I think people are genuinely confused between those three categories. Can you clarify that a bit for us? Is there some way to simply understand what the differences are between those three labels that we use to describe levels of intelligence for a machine?
JC: There’s something you said about the master chess player, about it not being AI. Where that falls apart is that there is not one type of AI. AI doesn’t include only these unsupervised methods for having a machine learn; it includes almost anything that gets the job done. At the beginning there’s that first category, systems that act. If we think about cruise control in a car, is that artificial intelligence? Not really, but it’s pretty smart. It’s able to adjust your acceleration and your brakes based on how fast the car is going. Over time, sensors were used to detect a car in front of you and slow your car down, but those were rule-based. Could it learn different traffic conditions and do a better job of that without rules? That’s what we’ve seen lately. So it’s a matter of bringing new methods in, and as I said earlier, they’re not new methods. They’re just methods that we can now create in a reasonable amount of time.
So I like to think of the human body. We have a head, arms, legs, organs, and other useful parts of the body, the circulatory system for example, and the nervous system. Those are what the body uses to keep the brain alive. It’s a life support system. The five senses, remarkably, are the sensors, located in the head right close to the brain, so they have the shortest distance possible to interact with the brain where the processing is occurring. If we lose our brain, our entire function degrades. If we think about an automobile, its sensors are largely electromechanical. They take data and pull it in. There’s no artificial intelligence in data being captured from a radar, from a camera, or by sound, but those things collectively can feed into algorithms that have been created to give best guesses of what something might be. Then the last part would be applying the rules. For example, don’t exceed the speed limit. Well, that means you need to know what the speed limit is. So you either remember it or you read it off the sign.
So what's happening with AI? Well, all that data is in Google Maps: what the speed limit is, what kind of road you're on, how fast the other cars are going. That processing requires applying rules to your situation. The decision about how you react to the situation is where we begin to use the systems that have learned without us really understanding how. There's an important point to make about systems that involve human safety: flying an airplane, driving a car, running a nuclear power plant. Those systems require a lot more testing of these advanced forms of AI so that you can take the risk out of using them. It becomes a risk-adjusted decision where you look at whether this particular application of artificial intelligence is actually safer than human action.
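[Editor's note: the rule-applying layer Jack describes, knowing the speed limit and not exceeding it, can be sketched in a few lines of Python. This is an illustrative toy, not code from any real vehicle system; the road types and limits are invented.]

```python
# Toy sketch of a purely rule-based layer ("systems that act"): clamp a
# desired cruise speed to the posted limit looked up from map-style data.
# No learning is involved -- this is the rules category Jack contrasts
# with systems that learn.

SPEED_LIMITS = {"residential": 30, "arterial": 45, "freeway": 65}  # mph

def target_speed(desired_mph, road_type):
    """Apply the rule 'don't exceed the speed limit' to a desired speed."""
    limit = SPEED_LIMITS.get(road_type, 25)  # unknown road: be cautious
    return min(desired_mph, limit)

print(target_speed(70, "freeway"))      # rule caps the speed at 65
print(target_speed(40, "residential"))  # rule caps the speed at 30
```

A learned system would instead replace the fixed table with a model trained on traffic conditions, which is the shift Jack describes.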
TK: I think for some time we will be having a fairly lively debate, because although we can prove mathematically that a certain AI-driven device, an autonomous vehicle for example, may be safer than one driven by a human being, there is an emotional attachment to the process of driving, and we want to believe that as humans we simply are better at certain tasks than the machine could ever be. When the machine makes a mistake, we certainly hold it to a much higher standard than we hold ourselves. So while mathematically it might be safer, while there may be less risk involved in the machine doing a particular task, we don't want to believe that.
JC: It's just a matter of time, I believe. It's not that it won't happen, this change in perception. Just think about horse-drawn carriages. When automobiles, these noisy machines without horses in front of them, came along and began to crowd the roads, they scared the horses, because horses weren’t used to them. So laws were put in place to have automobile drivers turn off their engines at intersections until the horses passed. As we see today, unless you're in Amish country, you're not really allowed to have a horse-drawn carriage on the freeway or a major public route without some kind of exception. That's the first example I think of. The second one is the one in your book. I don't know if your listeners have had a chance to get this far in your book, but the Air France example is one where the humans made a decision despite the warnings the system was providing. So the existence of an automated system changed nothing.
TK: There is this element of human arrogance that we have to factor into the equation. I think that the only way, as you said, to outgrow that is, over time, to have reinforcement from our experience with AI that says to us there is an aspect of how we collaborate with these devices, or how we allow them to perform certain tasks, that is in fact going to be much more effective, much safer, much less risky than if we did it ourselves.
So you've been at this for some time, Jack. Here's a question that I'd love to have you shed some light on. What does the timeline look like? On the one hand, we have folks like Ray Kurzweil who tell us that we will achieve machine intelligence equivalent to the intelligence of the entire human race within the next 50 years. At what point do you think we will accept AI as a necessary part of the world, so that we mere mortals are using it in our devices, in our homes, in our cars, without having these debates around safety and risk, where we simply accept it and look back and wonder how we ever lived without it? Is that a 10-year, 20-year, or 30-year horizon?
JC: I think the timing may not be as important as who adopts it first. If we go back in recent memory, the last decade brought the advent of smartphones. At the beginning, consumers bought them to some degree and then it began to pick up steam. Smartphones began to outsell other types of phones, and people were ditching their BlackBerries, or whatever Nokia phone they had, to get an iPhone. That didn't happen in business. It took another two years for businesses to set rules in place and provide mitigation against data escaping by putting controls, like McAfee’s, on phones. That's what's necessary for business applications, where we're talking about AI being applied to improve outcomes for business, which means better products for consumers, lower costs to produce those products, and value for everyone in the value chain.
I believe that's possible today, but the adoption is slower because it hasn't been made safe yet. It hasn't been packaged in a way that you could just buy and implement. This has been true of every computer-based innovation for the last couple of decades. Businesses adopt discontinuous changes more slowly. So while businesses were able to buy large computers in the beginning when consumers couldn't, when the PC came along, it flipped centralized processing to distributed processing. It took a good 10 years before PCs were used in a meaningful way in business and in a shared networking environment where, if you changed a spreadsheet, other people could see it, for example. Even that's fairly recent memory.
So I would say AI is already being used in all sorts of ways in our personal lives. Siri, for example, is AI at its core. It has a good number of rule-based components, but when you ask it a question, it has to understand how you phrased it, and that uses a neural network that’s been developed to understand the intent of your sentence.
TK: So take this to the business listener for us. Let’s look specifically at how AI is going to change the way that we do business. So at the one end of that spectrum, you have Putin saying that whoever figures out AI is going to rule the world, wonderfully dramatic statement from a very interesting source but you have it, it’s out there, and there are others that talk about AI in that sort of an overlord kind of fashion.
TK: It’s worth pointing out here that Putin’s quote is often taken so far out of context that it loses its original meaning. The quote, which comes from a speech he gave to a group of students, is as follows. “Artificial intelligence is the future not only for Russia but for all humankind. It comes with colossal opportunities but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world. It would not be very desirable that this monopoly be concentrated in someone’s specific hands. That’s why if we become leaders in this area, we will share the knowhow with the entire world, the same way that we share our nuclear technology today.” Hearing that quote in context makes it a bit less draconian but no less profound. Back to my conversation with Jack.
TK: At the other end, you have these fairly straightforward applications of AI that seem to be getting traction but minimally like chatbots which I know you have a lot of experience with. Give us a sense for where you see the best applications initially of AI that do exactly what you just described, that reduce risk, that improve the efficacy of the business, where do you think the most traction is to be had?
JC: It’s in anything that gives you foresight, being able to predict behavior. We’re moving from systems that do to systems that think, and prediction is part of thinking. You look at all the factors that are presented to you and make a decision about what you think is going to happen. We all do that every day. Let’s take robotic process automation, which in its initial use in business was around systems that mimic human actions to process information. So now you’re not stamping the envelopes and stuffing them; you’re running a campaign that’s fully automated and electronic. The next advancement was: could we look at how the emails are being clicked on? Are you clicking on the top part of the email or the bottom part, and how do we know? Because we’ve embedded some logic that allows the system to capture that information. Then another email, or another type of response, can be triggered. That’s a system that thinks, that applies judgment while it’s doing whatever work it’s doing.
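[Editor's note: the embedded campaign logic Jack describes, where click behavior decides the next action, can be sketched in a few lines of Python. The action names and thresholds here are invented for illustration; they are not from any real campaign platform.]

```python
# Toy sketch of a campaign system that "applies judgment": where in the
# email the reader clicked determines which follow-up gets triggered.

def next_action(click_position):
    """click_position: fraction of the way down the email where the
    reader clicked (0.0 = top, 1.0 = bottom); None = no click at all."""
    if click_position is None:
        return "send_reminder"        # no engagement: nudge again later
    if click_position < 0.3:
        return "send_product_detail"  # clicked near the top: hot lead
    return "send_survey"              # read to the bottom: ask for feedback

print(next_action(0.1))   # send_product_detail
print(next_action(0.8))   # send_survey
print(next_action(None))  # send_reminder
```

A predictive version would replace these fixed thresholds with a model trained on past campaign outcomes, which is the shift from rules to learning Jack is pointing at.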
The ones you’ve been talking about, the systems that will take over the world at some point, are systems that learn, that recognize a pattern and apply some logic to execute tasks. That’s the future, I think, ultimately. But before we get there, we need to understand what we should do, and that’s where I believe predictive AI, what used to be the realm of predictive analytics, is the best opportunity.
TK: Why are we having so much difficulty in creating something as basic as a chatbot?
JC: The ones that are gaining more success are the ones that don’t call them chatbots; they integrate them into their websites and mobile applications as an additional way to engage with the brand. So here’s the answer. I believe that because we’re visual creatures, it will be a big step to move away from the visual toward having to read again, or even to listen, because if the response takes a long time to play back, you’re going to lose part of it and not be able to act on it. But visual information, if something is displayed in a way that’s easy to read and process, you’re going to gravitate to that rather than to chat. So there’s one last thing we really need to do to make chat work, and that is for humans to become comfortable with the idea of speaking with a computer rather than a human.
TK: Look, if we’re doing something complex, you and I, I won’t hesitate to pick up the phone rather than try to go through endless texts or emails, because I know the conversation will carry more value; we will be able to engage each other and explore the idea and its nuances in ways that we simply could not in written form. When I try that with a computer, however, we’re nowhere near that. We’re light years away from that sort of conversational ability. Now, Google recently demonstrated a personal assistant that will be able to order a pizza for me or schedule a haircut, and it sounded very convincing, but I wonder how low the threshold is for complex conversation. I would imagine it’s quite low. Are those the kinds of conversations we’ll have to evolve into having with AI for it to finally be pervasive and really change our lives, or can we do it without that level of conversational capability?
JC: I liked the Google demo, but like many demos, we don’t know what’s going on behind the scenes. We do know that Google has access to information built into the applications they’ve created over the years, Google Maps, Google Business, and that information can be used by their automated algorithms on the backend to do many things that other businesses can’t. Why? Because Google is not sharing. They’re not licensing that to anyone. You need to contact the Google service, make your inquiry, and get the information back so that your chatbot can display something intelligent. While that may or may not carry a fee, you’ve created a dependency on another AI that you have no control over. That’s where, with interactive virtual assistants, we take on a good degree of risk.
TK: There’s a wonderful example. I don’t know if you follow Black Mirror at all, the series on Netflix. I’ve watched a number of episodes. It’s a fascinating peek into the future, very well done, well produced. There’s an episode called Be Right Back in which a woman’s husband, or her boyfriend, I can’t remember which, dies in an accident, and she then recreates him through AI. A service is available that will first allow her to use her partner’s public texts, tweets, chats, what have you, then his email, and ultimately, at the apex of the episode, she can actually order a full life-sized replica of him. It’s a very disturbing episode, because you can see her struggling with the fact that while she can in some ways communicate with this AI in a very human way, she knows it’s not human.
It sort of brings us to this inevitable question about AI which is how do we treat it. Do we treat it as though it were a human entity? We talked about autonomous vehicles that will own themselves someday, do we give them legal rights and legal standing? I know we’re going out on the edge but I’d be fascinated to get sort of your perspective on where you think the long-term trajectory for AI might take us and if in fact these are flights of fancy or if you really see us getting to that point where we’re having at least conversations with AI that are indistinguishable from those we would have with a human being and ultimately are able to create entire personas through AI?
JC: Think about the Mechanical Turk. You walk up to this carnival attraction that has a puppet, and the puppet responds to you, but underneath the puppet, in a box, is a human being. We can use humans to make certain types of AI seem more human and to fill the gaps where the intelligence is missing. It’s not up to me to tell you when. You asked earlier, are we 10 years or 20 years away? What I feel is that the intelligence we’re building into systems is creating systems that can create their own systems. This idea of discovery that advances humanity, in medical science or aircraft engineering, is certainly happening as we speak: things are getting better because the computers we created are discovering new ways of doing things. This is the realm of systems that create, which is another category of AI. We’re going to see more and more of that.
So what’s happening with robots and human-like conversation is just that. The expectation is that the android will create something new, not just parrot back what it’s learned through a set of rules. That could be both disturbing and useful. It could be useful if the robot is a companion, like an emotional support animal. It doesn’t have to do much. It just has to be there and be happy. Then if it moves up the chain of animal intelligence, it ultimately reaches human intelligence. I believe that’s going to happen at some point, but it has really nothing to do with how businesses would use forms of predictive analytics that can be enabled with machine learning. Anomaly detection, where you’re looking for outliers, can be really helpful, to prevent disasters for example: if something goes outside of a boundary, you want to be able to anticipate it and cope with it. Those capabilities are immediately available and improving at a dramatic pace. So if you don’t implement something today and your competitor does, in six months you’re behind. You’re worse than behind, because now they’re using that to build something even better. This progression in AI is moving so fast that it will catch many people by surprise.
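[Editor's note: the anomaly detection Jack mentions can be illustrated with a minimal standard-deviation check against a known-good baseline. This is a deliberately simple sketch with invented sensor values; production systems use far more robust methods.]

```python
# Minimal anomaly detection sketch: flag a reading that sits more than
# `threshold` standard deviations away from a known-good baseline window.
from statistics import mean, stdev

def is_anomaly(reading, baseline, threshold=3.0):
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(reading - mu) > threshold * sigma

baseline = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2]  # known-good sensor history
print(is_anomaly(10.4, baseline))  # False: within the normal band
print(is_anomaly(42.0, baseline))  # True: far outside the boundary
```

Statistics are computed on the clean baseline rather than on the stream being tested, so a large outlier can't inflate the standard deviation and hide itself.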
The science fiction aspect of it will take longer. It will take humans longer to accept it. But you don’t need a lot of humans to accept the idea of having their own personal robot to make it successful economically. Moving back to mobile phones, the iPhone didn’t have a lot of sales in its first year, but today, who can imagine being without an Android phone or an iPhone or anything with a big screen? I’m using it and I’m not afraid of it. Some people are. Some people turn off their location awareness because they don’t want the big system in the sky to know where they are. In my case, I get some value out of that. When I use Google Maps, it knows exactly where I am and helps me get to my destination. The AI knows where I’ve traveled in the past, what time of day it is, and perhaps where I might want to go now. It doesn’t disturb me very much that the AI is able to predict that for me, because I know a lot about how it was programmed. The way it was programmed is on millions and millions of trips that people have taken.
Google doesn’t need everyone to adopt it to get that information. They only need part of it. They probably only need 100 million users to train the model that can do that kind of work. Now that that’s there, it doesn’t need us anymore, because it’s learned. And because it’s learned, it can predict behavior for anyone, because we fall into patterns. We’re not altogether different from one another. We’re just different from most of the people we know. There are doppelgangers everywhere as far as a machine learning model’s understanding of who you are; we’re just not terribly different across the entire population of the planet.
TK: So you brought up a fascinating point in talking about the importance of having access to large amounts of data. There’s a critical mass of data at which we can develop some intelligence about a certain task, a process, a group of human beings, devices, natural organisms and ecosystems, whatever the case might be. However, and we’ve also already talked about this, it seems that the companies with access to those volumes of data I could count on one, maybe two hands at most. Will those be the companies that ultimately are able to really leverage AI because they have access to the data? What happens to the small and medium-sized businesses, even the large businesses, that don’t have that much data? Are we creating a bifurcated world here, in which some organizations have the ability to build AI because they’ve got access to data, and others simply don’t, or have to license that data from the organizations that do?
JC: Yes, you’re right. It applies not only to businesses like Google, Amazon, Microsoft, IBM, and Apple, who have access to a tremendous amount of data. It also applies to nations. Vladimir Putin’s comment is not something anyone should ignore. He just said it out loud, but it’s quite true. Nations, large nations particularly, have the advantage of being able to spend immense amounts of money. My view is that in China, where human resources are at a much lower wage rate than in the US and there are quite a few more people to choose from, they’re able to create models that we simply can’t create here. So collaboration between a nation’s government and a company can be very valuable. Here’s the problem, though. Google is a multinational company. Apple is a global company as well. Although they may be headquartered here, the tools and models they’ve created may be used elsewhere. So then we go back to something we did with hardware, and that’s export controls. This is going to get very interesting and, in my view, rather ugly. It’s going to be ugly because the possession of data, if it’s used with algorithms, will provide tremendous leverage for anyone who has both.
TK: I would love some good news [Laughter] because I think the path we’re going down right now has the potential to scare the daylights out of a few people. Good news please.
JC: Okay. So two bits of good news. The first is that all of these algorithms are being published, and they’re not terribly innovative at their core. So I could create, you could create, some algorithm to do machine learning prediction, for example. Why do I say that? Because these large companies have already made tools available to everyone where you take your data, you plug it in, and the next thing you know, you’re getting a prediction. Salesforce.com with their Einstein product is one. Microsoft has tooling that makes it very easy for developers to do it. There are a few startups with even easier ones. They’ve done this because the demand is there. It takes a tremendous amount of mathematical acumen to understand the formulas and put them to use in these types of mathematical processing. It’s all just math. So having that math expertise, or having it programmed into a tool that helps, really is a great thing. That’s part one of the good news.
Here’s the second part. I read a paper last week, and there are others that have come out recently, on ways to create models with far less data. And not only far less data but also much faster. So where it might take Google two weeks to train on a large amount of data, you could train on a small amount of data in a week, or even in an hour, depending on the task you’re seeking to automate. It’s wonderful news, and it encourages me greatly. In my view of the business world, the part I can contribute to is middle market companies. These are companies under a billion in revenue, maybe over 100 employees, that are willing to invest and spend some money on this, not a lot, but it’s now accessible to them. Most of them have a large amount of transactional data by customer.
If you’ve been in business for 10 years and you’ve been shipping product, you probably have a record of everyone you’ve ever shipped to. You probably have every order date, timestamp, and the checkout basket of everything you sold, whether you’re an online retailer or an actual retailer with stores. That’s extremely useful and valuable for predicting human behavior. So it no longer falls into the realm of just Google or the NSA or whoever is trying to figure us out. You can use these tools today. If you’re a small or medium-sized business and you don’t, it’s at your peril.
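[Editor's note: Jack's point about transactional data can be made concrete with a classic RFM (recency, frequency, monetary) score, one of the simplest predictive signals a retailer can extract from an order log. The customers, dates, and amounts below are invented sample data.]

```python
# Sketch of RFM scoring over a shipping log: the kind of signal a
# mid-market retailer can compute from order dates and amounts.
from datetime import date

orders = [  # (customer, order_date, amount) -- invented sample data
    ("acme", date(2018, 5, 1), 120.0),
    ("acme", date(2018, 5, 20), 80.0),
    ("bolt", date(2016, 1, 10), 300.0),
]

def rfm(customer, today=date(2018, 6, 1)):
    mine = [o for o in orders if o[0] == customer]
    recency = min((today - d).days for _, d, _ in mine)  # days since last order
    frequency = len(mine)                                # number of orders
    monetary = sum(a for _, _, a in mine)                # total spend
    return recency, frequency, monetary

print(rfm("acme"))  # (12, 2, 200.0) -- recent, repeat, likely to buy again
print(rfm("bolt"))  # (873, 1, 300.0) -- long lapsed, a churn candidate
```

Scores like these are the typical inputs fed to the packaged prediction tools Jack mentions, which learn to rank customers by likelihood of a future purchase.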
TK: That’s the one I want to pick up on, because I think a lot of folks who are listening to this, and who are interested in AI and how it will evolve and change the way we live, work, and play, are asking themselves: what will happen to my business? We’ve talked about how AI will get rid of 25% to 40% of the jobs we have as individuals, but I think it will have a similar effect on businesses: those that aren’t ready, that don’t take advantage of this, will quickly find themselves well behind the curve, too far back to catch up. So the question they have is: how do I stay at least within striking distance of that curve, so that I know I’m doing the right things to be ready when my industry or my customers begin to demand these sorts of AI-driven services from me? On the one hand, I’m encouraged by what you were saying, because there is a democratization, as you put it, of AI. On the other hand, you need to be doing something as a small, medium-sized, mid-market business to give yourself the opportunity to take advantage of these technologies, and I suspect, from what I’ve seen, there’s not a whole lot of that going on in medium-sized businesses. Much of it is happening in very large businesses. So what’s your advice? How do I even begin to create the knowledge base, the tools, and the skill sets to be within striking distance of the AI curve?
JC: I don’t think you need to develop all the tools and skill sets yourself. That’s a technically oriented notion. The tools are getting easier to use, and you can find boutique consulting companies that can do this for a small business. You don’t need to be Netflix, right? You’re trying to run a small or medium-sized business, something outside the Fortune 2000. If we walk into any bookstore, if there are any still remaining, if there’s a Barnes and Noble out there, and go to the business section, or if you go onto Amazon and look in the business section, you’ll find books by authors like you and by Geoffrey Moore and others. I’m looking here on my shelf, and one of the titles Geoffrey Moore wrote beyond Crossing the Chasm was Dealing with Darwin, subtitled How Great Companies Innovate at Every Phase of Their Evolution. I just started reading this book. I’ve read his other books; this one I hadn’t yet.
Really, the winning idea is to change your business process, not to worry about the technology necessary to enable that change. The hack is this: if you bring in new technology, you have to wait for people to use it. If you change the business process and use automation to enable it, the employees have no choice. They also buy in, because they participated in the business process change, if it’s a company that’s well organized and follows good management practices. Try to understand where the choke points are, where the waste is. Then, with that knowledge of what AI can do, look for some type of improvement that can cut out waste, whether it’s waiting time, overuse of resources, improperly trained people, whatever it might be.
Now, it’s not an efficiency play per se. Yes, you get the benefit of the cost saving, but the real change is value: value to the customer because the product is better, and value to your organization because you have people doing things that take advantage of their training, their value, their ability to process things computers can’t, instead of doing rote activities. We should move beyond rote activities, let the computers do them, and begin to use our minds in ways that make our businesses excellent.
TK: I’ve often heard similar comments which lead me to believe that maybe one of the th