General AI is Already Here

with Chris Boos

Show Notes

In this episode, we look at the global aspects of AI and the implications of generalized AI in enterprise settings. Our guest Chris Boos is an AI entrepreneur, angel investor, and advisor to the German government on the implications of digital technology.

About Chris Boos

Chris founded Arago in Germany in 1995, pushing the existing boundaries of AI technology to build a general AI. Since then, Chris has led Arago to become a key partner and driver for the established economy, positioning Arago’s AI, HIRO™, as a platform for companies to reinvent their business models in the digital age. But his ambitions go far beyond that: a strong believer in integrating machine reasoning and machine learning, Chris is constantly challenging current thinking on AI. As a strategic corporate and political advisor, as well as an angel investor, Chris’ multifaceted engagement with AI makes him a much-respected public speaker and thought leader on issues of global importance, such as the man-machine relationship, the way societies deal with information, and the future of labor. arago.co

Transcript

"General AI is Already Here (with Chris Boos) TK: Welcome to Foresight Radio. I’m your host, Tom Koulopoulos. On each episode, we explore the many trends that are shaping the way we will work, live, and play in the future. Our focus is on disruptive and transformational trends that are changing the world in ways that are often invisible. Our objective is simple: to give you the knowledge and the insights that you need to better manage the future. Foresight Radio is sponsored by our good friends at Wasabi. You can learn more about them at Wasabi.com. Our guest today is Chris Boos, and we’ll be speaking with him about a wide range of issues involving AI from how it works to its evolution, and its impact on business. Chris’ self-acclaimed mission is empowering human potential by freeing up the time we need to be creative and innovative through the use of artificial intelligence and he has an extraordinary background on which to do that. In 1995, long before the “buzz” around AI, he founded Arago, a German company that pushes the boundaries of AI by working on what’s called “general AI,” something we’ll talk about more during our conversation. Chris was also recently appointed to the German Digital Council by Chancellor Angela Merkel where he’ll advise the government on the future implications of digitalization and artificial intelligence. Chris challenges our current thinking on AI as he paints a vivid picture for its opportunity and the way it will create entirely new business models and experiences. Here’s my interview with Chris Boos. Chris, let’s start with what I think is one of the most important areas to get straight which is this whole notion of, “How do we define what AI is?” There’s so much vocabulary here. We talk about machine learning versus AI. How does AI differ from machine learning? Are there different kinds of AI and machine learning? CB: I would say AI is the overarching science that includes many fields, and machine learning just happens to be one of them. Personally, I believe that this notion that we have from the science side to always pick one algorithm that should solve it all and that one algorithm, at the moment of that algorithm family at the moment seems to be machine learning is actually responsible for the many lows that AI has had over time. I mean, it’s an old science, right? It was created as a field in 1954, and since then, it’s gone through summers and winters, and all of the winters were basically created by the fact that people started running after one technology to solve it all. Machine learning as itself is a great algorithm or a great algorithm set. It basically rebuilds instincts that we have in the animal kingdom or with humans, so by repeating something that gets a positive or negative reward, you train to do more or less of it and that’s behind all the machine learning that we’re doing right now, and it’s just one part of AI. The misunderstanding that I think is in the system is this is the part where we’ve made the most progress because it was very clear that with the algorithms that were available, they would be very much usable once there was enough compute power, and the algorithms haven’t really changed that much in the last five or 10 years even. It’s more the compute power that has changed a lot. This is why all of a sudden, machine learning that was invented in the 70’s became [appliable] to a larger set of problems, and all of a sudden you had results that you previously simply could not compute and that’s why it’s out there. 
CB: If you look at the field of AI, there’s machine learning, there’s machine reasoning, then there’s natural language processing, there’s semantics. There are many more parts, and they’re totally essential, so I believe that just boiling it down to machine learning is super dangerous, because if you try to solve everything with machine learning, you will consequently hit the wall, and the wall for machine learning, just to get this in here, is this: there never is enough data to describe the now. As soon as you hit that problem where you cannot describe whatever you want to decide with data in a timely fashion, you’re going to make stupid decisions.

TK: Here’s an interesting point, Chris, because I often hear conversations that focus on this notion of, “We need more data. With more data, we’ll have more intelligent AI,” but you’re bringing up an interesting point, which is that there are situations when you just can’t have enough data. As human beings, we rely on intuition, we rely on instinct, we give this all kinds of names, our gut feeling, but we often make decisions without enough data. Can AI go beyond those finite constraints? How does it do that?

CB: Absolutely. With AI, we’re trying to simulate the application of human experience, and if we go further into this general AI realm, we’re trying to simulate human problem solving, so that definitely can do more than just replicate the gut feeling. Actually, I like that you’re bringing this up, because it really is the replication of instinct, and instinct is a very old part of our evolution or development, and this is why the instinct part does not have words. This is why we call it “a gut feeling”: because we cannot express our instincts properly. They’re a gut feeling; they don’t have words. They live in the old part of our brains.

TK: We often talk about AI in the context of board games like chess or Go. One of the things I remember hearing many years ago was how Kasparov, when he played against Deep Blue, got a little flustered and frustrated because he thought the machine at that point wasn’t AI, it was brute force. He thought it was doing things that were not predictable, that were uncertain, that didn’t make sense within the context of humans playing each other. Now, today, we have the same thing. When Lee Sedol played against AlphaGo, you heard the same kind of reaction, like, “It was exhibiting intuition. The machine was doing things that were very human-like.” Can you give us a sense of what it means for an artificial intelligence to have intuition?

CB: I’m very sorry to lift the magic out of this. There simply is no intuition in AI, and whenever it looks like an AI is intuitive, it just does something that we, for whatever reason, don’t do. You could imagine the Go player that did something the Go pro would never have expected. That’s not a Go move that humans have never made. It’s just that experts have never made this move, because for some reason it didn’t occur to them. And then you have to watch how people learn to play chess and Go: they learn along certain strategies, and those strategies evolved on a certain set of mindsets, and what a machine can obviously do is mix those mindsets, and that’s what it’s doing. I mean, sometimes a beginner in a game of chess can beat a great chess player because he makes a move that the professional chess player thinks is so stupid, or just so unexpected, that his whole learned professional game falls apart, and that’s pretty much what happens here.
CB: It’s not that the machine was intuitive; it just seems that way. The same thing can happen in broader ranges of data. We, for example, played Civilization, and we had the machine change strategy from a normal trade and military strategy to a “let’s go leave the planet” strategy very early in the game, and we never understood how it could know that that was the only way to win. But when you really dig deep into the data, it’s simply because of micromanagement. It really evaluated every single city it had on the map and every single unit it had on the map. That was pure micromanagement. Humans would have come to the same conclusions if we gave ourselves the time to actually look at everything. Humans are the much better pattern-matchers. We see patterns much better, but that also means we sometimes overlook detail.

TK: One of the things that we as humans often do is impose bias on a problem. Sometimes that’s good, because the bias comes from experience, and you can look at the problem through a certain lens that allows you to solve it much better than someone who has no experience. However, I have seen many cases with very senior or smart people where their bias will limit the field within which they ask the question. They’ll leave out certain areas that, based on their experience, they think are not worthy of pursuing. Sometimes, especially in scenario-based planning, you can leave out some very important scenarios that you should be evaluating. But you’re saying the machine doesn’t have that bias. The machine will evaluate all scenarios with equal objectivity, because it can do so.

CB: It will not evaluate all the scenarios with equal objectivity, but it will go into the detail. Here’s one of the main parts. We say, “Machines are very dangerous in being biased.” Sometimes that comes up. Machines reproduce the biases we give them, either through direct interaction or through the datasets that we provide. If you look at it, a machine is just not able to express itself in any politically correct way, meaning that if you actually want to mirror what people are doing and the biases that are within us in our society, AI is a pretty ugly mirror, because there’s simply no buffer, no softening in this. It just reproduces the bias in the facts that we have in the datasets and in the opinions that we teach the AI.

TK: What I hear a great deal is, “With more computing power, with more data storage, we will have better AI.” Is that really what it’s about? Is it just the amount of data, the volume of data, and the degree of power that we have in the computer to process it that stands between us and the advances in AI, or is there more that needs to evolve here to truly get us to the next level of artificial intelligence?

CB: There is more to AI than simply that; the compute power and the data availability mostly just play into machine learning. There are also other places. For example, AI started with expert systems, right, where computer scientists had this idea that there would be one right answer for every question you ask, one exact solution to every problem. That’s a very computer science approach to life. The whole point is that obviously didn’t work, because for most problems there’s more than one answer, and mostly the problem definition is not good enough, so you have to keep changing the problem definition, and then you get problems in going down that logical decision tree.
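To make the contrast concrete between the single-answer expert system Chris just described and the weighing of alternatives he turns to next, here is a small editorial sketch (illustrative rules and scores, not Arago’s engine):

```python
# An expert system hands back the single "right" answer per problem;
# machine reasoning instead scores several candidate next steps and
# picks the one closest to the goal. All rules and scores are made up.

RULES = {"disk full": "delete temp files"}   # expert system: one answer

def expert_system(problem):
    return RULES[problem]   # raises KeyError for anything without a rule

def reason(candidates):
    # Rational thinking as scoring: which step gets closest to the goal?
    return max(candidates, key=candidates.get)

candidates = {                  # estimated progress toward "healthy disk"
    "delete temp files": 0.6,
    "archive old logs": 0.8,
    "resize volume": 0.4,
}
print(expert_system("disk full"))   # -> delete temp files (the only answer)
print(reason(candidates))           # -> archive old logs (the best answer)
```

The expert system breaks the moment the world steps outside its rules; the reasoner degrades more gracefully because it only ever ranks options.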
CB: There, you also see a reflection of how chess used to be played, and why Go couldn’t be played with a decision tree. The family of algorithms that started then is called “machine reasoning,” and once it was clear that you could not have one logical answer to every problem, you actually had to start weighing answers against each other. This is basically what we call “rational thinking.” There are many options for what you can do next, or what you could look at next, and what you can think about next, and the question is which one gets you closer to your end goal. That is basically what is happening inside the reasoning space. The reasoning went from these expert systems, to knowledge graph systems, where you had multiple ways of reaching an answer but there still was only one answer, to more knowledge-driven systems, where you actually had multiple different answers to the same problem. Unfortunately, all of those were based on the world being completely logical, and anybody who’s lived a bit, and does not completely live in the bubble of the valley, knows that the world is not really logical.

TK: You once said to me, “We don’t talk about the future, we promise the past.” In some ways, it seems as though AI, inappropriately envisioned, could simply be relying on the past and not innovating the way we humans do a new, different future. Does AI shackle us in some regard to the past by doing so, by using patterns that are legacy patterns?

CB: It definitely does. In the good explanation of AI, you would say AI is applying experiences you’ve already made. I mean, when you’ve learned how to add numbers in school, you can add numbers in any kind of way, and that gives you something very interesting, but you don’t come up with the idea that maybe you could also divide numbers. AI is inherently non-creative, meaning that it does not create new experiences for itself. It’s an optimizer for how experiences can be used in different circumstances and different contexts, but it does not create new experiences. It’s our job as people to create new experiences. That is what we do.

TK: What you do at Arago is focus specifically on general AI. Now, I want to talk about this, Chris, because when we listen to the prognosticators of gloom and doom who talk about how AI someday is going to become our overlord, take over the world, and be our last great invention as mankind, what they also say to us is, “Don’t worry about existing AI yet, because narrow AI isn’t a threat. It’s generalized AI that’s a threat.” But you’re focusing on generalized AI, so help us develop a more objective view of the future of AI, and maybe bring us back from the brink of the apocalypse just for a few minutes so that we can have some perspective on what’s going on here. Why are people so impassioned about the idea that AI could be the end of civilization as we know it?

CB: Good question, but let’s try and answer that. First, I have three categories to define this whole space of AI. On the one end, you have the narrow AIs. That means applying exactly one algorithm to exactly one problem. I jokingly sometimes call this the programmer’s answer to McKinsey. It’s pretty much what these high-end strategy consultants do: one very extensively optimized solution to exactly one problem. This is what narrow AIs do. They’re great. They get trained to do exactly one thing, and then they do that one thing over and over.
CB: You have to retrain them when the world changes, but otherwise, why would you be afraid as a person to use that? It’s just efficiency that’s coming out of it, and most companies have been doing nothing but efficiency programs for a very long time. I mean, look at the innovations that we’ve already made. We’ve become so much more efficient. Not always more effective, that’s sad, but definitely much more efficient. Narrow AI does the same thing. On the other end of the scale, you have these science fiction AIs. I’m not a dystopian. Let’s describe those differently. It’s like the robot that will actually say, “I love you,” and understand what it’s saying and mean it. No one has the slightest idea how to start building that. I mean, absolutely no one. There is no one out there who has the slightest idea how to create such a thing, because we don’t even know how human consciousness works. It’s certainly not going to happen by accident, and we are not anywhere close to rebuilding anything that approximates the brain. Even if Moore’s law holds, by 2029 we might be able to reproduce the electrical part of the brain, but what about the chemical part? And there’s most likely quantum involved. We’re missing whole dimensions of what was needed in nature to create consciousness, and it’s certainly not happening by accident.

[Music]

TK: One of the examples I often use to describe the difference between narrow AI and the general AI that Chris is talking about is that of riding a bicycle. Stop and think back to when you first learned to ride a bike. There were myriad, uncountable rules that you had to follow. Eventually, however, your body adjusted and you figured out intuitively how to ride a bicycle. Then, you had to teach your kids, or your nieces, or nephews, or grandkids how to ride a bike. There’s no way you could possibly have gone through every single rule that you had internalized so deeply when you learned. Through experience, you taught them how to ride a bicycle as well. Narrow AI is understanding a very specific discipline. Even though it may be very complex, very rich, and deep in terms of its ruleset, you can’t take that same ability to ride a bike and then transfer it to driving a car or flying a plane. All of these are separate domains. A truly generalized AI would be able to learn any of those on its own. Furthermore, it would be able to express this uniquely human ability to be curious.

[Music] Now, back to my interview with Chris Boos.

CB: The machine that has its own goals might become our enemy because it chooses to be an enemy of humans. I would not say that is never going to happen, but it is not anywhere on the horizon. In between those two, you have more general AI. General AI in this case means that you have a piece of software that is comprised of many algorithms and, potentially, one data pool, like one semantic data pool, to be able to attach itself to the understandings that humans have given it, because machines themselves do not understand anything across multiple domains, like industries, or just contexts of life. The whole idea of using this one-machine, one-data-pool approach is that you do not need the ramp-up time of, for every problem anew, creating a dataset, cleaning the dataset, training the machine with the dataset, seeing if you like the results, and playing around with the model and the algorithms until you get desirable results.
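As a rough illustration of that one-engine, one-knowledge-pool idea (a hypothetical toy structure, not Arago’s HIRO architecture), compare it with the per-problem pipeline Chris just listed: a new domain only attaches its human-given know-how, and everything learned before stays usable:

```python
# Editorial sketch of the contrast Chris draws; all names are hypothetical.
class GeneralEngine:
    """One engine with one shared knowledge pool across domains."""

    def __init__(self):
        self.knowledge = {}   # the single shared semantic pool

    def attach(self, domain, know_how):
        # A new domain attaches its human-contributed understanding;
        # nothing learned before is discarded or retrained from scratch.
        self.knowledge[domain] = know_how

    def solve(self, problem):
        # Reuse experience from every domain attached so far.
        for domain, know_how in self.knowledge.items():
            if problem in know_how:
                return f"[{domain}] {know_how[problem]}"
        return "no experience yet: a human must contribute it"

engine = GeneralEngine()
engine.attach("it_ops", {"service down": "restart, then check the logs"})
engine.attach("banking", {"payment stuck": "retry on the backup gateway"})

print(engine.solve("payment stuck"))  # solved with banking experience
print(engine.solve("fraud alert"))    # a gap: people create new experience
```

The point of the sketch is the shape, not the lookup: one engine, one pool, zero ramp-up per new problem.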
CB: It actually means that you build on all the experiences you’ve had before, in a very close context and a very distant context at the same time, because in the end, everything is interconnected. General AI means that you apply one engine and one data pool to all the different types of problems that you’re giving to the AI. Your goal with general AI is to cut off all the ramp-up time, the time you need to actually make it productive and do something you like, which in the worst case, starting from scratch, is more than a year, maybe two years. If you wanted to automate all the processes in your company, that would take you centuries otherwise. Obviously, no one has the time for that, so you need these general AIs to actually get through the business problems that we’re presenting to AI today.

TK: The fear factor that we hear around general AI is at times hyperbolic, admittedly, but does any of that resonate with you? Is there a point in the near future that we should be especially vigilant of, or especially concerned about, in the application of generalized AI?

CB: No, but there are a few factors that we should be extremely careful with, look out for, and also go to our institutions to make sure that that stuff doesn’t happen. The idea, for example, of putting AI into killing machines and having those machines make a kill decision automatically, that would take the human factor out of war. I believe that there’s nothing more important than having a general make those decisions, and having a general who has nightmares when he has made the wrong decisions. Thinking about it well, in the context of everything, not just in the context of “this is my mission parameter, and I need to get it done,” is super important, and we shouldn’t give this to AI. These are the goals that we give to the machine, and we simply should not put machines into that field, or, if other people do it, find a defense against this. This is really important, because we can never guarantee that no one is going to do it, but it is something we should avoid for the longest time. I very strongly feel that this is so much more dangerous; it’s much more likely that we wipe ourselves off the planet before any AI does. Even if you get that superintelligence, I don’t know, maybe 200, maybe 500 years from now, why should it hate us? That would be an entirely new philosophical discussion. And if that’s very unlikely, which I believe it is, why should it accidentally get rid of us like we step on ants? If that thing really exists, it would be much more likely to simply leave Earth, because it’s a machine. It won’t be bound by water and oxygen. If it’s a machine that’s self-conscious, it would develop its own goals, and typically, if you want to achieve your goals, you need resources, and there are way more resources out there in the Kuiper Belt than on Earth.

TK: Let me switch topics a little bit if I can. You were recently appointed to a very prestigious council by Chancellor Angela Merkel to look at digitalization in Germany. From a governmental standpoint, tell me a bit about what you’re doing there and the relevance of that, vis-à-vis our conversation today about AI.

CB: I happen to believe that while AI is not going to wipe us all off the planet or take all our jobs in the near future, it will turn our complete economic system upside-down.
CB: I believe that seriously introducing AI into the economy is going to be much more powerful than introducing the steam engine, and that was a pretty heavy shift in terms of society and productivity. With AI, we’re most likely going to leave the industrial age for a knowledge age. It’s going to redefine our systems, and it’s going to redefine what we as people do to make a living. There is a tremendous opportunity in this, but on the flipside, on the dark side of the coin, all the economies out there that are so great at industrialization, and Germany happens to be a fairly big economy, have a much harder time than a lot of others in changing their models, because it has worked for us so, so well. That was my motivation to actually accept the nomination to this council: we do need to change as a country, as an economy, as a society, and it’s going to be hard, so we need people who can actually imagine this and maybe point the right way for these changes. That was the key when I went in there. The job of this council is to advise the government, maybe tell it where things are going wrong, review things that are happening, and point out the more or less obvious that needs to be done on a short- and long-term basis.

TK: When we look at companies like Kodak, which went out of business even though they built the technology that put them out of business, namely digital photography, what shackled them was not that they didn’t understand the future, nor that they couldn’t see the potential of the future, but that they were shackled by an industrial engine, an industrial supply chain, and industrial factories and machines that represented too great an investment to walk away from. Certainly, they couldn’t walk away from it in time to respond. In the same way, successful economies, the German economy, the US economy, many economies around the world that have built their stature on the industrial-era model, are similarly shackled in many ways. Now, we’d like to believe that we’re growing out of that industrial era into a knowledge era, but is part of the risk here that we have done so well in the industrial era that we simply will not have the foresight, or frankly the ability, to set aside that investment and move towards a new model? Does that give developing nations some sort of an advantage, in the same way that Eastern [Unintelligible] countries were much faster to move to cellphones because they didn’t have the traditional landline infrastructure? Does something similar apply here, or am I stretching this analogy beyond its boundary?

CB: No. Actually, you’re not emphasizing it enough. It is the true danger that if we stick with what we’re doing, we might be out of business very quickly. I mean, [Laughter] if you look at this, 30% of any given economy in the industrial age is logistics. If you take the cost of logistics down by 90%, that means that 28% of cash all of a sudden becomes available in the economy. If one economy does that and the other does not, you’re going to change the balance of power dramatically.
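The arithmetic behind that claim is worth making explicit; a quick back-of-envelope check of the numbers as stated (the quoted 28% rounds up slightly from the product):

```python
# Back-of-envelope check of the logistics claim above.
logistics_share = 0.30   # logistics as a share of an industrial economy
cost_reduction = 0.90    # e.g. self-driving shared vehicles and delivery

freed_share = logistics_share * cost_reduction
print(f"{freed_share:.0%} of the economy's cash is freed up")  # 27%
# Chris quotes roughly 28%; the order of magnitude, not the decimal,
# is the point he is making.
```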
CB: If you look at this, if Germany, for example, did not do this, and changing the price of logistics basically means introducing self-driving shared cars or vehicles, deliveries, whatever, then a much smaller economy like Poland, which did introduce self-driving vehicles, would all of a sudden have the same free cash flow [Music] inside its economy as Germany, and that is crazy.

TK: Wow.

CB: Now imagine that with an economy like India or China.

TK: You’re listening to Foresight Radio. We’re taking a quick break to thank our sponsor of this episode, Wasabi Technologies, the leader in the next generation of cloud storage. Find out more about Wasabi at Wasabi.com. [Audio Presentation] Now, back to my interview with Chris Boos.

The change here is so great, Chris, that I wonder to what degree our attention is being brought to it, because I’ve heard the term “silent industrial revolution.” You’ve used that term in our conversations in the past. With the industrial revolution, with the steam engine, we saw the threat. The Luddites took their axes and their sledgehammers to the factory looms and the mills. AI is not as visible. AI is a very invisible factor, and it changes things in ways that are not necessarily apparent to us in terms of the threat and the degree of response we should have to that threat. Is that part of what’s happening as well? Is it the invisibility of AI?

CB: That is totally correct. It’s not just the invisibility of AI, it’s also our unwillingness to think about the future. If we go into this a little deeper, we have to say there’s a very negative thing and a very positive thing about this whole situation. Let’s start with the negative first, so we can end on a positive note on this question.

TK: Good.

CB: The negative part here is that we simply don’t talk about the future. For some reason, the future is not an option anymore for a lot of people, but everybody can feel that there is a change, and I believe this is why you have the shift to the right in basically the entire developed world: everybody can feel that there’s a change coming, and no one is talking about it. This is like little children. When that happened to you with your parents, your parents would do something and you would feel exactly that something was in the bush, something was happening here, but no one would talk about it. Mostly, what came out in the end was bad, like grandma had cancer or a divorce was looming or something like this. We have the same behavior patterns, meaning that, “Something’s coming. No one’s talking about the future. Let’s please go to the guys who promise the past.” I mean, promising the past is the only thing in history, if you look at it, that has never worked. It never worked to bring back the past. The positive part of this is that we already have the future. It’s already there. You can see it. Our established economy is under tremendous pressure from the new platform companies that have come out of Silicon Valley, and, if you want to take a global view, that have come out of China as well. Those companies greatly threaten the models of basically every industry right now, but they exist side by side already. It’s not like, “The factories and the steam engines are so small, they’re unimportant, no one cares, and the rest of the economy can laugh it off for a while, and the Luddites can destroy a few machines until they can’t anymore.” We already have these two models, and we should see the warning signs.
CB: I mean, if you look at the largest five tech companies and their total market cap compared to all the other companies, I think that should tell us something about how big they are and how much future we put into their hands financially.

[Music]

TK: Chris’s point about the largest companies in the world is one that has gotten a lot of attention, especially as of late. Although these fluctuate, the companies that often come in as the largest based on their market capitalization are Apple, Microsoft, Alphabet (the parent company of Google), Amazon.com, Tencent (a Chinese conglomerate that invests primarily in internet-based technologies), Berkshire Hathaway, and Alibaba Group. We’ll also throw in Facebook, which often comes up in the top ten. What’s startling is that if you tally up the market cap of the top high-tech companies, Apple, Alphabet, Microsoft, Amazon, Tencent, Alibaba, and Facebook, you’ll find that it accounts for about 20% of the market capitalization of all public companies in the US and just about 8% of all companies globally that are publicly listed. That’s an incredible indicator of how dramatically technology is impacting our economy for the future. Back to my conversation with Chris.

[Music]

CB: Because we already have these companies, I think we can get through the hardest part of this kind of phase, and the absolutely hardest part is the transition, because it means that people have to change, jobs are changing, and a lot of times the efficiency that a new technology offers is only taken by the entrepreneurs and the shareholders, and the workers don’t get any of it. That was the case with the steam engine. That was the case in the industrial revolution. That’s what caused a couple of world wars, which were definitely not a good experience for anybody. We don’t have that problem today, because our established economy is still very big and very strong. We can give a tool like artificial intelligence to the established economy to literally automate every process to a very high degree. I should limit this: every process that is not entirely based on language, and maybe we can have the discussion of why I make this limitation. Anyway, every process that is not entirely based on language can be automated in the established industry today. That will give them a lot of money back, and I don’t think they have the option of taking that money off the table. They will have to reinvest it immediately into people to create that new experience so they can compete with the platforms that are already out there. Because these two models already exist side by side, the old model cannot simply take the efficiency money and generate all these jobless people; they all have to be reemployed to build the future for these companies, and most of those companies have a very strong will to survive.

TK: Which are the companies, or rather the industries, let’s put it that way, that are most at risk as a result of AI disruption? Which are the ones that you see as needing to change their business model, and perhaps their culture, the most in order to survive?

CB: I’m not going to say anything really new here. The classification of how impactful AI is going to be, or what it means for different industries, is, first, how accessible is that industry to AI? Today, that still means: how technical is it already? An industry like banking is very technically accessible.
CB: An industry like oil and gas, where you still have to turn a lot of valves and drill into the ground, is not completely accessible. Then there’s the question of how much impact an AI can have in an industry, which also defines how easy it is to deploy AI there. At the top of the disrupted industries are, first of all, computer companies and software; I mean, if you look at what’s happening with IBM and also the large Indian IT companies, you’ll probably think that I’m right. The second one is the telecom industry, and you see that since Facebook bought WhatsApp, telecoms have been struggling to move beyond text messages as a general value proposition. Those see the largest impact from AI. With AI in a telco, we’ve done cases where you get 5% network efficiency without building any new cell towers or opening the ground to put new lines in. Banking, the whole financial industry, is entirely accessible to AI because it is literally very technical. And we can move down the line; I’d say at the end of the line of industries that are accessible is probably the Swiss watchmaker, and at the very end of it is the Irish pub, because, yes, the only IT they have is the cash desk.

[Music]

TK: [Laughter] That’s great. [Music] One of the things that you said, and I want you to expand on this a bit, is how data should not belong to the collector but should belong to the source. Now, I may not be interpreting this correctly, but does that mean that my data should belong to me, not to Facebook, not to