This conversation between Tom Davenport and Tom Koulopoulos covers the more practical aspects of enterprise AI. Drawing on Davenport's latest book, The AI Advantage, they delve into the how and why of the successes and failures of AI.
Tom Davenport is the President’s Distinguished Professor of Information Technology and Management at Babson College, the co-founder of the International Institute for Analytics, a Fellow of the MIT Initiative on the Digital Economy, and a Senior Advisor to Deloitte Analytics. He teaches analytics and big data in executive programs at Babson, Harvard Business School, MIT Sloan School, and Boston University. He pioneered the concept of “competing on analytics” with his best-selling 2006 Harvard Business Review article (and his 2007 book by the same name).
TK: Welcome to Foresight Radio. I’m your host, Tom Koulopoulos. On each episode, we explore the many trends that are shaping the way we will work, live, and play in the future. Our focus is on the disruptive and transformational trends that are changing the world in ways that are often invisible. Our objective is simple: to give you the knowledge and the insights that you need to better manage the future. Foresight Radio is sponsored by our good friends at Wasabi. You can learn more about them at Wasabi.com. In this episode, we’re going to be talking to a long-time friend and colleague, Tom Davenport. Tom is the President’s Distinguished Professor of Information Technology and Management at Babson College, the co-founder of the International Institute for Analytics, and a Fellow of the MIT Initiative on the Digital Economy. He’s also a senior advisor to Deloitte Analytics. He’s recently authored his 15th book, “The AI Advantage: How to Put the Artificial Intelligence Revolution to Work”. Although AI seems to be everywhere, our conversation focused specifically on how AI is helping the enterprise. Rather than go for moonshots such as curing cancer or synthesizing all human knowledge, we looked at how companies can harvest the low-hanging fruit to make their organizations more efficient. Here’s my conversation with Tom Davenport.
Tom, congratulations on the latest book, “The AI Advantage”. What fascinates me about this is that you’re taking a very practical look at this hyperbolic topic, and you’re applying it to the enterprise, which is very novel. Why the enterprise? Why not talk about autonomous vehicles and robot servants and all this wonderful stuff that we hear so much about when we hear about AI?
Davenport: As you know, there’s no shortage of content in that space already, and I thought there was a shortage about the enterprise. Usually, what I do - I did it for knowledge management, I did it for analytics, I did it for big data - is talk about its use in the enterprise. I would have normally done that as my first book in the space, but four or five years ago, when I started looking at the rebirth of AI, there was no enterprise activity to speak of. So, I wrote a book with Julia Kirby on what it means for jobs and skills called “Only Humans Need Apply”. Then the enterprise has really gotten hot in terms of activity, so I started working on that a couple of years ago.
TK: In that book, in “Only Humans Need Apply,” you talked about augmentation as the “A” versus artificial. There’s a lot of debate around what the “A” should stand for. Why augmentation? Are you still talking about augmentation, or have you changed that tune a bit?
Davenport: I never really said that we should call it “augmented intelligence”. I was really contrasting augmentation with large-scale automation. My thinking changed as I wrote that book and talked with Julia, who was influential in the thinking process. With automation, once you automate something, it gets programmed in. It’s hard to innovate after that. Everybody’s margins end up falling - costs fall, profits fall, and so on - and I wasn’t seeing much of that happen anyway. I interviewed a bunch of people who were already augmenting a smart machine, or a smart machine was augmenting them, and I think at the time it was one of the few books talking about augmentation. Now I think that’s almost become conventional wisdom: we’re not going to see this massive dislocation, at least in the next 10 years or so.
TK: You bring up a really good point there, which I want to touch on just a little bit more. When we automate something, we go through these cycles every few decades or so. ERP was the last huge automation cycle. We do something, which is really interesting and you said it yourself, we stifle innovation to some degree. So, we begin with this head of steam, we innovate and change the process, and then we reduce it to rules and processes that make it virtually impossible to then change that process. So in many ways, the same technology that allows us to innovate then eventually stifles and contains the innovation. Why will AI not do that as well, or will it?
Davenport: It’s an interesting idea. I guess the only good news in that regard is that AI, at least now, is very narrow. It’s very task-oriented; ERP was much broader. So, at least if you’re automating a task in such a way that it stifles innovation, it’s a relatively small thing that gets stifled. But people talk about AI as if it will continue to learn and get better and so on. That’s a bunch of crap, frankly. Machine learning learns, but it only learns continuously if you continuously redo it and give it new training data and so on. So, it’s only going to keep learning if you keep creating new models, basically. If you look at technologies like robotic process automation and some of the other automation-oriented technologies, ServiceNow and so on, I think there is a possibility that what you’re saying will happen - it’s like pouring concrete around the way you do your business, albeit not COBOL-grade concrete but something more easily changed. Ready-mix, I guess. [Laughter]
TK: I love the visual that that gives us. We definitely tend to create rigidity around processes when we automate. To some degree, we’ve been trying to create more agility rather than rigidity. Have we been achieving that from what you can see in enterprises, or are enterprises inherently attracted to rigid processes?
Davenport: I think people want it both ways. They want the economics of large-scale automation and doing things the same way, and at the same time, they want to be agile and able to change quickly. I’m not sure you can really have both of those things. ERP was a very challenging technology to implement - everything was interrelated, and it was God’s gift to consulting, as we all know. These newer technologies are much easier. They’re visual. The rules are pretty straightforward. One of the complaints I hear from my consulting friends is that they don’t really require a whole lot of consulting help. The challenge becomes more that, if you’re deploying thousands of these little software robots, how do they relate to each other? And if you’re changing the underlying system with which they interact, that creates a whole set of problems - it’s like retraining thousands of users or something like that.
TK: In the book, you talk about the difference between the evolutionary and the revolutionary approach, and we’re certainly fond of the revolutionary in the technology space. Every technology has to be revolutionary to be marketable, to some degree. Your point is that AI will be evolutionary in the near term and revolutionary in the long term. Can you give us maybe some examples around that? What’s the evolution going to look like, as opposed to the revolution?
Davenport: I started the book with three examples of attempts at revolution. There’s the MD Anderson cancer treatment case study using Watson or attempting to use Watson, and you’re probably familiar with that. [Pause]
TK: Tom talked a great deal about Watson, an IBM AI system that is both software and hardware. It was named after IBM’s founder, Thomas Watson, and built specifically to help answer complex questions posed in human language. Its most notable and widely publicized use was on the TV game show Jeopardy, where Watson competed against human opponents who were among Jeopardy’s best players. Watson, of course, ended up winning first prize. Less well-known is its use at hospitals such as Memorial Sloan Kettering Cancer Center and the Cleveland Clinic, where it was intended to help with cancer treatment recommendations. Yet the hopes for Watson’s advances in bringing AI to oncology have fallen far short of what was anticipated. MD Anderson, for example, canceled its use of Watson after investing over $60 million. Much of that was blamed on the inability of Watson to fully comprehend the many human nuances of language used to describe medical conditions in patient records. If anything, Watson illustrates how far AI still has to go to take on the most complex and pressing problems that humans are far better equipped to handle, perhaps with an assist from AI. It all comes back to a running theme in our conversation: collaboration between humans and AI is a far better objective than simply handing off every human task in the hope that the AI will somehow replace the human element in its entirety. Back to my conversation with Tom Davenport. [Pause]
Davenport: Sixty-two million down the drain and never a single patient treated with it.
TK: That was one of the showcase examples. I think it was held out by IBM as an example of how Watson could significantly change an industry. You don’t hear much about it anymore.
Davenport: [Laughter] No, because the University of Texas auditors shut it down and said, “We’re not going to waste any more money on it,” and the money for that project came from a guy who’s now being pursued as a global crook, Jho Low from Malaysia. There’s a new book about him and how he gave away all sorts of money that wasn’t really his to give away, [Laughter] so, there was that. Then I talked about DBS Bank in Singapore, which was trying to use Watson for a robo-adviser investment advice offering, and that didn’t work. Then I even talked about Amazon, which most people would put at the very top of technical capability, and even they say, “Yes, we’re trying some really ambitious things.” With the Amazon Go store, they stumbled a bit. It’s a cool store, but it’s been hard to do, and they still have only one of them. Bezos said in a 2017 letter to shareholders that the vast bulk of what they do with AI will - I think his phrase was - “quietly but meaningfully improve core operations.” At those other two companies, MD Anderson and DBS Bank, other parts of the business were at the same time doing these un-showy, low-hanging-fruit kinds of applications that all worked pretty well. So, it’s a little less sexy to read about and think about, but I think that’s what AI is going to be like: a lot of things that quietly but meaningfully improve core operations.
TK: The term “low-hanging fruit” that we’re referring to has to do with the ability to use AI to get value out of fairly straightforward applications of the technology. For example, I’m at DFW Airport right now. One of the major advances, as I look around that’s occurred in aviation over the last decade, has been the use of predictive analytics and now AI to better understand the behavior of aircraft systems, especially their engines. Taking aircraft offline for even a few days can result in hundreds of thousands of dollars of lost revenue. Being able to use AI to predict when an engine might fail or might require maintenance or otherwise take it offline can have a significant benefit. That’s the low-hanging fruit. It’s not a moonshot. It’s not using AI to create something extraordinary but rather using it in a process that’s fairly ordinary and typical. Back to more conversation with Tom Davenport.
It really is almost the opposite of what we expect when we talk about AI, because the image we have is of the robot - something visual, tactile, an innovation that will be in your face and somehow change us in very overt ways. But what you’re describing, this invisible AI, goes unnoticed. It’s part of the fiber of the business or the process.
Davenport: Yes, and a lot of it has been around for a while, as you know. If we think of machine learning as building a statistical model and then doing some scoring with it, FICO scores were generated that way 20 or 30 years ago. It’s been around for a while. Even rule-based systems, which a lot of people think are totally dead - my research and some surveys I did with Deloitte suggest that over half of large companies say they have them in place and are getting value from them. The old stuff is still with us. The new stuff is exciting, but probably not in itself going to change the world.
TK: When I listen to you talk about AI, and when I read what you’ve written in the book, the sense I get is that perhaps we don’t really understand AI - that we tend to build a mythology around it, and that mythology doesn’t serve us very well. So, when you walk in to speak with executives, or you give a presentation, and you’re trying to get senior-level people to understand AI in the most practical, meaningful way, how do you describe what it is to them? I’m not a technologist. I’m a businessperson. Give me an understanding of AI that I can grasp onto, that will be meaningful and relevant for building my organization.
Davenport: The classical definition, which I think is pretty decent, is that it does things that previously required a human brain to do. Obviously, we’ve been doing that for a while. In some cases, it does things that human brains could never have done, like chew through a vast amount of data to decide what digital ad ought to go onto a publisher’s site in 200 milliseconds. No human could ever do that. Then you pretty quickly have to get down to the underlying technologies. What’s statistical machine learning, what’s deep learning, what’s natural language processing, what’s robotic process automation, et cetera, and that I think helps a lot when people see it’s a combination of related technologies, not just one thing.
TK: Does the threat of AI play a role in this adoption as well? We hear a lot of talk about how AI is going to be the last invention of mankind - that’s one way I’ve heard it put - how it will put us out of business, out of jobs, and that threat factor certainly plays well in the media. Do you see it actually playing out that way in enterprises, though? Are people being put out of work right now, or are they being elevated and put in different jobs? What do you see personally?
Davenport: The vast bulk of what I hear, as I say, is more augmentation-oriented. We’re not firing people. We’re not having AI replace entire jobs. It only performs individual tasks, and almost all of us do multiple tasks in our jobs. I anticipate that on the margins there will be some job loss. Maybe we’ll need nine lawyers instead of every ten, which some might argue is not a tragedy. But I was just looking at some new data from a survey we did with Deloitte of managers who really understand AI and how it’s being used in their companies, and it’s a little bit scary, because 64% agreed that their companies are going to try to automate as many jobs as possible with AI. That was the most surprising finding, because I’ve seen a lot of other data suggesting that they wanted to enhance the work of humans and let humans do more valuable kinds of activities. That was a pretty scary number. So, I basically think we can’t be complacent about it as individuals who hold jobs, and we really have to work hard at figuring out how we add value to these machines. But in general, I don’t think it’s going to happen dramatically and quickly - certainly not until we get to the point of the singularity, where AI is smarter than us in all areas at once, and then I think all bets are off.
TK: Maybe we’ll come back to the singularity and the revolutionary thing in just a minute. In the book, you advised people to take a position on the jobs issue. Do you think companies need to take a position on it - to know what story to tell about what AI will do to jobs, to sort of assuage the concerns of their workers? Want to explain that a little bit?
Davenport: I think people are nervous about it, as you suggested, and because I believe augmentation is both more desirable and more likely, I think most companies would be better off if they said, “Look, we’re not going to have wholesale elimination of jobs because of AI. We may lose a little bit on the margins, we may not replace some jobs lost to attrition, but in general, we think this is something that you can work with as a colleague in many ways, and you should try to do that. You should think about how you can add value.” If they don’t, then I think they won’t have a whole lot of cooperation with the introduction of these technologies, and people will hold back their knowledge and won’t play nicely with these machines. So, I think we’re far better off saying we’re going to go for augmentation: for the most part, if you work hard and try to learn new stuff, you’ll be fine. I wrote a little piece not too long ago with Keith Dreyer, who’s the Chief Data Scientist at Partners Healthcare here in Boston. He’s a radiologist with a PhD in AI - a pretty rare combination. He said, “The only radiologists who are going to lose their jobs because of AI are the ones who refuse to work with AI in their jobs,” and I think that will be true of a lot of things.
TK: That’s a really insightful way to look at this. We become collaborators with AI. That in and of itself requires a different skill set in terms of how we approach our work. If I were to say to someone today, “You will collaborate with AI,” it would probably be one of the hardest things for them to get their head around, because they haven’t done it before. If I bring a new employee onboard, I have a sense of what it means to socialize, to learn how someone works, to build around their quirks, and to find some common ground. I’m not sure anyone has a clue as to how to do that with AI. Can you give us any examples of what that collaboration looks like?
Davenport: Sure, I can, because that’s what “Only Humans Need Apply” was about, and we found a number of examples. In teaching, for example, I think AI can do a lot of things in the education space that would benefit that sector. There are these adaptive learning tools that will figure out what you know - in terms of digital content, anyway - and if you’re learning faster than everybody else, they will give you more difficult content. If you’re learning slower, they will give you less difficult content. It personalizes the educational process, and of course, we all know there’s a lot of online content, Khan Academy-type stuff, all over the place.
In Brooklyn, the New York City schools have this thing called the “School of One” that was set up from the beginning to use those kinds of technologies. I talked to a teacher at the School of One, and I said, “What do you do if all of that’s being done by machine?” and he said, “There are still a lot of things. There are a lot of different tools. We help kids figure out which tool would help them the most. We work closely with vendors of the new stuff, and of course there are all the social learning things that kids do that AI is not very good at.” So, in almost every sphere, whether it’s radiology or investment advice or whatever, there are still things for humans to do, and it’s different in every case.
TK: One of the things that you’ve talked about before is the importance of getting people’s attention. You actually focused for some time on the attention [Crosstalk].
Davenport: Yes. I wrote a book on that. It came out on September 10th of 2001. Was that bad timing or what? [Laughter]
TK: It was great timing.
Davenport: Somehow the world’s attention was elsewhere, Tom.
TK: Something else was getting our attention. I was thinking the same exact thing. It’s still the issue, though. A lot of the difficulty nowadays is that the amount of data we’re surrounded by - media, content, whatever context you want to think about - is overwhelming. We can’t dig through it all. You talked about Partners Healthcare. Years ago, when I was speaking with them, they described to me how a new longitudinal approach they wanted to take to their data would help them discover diseases that no clinician would be able to foresee or understand by looking at that same data. So, we’re at this interesting point in time where we have too much data to sift through as human beings. AI can help us - it can augment that task - to look through that data and find those patterns. Have you seen that done in practice?
Davenport: Yes. It’s still hard, and one of the pieces of advice I gave in the book was passed along to me by this guy - I don’t know, maybe you know him - Irving Wladawsky-Berger. He worked at IBM for 37 years as a kind of chief scientist and still hangs around MIT, and he said, “One of the most valuable things you can do in this book is tell people that this shit is hard.”
TK: [Laughter] That’s a great quote.
Davenport: I was a little more polite about it in the book, but if you look at cancer care, for example, I think IBM made a big mistake by focusing on it early on, because it’s just a really hard problem. But if you talk to the people at Memorial Sloan Kettering and so on who are partnering with them, they’d say, “It’s impossible to cure cancer without a machine like this.” With over 400 types of cancer now, all mixed up with your genomics and your proteomics and your biomics and so on, there’s no way a human can keep track of all of that. But it’s so complex and so difficult that it’s going to take a really long time.
TK: What you’re describing is a phenomenon that I think we’ve seen play out over and over again through the course of at least modern history: every so often, we come across a tool that is necessary for our survival, to move us to the next level of humanity’s existence. We saw this with agriculture. With a plough horse, you would never be able to feed seven billion people. Yet by drastically reducing the number of people and increasing the amount of automation and technology involved in agriculture, we now feed so many more people with far fewer workers. To a large degree, AI sounds, the way you’re describing it, like a survival mechanism. We simply will not be able to deal with the complexity of the world going forward without this tool set.
Davenport: Yes. I think that’s true. It’s certainly true in healthcare. You don’t have to think very long and hard about it to know that, for getting around in cities today, we’re going to need autonomous vehicles - we’ve probably needed them for a while. It’s taking a lot longer to develop these things than we might have hoped, but I think eventually we’ll get there. In business, everybody’s very excited about data and analytics, but you can’t really do it without some help from AI or machine learning and so on. We can’t use the same old artisanal methods for exploring data that we used to, so we have these automated machine learning approaches now. So yes, I think that’s true: it is necessary to use these tools to get to the next phase, and I think we need to look at the positive side of it as well as the negative side. People like Elon Musk worry that the robots are going to kill us all. I don’t think that’s true, but we certainly need to be thinking and talking about it.
TK: Tom, in the book you describe a bit of the timeline that you see taking place. There’s a lot of discussion about individual aspects of AI. I’ve heard, for example, that for autonomous vehicles we’ve got a 15-to-20-year horizon, depending on who you talk to, until we reach a critical mass of autonomous vehicles. What’s the kind of timeline that you see, and that you talk about in the book, for AI?
Davenport: I think it depends on the particular AI capability. If you’re looking at machine vision, it turns out deep learning is quite good at that, identifying images much more accurately than previous versions of machine vision could. So, that’s here now in many ways. You read every day that it can identify potentially cancerous lesions on an X-ray or an MRI or something like that as well as or better than a human radiologist. If you’re talking about language, there’s still a ways to go. In terms of just recognizing language, we’re at the 95% level, which is better than we’ve ever had. Still, I think if we’re going to turn our customers over to natural language processing, we probably have to get to 99% or 99.5%, and that’s probably within five years.
Part of the challenge is that if you’re talking about these really big systems like autonomous vehicles, you have to put a lot together. It’s machine vision. It’s very rapid decision-making. It’s communicating from one car to another over a broad network. And you have to worry about what the humans are going to do in all of this, because pedestrians are still going to be crossing streets, and some people are not going to have autonomous vehicles and will still be driving their cars. So, with that combination of human understanding and all the visual and sensor data and so on, I do think it’s 20 years on average. I was at a conference at MIT recently with a bunch of AI people from MIT and Carnegie Mellon and so on, and they were all in that 20-year range until half of the vehicles on the road would be autonomous. It’s at least 20 years. [Pause]
TK: You’re listening to Foresight Radio, and we’re taking a quick break to thank our sponsor of this episode, Wasabi Technologies, the leader in the next generation of cloud storage. Find out more about Wasabi at Wasabi.com. [Audio Presentation] Now back to my conversation with Tom Davenport. [Pause]
Is there a leading stumbling block that we have to get past? Is it the amount of data we don’t have yet? Is it connectivity? Is it bandwidth? Is there something singular that you think will be a watershed moment in moving toward that revolutionary type of AI?
Davenport: I think the single biggest thing is the task-oriented nature of AI - the narrowness of it. If we had a technology that could not just play the game of Go better than a human but could also play Monopoly and tiddlywinks and chess and checkers and so on, that would be quite overwhelming. Some people would call that the singularity, I guess, but we’re not at that point at all. AI does narrow things, and I think you can get it to do almost anything you want, but it only does one thing. So, that’s a big issue. Having enough training data is a nagging problem. We don’t have that much labeled image data to teach deep learning systems how to recognize anything other than a cat on the internet. We’ve done pretty well with that, [Laughter] but in radiology, for example, there aren’t enough labeled images around, and different people own them and so on. So, that’s a bit of a challenge.
TK: You’ve seen so much of AI being used. I’m curious as to what your biggest aha moment may have been - where you saw AI in an application and said, “I never would have thought of that; that’s pretty amazing.” Have you had one of those?
Davenport: Like everybody else, I guess I was somewhat dazzled by Watson’s Jeopardy win, and I didn’t realize at the time how much effort had been put into it, but it did make me think that if there’s a task we want to accomplish, we could pretty much do it. Jeopardy is a bizarre sort of game. It has all sorts of wordplay, and they give you answers and you provide the questions. It’s intended to trip up humans, and for Watson to destroy the best human players was, I think, a great marketing stunt on IBM’s part. It turned out that other businesses that tried to implement it generally didn’t have as much success, and the marketing got out ahead of the technology in many ways.
TK: You talked about that earlier. It’s curious, at the very least, that a computer can be that adept at something so human. The game of Jeopardy is clearly a very human game. We all get engaged in it because it appeals to our intelligence and our curiosity, and those who are more curious - and have the intellectual capacity, of course - are the ones who stand the best chance of winning at Jeopardy. Yet you take this highly sophisticated, intelligent machine and apply it to some other domain, and it falls very short of the goal of being truly intelligent. Why? Can you explain that in a pedestrian way? Because I have a tough time understanding it.
Davenport: Take cancer, which was the thing they really focused it on most aggressively after Jeopardy. One, the state of knowledge is not as far along in cancer as it is for the factual kinds of responses in Jeopardy. Two, the head of the project at Memorial Sloan Kettering told me, “The idea behind Watson is that you take this knowledge in medical research articles and you ingest it into Watson, and that turned out to be a lot more difficult than anybody thought.” You had to have a huge amount of human intervention. Cancer research is progressing so rapidly these days that a lot of the treatments people are using aren’t even in articles yet. All this new immunotherapy stuff is being invented day to day. So, it wasn’t really all laid out formally in such a way that it could be easily transferred to a machine. I think that’s a big problem. There are lots of smart oncologists who could help with that problem, but they have other jobs to do. So, Memorial Sloan Kettering has been working on their system with Watson for, I think, six or seven years now. They’re not finished yet. I don’t know how long it’s going to take, but they are still optimistic that something great will come out of it eventually.
TK: This is what years ago we would’ve called knowledge management.
Davenport: Yes, exactly. One of my friends, Seth Earley, a consultant here in the Boston area, says, “There is no AI without IA.” You can’t do artificial intelligence without an information architecture. We knew that was true in knowledge management, but we somehow forgot about it when we got to the AI space.
TK: This is book number 20, and I have to ask: over the many years that you’ve been observing, researching, writing, and working with clients on various technology topics, is AI that revolutionary? Is this a quantum leap, or is it yet one more step that we hyperbolize because, for so long, we’ve been talking about robots and artificial intelligence in so many different ways that we get wrapped up in the story and the mythology of it?
Davenport: Yes. I was looking at something recently from a singularity institute, which claims that everything is revolutionary, and I really believe AI is a linear technology. Linear is still good, but it doesn’t improve at an exponential rate. A woman at Carnegie Mellon - she’s the head of machine learning there - told me she went there in 1983, and as soon as she got there, people were saying, “The autonomous vehicle is just around the corner.” In 1983.
TK: Those were flying cars.
Davenport: [Laughter] Yes, exactly, and we’re still wrestling with that. It’s getting better, but it’s linear. It’s going to take a while before it cures cancer or takes humans out of the driver seat or whatever.
TK: I have to confess that I’ve used a lot of logarithmic charts myself recently. It seems as though everyone wants to believe that things are increasing exponentially, by orders of magnitude, each time we turn around. Human beings simply don’t move that quickly. We don’t absorb technology that fast. There are cultural issues, organizational issues, very human issues that we have to contend with as well - not the least of which, as you said earlier, is how we collaborate with the tool and see it as a member of our team rather than just a technology.
Davenport: Yes, exactly. People are often saying to me, “Why are you so conservative about this? I read every day that an AI system can detect cancerous tumors better than a human being.” So, I wrote about this with Keith Dreyer at Partners, and apparently - he knows a lot more about this than I do - each of these systems is coming at the problem from a slightly different angle. Some detect particular types of lesions, some say how big they are, and others say how deeply embedded in the skin they are, and so on. So, they all attack a different problem. A practicing radiologist is not going to be able to make sense of this for a long