Announcing AI Workshop in San Francisco

We are excited to announce that we are hosting a workshop, “Using AI Now,” on February 14, the day before Gigaom AI Now begins.

As an executive, you are probably trying to make sense of all the possibilities of AI and the dizzying array of competing platforms. Join us at “Using AI Now” to learn how to apply AI in your business and what the various AI tools can do for you today.

In conjunction with the Gigaom AI Now conference, we are holding a special half-day workshop the day before on February 14, 2017 from 1pm-5pm to give executives an opportunity to build real-world AI solutions using the latest technologies from Amazon Echo, IBM Watson, Google, Salesforce Einstein, Microsoft, and open source options. Can you imagine a better way to spend Valentine’s Day?

Leading the event is Chris Mohritz. Chris is a seasoned technologist with several successful startups under his belt. He also has more than 15 years of experience designing, managing, and securing information systems. He is currently focused on deep integration of machine learning into businesses.

Chris will lead sessions on these topics:

  • Predictive Customer/Lead Engagement – A really powerful use case for AI in operations. This demo highlights some of IBM Watson’s most useful APIs (AlchemyLanguage, Personality Insights). This functionality can also be applied across a range of media: social media, email, and content comments.
  • Predictive Sales – Learn how to score leads and effectively follow-up using Salesforce Einstein.
  • Voice Control of IoT Devices – A voice interface combined with AI makes complex queries easier. See business applications for Amazon Echo devices.
  • Image Recognition – An extremely powerful application for nearly every industry. See Google’s current capabilities for facial recognition, emotion recognition, text recognition, damage identification, and context awareness (a minimal API sketch follows this list).
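For a concrete taste of the image-recognition session, the sketch below shows a minimal call to the Google Cloud Vision API from Python, requesting face, text, and label detection on a single image. The API key, image file, and choice of features are placeholder assumptions for illustration, not workshop material.

```python
import base64
import requests

# Minimal sketch: annotate one image with the Google Cloud Vision API.
# API_KEY and photo.jpg are placeholders you would supply yourself.
API_KEY = "YOUR_API_KEY"
URL = f"https://vision.googleapis.com/v1/images:annotate?key={API_KEY}"

with open("photo.jpg", "rb") as f:
    content = base64.b64encode(f.read()).decode("utf-8")

body = {"requests": [{
    "image": {"content": content},
    "features": [{"type": "FACE_DETECTION"},                     # faces and emotion likelihoods
                 {"type": "TEXT_DETECTION"},                     # OCR
                 {"type": "LABEL_DETECTION", "maxResults": 5}],  # general labels
}]}

result = requests.post(URL, json=body, timeout=30).json()["responses"][0]
for face in result.get("faceAnnotations", []):
    print("face found, joy likelihood:", face["joyLikelihood"])
for label in result.get("labelAnnotations", []):
    print("label:", label["description"], round(label["score"], 2))
if result.get("textAnnotations"):
    print("text:", result["textAnnotations"][0]["description"])
```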

Be prepared and make the most of your Gigaom AI Now visit by attending this pre-conference workshop.

Space is very limited to keep the workshop highly personal. How can you attend? Two ways:

  • Register here for $495.
  • Or, for a limited time, get the workshop free when you buy a VIP pass to Gigaom AI Now for $3295. The VIP pass includes admission to Gigaom AI Now (Feb 15-16 at the same venue as the workshop), copies of the six AI research reports we are developing, a one-year membership to Gigaom Research, and tickets to two different VIP dinners. That is over $10,000 in value for just $3295.

Hope to see you in San Francisco.

Source: GigaOm

Data Centric Computing is Driving the Next Revolution in Cloud Storage Alternatives

As the march toward data-centric computing continues unabated, many organizations are finding that the traditional model of the public cloud is no longer appropriate. The biggest issue is that the public cloud may not offer the level of security and speed needed by organizations that are transforming their intellectual property into actionable data sets. Simply put, many enterprises share a common problem: they cannot or will not send their proprietary data into a public cloud.

Factors such as compliance, privacy, and speed to market have driven many organizations to attempt to build internal data lakes, instead of turning to the cloud. However, in many cases, data lakes become far too complex to manage and increasingly resource intensive. Nowhere is that truer than in the burgeoning IoT market, where sensors and devices are providing constant streams of data.

To get a better feel for the problems facing businesses adopting a data-centric approach, GigaOm spoke with the founders of Igneous Systems, a company created to bring IaaS (Infrastructure as a Service) capabilities to businesses seeking to eschew public clouds for their sensitive data streams.

Kiran Bhageshpur and Steve Pao (Igneous’ CEO and CMO, respectively) observed that two trends are related: the falling price of storage encourages organizations to store and retain ever more data rather than throw it away, while data generation keeps accelerating. That one-two punch of data generation growing faster than storage costs are falling translates into something that seems unthinkable: overall expenditures for storing data, both in-house and in the public cloud, are actually rising.

Pao suggested that the economic pressure of rising expenditures makes new storage approaches possible, ones that combine the elasticity and ease of use of the cloud with the locality, control, and security of on-premises storage. However, it takes more than economic pressure to create a storage paradigm shift, and that is exactly where Igneous Systems comes into the picture.

Bhageshpur said, “Our customers had a common problem, one where they could not use the cloud, either because their data sizes are so large that the latency of the internet created problems, or their data was proprietary and a core part of their company IP.”

Pao added, “Another issue is that much of this critical data is being curated by line-of-business functions and not by the enterprise IT groups normally responsible for managing a complex IT infrastructure, creating a situation where data can easily be managed outside of enterprise policy controls. Those problems have an even broader impact on other industries. For example, we are hearing that in the scientific computing segment, biologists, chemists, and physicists are struggling to deal with the large data sets generated by their equipment.”

Those realizations drove the creation of the Igneous Data Service, which is akin to building an on-premises, private cloud that acts as a scalable content store for large unstructured data. Bhageshpur said, “The Igneous Data Service offers true cloud for local data and supports the S3 API, which is rapidly becoming the de facto standard for cloud-based object storage.”

The advantages of adopting the S3 API are many, starting with the fact that many application developers are already familiar with developing against S3 and that numerous compatible applications are readily available. What’s more, the Igneous Data Service offers much of the same experience as the public cloud, in that Igneous handles all monitoring, maintenance, software updates, and troubleshooting of the on-premises equipment, all for the price of an annual subscription based on installed capacity. That creates a zero-touch experience, which is one of the major benefits a public cloud service may offer.
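Because the service speaks the S3 API, existing S3 tooling should largely work by pointing it at the local endpoint. Here is a minimal sketch using boto3; the endpoint URL, credentials, and bucket name are placeholder assumptions, not Igneous-specific values.

```python
import boto3

# Minimal sketch: talk to an on-premises, S3-compatible object store.
# The endpoint, credentials, and bucket below are illustrative placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.internal",  # local endpoint (assumption)
    aws_access_key_id="LOCAL_ACCESS_KEY",
    aws_secret_access_key="LOCAL_SECRET_KEY",
)

# Standard S3 calls work unchanged because the service implements the S3 API.
s3.put_object(Bucket="sensor-archive", Key="2017/01/telemetry.json",
              Body=b'{"sensor": 42, "reading": 3.14}')
for obj in s3.list_objects_v2(Bucket="sensor-archive").get("Contents", []):
    print(obj["Key"], obj["Size"])
```

The only change from writing against the public cloud is the endpoint the client is configured with, which is what makes existing S3 applications portable to an on-premises store.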

From a technology perspective, Igneous Systems incorporated four key attributes into the Igneous Data Service which it considers as “first principles” to enable cost-effective deployment of an on-premises Infrastructure-as-a-Service (IaaS) offering.

  • On-premises data with cloud management. The data always stays on the customer’s network, while Igneous continually monitors, maintains, and troubleshoots its installed fleet using the automated, software-based cloud management techniques employed in today’s hyperscale cloud environments.
  • RatioPerfect™ architecture. Behind the Igneous architecture is a nano-server design in which each disk has its own dedicated CPU and Ethernet connection. This design enables a highly distributed architecture that provides the scalability and resiliency of the public cloud, scaled to run on customer premises.
  • Extensible Data Path. Large data sets typically require inline processing of data before uploads or downloads “complete.” With its extensible data path, higher-level operations (such as auto-tagging for search) can be performed on a mirror of incoming and outgoing data streams without slowing down low-level system functions (a conceptual sketch follows this list).
  • Cloud native services. To enable new application workflows, the Igneous architecture is built utilizing a modern microservices approach and incorporates stream processing, an event-driven framework, and container services.
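To make the extensible data path idea concrete, here is a generic, conceptual sketch (our own illustration, not Igneous’s implementation): writes complete on the fast path while enrichment such as auto-tagging runs against a mirrored event stream in the background.

```python
import queue
import threading

# Conceptual sketch: ingest returns immediately; tagging happens off a
# mirrored event queue so it never slows the low-level write path.
events = queue.Queue()
tags = {}

def ingest(key: str, data: bytes) -> None:
    """Fast path: store the object, mirror an event, and return."""
    # ... the object would be written to the store here ...
    events.put((key, data))

def tagger() -> None:
    """Slow path: derive searchable tags from mirrored events."""
    while True:
        key, data = events.get()
        tags[key] = {"size": len(data), "looks_like_json": data.startswith(b"{")}
        events.task_done()

threading.Thread(target=tagger, daemon=True).start()
ingest("2017/01/telemetry.json", b'{"sensor": 42}')
events.join()
print(tags)
```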

“We started Igneous three years ago to deliver the type of distributed architecture enterprises need, one that provides the scalability and resiliency of the public cloud,” said Kiran Bhageshpur. “We see data as a rapidly growing asset base, and we look forward to helping our enterprise customers curate that data to speed decision making and support new business models.”

One thing is certain: change is in the air, fueled by the adoption of data-centric computing, and enterprises need to think long and hard about their options before drowning in a data lake.

Source: GigaOm

Case Study: London Theatre Direct, TIBCO Mashery and the power of the API

A recent meeting I had with the theatre ticketing company London Theatre Direct (LTD) was a timely reminder that not all organisations are operating at the bleeding, or even the leading, edge of technology. That’s not LTD itself, which as a customer of TIBCO Software’s Mashery API management solution is already walking the walk. The theatres are a different story, however: most still operate turnkey ticketing solutions of various flavours, making LTD’s main challenge one of creating customised connectors for each.

That work is now done, at least for London theatres, with the most obvious beneficiary being the theatre-going punter. “Customers could never find the tickets they wanted — they didn’t have much choice and there was limited flexibility on price” explains LTD’s eCommerce head, Mark Bower. “With APIs in place, we can access millions of tickets. Every ticket is available, right up until show time.” As a result, more tickets are being sold, to the equal delight of producers and venues. Jersey Boys saw a 600% uplift in sales when LTD was plugged in, for example.
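To illustrate the kind of integration being described, here is a minimal, hypothetical sketch of a third party querying a ticketing API for availability. The base URL, endpoint, parameters, and response fields are invented for illustration; they are not London Theatre Direct’s actual API.

```python
import requests

# Hypothetical ticketing API client; endpoint and fields are illustrative only.
BASE = "https://api.example-ticketing.com/v1"

def available_tickets(show_id: str, date: str) -> list:
    """Ask the platform which seats are still on sale for a show and date."""
    resp = requests.get(f"{BASE}/shows/{show_id}/availability",
                        params={"date": date}, timeout=10)
    resp.raise_for_status()
    return resp.json().get("tickets", [])

# A hotel's in-room system could call the same endpoint to offer guests
# theatre bookings directly, right up until show time.
for ticket in available_tickets("jersey-boys", "2017-02-15"):
    print(ticket["seat"], ticket["price"])
```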

LTD haven’t just created a more straightforward booking facility, however. This is the API economy, in which everything is a platform — so third parties, such as hotels and transport companies, can also plug into LTD’s service. These are early days, but such tie-ups are inevitable. “30% of people coming to London will want to go to the theatre,” says Mark. “We can plug our service directly into in-room systems, avoiding the dark art of the concierge booking on a customer’s behalf.” And, indeed, charging a premium to do so.

So far so good, but LTD believe that something that could be seen as simply online ticketing is actually far more profound. A theatre production is at its core a creative act, with no guarantees of success at the outset. “Theatre is not a one size fits all,” says Anne Ewart, marketing director at LTD. “You can’t walk into the ticketing industry and say, ‘I want a show to do this,’ that’s not how it works.” Rather, there needs to be a balance between the aspirations of the producer and the hard-nosed realities of getting punters in through the door and taking their money in return for their entertainment.

The world of theatre is not very forgiving. “Venue owners want bar sales and rent, and the minute the rent and incremental sales fall below a certain level, they are able to give a few weeks’ notice to a show and they are out,” says Mark. Such is the case for many celebrated and critically acclaimed productions. The ability, therefore, to generate higher demand for tickets is of huge importance, as is reaching out to previously untapped demographics, such as younger audiences who tend to purchase the cheaper, less accessible tickets.

Better ticketing doesn’t just mean an uplift in sales, therefore; it also means that producers and venues are able to put on shows that might previously have been seen as higher-risk. This is all before even thinking about the nuggets of insight that lie inside the ticketing data itself — who is going to what kind of show, when, using what form of transport and so on. As we discussed this, I was reminded of how farmers are taking soil samples so they know how to target fertilisers more accurately — I couldn’t help wondering if the same principle could apply to incentivising theatre-goers to ensure all seats can be filled.

Perhaps the takeaway is that the ticket itself is a consequence of past models, which worked as well as they could in the analogue world. Even as our interactions become more digital, we have an opportunity to make them more about the very human relationship between producer, customer and venue, all of whom are looking to gain from the deal. The opportunity exists to move beyond the blunt instrument of the paper ticket and towards a deepened relationship, manifested, for example, as event-led packages, loyalty programmes or even patronage models.

In the world of theatre and in many other sectors, technology enables us to move above and beyond the dark arts. Of course, the opportunity for abusing such tools also exists — there we face an ancient choice. But the stage is set (oh, yes) for more direct, transparent relationships between participants. Cue applause.

Source: GigaOm

Machine Learning Proves Key to Privileged Account Protection

Behavioral analytics is quickly becoming the cornerstone of almost every infosec technology. However, it takes a lot more than simply analyzing user activity with rules and statistics: it takes applying ML (machine learning) to access and activity data, as well as employing AI (artificial intelligence) to reduce false positives and produce accurate risk scores. These are two critical capabilities that a multitude of security vendors have yet to address in their products to enable automated risk response. Those lacking machine-based cognitive abilities have come to rely on static pattern definitions, signatures, and policies built for a legacy world of known good and bad. Today, we must assume compromise and assess risk, most importantly for the privileged accounts that hold the access keys to IT environments.

That said, some vendors have come to grasp the value of applying big data analytics to access and activity data to better judge validity through risk scoring. This approach leverages the latest ML and AI capabilities, and it requires the vendor to innovate with a keen understanding of behavioral and predictive algorithms in order to deliver predictive security analytics that identify access risks and unknown threats. Nowhere is this truer than in controlling access to enterprise resources through privileged accounts and entitlements, something that remains a potential hazard for businesses of any size leveraging cloud and on-premises IT resources.
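As a rough illustration of what risk scoring access and activity data with ML can look like, the sketch below trains an unsupervised model (scikit-learn’s IsolationForest) on normal privileged-account activity and scores new events, with lower scores meaning higher risk. The features, data, and thresholds are assumptions for illustration; this is the general idea, not any vendor’s product.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative features per session: [hour of login, distinct hosts touched,
# privileged commands run]. Synthetic "normal" baseline for the sketch.
rng = np.random.default_rng(0)
normal_activity = np.column_stack([
    rng.normal(10, 2, 1000),   # logins clustered around business hours
    rng.poisson(3, 1000),      # a handful of hosts per session
    rng.poisson(1, 1000),      # privileged commands are rare
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_activity)

# Score new events: lower score_samples values are more anomalous.
new_events = np.array([
    [11, 2, 0],    # routine admin session
    [3, 40, 25],   # 3 a.m. session touching 40 hosts, many privileged commands
])
for event, score in zip(new_events, model.score_samples(new_events)):
    print(event, "anomaly score:", round(score, 3))
```

In a real deployment, scores like these would feed a risk engine that decides whether to alert, require step-up authentication, or revoke access automatically.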

At the Gartner Identity & Access Management Summit, held in Las Vegas on November 29, 2016, privileged account management proved to be a hot topic. Gartner revealed that “Identifying all systems and the corresponding privileged accounts is important, because every privileged account is a potential source of risk. However, this is a major challenge, as it is easy for privileged or default system accounts to be forgotten and left out. This is exacerbated by virtualization and hybrid environments that include cloud infrastructure. In such a dynamic environment, systems and accounts can easily fall through the cracks of privileged access management.”

Simply put, Gartner is saying that better methodologies must be adopted to prevent breaches caused by improperly audited and secured privileged accounts and entitlements, something that infosec vendor Gurucul is keenly aware of.

GigaOm had the opportunity to discuss privileged account concerns with Gurucul’s CTO, Nilesh Dherange, who noted that accounting for privileged accounts is only one of the security issues facing enterprises today. Dherange said, “Although many organizations are deploying privileged access management products to vault accounts with high-risk entitlements, these tools may only perform discovery at the account level, which creates blind spots for unknown privilege entitlements and exposes companies to unknown security risks.”

Dherange makes a good point: it is those unknown security risks that prove the most troublesome for enterprises today, especially as systems become more complex and additional administration and application accounts are created, all with an increasing ability to “touch” critical IT systems. What’s more, many enterprises rely on spreadsheets or other notes to maintain an inventory of privileged accounts, and those accounts are rarely audited.

Dherange added, “In a typical enterprise, the scope of privileged access discovery is manually unfeasible. For example, an organization with 10,000 identities, each having 10 accounts with 10 entitlements, would equal 1 million entitlements. This often results in rubber-stamping certifications and cloning user access rights. Over time an entitlement may become privileged and remain hidden in these cycles.”

Looking at that issue from a management perspective, it becomes clear that manually maintaining and auditing privileged account entitlements is far beyond the capacity of almost any organization. In other words, enterprises will have to rely on machine learning for a risk-based approach to managing all of the moving parts involved.

“On average, Gurucul customers addressing privileged access risks have discovered that more than 50% of privileged access, including application privileges, are unknown to them and exist outside privileged access lists and vaults,” added Dherange.

According to Dherange, Gurucul takes a different approach to privileged access intelligence on large enterprise networks. “Gurucul is applying identity analytics and machine learning to discover privileged access that poses a security risk to the organization so that undocumented and unnecessary permissions can be eliminated or identified for monitoring with behavior analytics,” claimed Dherange.

At the Gartner Identity & Access Management Summit, Gurucul demonstrated new privileged access discovery capabilities in its Access Analytics Platform (AAP) and Gurucul Risk Analytics (GRA), showing how they eliminate blind spots associated with privileged access.

Gurucul also announced closed-loop integration of identity and access management (IAM) solutions into AAP, which forwards accounts and entitlements with high access risk scores to IAM solutions for owner/manager certification. Dherange said, “When an account and/or entitlement is revoked, the IAM system sends an update to Gurucul, which removes the risk and re-scores the machine-learning models. Several of our customers have implemented closed-loop IAM integration using Gurucul with Oracle Identity Manager (OIM) to automate the detection and remediation of access outlier risks.”

With Gartner claiming that “Privileged access is increasingly recognized as one of the most significant risks that organizations are facing, driving them to pivot from compliance-based to risk-aware strategies,” it is becoming very clear that enterprises will need to turn to ML- and AI-based technologies, backed by the context of big data, to truly get a handle on what may quickly become one of their top security issues.

Source: GigaOm

Author Jerry Kaplan talks Artificial Intelligence with Gigaom

Jerry Kaplan is widely known as an Artificial Intelligence expert, technical innovator, serial entrepreneur, and bestselling author. He is currently a Fellow at The Center for Legal Informatics at Stanford University and a visiting lecturer in the computer science department, where he teaches the social and economic impact of artificial intelligence. Kaplan founded several technology companies over his 35-year career, two of which became public companies. As an inventor and entrepreneur, he was a key contributor to the creation of numerous familiar technologies including tablet computers, smartphones, online auctions, and social computer games. Kaplan is the author of three books: the best-selling classic Startup: A Silicon Valley Adventure; Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence (2015); and Artificial Intelligence: What Everyone Needs to Know (2016). In 1998, Kaplan received the Ernst & Young Emerging Entrepreneur of the Year Award, Northern California. He has been profiled in The New York Times, The Wall Street Journal and Forbes, among others. He received a BA degree from the University of Chicago and a PhD in Computer and Information Science from the University of Pennsylvania.

Jerry will be speaking at Gigaom AI Now in San Francisco, February 15-16. In anticipation of that, I caught up with him to ask a few questions.


Byron Reese: Do you remember when you first heard about AI or when you first got interested in it?

Jerry Kaplan: The first time I heard about artificial intelligence was in 1968 when I saw the movie “2001: A Space Odyssey.”

And it was HAL that got your mind thinking.

That’s right.

When did you start writing about artificial intelligence?

Well, I went into the field, so I am not sure that counts. That was thirty-odd years ago. Most recently I started writing about the field for general audiences, I should say, as opposed to technical papers, probably about four years ago.

And how would you describe the state of the art? Where are we in this great arc of artificial intelligence development? How would you sum it up?

That’s a very good question. I think there is a myth that we are on an arc, which is to say that we are somehow building increasingly intelligent machines that are ever-more general and are making their way up some kind of hypothetical ladder of intelligence towards human intelligence, and that is really not the case. Artificial intelligence is a collection of tools and techniques for solving certain classes of problems, and for a variety of reasons, as with other areas of computer science, we are constantly expanding the class of problems and solving new and different types of problems using those techniques.

Do you believe that we are heading towards building an AGI?

I see no compelling evidence that we are on the path toward building machines that have what you correctly called “artificial general intelligence.” I think this is a wild extrapolation from the current state of the art and there is no real reason to believe that the current set of techniques are going to get us there. With that said, there is a great deal of value in the utility of the problems that are currently being solved by the current generation of AI technology and it will have a significant impact in improving our lives and increasing automation and affecting people’s employment and raising a lot of interesting issues for what kinds of ethical and social controls we want to put on the deployment of this technology.

Well, that’s interesting. So if someone posed to you the question that Alan Turing posed to himself, which is “Can a machine think?” What would you say to that?

In the human sense, I would say no. I encourage you to read the original paper by Turing entitled “Computing Machinery and Intelligence”, because it is really interesting. It’s not a technical paper; it was mostly him speculating a little bit about a couple of far-out ideas. It is very readable, and it is quite interesting to see his point of view and his analysis on this particular subject, but what he said is, “I regard the question of whether machines can think to be too meaningless to deserve serious discussion.” That is actually what he said in the paper. He goes on to say, “However, I believe that in fifty years’ time we will be comfortable using words like ‘thinking’ or ‘intelligence’ as applied to machines.” So what he was talking about was the use of language, and interestingly enough, he was very close to being correct. He proposed the Turing test and he said, “I believe that when machines can pass this Turing test, that is when people will be willing to use this kind of terminology to describe them,” but he was not saying that the machines were intelligent. He was really just talking about the way that we would describe the devices that we were creating.

But doesn’t he go on in the same paper to say, in effect, that we are going to have to broaden our understanding of what the word thinking even means, because a computer may do something radically different from the way we do it, but we should still, in all fairness, say that it is thinking? Would you agree with that?

Yes, we are violently agreeing, I think. You said he was expanding the use of the term. That’s exactly right. He was not saying that in fifty years machines would have achieved human-level intelligence. He just meant that we would expand the use of the term, and that is true. Historically, there are many examples of this type of expansion of the use of language. It is perfectly normal. The most interesting one that I have run across is the expansion of what music means. Before the invention of the phonograph by Thomas Edison, people believed that music meant something that a person created and played, like an instrument. That was making music. When the phonograph came around, it was considered an odd curiosity, but many people did not consider it to be music. It was something new and different, and obviously over time, you and I talk about listening to music and it seems almost silly to say that is not music, but that is another example of the same kind of expansion of the use of the term. The thing that I think we need to be careful not to do is to think that those people back then were wrong, or to miss that what they meant by what they said is different from what we mean by it. The general public today, particularly on this issue of “Can machines think,” believes that what Turing predicted was that machines would be thinking in a human sense and that when they passed the test, they would be intelligent. That is really not at all what he was saying. Just what you said is exactly correct. He thought it was a corollary or similar kind of behavior or activity that machines are engaged in, and we would probably just expand the use of the term because it was the closest description for those ideas that was at hand at the time, and I think that is true.

You wrote a very well-received book called “Humans Need Not Apply.” What was your thesis in that book?

My basic reason for writing that book is that the advances in artificial intelligence are going to create certain kinds of social problems or make them worse, and at least when I wrote the book, nobody was talking about those social problems. Today they definitely are, and I wanted to point out that AI was a force that would make these particular social problems worse and that we needed to think about policy issues and social issues to address those concerns at the time. The two issues are technological unemployment and income inequality. So I explained what artificial intelligence was and I argued at length that the effects of that technology would be to make these two problems worse and that we needed to have more thoughtful policy approaches to address them.

So Keynes is the one who came up with the term “technological unemployment,” and for the most part, we have had steady economic growth in the West and full employment for two hundred years. So to argue that there is going to be some kind of change in that, you would have to make the case that somehow this time it’s different. Do you agree with that, and if so, what is different this time around?

I think that ever since writing that book that I have come around much more to the position that you mentioned. I think AI is a force in that direction, but when you look at all the forces and sum them up, what we are seeing is a continuation of a process that has gone on continuously since the start of the industrial revolution. Perhaps it will accelerate somewhat by virtue of new technology and artificial intelligence in particular, but the dominant forces suggest that it is probably not going to be the kind of labor apocalypse that some people think about or write about. The threats are that fifty percent of all human jobs might go away in the next thirty years, pick your number. It sounds terrible until you really understand the way labor markets work and realize that it is probably true that fifty percent of the work they did thirty or forty years ago has gone away, or at least some significant percentage. What happens is that we automate certain tasks and that either makes some people more productive or that puts other people out of work in given professions. In some professions, it puts a large number of people or almost everybody out of work, but the result is usually due to elastic demand for products and services, a significant increase in demands for those products and services that either compensates in that industry or more often compensates in other industries. To put that more plainly, people have got more money to spend, and so what they do is they buy things and that increases employment in other areas. So what is likely to happen in the future are two things. One is that as these automation technologies come into use, we are going to see an increase in demand for other kinds of products and services that will employ other people in other industries. So you are going to see increases in employment in certain areas as a result, and in addition, we are going to see new kinds of professions come up that didn’t exist previously, and those will employ people as well. When you layer that on top of basic demographic trends in the workforce, at least here in the United States, it is unlikely that we are going to have a very big problem. Now, with all of that said, there are some people who are going to lose their jobs. This is a lot of what drove the recent election. People that are under-employed or are not as well-off as their parents, and we need to have good means for supporting them while they are retrained or otherwise retire them in some fashion from the labor force.

Is your prognosis that over the next twenty years in the United States we will see falling wages and rising unemployment, or the opposite, full employment at good wages?

The next twenty years in the United States? The problem is going to keep shifting. When I wrote the book that you mentioned, “Humans Need Not Apply,” unemployment was very high, it was a big problem in the United States, and it wasn’t budging, even though the economy was moving ahead. So there was a lot of blame on technology, which I think was logically the case. Today, I think by all reasonable measures that we have full employment. Now, the skills of the work force don’t necessarily match well with the jobs that are available. A lot of jobs are going begging and a lot of people can’t get jobs or they are under-employed, meaning it is driving down wages. The answer to your question is really going to be answered by our public policies for economics and growth and all of that, so it is very hard to project this. I would have given you one answer two weeks ago and a different answer today because of the election. Whether or not we will see falling wages and unemployment or rising wages and more employment is more a function of government economic policy than it is anything to do with artificial intelligence or technology.

What would you say is the belief that you have about AI that is the most controversial or the most uncommon?

One that is most surprising is I believe there is a very large disconnect or gulf between the public perception of what artificial intelligence is and what it means and the reality that is occurring actually inside the field. To put that in a quick summary, AI has a big PR problem, and this is potentially going to cause trouble and we need to do something about it, but there is nobody on point to try and recognize and fix this particular problem. So it is a bit of a replay of the tragedy of the commons. Everybody wants attention and accolades to their work, and the way you get that is by reinforcing and supporting a lot of wacky and crazy ideas, that we are summoning the beast and we are building these machines that are somehow going to reach human level intelligence, look at us eye to eye, and then maybe decide they are going to kill us. That is the public perception, and it is difficult for me to exaggerate how universal I find this opinion. I just did an AMA (ask me anything) on Reddit and that’s mostly what you get questions about. That is the universal view, and it is being driven by a series of forces in the press and the entertainment media and pundits who benefit from promoting this ridiculous proposition, and it is simply not the case. So the evidence for it is negligible at best and I think it is misleading people. People are concerned about ethics of self-driving cars and maybe we should put controls on these developments before they somehow come alive and take over. I mean, this is all misguided. It is sucking the oxygen out of the real discussions we should have, which is what you and I were just talking about. How does this affect labor? How does this affect unemployment? What does it mean for income inequality? We haven’t talked about that, but I think that is a real factor. Those are the things that are going to make a difference in our lives. The rest of it are just flights of fancy.

I would say the reason people are concerned is that you have incredibly smart people, Elon Musk, Bill Gates, and Stephen Hawking, giving dire, catastrophic warnings. I mean, you paraphrased Musk when he was talking about summoning the demon.

Well, therein lies the problem. Let me give you the real truth. These are very smart guys, there is no question about it. But none of them are experts in this field, and like the whole fake news problem that you are probably very well aware of, they are repeating, I assume with good intentions, the questionable warnings that they are reading about and seeing from other people. So they don’t have any direct involvement in this or any significant deep understanding of the technology. They are just pointing out and are reflecting things that other people have said. The problem is that like the fake news, these statements get far, far more attention than they deserve, so it just reinforces the idea and how could they possibly be wrong? We have this idea in our culture, of course, that anybody who is really rich or really famous doesn’t make mistakes. Well, I am sorry, but this is an idea that does not stand up to scrutiny any more than the idea that global warming is a hoax perpetrated by China. So I respect all three of the gentleman that you mentioned, but in this case, I respectfully disagree. Now, if you spend time with workers in the field, or go to a university and you ask this question, “Do you think these things are correct,” it is really surprising. You can walk from office to office through the artificial intelligence lab at Stanford and ask this question and you will get almost universally the following answer: “Well, I read that stuff. I don’t see how it relates to what I am doing. Personally, I don’t see it, but they are smart guys. Maybe they know something I don’t.” So I think this is a question within the field of the silent majority thinking that this is nonsense or at least there is no real concern or genuine support for it.

I have watched this movie for thirty years. That’s why I feel pretty confident in expressing some of these opinions. I have seen two previous waves of AI technology where it was exactly the same pattern. You had a couple of widely-quoted prominent people making over-reaching claims for what was going on in the field and what would happen based upon the dominant technology of the day, and none of it came to pass, and in fact, the technology today is significantly different. The approaches that people thought would be the basis of generally intelligent machines have largely been discredited. So we have got the same words, “artificial intelligence,” for a whole bunch of different technologies. So right there, on the face of it, the idea that we are making progress is silly. The basic problem, and you are probably well aware of this, is people overgeneralize from a series of what are very different examples. So every time you read the press, “God, now a machine can do this. Now a machine can do that.” The analogy in people’s minds is that this is like a child growing up. Now he learned to ride a bike. Now he can eat with a spoon. Now he can do this, but it is not the same technology that is being used to solve all of those different problems. It is a little bit like drawing the conclusion that we are going to have a home kitchen robot that can do anything from the fact that I have got a toaster and an oven and a microwave and refrigerator. It’s like saying, “Oh my God, what will technology do next?” and conjuring up Rosie the robot. It is just not the case. So the main point that I try to communicate is that we have to try to sober up about this. There is real value in what is going on, but it is not that we are making ever-more general versions of the programs that we had five or ten or twenty years ago. That is not at all what is happening on a technological level.

So talk to me about social issues for a minute. We have had a period over the last sixteen years, since 2000, of stagnant wages and rising corporate profits. Certainly the financial benefits of technology have accrued primarily to the wealthy. What do you believe are the mechanisms that bring that about? Why does that happen?

Well, I will give you a theory that I think is possibly pretty strong, and I do cover this in both of my books. Automation is the substitution of capital for labor. So Karl Marx was fundamentally correct when he said that in the struggle between capital and labor, capital has got the upper hand, ultimately. So the people with the money are the ones that can afford to build the automation. Therefore, they are the ones who will gain the benefits of that automation. So the rich get richer and everybody else gets left behind. I see this in detail in my own life all the time. That is why it is just not some kind of philosophical or theoretical thing. I mean, I can point to stuff that goes on in my own life and why it is that the people that I deal with and you deal with continue to get wealthy while “disrupting industries,” which basically means putting people out of work. Now, there is nothing inherently wrong with that if it is layered within a system that distributes the benefits more widely. That is just not what we have got, so I really get worried. Let me give you a very current example of this. Everybody agrees that our country needs to invest in infrastructure. We have got decaying infrastructure, and we need new infrastructure, and we have starved it for a couple of decades. Alright, that is in the past. We do need to fix it. Everybody on both sides of the proverbial aisle agrees that needs to happen. Now, the latest proposal that I have seen, and this is just of course people floating stuff in the paper is that they want to privatize this. The idea is to privatize it and get people to actually get tax breaks for improving it. Now, that sounds good to an committed capitalist, but it doesn’t actually solve the problem for two reasons. A lot of the infrastructure that we need to invest in does not have a return on capital, so nobody is going to bother to take up that challenge. It has much more distributed benefits that are real and economic, but there is no way to capture the return on that in terms of a specific return on capital. The second thing is that the infrastructure winds up being owned by private interests, and that is a bad thing because we lose control over it and it won’t necessarily be managed in a way that benefits society or can be used by everybody to help spread equal opportunity and improve their lives.

What would you suggest for a policy remedy? You are using aspirational statements, like sharing the benefits more widely and improving infrastructure, but how do you think we should do it?

I don’t want to pretend that I know the one true answer. I don’t, and I could easily be wrong, but I do study these issues, and I expect that you do, too. There is a difference between fact and fiction, practical policies and a bunch of abstract theory, and there are two basic problems that I see. We don’t make a distinction or public discussion between government investment and government spending, but there is a very big difference between those two things just as there is in your personal life. A bigger budget doesn’t necessarily mean that the government is giving away money on current consumption. It may mean that we are doing things which have significant economic benefits in the future. So the first thing is that we need to get that into the national conversation. It is perfectly reasonable for the government to run major deficits and to spend and to borrow to build infrastructure, as long as we are doing it hopefully in a reasonably smart way. Now, the second thing is that today interest rates are near zero. It makes no sense not to borrow. We should be borrowing hand over fist and running up the deficit precisely because the problem isn’t the deficit, the problem is the cost of servicing that deficit, and that is also missing from the public discussion. We shouldn’t be looking at how much money the country owes. We should be looking at what is the cost to service our debt and how is that likely to change in the future? So there are sensible policies that can be put in place, which will really have positive benefits for society. Everybody agrees on what we want to do, there just is not a sensible public discussion about the techniques for doing that.

So tell us about the new book.

My new book is potentially relevant to a wide swath of your audience. It is called “Artificial Intelligence: What Everyone Needs to Know.” It is part of a series from Oxford University Press called “What Everyone Needs to Know.” The book is a concise explanation of most of the questions and issues around artificial intelligence in a straightforward FAQ (Frequently Asked Questions) format. So it is a set of questions and answers that people often ask about artificial intelligence. It covers the nature of the technology, the intellectual history of the field, what the ideas are and how they developed, a lot about current applications, and then covers many of the ways that AI will affect people in terms of labor, the economy, etc. It also covers subjects such as how AI will affect legal theory and thinking and the administration of justice. Then I get into exploring many of the common myths that people hear about, like, “Will I be able to upload my mind into a machine in the future?” “Is there going to be a singularity?” All the highly visible issues that people tend to be concerned about are in the book. So if you want a brief, easy-to-read introduction to many or most of the key issues, both technological and societal, that surround the field of artificial intelligence, this is a quick and easy way to get it. You should be able to read the book in less than two hours.

It is written for an intelligent reader. This is not dumbed down, but it is not technical and it requires no particular background. If you can read the New York Times or The Economist, you can read this book, and I hope that you will come away with a much better, and frankly, more sober understanding of the values and the risks associated with artificial intelligence technology.

But I assume, when you say risks, you are optimistic about the technology. You are just concerned about people’s irrational fears of it.

Well, I think that irrational fears are overblown. However, there are real risks because the technology is so powerful that there are going to be areas where we are going to want to put controls in place that are very real in order to avoid some highly negative outcomes.

Such as?

Well, for example, artificial intelligence can enable new classes of very efficient killing machines for war, and just like the invention of chariots and machine guns and bombs dropped from airplanes, this may transform the nature of military conflict. Unfortunately, in ways that don’t necessarily advantage wealthy societies like the United States. ISIS might not have a nuclear weapon, but they may very well be able to build a machine that simply shoots every living thing in sight and pop that up in the middle of a shopping mall in such a way that it is extremely difficult to disable. Now, I am not talking about The Terminator. That is not the image I am talking about, but there is technology in your cell phone combined with some fairly straightforward robotics to operate and basically to shoot bullets that could devastate a public space, and organizations that today can not afford these kinds of destructive capabilities will be able to do so in ways that we really can’t foresee today. So that is an example of how AI technology may transform warfare. There are many others.

We may need to place controls on when and how AI systems can act as an individual’s agent. Today, for example, if you ever try to buy tickets to a popular concert from a service like Ticketmaster, you may notice that the minute they go on sale everything is snapped up and you have got two seats up in the rafters when you are willing to hit the button as fast as you could and you wanted to get better seats. Well, the reason that is happening is not that there are thousands of other people buying tickets, but there are thousands of other robots buying tickets in many instances. If people could see that they were fighting against machines to get these tickets against robotic devices that are working on behalf of ticket scalpers, they would be up in arms and the entire practice would be outlawed. Now, as we move to a future of what I might call “flexible robotics,” this is going to be far more visible. If our sidewalks are crowded with little gadgets making deliveries so that you can’t safely walk or your self-driving car is parking itself and stealing parking spaces from cars that have people in them, these are significant social issues that are going to come to the forefront and are going to cause us to engage in various types of regulation or controls over the deployment and the use of this technology.

The big challenge for artificial intelligence for the next few decades is how to ensure that the systems and machines that we build will integrate with human society and abide by the commonly accepted social conventions.

Join us at Gigaom AI Now in San Francisco, February 15-16, where Jerry Kaplan will speak more on the subject of AI.

Source: GigaOm