By Frans Joakim Titulaer
Artificial intelligence has in many ways failed as an idea, despite the huge attention it continues to receive. We constantly believe that the technology is about to bring about change of epic proportions, and possibly redefine what it means to be human in the process. Yet the popular construct of AI is an expression of something that has proven much harder to define than commonly thought. It is caught up in a battle for status between the philosopher and the scientist, largely leaving unexamined the technologies themselves and the role they play in society.
With all new technologies it can be difficult to separate fact from hype, so let’s aim straight for the gut of the matter. The trouble with our conception of AI is that we are not talking about a specific technology, but about the borderline between what is mechanical and what is conscious. Alan Turing famously defined the test of AI as whether or not a computer could convincingly pose as a human. That is why the persistent problem with our way of conceptualizing AI has been that we dismiss each candidate the minute we can see through the principles by which the machine operates.
Take for example IBM’s Deep Blue, which in 1997 became the first computer to beat a reigning chess world champion under regular time controls. While this was an impressive feat, anyone who has played a chess bot knows that the interaction is not much different from one you would have with a hand-held calculator. However ingenious the mathematical principles that make answers appear instantaneously, the technology itself is not, and it does not capture our imagination.
The year of AI
2016 has been hailed as the year in which the public woke up to the realities of AI technology. In the business world especially, AI has become a major buzzword. Firms are selling products that help other businesses improve their marketing and communication, project management, human resource management and much more, all through so-called AI technologies. But what does it mean to say that AI exists today, and how does it differ from our ordinary computing technology?
We have grown used to the idea that machines are built to last and to persist in the form and function in which they were designed. The entire idea of man overcoming nature builds on the premise that we build enclaves in which the degenerative effects of the cyclical nature of life are shut out. But the computer is different: it is meant to change over the course of our interactions with it. Given how our use of computers keeps evolving, no one could design one that is expected to last. The ordinary modern-day computer already straddles the dichotomy between the natural and the mechanical. It is meant to degenerate, albeit within the confines of its own system – computers already ship with programs that make this process automatic.
The idea of machine learning is based on the premise that these automatic regenerative processes could incorporate changes that depart from the computer’s previous design, or from designs installed and maintained by human technicians. It would do this based on its interaction with the user, and its general relationship with the outside world. The AI technologies currently hitting the market are highly specialised. Modern machines can automate a large range of complex tasks. However, this is not the same as saying that any one machine could perform all of them.
Moreover, it is the combination of tasks that makes these computations hard. The dream of an artificial intelligence that makes a machine able to deal with ‘the task at hand’, whatever that might be, is still out of reach.
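To make the distinction concrete, here is a minimal sketch (in Python, with an invented vocabulary and invented training messages) of what such a specialised learner looks like: a perceptron that changes with every labelled example it sees, yet never does anything other than flag messages containing certain words.

```python
# A minimal sketch of a "specialised" learner: a perceptron trained on one
# narrow task. Vocabulary, messages and labels are invented for illustration.

VOCAB = ["prize", "winner", "meeting", "invoice", "free"]

def features(message):
    """Turn a message into a 0/1 vector: does each vocabulary word occur?"""
    words = message.lower().split()
    return [1.0 if w in words else 0.0 for w in VOCAB]

def train(examples, epochs=20, lr=0.1):
    """Classic perceptron rule: nudge the weights whenever we misclassify."""
    weights = [0.0] * len(VOCAB)
    bias = 0.0
    for _ in range(epochs):
        for message, label in examples:          # label: 1 = spam, 0 = not spam
            x = features(message)
            prediction = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            error = label - prediction
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, message):
    x = features(message)
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

# Hypothetical training data: the program "changes" with every example it sees,
# but only ever gets better at this single task.
examples = [
    ("claim your free prize now", 1),
    ("you are the lucky winner", 1),
    ("meeting moved to tuesday", 0),
    ("please find the invoice attached", 0),
]

weights, bias = train(examples)
print(predict(weights, bias, "free prize for every winner"))   # prints 1
print(predict(weights, bias, "agenda for the meeting"))        # prints 0
```

However many messages we feed it, this program will never play chess, schedule a meeting or answer a question; it only ever improves at the one task it was set up for.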
Rather than approaching this kind of ‘general AI’, recent AI breakthroughs have come from connecting machines to large data sets and letting them specialise on one task across a huge array of ‘situations’. While these AIs can hardly be ascribed autonomous agency, that is not to say the tools cannot be extremely useful. The breakthroughs of the last few years have come from the release of a few highly prominent technical developments among the giants of the digital landscape. IBM Watson is already being used by nurses across the world to answer difficult questions that previously required a specialist doctor. What is more, IBM Watson will serve as a principal agent through which companies adapt their ‘connection to the cloud’ in ways that individual companies cannot afford to develop by themselves. Companies like IBM are selling technologies that let these firms form alliances with each other, channel their data through a shared database and, over time, benefit from the algorithm that is co-produced through information economies of scale.
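To give a feel for what that pooling buys, here is a rough, invented illustration (not a description of IBM’s actual products): three hypothetical firms each hold a small, noisy sample generated by the same underlying relationship, and a model fitted on the pooled data rests on three times as many observations as any single firm’s own model.

```python
# Rough sketch: three hypothetical firms, one shared statistical model.
import numpy as np

rng = np.random.default_rng(0)
TRUE_SLOPE, TRUE_INTERCEPT = 2.0, 1.0

def make_firm_data(n=20):
    """Synthetic stand-in for one firm's records: y = 2x + 1 plus noise."""
    x = rng.uniform(0, 10, n)
    y = TRUE_SLOPE * x + TRUE_INTERCEPT + rng.normal(0, 3, n)
    return x, y

firms = [make_firm_data() for _ in range(3)]

# Each firm fits a line on its own small sample...
for i, (x, y) in enumerate(firms, start=1):
    slope, _ = np.polyfit(x, y, 1)
    print(f"firm {i} alone: slope ≈ {slope:.2f}")

# ...versus one shared model fitted on the pooled data of all three firms,
# which rests on three times as many observations and therefore typically
# gives a more stable estimate of the true relationship.
x_all = np.concatenate([x for x, _ in firms])
y_all = np.concatenate([y for _, y in firms])
slope, _ = np.polyfit(x_all, y_all, 1)
print(f"pooled data:  slope ≈ {slope:.2f}  (true value: {TRUE_SLOPE})")
```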
The creature beyond the myth beyond the hype
Our culture has evolved an almost instinctive understanding of the simple principles that guide our so-called ‘smart’ devices, and public discourse has developed rapidly in the last few years in preparation for the next technological revolution. A few years ago this was generally referred to as «Big Data», but it is now mutating into something akin to AI. The term is worth picking up already at this stage, because the rapid spread and combined force of bots, open and monetised data (e.g. blockchains) and augmentation design (like virtual reality or interactive infographics) will, in all likelihood, usher in the next generation of the internet. (For those keeping track, our current generation is «internet 2.0», characterised by sharing technologies such as YouTube and Facebook.)
The next-gen version, internet 3.0, has for some time been envisaged by Tim Berners-Lee, the inventor of the World Wide Web. He maintains that the internet contains the ingredients to live up to its «original» vision of a liberating «free flow of information» among all users. However, as Berners-Lee now also argues, it is becoming clear that even though it is notoriously hard to eliminate a package of information once it has dispersed throughout the system, this does not mean that all users can benefit from its existence, and so general access to these utilities is being constrained.
The original version of the WWW was designed for sharing research documents among scientists at CERN. These were very large data sets, but not Big Data, nothing like ‘humongous data’. The sheer size of the internet today means that we are ever more desperate to have the data contained in scientific articles, and other kinds of valuable knowledge objects, given to us directly. «Linked data», as Berners-Lee calls it, means giving the internet more of a unifying body in which to hold the information. This is one reason why we could think of the future of the internet as ‘semantic’ – adapting to the question (and ultimately the situation) of the user. Everyone who can access this body can access the information. In practice, it nevertheless comes down to implementing a design standard at a level ‘deeper down’ than the web pages we are now used to accessing. This process is a constant battle between those who work to regulate the construction of this highly dispersed object (such as the W3C, which maintains the HTML standard) so that it lives up to its potential, and those who seek to profit from keeping certain channels of access under their own control.
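For the technically curious, here is a small sketch of what ‘linked data’ looks like in practice, using the open-source rdflib library and an invented example.org vocabulary: facts are stored as subject-predicate-object triples, so a ‘semantic’ query can be answered from the structure of the data itself rather than from the layout of any particular web page.

```python
# A small sketch of linked data with rdflib and an invented vocabulary.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/")

g = Graph()
g.bind("ex", EX)

# Three linked facts about a (fictional) article and its author.
g.add((EX.article42, RDF.type, EX.ResearchArticle))
g.add((EX.article42, EX.author, EX.alice))
g.add((EX.alice, RDFS.label, Literal("Alice Example")))

# A SPARQL query against the graph: "who wrote the research articles?"
query = """
PREFIX ex: <http://example.org/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?name WHERE {
    ?article a ex:ResearchArticle ;
             ex:author ?person .
    ?person rdfs:label ?name .
}
"""
for row in g.query(query):
    print(row.name)   # -> "Alice Example"
```

The standards behind this example, RDF and SPARQL, are W3C specifications of exactly the ‘deeper down’ kind described above: they sit beneath the web pages we read and let machines follow the links between pieces of data across sites.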
If 2016 was the year when our idea of artificial intelligence met with reality, then it is high time for us to move beyond the dichotomies of machine/human and mechanical/living. Let the ‘purist’ philosophers step aside, retire the popular gimmick of the Turing test, and yield its place to distinctions between specialised and generalised technologies. Let’s not ask how easily a machine could fool us; let’s ask how difficult it is to take apart and contain.