Playing with robots: When toys become intelligent

Kate Pashevich


Je suis de mon enfance comme d’un pays. (“I come from my childhood as from a country.”) – Antoine de Saint-Exupéry

We are all shaped by our childhood. Like our bodies, our character and behaviour develop during the first years of life. A large part of our early lives takes place in formal learning institutions such as comprehensive schools, various clubs and thematic schools. What we often don’t realize is how much we learn outside those institutions, informally. Usually this happens in the form of play, through which we learn to communicate with the surrounding world and with other intelligent creatures. People are social animals, and developing social intelligence is crucial to living a happy life. But what if social intelligence can be created artificially, for example, in “intelligent” toys?

Why toys?

What is the role of toys in our lives? It is inextricably linked with play, which is a very important activity, and not only for children. Huizinga even argued that play remains largely the same across the majority of mammals, some birds and insects. In human society, some forms of play are later transformed into our cultural and social institutions, such as science, the judicial system or medicine. Children don’t really need toys. In fact, there were no toys (or, for that matter, any objects specifically designed for children) before the late 18th century. Children used to play with each other, but modern industrial society left children steadily more alone. This is when toys came onto the scene: they help children learn and develop their imagination in the absence of adults and playmates.

Children tend to develop strong emotional bonds with some of their toys. Scholars talk about “transitional objects” – toys we take with us from childhood into adult life. Most of us don’t like to admit we still have that teddy bear, let alone acknowledge that somewhere deep inside, against all rational arguments, we know: that bear is alive. Child psychologists argue intensely about what kind of toys children should have: Should they be simple or highly technological? Should we allow children to play with guns? And how will these toys influence children’s behaviour in the future? Perhaps we should simply ask children what they want.

The market of AI toys

Today the market for children’s toys is full of “smart” or “intelligent” toys that can walk, talk, move objects and answer children’s questions. They are engaging and useful. However, there are several big questions about how these “intelligent” toys are designed and regulated. Some of them lie on the surface: privacy and security issues. The Norwegian Consumer Council (Forbrukerrådet) last year launched a campaign against “interactive” toys (“My Friend Cayla” and “i-Que Intelligent Robot”), pointing out their poorly thought-through design: these toys recorded children’s speech, and there were no clear guidelines on how this information was stored and transmitted. As a result of the #ToyFail campaign, the doll “My Friend Cayla” was taken off the market in Germany in 2017. This is a good example of how industry always moves faster than regulation. At the same time, strict regulation can hinder innovation. To prevent such events from happening, designers should consider not only their commercial interests but also the interests of their customers – in this case, children. What is good for them?

Simulated intelligence

Robotic toys can now even display distinct personalities, like the Cozmo robot, which sometimes refuses to do its tasks and gets angry. As these toys gain more and more social intelligence, they can lure children into thinking they are communicating with a living thing. Sherry Turkle, in her book “Alone Together”, raises the question of how differently children perceive objects with artificial social intelligence. She writes about her daughter, for whom a robotic animal was just as “real” as a living one. An experiment at the MIT Media Lab, in which children were observed and interviewed while playing with different “intelligent” toys, showed that children perceived these toys as “friends” or “teachers” and thought of them as having various personalities.

What does it take for children – and for us – to be tricked into thinking a machine has intelligence? One of the pioneers of the field of artificial intelligence, Alan Turing, was occupied with this problem. He proposed a test, the “imitation game” (later called the Turing test), in which a machine, without being seen and only by answering questions, had to trick a human interrogator into thinking it was human too. Today, there is a whole field in computer science studying the design of socially intelligent agents (SIAs). By “agents” they mean algorithms that are not just passive “tools”, but that possess a certain “agency”. The authors of the book “Socially Intelligent Agents: Creating Relationships with Computers and Robots” – Dautenhahn, Bond, Cañamero and Edmonds – say that, since there is no known “objective intelligence” outside the experience of a human observer, the designer’s task is to make the agent “seem” intelligent rather than to recreate actual intelligence. This makes the task much easier, but creates a major ethical concern: when we interact with SIAs, are we always aware that they merely simulate intelligence? And, in our case, do children understand that while playing with such a toy?
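How little it can take to “seem” intelligent is easy to demonstrate. The sketch below is a hypothetical Python example in the spirit of Weizenbaum’s 1960s ELIZA program (not the software of any actual toy): it replies with nothing but a handful of hand-written pattern-matching rules, yet its answers can feel surprisingly attentive.

```python
import re

# Hand-written reflection rules: no understanding, only pattern matching.
# The first matching pattern wins; the last rule is a catch-all.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r".*\bmother\b.*", "Tell me more about your family."),
    (r"(.*)", "That is interesting. Please go on."),
]

def reply(utterance: str) -> str:
    """Return the response template of the first rule that matches."""
    text = utterance.lower().strip().rstrip(".!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please go on."

if __name__ == "__main__":
    print(reply("I feel lonely"))      # Why do you feel lonely?
    print(reply("I am sad today"))     # How long have you been sad today?
    print(reply("My mother called."))  # Tell me more about your family.
```

Children – and adults – readily read intention into such canned replies, even though the program understands nothing; that is precisely the gap between simulated and actual intelligence the SIA literature describes.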

Effect on children

How do children perceive, interact and communicate with toys that are steadily becoming more intelligent? Current research shows that children still prefer playing with their human friends to playing with any kind of toy. Can this change? Will we come to a point where “intelligent” toys replace living playmates? Another important question about such toys is: what effect do they have on children’s learning about the world? These toys are permanently connected to the internet and can retrieve any recorded knowledge, but how do they deliver this information to children? Again, a question for their designers. Last but not least, how will the presence of robots from the early days of our lives influence our perception of real and simulated feelings and emotions? Play is a tricky thing: when we play, we realize that it is just play, yet we are serious about it. If we don’t take play seriously, it easily falls apart. With “intelligent” toys, will children still be able to draw a distinct line between play and real life?