Employment in a Robotised Future

 Ana Gviniashvili
TIK MA Student

Society today does not suffer from unemployment caused by overly efficient weaving machines.

The battle between humans and technology spans centuries. Technological progress no longer means just machines doing repetitive work and drudgery, but AI technologies that can think and mimic cognitive behaviour. One day in the not-too-distant future, a robot might write this kind of article, or drive our cars – to some degree, both examples are already happening. AI is already entering parts of the job market once considered unlikely to be automated. As technologies grow smarter and more humanlike, and as the focus on automation in both the public and private sectors increases, concern about the future job market is rising. Firstly, is automation a positive or negative thing for humans in the workforce, and secondly, will technologies actually replace human labour altogether?

Technological progress has always had a direct impact on employment and was a driving force in shifting jobs from agriculture to manufacturing, and onwards to service and management occupations. Despite the unemployment that technological development caused while these transitions played out amidst protests and anxiety, technology has had a positive rather than negative effect on long-term wealth and social well-being. According to a McKinsey & Company report published in January 2017, productivity grew by an estimated 0.3% annually between 1850 and 1910 following the introduction of the steam engine. The same thing happened when early robotics and IT were introduced, with productivity growth reaching 0.4% and 0.6% respectively. The report predicts a similar effect from the adoption of AI and robotics: by 2065 we can expect annual economic growth between 0.8% and 1.4%. In addition, while productivity grows, working hours are predicted to fall by nearly 57%.
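To see why such small percentages matter, remember that annual productivity gains compound. The short sketch below is a back-of-the-envelope illustration (mine, not the report's) of how a 0.3% annual gain adds up over the sixty-year steam era:

```python
# Compound effect of a small annual productivity gain.
def compound(rate, years):
    """Total growth factor after compounding an annual rate for a number of years."""
    return (1 + rate) ** years

# 0.3% annually over the 1850-1910 steam-engine era (60 years)
steam = compound(0.003, 60)
print(f"{(steam - 1) * 100:.0f}% cumulative")  # roughly 20%
```

By the same arithmetic, 0.8–1.4% annually compounds into far larger gains over a working lifetime.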

However, the implementation of new technologies in companies is linked to unemployment. Due to the growth of artificial intelligence, both skilled and unskilled occupations are at risk of automation. Machine learning gives AI the ability to learn from experience without every step of the process being explicitly programmed. With enormous amounts of data, AI can thus accumulate experience far faster than humans, and can potentially carry out work that requires decision making, analysis or (eventually) even creativity. According to McKinsey, nearly 42% of working hours in Norway can be automated with today's technologies.
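As a loose illustration of what "learning from experience" means in practice, consider a toy classifier that labels new cases by comparing them to earlier examples rather than following hand-written rules. The data and labels here are invented purely for illustration:

```python
# Toy machine-learning sketch: classify by the nearest labelled example
# instead of hand-coding a rule for every possible case.
def nearest_label(examples, point):
    """examples: list of ((x, y), label) pairs; returns the label of the closest example."""
    def dist2(a, b):
        # Squared Euclidean distance between two 2-D points.
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(examples, key=lambda e: dist2(e[0], point))[1]

# Made-up "experience": (hours of routine bookkeeping, hours of judgement calls) per day
experience = [((8, 0), "automatable"), ((7, 1), "automatable"),
              ((1, 7), "human"), ((2, 6), "human")]

print(nearest_label(experience, (6, 2)))  # closest to (7, 1) -> "automatable"
```

Real systems use far richer data and models, but the principle is the same: the behaviour comes from the examples, not from explicitly programmed steps.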

The potential for automation differs across activities and sectors. Frey and Osborne (2013) concluded that nearly 33% of jobs in Norway will be automated within a decade or two. Predictable, repetitive tasks and jobs with lower wages and educational requirements are at higher risk. Examples include manufacturing, food service, transportation, accounting and bookkeeping professionals, and general office clerks. Less likely to be automated are jobs requiring higher education, wages and specialisation – such as those determining firm strategy, human resource functions, marketing, educational services, software developers, financial analysts, etc.

When considering what the job market will look like in a decade or two, it is important to note that automation of jobs does not necessarily mean that whole occupations will disappear, but rather that some tasks within an occupation will be automated. For instance, fraud detection specialists in finance have changed their focus from looking through physical transaction records, and now work with training intelligent machines and designing increasingly fraud-safe processes. Their jobs still exist, but AI has freed up their time for other, value-creating activities.

However, some jobs might disappear entirely, while new jobs arise in their place. For example, when self-driving cars are implemented there will be no need for drivers, but new jobs will be created in the design, development, monitoring and implementation of AI and robotics.

In general, it is difficult to measure exactly the difference between eliminated jobs and new occupations, but we have seen this kind of process before: industrialisation, computerisation, and now AI and robotisation. Every time we have invented technologies to take over human labour, people have shared the same fear of technology taking over their niche in society. Regardless, what we have seen so far is that technological development has increased efficiency and productivity, resulting in a stronger economy and a higher standard of living. The people displaced by technological change eventually adjusted to the new reality – the extent of social unrest and short-term unemployment was largely determined by their ability to adapt and find a new role in society. Therefore, when we meet the disruption of robots and artificial intelligence, our focus should not be on constraining development, but on ensuring that the human part of the workforce has a high degree of flexibility and ability to adapt.

In this process, governmental and educational systems will play an important role and need to adjust in order to give people the opportunity to requalify during their careers. Take the example of Singapore, which promotes lifelong learning and offers a $345 credit to citizens over 25 that can be used for courses at approved universities or online, helping them remain competitive in a changing, technology-driven job market.

In the end, the same question remains – will the robots take our jobs? I do not think so, but evidently, AI technologies and robotics challenge the traditional ways of employment, and being good at one thing no longer guarantees long-term employment. Nevertheless, history has taught us that smashing the weaving machines was not a good solution to unemployment, indicating that flexibility and openness to learning new skills will be key when facing the future labour market.



What doesn’t kill you…?

By Nora Vilde Agaard

We have all felt it.
The uncomfortable feeling when something does not turn out the way we hoped. The gut-wrenching feeling of failure, lingering in our bodies and disturbing our sleep. I still find myself awake some nights thinking about that one time in second grade I called my teacher “mom”. Failures range from innocent human errors, like locking the car keys in the trunk, to more serious incidents, like missing a deadline at work or failing an exam. Left with a bruised self-esteem, we are told the clichés that we should “learn from our failures” and that “what doesn’t kill us makes us stronger.”

There has been an enormous increase in startups applying for funding from Innovation Norway the last couple of years. Almost 3000 companies and individuals applied for startup funding in 2015, which is a 120% increase from 2013. Clearly a lot of people wish to create their own workplace, but far from everyone succeeds. Who can predict what will work and what will not? Perhaps no one, so maybe learning from our failures is the best way to succeed?

In 2008, Cassandra Phillips found herself pondering this exact question when she was about to launch her first startup. Having friends in the startup industry, she attended conference after conference where successful entrepreneurs kept going on and on about how successful they had been. She noticed that the only startup founders invited to speak at conferences were, quite naturally, those who now harvested the fruits of their success. This realization led to the birth of FailCon. The first FailCon was held in San Francisco in 2009: a one-day conference where technology entrepreneurs, investors, developers and designers were given a free and non-judgmental space to discuss the bumps in the road and the challenges they had met. FailCon, ironically, turned out to be a success. Over 400 people attended the first conference, and the number of attendees has continued to increase over the past nine years. FailCon has also expanded geographically, with events held in Oslo, Tel Aviv, Lyon and Barcelona, among other places. Tentative events are scheduled for late 2017 in Mongolia and London. The response to past events has, according to Phillips, been exclusively positive. But regardless of its success, FailCon is about to change course.

Phillips points out how the focus on failure as a crucial learning experience has grown over the last couple of years. The Internet is flush with blog posts about experiences of failure and how, if possible, to avoid the more serious ones. One quick Google search turns up almost 600,000 articles and blog posts more or less connected to entrepreneurial failure. With both the growing awareness of the positive aspects of failure and feedback from FailCon’s audience in mind, Phillips and the other founders have new plans for FailCon. The conversations will be moved into more regular and intimate environments – places where people can iterate and collaborate on solutions, not just share problems. Evidently, the focus on failure not as a taboo and embarrassing human phenomenon, but as an important learning experience, is greater than ever. At least if we are to follow the catchy (if slightly clichéd) slogan of FailCon: “Stop being afraid of failure and start embracing it!”

So, even without attending a conference about primary school failures, what did I learn from that one time I called my teacher mom? Perhaps not very much, but I sure learned never to make that mistake again.

Gramps and his pet robot

By Eili Skrivervik

Sociable robots are entering health care: a necessity and an ethical issue.

Sociable robots are raising many questions about how we take care of our elders, and how we plan to take care of them in the future. For a lot of people, it seems first and foremost to be a question about feelings – not the feelings of the elderly themselves, but the feelings of relatives and caregivers. Should we feel guilty about the sight of gramps caring affectionately for his pet robot?

Paro the sociable robot

Paro is a seal-like sociable robot designed to interact with humans and encourage emotional attachment. Targeted towards elderly in nursing homes, Paro has been proven to calm the distraught and depressed. Still, many seem worried about this development. The term sociable robot implies a robot that is able to “understand” people through communication and interaction. With a growing number of elderly patients to care for, and with labour constraints of cost, efficiency and lack of human personnel, introducing robot assistance to care homes and hospitals seems inevitable. How should we do this, considering politics and ethical concerns?

The demands of an aging population

According to the World Health Organization, the number of people aged 60 and over will double by 2030. Thanks to developments in medicine, technology and a generally safer world, global life expectancy has seen an unprecedented increase in recent decades, challenging our current health system capacities. How we take care of the growing elderly community on a global basis is an increasingly important task.

Artificial care and ethics

One of the major concerns about inviting robots into elderly care is the ethical questions that arise. Are the robots just poor substitutes for human health personnel, or are they the only way forward? Are the elderly left dehumanized by robot care? Is it ethical to let emotionless machines take care of emotional humans? Loneliness makes people ill, and part of the goal of sociable robots is to prevent loneliness. If we succeed in creating a viable substitute for human care personnel, making our elders happier and healthier, does it matter how we do it? I think the reason we raise so many questions around robotics in care is fear, uncertainty and doomsday stories about technology. Technology can be hacked, and robots perform tasks without contextual understanding or knowledge, and without affection. There is something dubious about a device showing affection without really possessing it.

Looking ahead

Many agree that the use of robots in elderly care could be a necessary next step. With rapid advancements in technology, robots of the future may still be emotionless, but they may not appear to be. Does the prospect of this make us feel better, or worse? If a robot could be built to appear as sensitive and caring as a human, would that make it ok? The current options are not really viable; the number of personnel required in the coming decades to take care of the growing number of elderly is beyond what is likely or even possible from the global community. Besides, economically it doesn’t add up. So, maybe the question we ought to ask should be how we can possibly let our elders go without the best care available?

While gramps is petting his robot, we are left with questions relating to our own vulnerability and inadequacy. Is it perhaps time to ask what the elderly want?

Insecure Cayla

By Anne Waldemarsen

Have you heard about My Friend Cayla?
She – or more precisely it – is a smart doll produced by the American toy company Genesis. In addition to making an innovative and highly popular children’s doll, the company continually receives international attention for the doll’s undesirable characteristics. Lurking in the shadow of Cayla’s bright smile lies a threat to children’s right to personal privacy, rendering them vulnerable to criminal activity.

The doll has an appealing design, with big, bright eyes, shiny hair and neat clothes. Her core purpose: to become friends with your child. However, hidden in Cayla is a capacity for interfering with children far beyond what any parent would prefer. White-hat hackers and consumer groups discovered that this sweet little doll could be converted into something rather sinister and creepy, like a haunted doll from a horror movie. Cayla’s alluring appearance and obliging nature are designed to be highly desirable. The doll is only one example of an increasingly prevalent phenomenon: electronic devices given interactive qualities, connectable to both the internet and other objects, and to some degree intelligent – part of the Internet of Things. According to the company’s website, Cayla can talk and interact with children, tell stories, play games and share photos.

Smart doll

The doll is equipped with a microphone and connects to an app over Bluetooth in order to access the internet. It transmits audio recordings and shares them with a third-party software company. Cayla has two modes: offline and online. When online, Cayla can answer almost any sort of question; she can give you information from Wikipedia, tell you about the weather (because kids these days are known for being weatherphiles), or tell the time. In addition, Cayla is an educational toy, helping to solve mathematical problems and spell words correctly. Even in offline mode, she (I keep referring to it as a person) can answer over a thousand questions, though only about herself. It is easy to see the appeal. As a kid’s toy, My Friend Cayla seems incredible, and compared to the Furbies of only two decades ago, she is. That being said, the amazing characteristics of such products rarely come without a downside.

“Private” toys

It has been revealed that total strangers can exploit the technological traits and lack of barriers within the device, as demonstrated in several videos available online. The toy does not require Bluetooth authentication, leaving the doll vulnerable to any hacker within a range of approximately ten meters. Cayla does come with filters to prevent her from uttering or displaying content that is inappropriate for children, but this filter also proved to be hackable. Imagine strangers listening in on what the doll is recording, or using it as a speaker to talk to the children. Nor is this the only alarming aspect regarding children’s safety and privacy: children’s conversations are recorded through the microphone and sent to a third-party company, Nuance, which specialises in voice recognition. It is uncertain how this data can be used, which is especially concerning considering that users are encouraged to reveal sensitive information when setting up the doll, spurring questions about the company’s motives. And it is not just the doll’s weaknesses that can be exploited by hackers and pedophiles. Can we blame the manufacturers for having bad intentions, or was it “just” heedless action? In December 2016, the Norwegian Consumer Council conducted a technical test of Cayla, claiming that the manufacturers violated both the Personal Data Act and the Marketing Control Act. They also concluded that the doll is vulnerable to hacking and exploitation, and pushed to halt sales in Norwegian toy stores. The Consumer Council points to Germany’s handling of the doll: the German Federal Network Agency banned it in 2017 and demanded that parents destroy any doll in their possession, calling it a device for disguised espionage. Instead of interacting with German children, My Friend Cayla now sits in a glass case in the Spy Museum in Berlin as a warning to visiting parents.

Children’s Toys in the Internet of Things

The case of Cayla is neither new nor unique. Microphones and cameras are becoming common features in modern toys, along with the applications they interface with, forming a subsection of the Internet of Things – the Internet of Toys. When it comes to toy safety, our focus has traditionally been on loose parts or poisonous materials. So far, the international handling of My Friend Cayla sends a clear message: personal security in digital devices must be prioritized. In order to keep up with technological advancements and benefit from them in every possible way, we need a firm, internationally standardized framework of regulations, as well as norms in informatics education concerning potential flaws in interactive technologies. After all, privacy is a right – not a privilege.

The Toys Yet to Come

Removing harmful items from stores in response to public demand is comforting, and it is an example of how our societies do have a say in technological development. Yet banning the doll is not the same as removing the problem. There will be other Caylas. Just think: such products can legally enter the market, our homes and, not to mention, our children’s bedrooms. It is a testament to a society that has failed to focus on humans, human rights and human values in a world with cool, science-fiction-like characteristics.

Who will help the carless driver?

By Eirik Venberget

Most of the things you own, and certainly everything for sale in any supermarket or store, has been on a truck. Trucking is such a crucial part of our economic system that it has become an integral part of our lives. We seldom stop to think about the role the truck – and the driver – plays in our obtaining all the things we so deeply desire and rely on.

On the other hand, there is something we have known for a long time: the driverless car is coming. In fact, car manufacturers have been researching and developing autonomous cars since the early 1980s, intending to make our lives more pleasant, productive and safe as we commute or travel. Today’s technology is sophisticated enough to start rolling out autonomous cars; however, legislation and perhaps overly cautious scepticism have prevented this from happening on a large scale.

Few have considered the revolution this will eventually bring for career drivers. If you think Uber is a real challenge to the taxi industry, wait until Uber drivers are made wholly redundant as you drunkenly instruct your car to take you home after a night out. If you think Amazon and their custom-made jumbo jet revolutionised parcel delivery, wait until the book you just ordered is dropped off in bubble wrap by an autonomous drone. And if you think cheap labour from foreign countries took work from local lorry drivers, wait until our roads are filled with self-driving trucks, stopping only to drop off our electronics, clothes and food.

According to Statistics Norway, there were 62,000 occupational drivers in Norway in 2014. Or should we perhaps refer to them as future unemployed Norwegians? In the US, there are 3.5 million truck drivers, making up the biggest job sector in 29 of the 50 states. Needless to say, their displacement will significantly impact the American economy. Many, if not most, of these people will lack the education and experience required to transition easily into another occupation.

We have failed to prepare for this inevitable change. And make no mistake, it will have serious ramifications for our society as a larger part of our workforce becomes long-term unemployed. Much media attention has been given to the loss of manufacturing jobs from the western world to developing countries. However, manufacturing in developed countries has been steadily declining since the 1950s. This next robotic revolution might happen at a much faster pace, and with even more serious ramifications.

The pace and severity of this development will be determined by suppliers, consumers and regulators. On the supply side, a plethora of technology companies, car producers and research institutions – including Google, Tesla, Apple, BMW, Audi, Toyota, GM, Nvidia and VW – are already knee-deep in the mass-production of driver-assisting tech and hardware. Traditionally, the suppliers have been ahead of the curve in autonomy, as seen in the surprise when Tesla suddenly introduced its driver-assistance software update for the Model S. It is difficult to predict accurately how consumers will react to self-driving cars, but from a business point of view one can easily imagine that general managers in the logistics industry see the benefits of investing in self-driving trucks. After all, self-driving trucks will never ask for raises, pensions, days off or health insurance.

Precisely this is what should worry our legislators. Business is fundamentally rational; if it can save on expenses, it will. It is the surplus of labour that automation will create that we need to focus on – much more than technical, legal aspects such as who is to blame in an accident. Until now, the latter has been given prominence. It must be said that autonomous vehicles will play an important part in solving many of the problems we face in the world today. But if we fail to act on the side effects now, they might come back to haunt us later – or at least the truck drivers.

Moving beyond artificial intelligence?

By Frans Joakim Titulaer

Artificial intelligence has in many ways failed as an idea, despite the huge attention it continues to receive. We constantly believe that the technology is about to bring about change of epic proportions, and possibly redefine what it means to be human in the process. In many ways, the popular construct of AI is an expression of something that has been much harder to define than commonly thought. It is caught up in a battle for status between the philosopher and the scientist – largely leaving unexamined the technologies themselves, and the role they play in society.

With all new technologies it can be difficult to separate fact from hype, but let’s aim straight for the gut of this issue. The problem with our conception of AI is that we are not talking about a specific technology, but about the borderline between what is mechanical and what is conscious. Alan Turing famously defined the test of AI as whether or not a computer could convincingly pose as a human. That is why the persistent problem with our way of conceptualizing AI has been that we disregard each attempt at it the minute we can see through the principles by which the machine operates.

Take for example IBM’s Deep Blue, which in 1997 became the first computer to beat a reigning chess world champion under regular time controls. While this was an impressive feat, anyone who has played a chess bot knows that the interaction is not much different from one you would have with a hand-held calculator. Even though the mathematical principles that make answers appear instantaneously are ingenious, the technology itself isn’t – and it doesn’t capture our imagination.

The year of AI

2016 has been hailed as the year in which the public woke up to the realities of AI technology. In the business world especially, AI has become a major buzzword. Firms are selling products that help other businesses improve their marketing and communication, project management, human resource management and much more, all through so-called AI technologies. But what does it mean to say that AI exists today, and how does it differ from our ordinary computing technology?

We have grown used to the idea that machines are built to last and to persist in the form and function for which they were designed. The entire idea of man overcoming nature builds on the premise that we build enclaves from which the degenerative effects of the cyclical nature of life are shut out. But the computer is different: it is meant to change over the course of our interactions with it. Given our evolving use of computers today, no one could build or design one that would be expected to last. The ordinary modern-day computer already straddles the dichotomy between the natural and the mechanical. It is meant to degenerate, albeit within the confines of its own system – computers already come with programs that make this process automatic.

The idea of machine learning is based on the premise that these automatic regenerative processes could incorporate processes of change that separate them from the computer’s previous design, or from other designs installed and maintained by human machinists. It would do this based on its interaction with the user and its general relationship with the outside world. The AI technologies currently hitting the market are highly specialised. Modern machines can automate a large range of complex tasks. However, this is not the same as saying that any one machine could perform all of them.

Moreover, it is the combination of tasks that makes these computations hard. The dream of an artificial intelligence that makes a machine able to deal with ‘the task at hand’, whatever that might be, is still out of reach.

Rather than approaching this kind of ‘general AI’, recent AI breakthroughs have come from connecting machines to large data sets and letting them specialise in one task within a huge array of ‘situations’. While these AIs can hardly be ascribed autonomous agency, this is not to say that they cannot be extremely useful. The breakthroughs of the last few years have come from a few highly prominent technical developments released by the giants of the digital landscape. IBM Watson is already being used by nurses across the world to answer difficult questions that previously required a specialized doctor. What is more, IBM Watson will act as a principal agent through which companies adapt their ‘connection to the cloud’ in ways that single companies cannot afford to develop by themselves. Companies like IBM are selling technologies that allow firms to form alliances with each other, send their data through a shared database, and over time benefit from the algorithm that is co-produced through information economies of scale.

The creature beyond the myth beyond the hype

Our culture has evolved an almost instinctive understanding of the simple principles that guide our so-called ‘smart’ devices, and public discourse has developed rapidly in the last few years in preparation for the next technological revolution. A few years ago, this was generally referred to as «Big Data», but it is now mutating into something akin to AI. The term is worth picking up already at this stage, because the rapid spread and combined force of bots, open/monetised data (i.e. blockchains) and augmentation design (like virtual reality or interactive infographics) will, in all likelihood, usher in the next generation of the internet. (For those keeping track, our current generation is «internet 2.0», characterised by sharing technologies such as YouTube and Facebook.)

The next-gen version, internet 3.0, has for some time been envisaged by Tim Berners-Lee, the inventor of the World Wide Web. He proclaims that the internet contains the ingredients to live up to its «original» vision of a liberating «free flow of information» among all users. However, as Berners-Lee now also argues, although it is notoriously hard to eliminate a package of information once it is dispersed throughout the system, this does not mean that all users can benefit from its existence, and so general access to these utilities is being constrained.

The original version of the WWW was designed for sharing research documents among scientists at CERN. These were very large data sets, but not Big Data – nothing like ‘humongous data’. The sheer size of the internet today means that we are ever more desperate to have the data contained in scientific articles, as well as other kinds of valuable knowledge objects, given to us directly. «Linking the data», as Berners-Lee calls it, means giving the internet more of a unifying body in which to hold information. This is one reason we could think of the future internet as ‘semantic’ – adapting to the question (and ultimately the situation) of the user. Everyone who can access this body could access the information. In practice, it nevertheless comes down to implementing a design standard at a level ‘deeper down’ than the web pages we are now used to accessing. This process is a constant battle between those who work to regulate the construction of this highly dispersed object (such as the W3C, which maintains the HTML standard) so that it lives up to its potential, and those who seek to benefit from the autonomy of certain channels of access.

If 2016 was the year when our idea of artificial intelligence met with reality, it would be high time for us to move beyond the dichotomies between machine/human, and mechanical/living. Let the ‘purist’ philosophers step aside, retire the popular gimmick of the Turing Test and yield its place to distinctions between specialised and generalised technologies. Let’s not ask how easily we could get fooled by it, but let’s ask how difficult it is to split it apart and contain it.

Packaging in the 21st century

By Eili Skrivervik
This article is written in collaboration with Grønt Punkt.

About one third of the food produced never makes it from farm to fork, according to the UN’s Food and Agriculture Organisation. Food waste is a growing, global issue, and plastic packaging is just one part of it. Plastic is an important food-packaging material thanks to its low weight and good protective qualities. However, it isn’t very sustainable.

Squishy water

Skipping Rocks Lab is on a mission to eliminate plastic water bottles. Their first product, Ooho!, is a biodegradable and edible capsule for water made from seaweed. The gelatinous packaging is compostable and meant to be peeled off like fruit – or simply eaten. The process they are currently developing allows the capsules to be made on the spot, just before consumption, eliminating the need to truck bottled water long distances. Ooho! tastes and hydrates just like water, yet it doesn’t pollute, putting an end to the ugly footprint plastic bottles leave on the environment. The capsules are currently being trialled at events as an alternative to plastic bottles. Whether people are ready to drink from a squishy, jellyfish-like blob remains to be seen.

Food waste turns packaging

Food ends up as waste for a number of reasons. Badly stored produce, produce that goes bad or gets damaged in transport, items that never get picked off supermarket shelves, food marked with the wrong expiration date or that goes out of date, and improperly packaged items all end up in landfills. On top of that come all the resources that go into producing and transporting the food. With changing values and increased awareness, many companies and organisations are making honest efforts to reduce food waste. Researchers at Egypt's Nile University are turning one such waste stream into packaging: shrimp shells. They buy shells discarded by restaurants, supermarkets and local fishermen at low prices, and use them to make plastic. Chitosan, extracted from the shells after they are dissolved and dried, is the key component of the eco-plastic, which can be used to make anything, including packaging, and also has antibacterial properties. Estimates suggest Egypt imports around 3,500 tonnes of shrimp annually, which produces about 1,000 tonnes of shells as waste. Making a sustainable product from that waste is a step forward for green packaging and the circular economy. Although the biodegradable plastic bags aren't commercially available yet, the project has the potential for large-scale industrial production.

Plastic production is reported to be nearly 300 million tonnes annually, half of which is for single use. Packaging is the largest end-use market segment, accounting for just over 40% of total plastic usage. IKEA is one of the companies looking to swap plastic packaging for smarter, more environmental solutions: out goes polystyrene and in comes fungus. The fungi packaging, developed by the US-based firm Ecovative, would be a big step forward for the furniture giant – moving from polystyrene, which is tricky to recycle, to a biodegradable option. When a retailer the size of IKEA makes a clear effort to reduce its use of plastic – to satisfy customers' wishes and leave a greener footprint – it indicates a major shift.

We are still a long way from making packaging waste disappear. But we're working on it.


By Siv Helen Gjerstad

I went to the library the other day, looking for a new book on environmental psychology that I wanted to read. When I discovered that neither of the two copies of the book was checked out, I was happy that I could pick it up right away. But as I kept thinking about it, the fact that nobody else was reading the book was just disappointing. Why don't we care more about the psychological aspects of sustainable transitions?

In 2015, eighty percent of Norwegians believed that climate change is caused by human activity. So most of us do admit that we are the source of environmental change. Yet we don't feel much personal responsibility to do anything about it, nor are we willing to make significant sacrifices to make a change.

At the Center for Technology, Innovation and Culture (TIK) we talk as if climate change and global warming will have a significant impact on our lives if we don't take serious action. We talk about the policy changes that we need to make in order to adapt our fossil fuel-based lifestyle to the goals of sustainable development. But do we know enough about how people react to and act upon this information about climate change, and the policies that are put in place to counter it?

Research on environmental psychology and behaviour is not a large academic field, but it is growing. The interesting thing about exploring these aspects of the human mind and behaviour is that they are often less intuitive than we might think. One would assume that people who care about the environment would act more environmentally friendly, leading a lifestyle with lower carbon emissions than those who do not care. However, recent studies suggest that there is no link between the desire to project an environmentally friendly image and environmentally friendly behaviour. Neither energy use in travel nor environmentally significant consumption seems to be affected by people's attitudes. Denial and irrationality are important aspects of the human mind, and we need to address them.

Another example I find interesting is how studies indicate negative spillover effects from one environmental activity to another. Festinger's popular theory of cognitive dissonance would predict a catalyst effect between one environmentally friendly behaviour and another, but on the contrary, people can actually be less inclined to take further environmentally friendly measures when they are already taking some. It seems that instead of being motivated to adopt a more environmentally friendly behaviour profile, people feel they have already contributed enough to the 'common good'. Being a student at TIK and a cautious optimist, I do believe that technology plays a significant role in successful sustainable transitions. That being said, it turns out that those of us who believe in new technology as the key to reducing carbon emissions are less willing to change our own consumption behaviour.

Most people acknowledge that our behaviour is involved in climate change, yet we worried less about the greenhouse effect in 2013 than in 1989. That is a strange finding, considering how much the scientific evidence has strengthened over the last two decades. Psychologist and economist Per Espen Stoknes describes how information about risk, like the risk of global warming, is too abstract and too distant for us to actually be frightened, and how that affects our willingness to adjust our behaviour.

These are just a few examples, but they illustrate how human environmental behaviour is very complex, and how difficult it is to anticipate people's perceptions and behaviour. Stoknes elaborates on the paradox of being aware of global warming, and yet keeping up what can be considered self-destructive behaviour. He puts it very accurately: we pretend to be rational, while behaving irrationally. We need to understand the relevant psychological processes and the barriers in our ways of thinking before attempting to modify human actions. Certain measures might even have negative effects on carbon emissions; not knowing anything about environmental psychology may thus lead to ineffective measures and policy.

Though I am loath to admit it, an army of environmental psychologists is not going to save us from destroying our planet. Nevertheless, I think it is important that we take people’s behaviour and the underlying processes of that behaviour more into consideration. After all, it is nothing but human behaviour that will determine whether we succeed or fail in stopping global warming.

Life after graduating

Vegard worked as a research assistant before he started working in a market analytics firm. Martin has travelled and been working at the Norwegian Embassy in Havana, Cuba.


Vegard
MA specialization: Innovation Studies
Bachelor’s degree: North-American Studies

What did you write about in your master's thesis?

My master's thesis was about how a company (Telenor Norway) can use customer feedback in its innovation processes. I analyzed around 50,000 free-form text messages from Telenor customers and 5,000 tweets written about the company, and interviewed employees working with innovation in the company. I found that customers can be a source of smaller innovations and can inform major decisions that lead to larger organizational innovations.

What have you done since graduating from TIK?

After graduating, I first spent a little over a year working on research at the TIK Centre – first on a paper about organizational innovations, then on a project on how ICT affects happiness. Now I work as a Junior Consultant at Opinion, a market analytics firm.

In what ways has your TIK education been useful for your career?

Without the TIK education, I could not have had any of the jobs I’ve had. I have benefitted from practical skills that I use every day such as interviewing people, structuring a text properly, writing better and presenting better. Perhaps most importantly, my TIK education made me sit down for a whole year and focus on a problem, banging my head against the wall until something good enough came out. I know that is a skill I will always need going forward.


Martin
MA specialization: Innovation Studies
Bachelor’s degree: Social Anthropology

What did you write about in your master's thesis?

My thesis was about how Norwegian hospitals work to follow up new and potentially system-changing ideas from hospital employees in order to enhance and facilitate improvements. The innovation literature emphasizes the importance of innovation actors and contributors within hospitals, but at the same time there is a lack of qualitative knowledge on how these processes occur and develop. I wanted to contribute to this literature by studying eight different ideas and innovations at Sunnaas sykehus, a Norwegian hospital.

What have you done since graduating from TIK?

This last year I have been living abroad. First in Argentina, where I did some further studies, but also some travelling. I have learned the importance of a good, heart-breaking and sexy tango, and experienced the incredible atmosphere of an Argentinian football match. Since Christmas, I have been working at the Norwegian embassy in Havana, Cuba – an extremely interesting and special country to live in.

In what ways has your TIK education been useful for your career?

To be quite honest, this question comes a little early in my career. At this stage, I am not even sure I can call it a career? Nevertheless, what I have experienced is that there is an interest in persons with our particular background, and our expertise is relevant for a wide range of working fields.

Bias-debasing Bayes

By Jørgen Tresse


“Prediction is very difficult, especially if it’s about the future.” – Niels Bohr.

We, as individuals and as a species, make predictions about future events all the time. Yet we keep getting many of them wrong, and it often seems like we’re unable to improve our predictive abilities. I present here a roadmap to uncertainty, risk and the failure of predictions, hoping to leave us all a bit wiser regarding this everyday activity.

Cognitive biases

First, a word on the difference between risk and uncertainty. A risk is something you take when you know the probability of the different outcomes; uncertainty is what you have when you don't know the odds. Walking home from work today, you take a chance of dying in a traffic-related accident. However, you know that the risk of this happening is low, so you deem it an acceptable risk for walking home. Compare this to the fear many have of flying, or of terrorism. These risks are certainly much lower, but there are so many uncertainties involved that the perceived risk is higher. This leads us to adopt anti-terrorism measures much more quickly than traffic safety measures, which in turn says something about how we perceive the probability of future events – in other words, our predictions about the future.

In addition to perceived risk, we have other cognitive biases – failures, if you will – that affect our ability to clearly and accurately predict the outcomes of events. Take for example availability bias: many studies suggest that, in uncertain situations, we have an easier time remembering and drawing upon things that we are more exposed to. This makes sense, but it skews our predictions. Survivorship bias is another, a sampling bias that comes from only looking at the survivors of an event. During World War II, Abraham Wald famously helped the American Navy build sturdier airplanes. Before Wald, they were reinforcing planes based on where returning planes had been hit, not realizing that these were precisely the places where a plane could sustain damage and still survive the trip. Wald recognized that their sample consisted only of survivors, and that they had hence failed to consider where the fallen planes had been hit. Reinforcing the planes where the survivors had not been hit drastically reduced the number of fatalities.

Furthermore, we have the gambler's fallacy – the belief that just because something has had the same outcome many times in a row, the outcome is bound to change soon. The chance of a coin toss coming up heads is 50%, regardless of how many heads you have thrown in a row. When simulating coin tosses – that is, writing down what is considered a reasonable sequence of coin tosses without actually tossing them – people have been found to write down streaks that are too short. After maybe four or five heads, they feel that they have to switch to tails, even though with real coin tosses you can easily get eight or more heads before tails shows up. Humans are very good at finding patterns in data and reacting accordingly, which gives us many evolutionary advantages. As many a gambler will have experienced, however, being good at finding signal in the noise does not always work to our advantage.
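That intuition is easy to check with a short simulation (an illustration of my own, not from the studies mentioned above): even in just 20 fair coin tosses, a streak of five or more identical outcomes turns up almost half the time – far more often than the hand-written "simulated" sequences would suggest.

```python
import random

def longest_run(seq):
    """Length of the longest streak of identical outcomes in seq."""
    best = run = 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

random.seed(42)
trials = 10_000
# Fraction of 20-toss sequences containing a streak of 5+ identical outcomes
hits = sum(
    longest_run([random.randint(0, 1) for _ in range(20)]) >= 5
    for _ in range(trials)
)
print(hits / trials)  # close to 0.46 for a fair coin
```

The exact probability works out to roughly 46%, so a person faking a 20-toss sequence should, about half the time, include a streak most of us would instinctively cut short.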

The unknown unknowns

All these biases and pitfalls can be summed up as snap judgements – cognitive heuristics, or shortcuts, that allow us to make decisions quickly, without having to stop and consciously process all the information we are bombarded with every day. The Nobel laureate Daniel Kahneman, along with his collaborator Amos Tversky, is one of the best-known scientists in the field of heuristics and biases. In his 2011 book, Thinking, Fast and Slow, he lays out the differences between two "systems" we all have – system 1, which makes all the snap decisions based on heuristics like those we have already discussed, and system 2, which consciously processes information before acting on it. When it comes to making predictions about future events, we could all benefit from slowing down, recognizing our blind spots, and putting system 2 in charge.

Of course, recognizing our blind spots requires us to be aware of them in the first place – so-called known unknowns. Former US Secretary of Defence Donald Rumsfeld also gave us two other "knowns": known knowns – simply what we are aware that we know – and unknown unknowns, which is the real kicker. You can't correct a prediction for unknown unknowns, because, well, you don't know what to correct for. Yet being aware that there are unknown unknowns is a giant leap towards making more accurate predictions. A famous study by Philip Tetlock found that experts were often no better than amateurs at predicting future events, even though they stated their predictions with great confidence. This is in part due to the wisdom of the crowd, which experts almost by definition try to stand out from, but also because experts like to make predictions in others' fields (catchily named ultracrepidarianism). This leads many experts to disregard common sense, inadvertently creating their own blind spots without even realizing it. Just being aware of our blind spots, so to speak, allows us to counteract some of their effect. My proposition? Think probabilistically.

Predicting the unpredictable

One of the great proponents of increasing the accuracy of predictions is Nate Silver. He started the political website FiveThirtyEight, and correctly predicted the results in 49 out of 50 states in the 2008 US presidential election. This improved to 50 out of 50 in the 2012 election, solidifying Silver and his team as top forecasters in the game. In statistics, the reigning paradigm for more than half a century has been testing a null hypothesis (which posits that there is no relation between the variables you are examining), and rejecting it if a certain value passes a critical threshold, thereby strengthening your belief that there is a relationship between said variables. (All statistics nerds, please forgive my oversimplification of the method.) While a mathematically sound way of finding correlations, it has a few shortcomings.

Firstly, the critical value may in some cases seem arbitrary, and indeed it is: it simply represents how accepting you are of being wrong. Secondly, your results really only tell you something about your sample of the population. If we could do the impossible task of testing every individual in the population, there would be no need for the prediction in the first place, so you are always operating with a part of the whole. Thirdly, as the more savvy of you will have noticed, I use the word "correlation" instead of "causation" for a reason. A correlation does not ensure causation, and that leads me to the final point: statistical generalization is good for saying something about the present, but it is not always a good predictor of the future. For predictions we're better off using another tool, which has gained a lot of traction lately: Bayesian statistics.

Bayesian statistics, named after Thomas Bayes, encourages looking at the world through Bayesian probabilities – assessing the chance of an event in light of both the new evidence and your prior expectation of that event. When the evidence arrives, you update your prior to fit the new data. Sounds intuitive? Bayes reportedly thought so little of his findings that he didn't even consider them worth publishing. Imagine you test positive for a rare disease, and your doctor tells you the test is correct 99% of the time. How likely do you think it is that you have the disease? This depends on your prior, which is how likely you thought it was that you had the disease in the first place. If the disease is sufficiently rare, even a 1% error margin will produce many false positives, so maybe you don't need to be so worried. Of course, if a second, independent test also turns up positive, you can be quite sure that you indeed have the disease. Your prior for that second update is no longer the base rate of the disease (say 1/1000), but the probability of having the disease given that you have already tested positive once.
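The arithmetic behind this example can be worked through with Bayes' rule directly. A minimal sketch, assuming a base rate of 1 in 1,000 and a test that errs 1% of the time in both directions (99% sensitivity, 1% false positive rate – these exact figures are my assumptions for illustration):

```python
def bayes_update(prior, sensitivity, false_positive_rate):
    """Posterior probability of disease after one positive test result."""
    # P(positive) = P(pos | disease) * P(disease) + P(pos | healthy) * P(healthy)
    evidence = prior * sensitivity + (1 - prior) * false_positive_rate
    return prior * sensitivity / evidence

first = bayes_update(0.001, 0.99, 0.01)   # after one positive test: ~0.09
second = bayes_update(first, 0.99, 0.01)  # first posterior becomes the new prior: ~0.91
print(first, second)
```

Even a 99%-accurate test leaves you at only about a 9% chance of actually being ill after one positive result, because healthy people vastly outnumber sick ones. A second independent positive, updating from that 9% prior, pushes the posterior above 90%.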

Updating your probabilities for future events after an event happens seems like a no-brainer. As they say: once bitten, twice shy. However, this may lead us to premature conclusions. The most difficult part of prediction is figuring out your priors, which is hard to do post-event. This leads us back to our cognitive blind spots. After being hit by lightning you might never go out during a thunderstorm again, even though the chance of being hit is small. Your experience trumps your prior, and suddenly a one-off event defines how you interact with the world. Keeping in mind that events have a probability of happening, and that the occurrence of an event should only make you update your belief in its probability, we might all make more accurate forecasts and keep the door open for a discussion about events, the future, and the truth.

So how did Silver's FiveThirtyEight do in the 2016 election? Well, they missed the fact that Trump would win, but they gave him a much larger chance of winning than most other forecasters did. As election day rolled around, they had Trump at around a 30% chance of winning – which implies winning roughly three out of every ten such elections, far from an assured loss. With their grasp of probabilities and error margins, the team at FiveThirtyEight was not surprised by his victory. Understanding probabilities like these is something we do every day, even if we're unaware of it: if you predicted that three out of every ten times you went outside you would get hit by a car, you would probably start staying indoors.

But hey, what do I know? I’m no expert. And perhaps that’s for the best.