Forever a Pornstar, Software Says

Jørgen Tresse
TIK MA student

By any benchmark, Julie is a normal teenager in school. Then, over the course of a single day, a series of unfortunate events left her social and private life in ruins. These events also led to her transferring schools and suffering severe psychological trauma. What happened?

I made up the previous paragraph, but it is nonetheless based on true stories. Unfortunately, this is something that does happen.

It should come as no surprise that teenagers engage in sexual activities. In the twenty-first century, however, these intimate activities do not necessarily stay exclusive to the people involved. With the advent of social media and an increasing norm of sharing every detail of your life on the Internet, acts that may have been poorly thought through – or at the very least meant to be private – can be filmed or photographed and shared with hundreds of people within minutes. As Aftenposten has shed light on through a series of articles in the fall of 2017, it is not uncommon for youths to share photographs and videos of sexual activities involving their peers. It even seems to be happening regularly, and involves a wide variety of young people. A common thread is that the persons exposed – often young girls pressured into an act – are unaware of the sharing. It is also striking how opposite the reactions boys and girls receive are: their peers praise the boy as a man, while labeling the girl a slut. The photo or video in question can be shared with the whole school within a day, and can spread even further, possibly ruining a person’s social life through crowd judgement in the process.

There are popular porn niches devoted to material where the identities of the people engaging in sexual acts are known. The allure of recognising an actress is not lost on the Internet, and finding out the identity of people in pornographic videos or gifs is a hobby and skill several people pride themselves on. For example, you can find communities on the social news aggregation website Reddit where users help each other determine the name of an actress from a specific pornographic clip, or services that let you reverse image search porn stars. Recently, Pornhub – as of October 2017 ranked the 20th most visited Internet site in the US – announced that it is piloting AI-based software that identifies specific porn actresses in clips. This is supposedly so that users can more easily find their favourite actresses and fetishes, and Pornhub claims that it will only use the software on professional actresses. However, as several privacy enthusiasts have pointed out, these new features should be worrying.

Teenagers sharing videos and pictures of each other is devastating for those involved, but unfortunately it is far from the only non-consensual sharing occurring. “Revenge porn” is a category of porn where jilted exes share intimate and private content without their previous partner’s consent, with the intent of shaming or hurting them in some way. This does not just affect an unlucky few – some surveys have revealed that as many as 23% of respondents, overwhelmingly women, have been victims of revenge porn, with pictures and videos being spread on an estimated 2,000 websites worldwide dedicated to this genre. Often, this is accompanied by doxxing: the release of private information such as full name, address, telephone number and more, opening the door for widespread abuse. While there are efforts underway to limit the damage from incidents like these – Twitter, for example, is banning profiles which engage in these activities, and the US Congress is considering making doxxing a federal offence – it is easy to see how software such as Pornhub’s may exacerbate the problem.

It is common for a technology developed for a certain use to be applied in other areas, or by actors with other needs. Viagra, for example, was originally intended as a heart medicine, Listerine as a cure for gonorrhea, and the Frisbee as a pie container, but none of these uses are what they are best known for today. Serendipitous discoveries happen a lot in the fields of science and technology, but one does not always simply stumble upon a new use; actors with malicious intent can actively search for ways to warp a technology to fit their needs.

If facial recognition AI is used on private videos like those discussed here, it can connect those videos to a person’s full digital profile, making personal information all the more accessible. Aftenposten focused on a youth culture where actions have led to the courtroom, but for Julie, the girl who had to transfer schools, this may be small consolation. Starting over somewhere new, or waiting until content is forgotten, is hard enough as it is without the content being made easier to find and tagged to a person so that it can follow them for the rest of their life.

Common sense tells us that anything shared on the Internet is on the Internet forever. The least we can hope for is privacy through hiding in the massive overflow of content that is out there.

Photo: © Andrii Zastrozhnov/Adobe Stock

Apocalyptic Blindness and the Atomic Bomb

Hannah Monsrud Sandvik
ESST MA Student

The mere existence of the atomic bomb carries with it the possibility of the complete annihilation of all forms of life. Through an investigation of the nature of the bomb, we can better understand the relation between technology and the effects machines have on our lives.

Technology is persistently praised for its ability to connect and unite us. In perhaps no case is this more apparent than with regard to the atomic bomb, which in an absolutely inclusive sense affects us all simply by existing. The increasing power struggle between the US and North Korea, and recent reports that the latter has successfully tested hydrogen bombs, only serve to underline the fact that the current atomic situation should be our greatest worry.

Few have written as extensively and profoundly about the atomic bomb as the Austrian philosopher Günther Anders (1902-1992). For Anders, the dropping of the atomic bomb on Hiroshima on August 6th, 1945, marked the beginning of an era where the entire world at any moment could be turned into post-nuclear ashes. The atomic bomb is more than a weapon of mass destruction: because the bomb makes it possible to obliterate all life on earth, we are confronted with a new existential condition. As Anders writes, “the possibility of our final destruction is, even if it never happens, the final destruction of our possibilities.” (My translation.)

In the 1960s, Anders started a correspondence with Claude Eatherly, the American reconnaissance pilot who declared the weather conditions satisfactory for dropping the bomb. Their writings were subsequently published in the book Burning Conscience, a collection of letters reflecting upon the human condition in the atomic age1. Eatherly was the living example of everything Anders thought about the bomb. After Hiroshima, Eatherly was celebrated as a war hero, but he struggled to come to terms with his role in the bombings. Subsequently he attempted suicide, went through a divorce and committed several armed robberies, though without ever actually stealing anything. In Anders’ view, these were acts of repentance: a way of seeking a punishment Eatherly felt he deserved but never got.

The reason the Eatherly case is so interesting is that it shows how technology turns us into cogs in large machineries and removes us from the relation between cause and effect. Anders calls the gap between our ability to imagine something and our ability to produce it the promethean gap2. The fact that I push the button seems unrelated to the fact that millions of people die as a direct result of this. It is paradoxical that pushing a button is easier than killing one single person, but this is the case because the larger the possible effect of a certain act, the more difficult it becomes to imagine that effect. Adolf Eichmann, one of the lead organizers of the Holocaust, used this line of argument to make the case that he was not guilty for the role he played in the murder of millions of Jews – he was merely following his superiors’ orders. In the Eatherly letters, Anders turns the argument around. Morally speaking, Anders argues, there is no such thing as ‘mere co-acting’ – whatever we’re partaking in doing, promoting or provoking is being done by us, and using Eichmann’s excuse is the same as abolishing the freedom of moral decision and the freedom of conscience. Eatherly’s feeling of guilt, therefore, was an entirely appropriate response.

Employment in a Robotised Future

Ana Gviniashvili
TIK MA Student

Society today does not suffer from unemployment caused by overly effective weaving machines.

The battle between humans and technology goes back centuries. Technological progress no longer means just machines doing repetitive work and drudgery, but AI technologies that are able to think and mimic cognitive behaviour. One day in the not too distant future, there might be a robot writing this kind of article, or driving our cars – to some degree, both of these examples are already happening. AI is already entering areas of the job market that were previously considered unlikely to be automated. As technologies get smarter and more humanlike, and as the focus on automation in both the public and private sectors increases, there is rising concern about the future job market. Firstly, is automation a positive or negative thing for humans in the workforce, and secondly, will technologies actually replace human labour altogether?

Technological progress has always had a direct impact on employment, and was a driving force in shifting jobs from agriculture to manufacturing, and onwards to service and management occupations. Despite the unemployment technological development caused while these transitions happened amidst protests and anxiety, technologies have had a positive rather than negative effect on long-term wealth and social well-being. According to a McKinsey & Company report published in January 2017, productivity is estimated to have grown by 0.3% annually between 1850 and 1910 following the implementation of the steam engine. The same thing happened when early robotic and IT technologies were introduced, with the resulting productivity growth reaching 0.4% and 0.6% respectively. The report predicts that the same will happen after the adoption of AI and robotics: by 2065, annual productivity growth is predicted to be between 0.8% and 1.4%. In addition, while productivity has grown, working hours have been reduced by nearly 57%.

However, the implementation of new technologies in companies is linked to unemployment. Due to the growth of artificial intelligence, both skilled and unskilled occupations are at risk of automation. Machine learning gives AI technology the ability to learn from experience without every step of the process being explicitly programmed. Thus, with enormous amounts of data, AI can enrich its experience far faster than humans can, and can potentially carry out work that requires decision making, analysis or (eventually) even creativity. According to McKinsey, nearly 42% of working hours in Norway can be automated with today’s technologies.

The potential for automation differs across activities and sectors. Frey and Osborne (2013) concluded that nearly 33% of jobs in Norway will be automated within a decade or two. Jobs with predictable, repetitive tasks, lower wages and lower education requirements are at higher risk. Examples include manufacturing, food service, transportation, accounting and bookkeeping professionals, and general office clerks. Less likely to be automated are jobs requiring higher education, wages and specialisation – such as the people determining firm strategy, human resource functions, marketing, educational services, software developers, financial analysts, etc.

When considering what the job market will look like in a decade or two, it is important to note that automation of jobs does not necessarily mean that whole occupations will disappear, but rather that some tasks within an occupation will be automated. For instance, fraud detection specialists in finance have changed their focus from looking through physical transaction records, and now work with training intelligent machines and designing increasingly fraud-safe processes. Their jobs still exist, but AI has freed up their time for other, value-creating activities.

However, we have jobs that might disappear entirely, while new jobs arise in their place. For example, when self-driving cars are implemented there will be no need for drivers, but new jobs are created in design, development, monitoring, implementation of AI and robotics.

In general, it is difficult to measure exactly the difference between eliminated jobs and new occupations, but we have seen this kind of process before: industrialisation, computerisation, and now AI and robotisation. Every time we have invented technologies to take over human labour, people have shared the same fear of technologies taking over their niche in society. Regardless, what we have seen so far is that due to technological development, efficiency and productivity have grown, which has resulted in a better economy and a higher standard of living. The people displaced by technological change eventually adjusted to the new reality after the change had come about – the extent of social unrest and short-term unemployment was largely determined by their ability to adapt and find a new role in society. Therefore, when we meet the disruption of robots and artificial intelligence, our focus should not be on constraining development, but on ensuring that the human part of the workforce has a high degree of flexibility and ability to adapt.

In this process, governmental and educational systems will play an important role, and they need to adjust in order to give people the opportunity to requalify themselves during their careers. Take the example of Singapore, which promotes lifelong learning and offers a $345 credit to citizens over 25 that can be used to take courses at approved universities or online, in order to remain competitive in a changeable, technologized job market.1

In the end, the same question remains – will the robots take our jobs? I do not think so, but evidently, AI technologies and robotics challenge the traditional ways of employment, and being good at one thing no longer guarantees long-term employment. Nevertheless, history has taught us that smashing the weaving machines was not a good solution to unemployment, indicating that flexibility and openness to learning new skills will be key when facing the future labour market.

1. “Retraining low-skilled workers.” The Economist. (accessed November 24, 2017)

Photo: © Sergey/Adobe Stock

What doesn’t kill you…?

By Nora Vilde Agaard

We have all felt it.
The uncomfortable feeling when something does not turn out the way we hoped. The gut-wrenching feeling of failure, lingering in our bodies and disturbing our sleep. I still find myself awake some nights thinking about that one time in second grade I called my teacher “mom”. Failures may range from ever so innocent human errors, like locking the car keys in the trunk, to more serious incidents, like missing a deadline at work or failing an exam. Left with a bruised self-esteem, we are told the clichés that we should “learn from our failures” and that “what doesn’t kill us makes us stronger.”

There has been an enormous increase in startups applying for funding from Innovation Norway over the last couple of years. Almost 3,000 companies and individuals applied for startup funding in 2015, a 120% increase from 2013. Clearly a lot of people wish to create their own workplace, but far from everyone succeeds. Who can predict what will work and what will not? Perhaps no one – so maybe learning from our failures is the best way to succeed?

In 2008, Cassandra Phillips found herself pondering this exact question when she was about to launch her first startup. Having friends in the startup industry, she attended conference after conference where successful entrepreneurs kept going on and on about how successful they had been. She noticed that the only startup founders invited to speak at conferences were, quite naturally, those who now harvested the fruits of their success. This realization led to the birth of FailCon. The first FailCon was held in San Francisco in 2009: a one-day conference where technology entrepreneurs, investors, developers and designers were given a free and non-judgmental space to discuss the bumps in the road and the challenges they had met. FailCon, ironically, turned out to be a success. Over 400 people attended the first conference, and the number of attendees has continued to increase over the past nine years. FailCon has also expanded geographically, being held in Oslo, Tel Aviv, Lyon and Barcelona, among other places. There are tentative events scheduled for late 2017 in Mongolia and London. According to Phillips, the response to past events has been exclusively positive. But regardless of its success, FailCon is about to change course.

Phillips points out how the focus on failure as a crucial learning experience has increased over the last couple of years. The Internet is overflowing with blog posts and accounts of failure, and of how, if possible, to avoid the more serious ones. One quick Google search provides almost 600,000 articles and blog posts more or less connected to entrepreneurial failures. With both the growing awareness of the positive aspects of failure and feedback from FailCon’s audience in mind, Phillips and other founders have new plans for FailCon. The conversations will be moved into more regular and intimate environments – places where people can iterate and collaborate on solutions, not just share problems. Evidently, the focus on failure not as a taboo and embarrassing human phenomenon, but as an important learning experience, is greater than ever. At least if we are to follow the catchy (if slightly clichéd) slogan of FailCon: “Stop being afraid of failure and start embracing!”

So, even without attending a conference about primary school failures, what did I learn from that one time I called my teacher mom? Perhaps not very much, but I certainly learned never to make that mistake again.

Gramps and his pet robot

By Eili Skrivervik

Sociable robots are entering health care – both a necessity and an ethical issue.

Sociable robots are raising many questions about how we take care of our elders, and how we plan to take care of them in the future. For a lot of people, it seems first and foremost to be a question about feelings – not the feelings of the elderly themselves, but the feelings of relatives and caregivers. Should we feel guilty about the sight of gramps caring affectionately for his pet robot?

Paro the sociable robot

Paro is a seal-like sociable robot designed to interact with humans and encourage emotional attachment. Targeted towards the elderly in nursing homes, Paro has been shown to calm the distraught and depressed. Still, many seem worried about this development. The term sociable robot implies a robot that is able to “understand” people through communication and interaction. With a growing number of elderly patients to care for, and with constraints of cost, efficiency and a shortage of human personnel, introducing robot assistance to care homes and hospitals seems inevitable. How should we do this, considering the political and ethical concerns?

The demands of an aging population

According to the World Health Organization, the number of people aged 60 and over will double by 2030. Thanks to developments in medicine, technology and a generally safer world, global life expectancy has seen an unprecedented increase in recent decades, challenging our current health system capacities. How we take care of the growing elderly community on a global basis is an increasingly important task.

Artificial care and ethics

One of the major concerns when it comes to inviting robots into elderly care is the ethical questions that arise. Are the robots just poor substitutes for human health personnel, or are they the only way forward? Are the elderly left dehumanized as a result of robot care? Is it ethical to let emotionless machines take care of emotional humans? Loneliness makes people ill, and part of the goal of sociable robots is preventing loneliness. If we succeed in creating a viable substitute for human care personnel, making our elders happier and healthier, does it matter how we do it? I think the reason we raise so many questions around robotics in care is fear, uncertainty and doomsday stories about technology. Technology can be hacked, and robots perform tasks without contextual understanding or knowledge, and without affection. There is something dubious about a device showing affection without really possessing it.

Looking ahead

Many agree that the use of robots in elderly care could be a necessary next step. With rapid advancements in technology, robots of the future may still be emotionless, but they may not appear to be. Does the prospect of this make us feel better, or worse? If a robot could be built to appear as sensitive and caring as a human, would that make it okay? The current options are not really viable; the number of personnel required in the coming decades to take care of the growing number of elderly is beyond what the global community is likely, or even able, to provide. Besides, economically it doesn’t add up. So maybe the question we ought to ask is how we could possibly let our elders go without the best care available?

While gramps is petting his robot, we are left with questions relating to our own vulnerability and inadequacy. Is it perhaps time to ask what the elderly want?

Insecure Cayla

By Anne Waldemarsen

Have you heard about My Friend Cayla?
She – or, more precisely, it – is a smart doll produced by the American toy company Genesis. In addition to making an innovative and highly popular doll for children, the company continuously receives international attention due to the doll’s undesirable characteristics. Lurking in the shadow of Cayla’s bright smile lies a threat to children’s right to personal privacy, rendering them vulnerable to criminal activity.

The doll has an appealing design, with big, bright eyes, shiny hair and neat clothes. Her core purpose: to become friends with your child. However, Cayla has a hidden agenda that enables a type of social interference with children far beyond what any parent would prefer. White hat hackers and consumer groups discovered that this sweet little doll could be converted into something rather sinister and creepy, like a haunted doll from a horror movie. Cayla’s alluring appearance and obliging nature are designed to be highly desirable. The doll is only one example of an increasingly prevalent phenomenon: electronic devices given interactive qualities, connectable to both the internet and other objects, and to some degree intelligent, becoming part of the Internet of Things. According to the company’s website, Cayla can talk and interact with children, tell stories, play games and share photos.

Smart doll

The doll is equipped with a microphone and connects to an app through Bluetooth in order to access the internet. It transmits audio recordings and exchanges them with a third-party software company. Cayla has two modes: offline and online. When online, Cayla can answer almost all sorts of questions; she can give you information from Wikipedia, tell you about the weather (because kids these days are known for being weatherphiles), or tell the time. In addition, Cayla is an educational toy, helping solve mathematical problems and spell words correctly. Even in offline mode, she (I keep referring to it as a person) can answer over a thousand questions, though only about herself. It is easy to see the appeal. As a kid’s toy, My Friend Cayla seems incredible, and compared to the Furbies of only two decades ago, she is. That being said, the amazing characteristics of such products rarely come without a downside.

“Private” toys

It has been revealed that total strangers can exploit the technological traits and lack of barriers within the device; this has been demonstrated in several videos available online. The toy does not require Bluetooth authentication, leaving the doll vulnerable to any hacker within a range of approximately ten meters. Even though Cayla comes with filters to prevent the uttering or display of content that is inappropriate for children, this filter has also proven hackable. Imagine having strangers listen in on what the doll is recording, or use it as a speaker to talk to the children. This is not the only alarming aspect regarding children’s safety and privacy: children’s conversations are recorded through the microphone and sent to a third-party company, Nuance, which specialises in voice recognition. It is uncertain how this data can be used, which is especially concerning considering that users are encouraged to reveal sensitive information when setting up the doll, spurring questions about the company’s motives. In other words, it is not just the doll’s weaknesses that can be exploited by hackers and pedophiles. Can we blame the manufacturers for having bad intentions, or was it “just” heedless action? In December 2016, the Norwegian Consumer Council conducted a technical test of Cayla, claiming that the manufacturers violated both the Personal Data Act and the Marketing Control Act. They also concluded that the doll is vulnerable to hacking and exploitation, and they pushed for a halt to sales in toy stores located in Norway. The Consumer Council points to Germany and the handling of the doll there. The German Federal Network Agency banned the doll in 2017 and demanded that parents destroy any doll in their possession, calling it a device for disguised espionage. Instead of interacting with German children, My Friend Cayla now sits in a glass case in The Spy Museum in Berlin as a warning to visiting parents.

Children’s Toys in the Internet of Things

The case of Cayla is neither new nor unique. Microphone and camera recording is becoming a common feature in modern toys, along with the applications they interface with, eventually forming a subsection of the Internet of Things: the Internet of Toys. When it comes to safety in toys, our focus has traditionally been on loose parts or poisonous fabrics. So far, we can read a clear message from the international handling of My Friend Cayla: personal security in digital devices must be prioritized. In order to keep up with technological advancements and benefit from them in all possible ways, we need to establish a firm and internationally standardized framework of regulations, as well as norms in informatics education concerning potential flaws in interactive technologies. After all, privacy is a right – not a privilege.

The Toys Yet to Come

Removing harmful items from stores in response to public demand is comforting, and it is an example of how our societies do have a say in technological development. Yet banning the doll is not the same as removing the problem. There will be other Caylas. Just think: such products can legally enter the market, our homes and, not least, our children’s bedrooms. It is testimony to a society that has failed to focus on humans, human rights and human values in a world with cool, science fiction-like characteristics.

Who will help the carless driver?

By Eirik Venberget

Most of the things you own, and certainly all of what is for sale in any supermarket or store, has been on a truck. Trucking is such a crucial part of our economic system that it has become an integral part of our lives. Yet we seldom stop to think about the role the truck – and the driver – plays in getting us all the things we so deeply desire and rely on.

On the other hand, there is something we have known for a long time: the driverless car is coming. In fact, car manufacturers have been researching and developing autonomous cars since the early 1980s, intended to make our lives more pleasant, productive and safe as we commute or travel. The technology of today is sufficiently sophisticated to start rolling out autonomous cars. However, legislation and perhaps overly cautious scepticism have prevented this from happening on a large scale.

Few have considered the revolution this will eventually bring for career drivers. If you think Uber is a real challenge to the taxi industry, wait until Uber drivers are made wholly redundant as you drunkenly instruct your car to take you home after a night out. If you think Amazon and its custom-made jumbo jet revolutionised parcel delivery, wait until the book you just ordered is dropped off in bubble wrap by an autonomous drone. And if you think cheap labour from foreign countries took work away from local lorry drivers, wait until our roads are filled with self-driving trucks, stopping only to drop off our electronics, clothes and food.

According to Statistics Norway, there were 62 000 occupational drivers in Norway in 2014. Or should we perhaps refer to them as future unemployed Norwegians? In the US, there are 3.5 million truck drivers. They make up the biggest job sector in 29 out of the 50 states. Needless to say, their future labour displacement will significantly impact the American economy. Many or most of these people will not have the necessary education and experience that is required to easily transition into another occupation.

We have failed to prepare for this inevitable change. And make no mistake, it will have serious ramifications for our society as a larger part of our workforce becomes long-term unemployed. Much media attention has been given to the loss of manufacturing jobs from the western world to developing countries. However, manufacturing in developed countries has been steadily declining since the 1950s. This next robotic revolution might happen at a much faster pace, and with even more serious ramifications.

The pace and severity of this development will be determined by suppliers, consumers and regulators. On the supply side, a plethora of technology companies, car producers and research institutions are already knee-deep in the mass-production of driver-assisting tech and hardware, including Google, Tesla, Apple, BMW, Audi, Toyota, GM, Nvidia, and VW. Traditionally, suppliers have been ahead of the curve in autonomy, as seen when Tesla surprised the market by introducing its driver-assistance software update to the Model S. It is difficult to accurately predict how consumers will react to self-driving cars, but from a business point of view one can easily imagine that general managers in the logistics industry see the benefits of investing in self-driving trucks. After all, self-driving trucks will never ask for raises, pensions, days off, or health insurance.

Precisely this is what should worry our legislators. Business is fundamentally rational; if it can save on expenses, it will. It is the surplus of labour that will come from automation that we need to focus on – much more than technical and legal aspects such as who is to blame if there is an accident. Until now, the latter has been given prominence. It must be said that autonomous vehicles will play an important part in solving many of the problems we face in the world today. But if we fail to act on the side-effects now, they might come back to haunt us later – or at least to haunt the truck drivers.

Moving beyond artificial intelligence?

By Frans Joakim Titulaer

Artificial intelligence has in many ways failed as an idea, despite the huge attention it continues to receive. We constantly believe that the technology is about to bring about change of epic proportions, possibly redefining what it means to be human in the process. In many ways, the popular construct of AI is an expression of something much harder to define than commonly thought. It is caught up in a battle for status between the philosopher and the scientist – largely leaving unexamined the technologies themselves, and the role they play in society.

With all new technologies, it can be difficult to separate fact from hype, so let's aim straight for the gut of the matter. The problem with our conception of AI is that we are not talking about a specific technology, but about the borderline between what is mechanical and what is conscious. Alan Turing famously defined the test of AI as whether or not a computer could convincingly pose as a human. That is why the persistent problem with our way of conceptualising AI has been that we disregard each attempt at it the minute we can see through the principles by which the machine operates.

Take for example IBM’s Deep Blue, which in 1997 became the first computer to beat a reigning chess world champion under regular time controls. While this was an impressive feat, anyone who has played a chess bot knows that the interaction is not much different from one you would have with a hand-held calculator. Even though the mathematical principles that allow answers to appear instantaneously are ingenious, the technology itself isn’t – and it doesn’t capture our imagination.

The year of AI

2016 has been hailed as the year in which the public woke up to the realities of AI technology. In the business world especially, AI has become a major buzzword. Firms are selling products that help other businesses improve their marketing and communication, project management, human resource management and much more, all through so-called AI technologies. But what does it mean to say that AI exists today, and how does it differ from our ordinary computing technology?

We have grown used to the idea that machines are built to last and to persist in the form and function in which they were designed. The entire idea of man overcoming nature builds on the premise that we build enclaves in which the degenerative effects of the cyclical nature of life are shut out. But the computer is different: it is meant to change over the course of our interactions with it. With our evolving use of computers today, no one could build or design one that would be expected to last. The ordinary modern-day computer already straddles the dichotomy between the natural and the mechanical. It is meant to degenerate, albeit within the confines of its own system – computers already come with programs that make this process automatic.

The idea of machine learning is based on the premise that these forms of automatic regenerative processes could incorporate processes of change that separate them from the computer’s previous design, or from other designs installed and maintained by human technicians. The machine would do this based on its interaction with the user, and its general relationship with the outside world. The AI technologies currently hitting the market are highly specialised. Modern machines can automate a large range of complex tasks. However, this is not the same as saying that any one machine could perform all these tasks.
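This premise can be sketched in a few lines of code. The toy classifier below is purely illustrative – the function name and the data are invented here, not taken from any library – but it shows the core idea: the program’s behaviour is not fixed at design time, it is induced from the examples it has seen, and it changes as new interactions are added.

```python
# A minimal sketch of the machine-learning premise: behaviour induced
# from example data rather than fixed at design time.

def nearest_neighbour(train, point):
    """Classify `point` by the label of the closest training example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda ex: dist(ex[0], point))[1]

# The same code gives different answers as its experience (data) changes:
experience = [((0, 0), "cold"), ((10, 10), "hot")]
print(nearest_neighbour(experience, (1, 2)))   # prints "cold"

experience.append(((2, 2), "hot"))             # a new interaction reshapes behaviour
print(nearest_neighbour(experience, (1, 2)))   # now prints "hot"
```

No line of the program encodes the rule that (1, 2) is "hot"; that answer emerges from the accumulated data – which is exactly what distinguishes a learning system from a calculator.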

Moreover, it is the combination of tasks that make these computations hard. The dream of an artificial intelligence that makes a machine able to deal with ‘the task at hand’, whatever that might be, is still out of reach.

Rather than approaching this kind of ‘general AI’, recent AI breakthroughs have come by connecting machines to large data-sets and letting them specialise on one task within a huge array of ‘situations’. While these AIs can hardly be ascribed autonomous agency, this is not to say that they cannot be extremely useful. The breakthroughs of the last few years have come from a few highly prominent technical developments released by the giants of the digital landscape. IBM Watson is already being used by nurses across the world to answer difficult questions that earlier required a specialised doctor. What is more, IBM Watson will act as a principal agent for companies adapting their ‘connection to the cloud’ in ways that single companies cannot afford to develop by themselves. Companies like IBM are selling technologies that allow these firms to form alliances with each other, send their data through a shared database, and over time benefit from the algorithm that is co-produced through information economies of scale.

The creature beyond the myth beyond the hype

Our culture has evolved an almost instinctive understanding of the simple principles that guide our so-called ‘smart’ devices, and public discourse has developed rapidly in the last few years in preparation for the next technological revolution. A few years ago, this was generally referred to as «Big Data», but it is now mutating into something akin to AI. The term is worth picking up already at this stage, because the rapid spread and combined force of bots, open/monetised data (i.e. blockchains) and augmentation design (like virtual reality or interactive infographics) will, in all likelihood, usher in the next generation of the internet. (For those keeping track, our current generation is «internet 2.0», characterised by sharing technologies such as YouTube and Facebook.)

The next-gen version, internet 3.0, has for some time been envisaged by Tim Berners-Lee, the inventor of the World Wide Web. He proclaims that the internet contains the ingredients to live up to its «original» vision of a liberating «free flow of information» among all users. However, as Berners-Lee now also argues, although it is notoriously hard to eliminate a package of information once it is dispersed throughout the system, this does not mean that all users can benefit from its existence, and thus general access to these utilities is being constrained.

The original version of the WWW was designed for the sharing of research documents among scientists at CERN. These were very large data sets, but nothing like today’s ‘humongous’ Big Data. The sheer size of the internet today means that we are ever more desperate to have the data contained in scientific articles, as well as other kinds of valuable knowledge objects, given to us directly. «Linking the data», as Berners-Lee calls it, means giving the internet more of a unifying body in which to hold the information. This is one reason why we could think of the future of the internet as ‘semantic’ – adapting to the question (and ultimately the situation) of the user. Everyone who can access this body could access the information. In practice, it comes down to implementing a design standard at a level ‘deeper down’ than the web pages we are now used to accessing. This process is a constant battle between those who work to regulate the construction of this highly dispersed object (such as the W3C, which maintains the HTML standard) so that it lives up to its potential, and those who seek to benefit from the autonomy of certain channels of access.
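The principle of linked data can be sketched with a toy example. The tiny triple store and vocabulary below are invented for illustration – they do not follow any actual W3C standard such as RDF – but they show the underlying idea: facts stored as subject–predicate–object links that a machine can follow from one piece of data to the next.

```python
# A toy "linked data" store: facts as subject-predicate-object triples.
# The identifiers and predicates are made up for illustration.

triples = [
    ("paper:42", "hasAuthor", "person:lee"),
    ("person:lee", "name", "Tim Berners-Lee"),
    ("paper:42", "topic", "linked-data"),
]

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern (None acts as a wildcard)."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# Follow a link: who authored paper 42, and what is that person's name?
author = query("paper:42", "hasAuthor")[0][2]
print(query(author, "name")[0][2])  # prints "Tim Berners-Lee"
```

Because every fact points at shared identifiers rather than living inside one page, any machine with access to the store can traverse the links and answer questions the original publisher never anticipated – the ‘semantic’ quality the article describes.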

If 2016 was the year when our idea of artificial intelligence met with reality, then it is high time for us to move beyond the dichotomies of machine/human and mechanical/living. Let the ‘purist’ philosophers step aside; retire the popular gimmick of the Turing Test and yield its place to distinctions between specialised and generalised technologies. Let’s not ask how easily we could be fooled by a machine, but how difficult it is to split it apart and contain it.

Packaging in the 21st century

By Eili Skrivervik
This article is written in collaboration with Grønt Punkt.

About one third of the food produced never makes it from farm to fork, according to the UN’s Food and Agriculture Organisation. Food waste is a growing, global issue, and plastic packaging is just one part of it. Plastic is an important food-packaging material due to both its low weight and its protective qualities. However, it isn’t very sustainable.

Squishy water

Skipping Rocks Lab is on a mission to eliminate plastic water bottles. Their first product, Ooho!, is a biodegradable and edible capsule for water made from seaweed. The gelatinous packaging is compostable and edible, and meant to be peeled off like fruit. The process they are currently developing allows the capsules to be made on the spot, just before consumption, eliminating the need for trucking bottled water long distances. Ooho! tastes and hydrates just like water, yet it doesn’t pollute, and it puts an end to the ugly footprint plastic bottles leave on the environment. The capsules are currently being trialled at events as an alternative to plastic bottles. Whether people are ready to drink from a squishy, jellyfish-like blob remains to be seen.

Food waste turns into packaging

Food ends up as waste for a number of reasons. Badly stored produce, produce that goes bad or gets damaged in transport, items that never get picked off supermarket shelves, food marked with wrong expiration dates or that goes out of date, and improperly packaged items all end up in landfills. On top of that come all the resources that go into producing and transporting the food. With changing values and increased awareness, many companies and organisations are making honest efforts to reduce food waste. Researchers at Egypt’s Nile University are even turning food waste into packaging. They buy shrimp shells discarded by restaurants, supermarkets and local fishermen at low prices, and use them to make plastic. Chitosan, extracted from the shells after they are dissolved and dried, is the key component of the eco-plastic, which can be used to make anything, including packaging, and also has antibacterial properties. Estimates suggest Egypt imports around 3,500 tonnes of shrimp annually, producing about 1,000 tonnes of shells as waste. Making a sustainable product from this waste is a step forward for green packaging and the circular economy. Although the biodegradable plastic bags aren’t commercially available yet, the project does have the potential for large-scale industrial production.

Production of plastic is reported at nearly 300 million tons annually, half of which is for single use. Packaging is the largest end-use market segment, accounting for just over 40% of total plastic usage. IKEA is one of the companies looking to swap plastic packaging for smarter, more environmental solutions: out goes polystyrene, in goes fungus. The fungi packaging, developed by the US-based firm Ecovative, will be a big step forward for the furniture giant – moving from polystyrene, which is tricky to recycle, to a biodegradable option. When a retailer the size of IKEA makes a clear effort to reduce its use of plastic, to satisfy customers’ wishes and to leave a greener footprint, it indicates a major shift.

We are still a long way from making packaging waste disappear. But we’re working on it.


By Siv Helen Gjerstad

I went to the library the other day, looking for a new book on environmental psychology that I wanted to read. When I discovered that neither of the two copies of the book was checked out, I was happy that I could pick it up right away. But as I kept thinking about it, the fact that nobody else was reading the book was disappointing. Why don’t we care more about the psychological aspects of sustainable transitions?

In 2015, eighty percent of Norwegians believed that climate change is caused by human activity. So most of us do admit that we are the source of environmental change. Yet we don’t feel much personal responsibility to do anything about it, nor are we willing to make significant sacrifices to make a change.

At the Center for Technology, Innovation and Culture (TIK) we talk as if climate change and global warming are things that will have a significant impact on our lives if we don’t take serious action. We talk about the policy changes we need to make in order to adapt our fossil fuel-based lifestyle to the goals of sustainable development. But do we know enough about how people react to and act upon this information about climate change, and the policies put in place to counter it?

Research on environmental psychology and behaviour is not a large academic field, but it is growing. The interesting thing about exploring these aspects of the human mind and behaviour is that they are often less intuitive than we might think. One would assume that people who care about the environment would behave in a more environmentally friendly way, living lifestyles with lower carbon emissions than those who do not care. However, recent studies suggest that there is no link between the desire to project an environmentally friendly image and environmentally friendly behaviour. Neither energy use in travel nor consumption with significant environmental impact seems to be affected by people’s attitudes. Denial and irrationality are important aspects of the human mind, and we need to address them.

Another example I find interesting is how studies indicate negative spillover effects from one environmental activity to another. Festinger’s popular theory of cognitive dissonance would predict a catalyst effect between one environmentally friendly behaviour and the next; on the contrary, people can actually be less prone to adopt further environmentally friendly measures when they are already practising some. Instead of being motivated to adopt a more environmentally friendly behaviour profile, people feel they have already contributed enough to the ‘common good’. Being a student at TIK and a cautious optimist, I do believe that technology plays a significant role in successful sustainable transitions. That being said, it turns out that those of us who believe in new technology as the key to reducing carbon emissions are less willing to change our own consumption behaviour.

Most people acknowledge that our behaviour contributes to climate change, yet we worried less about the greenhouse effect in 2013 than in 1989. That is a strange finding, considering how much the scientific evidence has strengthened over the last two decades. Psychologist and economist Per Espen Stoknes describes how information about risk, like the risk of global warming, is too abstract and too distant to actually frighten us, and how that affects our willingness to adjust our behaviour.

These are just a few examples, but they illustrate how complex human environmental behaviour is, and how difficult it is to anticipate people’s perceptions and actions. Stoknes elaborates on the paradox of being aware of global warming while keeping up what can only be considered self-destructive behaviour. He puts it very accurately: we pretend to be rational, while behaving irrationally. We need to understand the relevant psychological processes and the barriers in our ways of thinking before attempting to modify human actions. Certain measures might even have negative effects on carbon emissions; ignoring environmental psychology may thus lead to ineffective measures and policy.

Though I am loath to admit it, an army of environmental psychologists is not going to save us from destroying our planet. Nevertheless, I think it is important that we take people’s behaviour and the underlying processes of that behaviour more into consideration. After all, it is nothing but human behaviour that will determine whether we succeed or fail in stopping global warming.