The Politics of a Cybernetic World. Exploring today’s digital world through the historical lens of cybernetics

*** This event is now fully booked. If you want to reserve a seat on the waiting list, you can email anne.hovingh@student.uva.nl ***

What: A creative and engaging event exploring the politics of cybernetics with Katherine Hayles, Luc Steels, Andrew Pickering, and Ricarda Franzen
When: March 23, 4-7 PM
Where: Crea Muziekzaal, Nieuwe Achtergracht 170, Amsterdam
Entrance: free, registration required
Funded by the Netherlands Organisation for Scientific Research (NWO) as part of the research project Safeguarding long-term equal stakeholdership in the Smart City, and by the Center for Urban Studies of the University of Amsterdam as part of a collaboration with the Sheffield Urban Automation Institute

This is the concluding event of the two-day seminar The State of Cybernetics. The digitization of cities, bodies and communities. More information about this seminar can be found below.

What do cities, robots, corporations, political organizations, human bodies and the ecological environment have in common? For the scientists involved in the development of cybernetics in the 1940s, this was anything but an awkward question. In seminars organized across the world, the cyberneticians came to think of humans, machines and the social and natural world as identical in their informational essence. In their intellectual and hands-on experimentations, they called forth a world in which machines, bodies and nature are entangled as complex, permanently evolving systems. As they theorized information to flow ever more effortlessly within and between these systems, they conceived new modes of social organization and political subjectivity. Humans no longer appeared as sovereign and bounded individuals but as circuits of polymorphous informational systems.

The purpose of this afternoon is to revisit the legacy of cybernetics to shed light on contemporary digital politics. Many of the fundamental questions asked by cyberneticians regain salience today. What remains of liberal individualism when the boundaries between humans, machines and nature are blurred? What are the systemic properties and operating routines of democracy in a world in which machines and humans are increasingly entangled?

Program:

4-4.30 PM performative reading directed by Ricarda Franzen

“Cybernetics Performed”
A theatrical reading of the Macy Conferences, directed by Ricarda Franzen (University of Amsterdam)

This six-week theatrical research project was motivated by an interest in the content and form of the Macy conferences on cybernetics (1946-1953), described as “a moment when a new set of ideas impinged on the human sciences and began to transform some traditional fields of inquiry” (Heims 1991). Together with the four performers, advised by Dorien Zandbergen, and building on an initial idea of David Gauthier’s, Ricarda Franzen directed the actors in exploring the performative potential of a text she composed entirely out of the original transcripts of the Macy conferences. While the performance features a number of noted cyberneticians, conceptually it centers on the figure of Gregory Bateson as observed through the eyes of his daughter, who would go on to write an ethnography of a 1968 conference.

Performed by Jono Freeman, Kaylee Spivey Good, Merel Eigenhuis and Alzbeta Tuckova

4.30-5.30 PM lecture Katherine Hayles and short Q&A

“Does a Computer Have an Umwelt? An Exploration of Meaning-making Beyond the Human”
Keynote lecture professor Katherine Hayles (Duke University)

This talk explores the possibility of meaning-making beyond the human and beyond the biological, into artificial forms of cognition. Many of our environmental crises today can be understood as consequences of an over-emphasis on humans as the most important species on the planet and an under-recognition of meaning-making among nonhuman animals and plants. Exploring that possibility opens up new ways of understanding how meaning-making occurs, and thus sheds new light on cognitive assemblages, where humans and computational media interact. Jakob von Uexküll’s “umwelt” theory, which he articulated in the 1920s and 1930s, proposes that biological lifeforms construct subjective worlds for themselves based on the kinds of sensory systems they have and their environmental interactions. In addition, von Uexküll was an early cybernetician, proposing feedback mechanisms for many biological systems. Although von Uexküll’s work remains central to biosemiotics, the cybernetic aspect is little known or cited. This nearly forgotten thread suggests the possibility of expanding the umwelt beyond the biological. Computers, like biological organisms, know the world through the data available to them, which may be limited to their programs or may extend into the world through sensors and actuators. The crucial element that the umwelt idea adds to existing discourse is the link between sign and meaning, potentially casting new light on the ways in which computational media construct meanings for themselves as subjects. This talk will explore that possibility, comparing contemporary media archeology with the umwelt and outlining the implications for a theory of meaning for networked and programmable machines.

5.30-6.30 PM lecture Luc Steels and short Q&A

“Cybernetics, Artificial Intelligence, and Artificial Life. Past interactions and future prospects.”
Keynote lecture professor Luc Steels

In the late eighties and nineties I was a core participant – and hence privileged observer – of the rise of Artificial Life, working and interacting intensely with Chris Langton, Rodney Brooks, Francisco Varela and other key players in the field. What were the motivations of this research field and what has come of it?
Artificial Intelligence (AI) had sprung up in the late nineteen fifties out of the earlier work on cybernetics, focusing on issues similar to those of cybernetics, namely the nature of intelligence and how it could be captured in artifacts, but bringing a powerful new toolkit from the then emerging field of computer science to the table. Computer science goes beyond electrical engineering by being able to represent and process hugely complex data structures at high speed, making it possible to seriously start modeling human language processing, problem solving, logical reasoning, and expert decision-making. By the early nineteen eighties symbolic processing technologies had reached maturity and AI had become a field with industrial applications and a growing impact on information technology. By comparison, cybernetics became almost exclusively restricted to the construction of adaptive controllers for autonomous systems.

But by the late eighties there was a kind of revival of cybernetics. Several of the early cybernetic experiments (such as Grey Walter’s Elmer and Elsie) were reconstructed, although now with more solid mechanical and computational technologies. New types of conferences sprang up (such as the Simulation of Adaptive Behavior series, the first held in 1990 (Steels, 1990)), along with a few seminal workshops (in particular the Corsendonck workshop in 1991 (Steels and Brooks, 1994)), advanced schools (the most famous being the Trento spring school in 1994 (Steels, 1995)) and new journals (such as Adaptive Behavior). What was going on? I believe the key objective of this new wave was to address two critical issues that AI had (and still has) trouble with, namely meaning and origins. We argued that a proper handling of meaning required agency, autonomy, embodiment, and an ecological setting, all topics that cybernetics had also integrated. And to understand the origins of intelligence we needed to adapt concepts from evolutionary biology and complex systems science, such as self-organisation, selection and level-formation, among others. Many ideas and fascinating experiments came out of all this; I will give a short overview of them and of how these ideas spilled over into language and concept formation.

The Alife approach to AI ran its course and is no longer in the spotlight. Instead, during the past decade AI has become dominated by statistical machine learning techniques, generating a tsunami of new technologies and applications thanks to the availability of Big Data and the massive increase in computing power. This has almost swept away both the sophisticated research on knowledge representation and reasoning (although this research has its own huge impact today in the semantic web and expert decision-making systems) and the biologically inspired research that powered the temporary interaction between AI and Alife in the nineties. I believe however that in the near future we will see a resurgence of the issues that were raised by Alife-AI, as AI systems become ever more ubiquitous.

The recent perception that AI has reached a new peak of achievement, coupled with the development of other digital technologies, such as virtual reality, cloud computing, social media, digital self-monitoring and brain-computer interfaces, has given rise to a curious and fascinating new set of narratives about the future of humankind. There seems to be a new kind of religion taking shape, centered on digital immortality, which is thought to be achievable through sophisticated virtual AI agents and technologies of mind-uploading based on digital traces of human activity. I will briefly report on an artistic project that explores these issues using the medium of opera.

6.30-7 PM panel and discussion

Panel with Andrew Pickering, Luc Steels and Katherine Hayles

About the contributors

Katherine Hayles is Professor and Director of Graduate Studies in the Program in Literature at Duke University, and Distinguished Professor Emerita at the University of California, Los Angeles. She teaches and writes on the relations of literature, science and technology in the 20th and 21st centuries. Amongst her distinguished works are How We Think: Digital Media and Contemporary Technogenesis; How We Became Posthuman: Virtual Bodies in Cybernetics, Literature and Informatics; and Writing Machines.

Luc Steels is professor of computer science at the University of Brussels (VUB), co-founder and chairman (from 1990 until 1995) of the VUB Computer Science Department (Faculty of Sciences), and founder and first director of the Sony Computer Science Laboratory in Paris. His main research field is Artificial Intelligence, covering a wide range of intelligent abilities, including vision, robotic behavior, conceptual representations and language.

Andrew Pickering is an emeritus professor at the University of Exeter. He is internationally known as a leader in the field of science and technology studies. He is the author of Constructing Quarks: A Sociological History of Particle Physics, The Mangle of Practice: Time, Agency and Science and Kybernetik und Neue Ontologien. In his book The Cybernetic Brain: Sketches of Another Future, he analyses cybernetics as a distinctive form of life spanning brain science, psychiatry, robotics, the theory of complex systems, management, politics, the arts, education, spirituality and the 1960s counterculture, and argues that cybernetics offers a promising alternative to currently hegemonic cultural formations.

Ricarda Franzen works as a dramaturg, sound artist and researcher at the University of Amsterdam. Alongside her interests in art practice, she works on aspects of sound in relation to its environment as well as its use in theatre and radio drama. For the Rotterdam-based laboratory for Unstable Media she co-produced a performance based on the ideas of Buckminster Fuller and Marshall McLuhan. For the theatrical performance she developed for ‘The State of Cybernetics,’ she similarly draws inspiration from a group of historical cutting-edge thinkers and tinkerers.

The performers:
Jono Freeman trained as an actor in Sydney, Australia, before obtaining a Bachelor in Performance Studies and a DipEd in Drama Method (USYD and UNSW) and becoming a high school drama teacher.
Kaylee Spivey Good is an American actor working towards her MA in Theatre Studies at the Universiteit van Amsterdam, focusing on the theatricality of antiquities in the early 19th century.
Merel Eigenhuis is, besides an enthusiastic drama teacher, an MA student in Theatre Studies. She is currently most interested in the crossover between digital technologies and contemporary theatre.
Alzbeta Tuckova is a theatre maker and performer studying for an MA in Theatre Studies at the UvA. She has a practical background in performance art and theatre and is passionate about the power of art to express politics.

The organizers

Dorien Zandbergen is an anthropologist of digital culture and politics, currently working as a postdoc researcher at the Sociology Department of the University of Amsterdam. Her current work critically explores the politics of urban digitization. In the documentary In search of the Smart Citizen, which she co-produced with Sara Blom (Creative Commons 2015), she interrogates the vision of the “smart city.” She co-founded Stichting Gr1p to support artistic and literary interventions that help make complex technological themes visible, debatable and tangible for a broad audience. Her recent academic publications include “From data fetishism to quantifying selves” (with Tamar Sharon, New Media & Society, 2016) and “‘We Are Sensemakers’: The (Anti-)politics of Smart City Co-creation” (Public Culture, 2017).

Justus Uitermark is Associate Professor of Sociology at the University of Amsterdam. He is affiliated with the Center for Urban Studies and the Amsterdam Institute for Social Science Research. Uitermark’s research uses relational theorizing and network analysis to examine self-organization, political conflict, and the social organization of the city. With colleagues at the University of Amsterdam, he is currently researching the online/offline interface, utilizing data sourced from Twitter and Instagram to analyze subcultures and social movements. Recent publications include “Longing for Wikitopia. The study and politics of self-organization” (in Urban Studies) and Cities and Social Movements (co-authored with Walter J. Nicholls, Wiley).

The State of Cybernetics. The digitization of cities, bodies and communities. Seminar. Amsterdam. March 22-23


On March 22 and 23, Dorien Zandbergen and Justus Uitermark, based at the University of Amsterdam, organize a seminar entitled “The State of Cybernetics. The digitization of cities, bodies and communities.”

The purpose of the seminar is to revisit the legacy of cybernetics to shed light on contemporary digital politics. Many of the fundamental questions asked by cyberneticians regain salience today. What remains of liberal individualism when the boundaries between humans, machines and nature are blurred? What are the systemic properties and operating routines of democracy in a world in which machines and humans are increasingly entangled?

Scholars from fields as diverse as Philosophy, Anthropology, and Artificial Intelligence will give presentations. The speakers include Simon Marvin, Noortje Marres, Andrew Pickering, Willem Schinkel, Linnet Taylor and Tsjalling Swierstra. 

To allow for an in-depth discussion, there is a limit to the number of participants. Are you interested in taking part? Please inquire with Anne Hovingh: anne.hovingh@student.uva.nl

After you register you will receive a more detailed program with abstracts, locations and times.

The overarching vision for this seminar is to build and strengthen a network of thinkers and practitioners interested in developing critical perspectives regarding digital politics and digital urbanism in particular. This network stretches beyond academia and crosses over to multiple disciplines and fields of practice. Starting September 2018, the aim is to work towards joint research proposals, publications, and events.

The seminar will be concluded by a public event on Friday March 23 at 4 PM, with lectures by Luc Steels and Katherine Hayles, a theatrical performance prepared by Ricarda Franzen, and a discussion between the speakers joined by Andrew Pickering.

The seminar is funded by the Netherlands Organisation for Scientific Research (NWO) as part of the research project “Safeguarding long-term stakeholdership in Smart Cities” and by the Center for Urban Studies of the University of Amsterdam as part of a collaboration with the Sheffield Urban Automation Institute.

Dear CEO – Carmen van Vilsteren (TU/e) and Luis Rodil-Fernández (Gr1p)

“What can we get from the body that we cannot get from explicit habits and behaviors that can be observed by means other than electrophysiology?” [LRF]


Writers of this conversation are:

Luis Rodil-Fernández writes for the Gr1p Foundation; he is an artist, researcher, teacher and hacker. Carmen van Vilsteren is Director of Strategic Area Health at Eindhoven University of Technology.


Luis Rodil-Fernández - Sunday 24-09-2017 15:15

Dear Carmen et al.,

This is a novel format for me, so I hope we can find a way to communicate our arguments in depth in such a short time, using the clumsy medium that e-mail is.

My education is in the arts. I also studied computer science and have been an engineer for some years. Perhaps I should go into further detail about what my relationship with the study of the human body through sensors is. In my artistic work I use biomedical devices that interface directly with the body, so I have a certain familiarity with picking up bodily signals and using them for various purposes.

I was part of BALTAN Labs “Hacking the Body” program and was a resident at Philips Research, designing experiments for biosynchrony research using methods from electrophysiology. In addition to this more practical experience I am a privacy activist and a teacher. Each of my activities informs the other of course, and although I am an avid user of technology, it is hard for me to feel uncritical once I have managed to contextualise a particular development in tech.

I understand that the context of our exchange is handed to us by the title ‘We Know How You Feel’ and that the question pivots around the work that Nick Verstand did in collaboration with TNO, which uses EEG to infer emotional states. VPRO proposes a scenario in which similar technology could be applied to market research to more precisely target media content. If I understand correctly our conversation starts from that proposed scenario.

What strikes me most in that proposed scenario is perhaps its total lack of imagination, as it consists of adopting activities that already exist. It doesn’t take a great stretch to imagine how physiological data might be incorporated into the data pool currently used to profile media users, at least by internet media.

So-called ‘surveillance capitalism’ is the economic model of most media platforms on the internet. In a single technological generation our televisions have turned from bulky analog devices just a tad more sophisticated than a radio receiver into general computing devices sitting in our living rooms, containing a wealth of sensors capable of doing their own data gathering as well.

The scenario proposed by VPRO is one in which the economic model of the internet gets extrapolated to television with a few extra channels of consumer data thrown in, namely physiological data. All of this is already happening, at least as separate streams in diverse industries. The synthesis that this project of VPRO is proposing is altogether too plausible, well within the realm of possibility, which is the reason why I find it worth exploring in greater detail.

A business area called neuromarketing already exists, and its practitioners do exactly what VPRO is proposing in this scenario: finding technical means that allow marketeers to precisely target a product to an individual based on their unconscious biophysical activity. The hypothesis of somatic markers formulated by Antonio Damasio states that our body is constantly producing and processing cues about our emotional state, and that our body seems to ‘know’ things even before those things are consciously known to us. Neuromarketing aims at exploiting this gap between consciousness and the creation of desire.

Now, do these techniques have a place in our consumption of media? Do they have applications beyond ‘selling more stuff’? How does the individual media consumer benefit from this scenario? I presume the reason why Gr1p was brought into this conversation is to provide a counterbalance to this all too plausible scenario, and to question the ethical implications of such a development.

The massive amount of data that certain companies have on us, internet users, is more substantial than most of us suspect and already provides insight into processes we as individuals are unconscious of. Facebook, for example, makes use of so-called ‘derived qualifiers’, such as ‘likelihood of becoming addicted’, which rates a particular user’s susceptibility to falling into addiction. These inferred markers are constructed by combining various other metrics that Facebook can quantify directly, and they use these ‘qualifiers’ to more accurately target advertisements and content to us.
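To make this mechanism concrete, here is a toy sketch of how such a derived qualifier might be computed. The metric names and weights are my own invention for illustration and do not describe Facebook’s actual models:

    # Toy sketch: combining directly measurable metrics into one
    # inferred 'derived qualifier'. All names and weights are
    # invented for illustration; this is not Facebook's model.
    def derived_qualifier(metrics: dict) -> float:
        weights = {
            "late_night_sessions_per_week": 0.4,  # metrics assumed normalised to [0, 1]
            "average_session_minutes": 0.3,
            "reaction_rate": 0.3,
        }
        score = sum(weights[k] * metrics.get(k, 0.0) for k in weights)
        return min(max(score, 0.0), 1.0)  # clip to [0, 1]

    profile = {"late_night_sessions_per_week": 0.8,
               "average_session_minutes": 0.6,
               "reaction_rate": 0.5}
    print(round(derived_qualifier(profile), 2))  # 0.65

The point is not the arithmetic but the principle: a handful of innocuous measurements are folded into a single score that claims to describe something intimate about us.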

Facebook can already do all this without access to our physical bodies, so at this point I would like to raise a question: to what extent does having physical access to the body of the consumer matter in making these kinds of inferences possible? Have the quantitative methods used by Facebook and Google not already made this ‘access to the body’ unnecessary and obsolete? What can we get from the body that we cannot get from explicit habits and behaviors that can be observed by means other than electrophysiology?

There are many more points to discuss of course, but I hope this can be a productive introduction and that we can take it from here.

Looking forward to hearing from you.

Salud,

Luis


What can we get from the body that we cannot get from explicit habits and behaviors that can be observed by means other than electrophysiology?
Luis Rodil-Fernández


Carmen van Vilsteren - Tuesday 26-09-2017 20:40

Dear Luis,

Thanks for opening the conversation. To me the format is also very new. I am not a writer at all. I have a background as an engineer and I have been working in the health domain for most of my life, serving several larger and smaller companies. I was development manager for cardiovascular imaging systems at Philips in the nineties, and to this day, every second, somewhere a patient is treated with a system we introduced in those years.

At the moment I combine my position as director of the strategic area Health at Eindhoven University of Technology (TU/e) with that of CEO at Microsure, a startup in robotics for microsurgery. At TU/e we are working on several new technologies. In one of these, regenerative medicine, we try to entice the body to heal itself. At Microsure our ambition is to give surgeons superhuman precision.

Your letter reminds me of a project I saw some two years ago. It was called Probes and was set up by Hans Robertus. You may know him. The results were presented during the Dutch Design Week. A diverse group of students got the assignment to think about solutions for a society where people would live to be 150 years old.

One group came up with the idea of an implantable chip that records all your life and health events, to be used for instance for preventive and non-preventive treatment. So not just ‘We Know How You Feel’, but also ‘We know how you will feel in the future’.

To substantiate this idea they set up an interesting experiment. They hired an office in Strijp S (a hotspot of the DDW) and bought a lifelike baby doll and some blank cards. They then offered the baby doll to people in the street, telling them that this was their newborn baby and that they had to register their son or daughter at the municipal office next door.

Most people agreed to participate in the experiment and came up with a name for their ‘child’. At the office they were told about the possibility of implanting this chip. The new parents had to decide on the spot whether they wanted this to be done, since it would only work if implanted during the first day.

The students expected all sorts of discussions and questions about data protection, privacy and the safety of the technology. But something they did not expect happened: all ‘parents’ opened the discussion about ethics. Do I want to do this to my child? Would you have had the chip implanted?

Regards,

Carmen


Luis Rodil-Fernández - Friday 29-09-2017 13:31

Hello Carmen,

To answer your question: before making that decision I would need to know a bit more about that hypothetical chip. What it does precisely, where it resides in the body, what the effects on the child are and who owns the implant. Is the chip a networked device or not? Does it perform any kind of data collection or is that data never stored? What earlier tests were done with the implant in humans? Who makes the chip? Is it a proprietary design or is it open source?

Of course I would have some serious concerns before happily implanting a technological artefact in the body of my newborn for the rest of their life. But I wouldn’t be opposed as a matter of principle. I do think that technology has a role in improving people’s lives. My reaction wouldn’t be technophobic but cautious.

The questions that you thought people would ask regarding privacy, data protection and the safety of the technology are also ethical questions, by the way. To me the question ‘would I implant this in my child?’ is not the only one about the ethical implications of the proposed scenario.

There are many examples of poor data security or poor privacy protection potentially resulting in the weaponisation of a technology that at first seemed innocuous. A technological device never comes to this world in isolation. It always brings with it a part of the future. A future that becomes our present the moment we let that technology enter our lives. We can’t possibly predict how it will evolve.

A chip implanted in my child today can be the root of discrimination against my child twenty years from now, or the target of an attack by a hostile actor the day after tomorrow. It’s important to understand that these scenarios are not merely hypothetical. If the technology exists and the stakes are high, the technology will be weaponised.

I invite you to reflect on the recent revelations in the press about the role both Facebook and Twitter played in Russian meddling in the American election. To make money from advertising, both companies offer sophisticated tools for targeting slices of their market. These tools enable such detailed targeting that advertisements can even be aimed at a single individual. It turns out that 100,000 dollars’ worth of strategically placed posts were shown to American voters in the run-up to the election.

In a press statement last week Mark Zuckerberg was indignant about the role Facebook had played and admitted to not having done enough to prevent these forces from intervening in the democratic process. There was no need for the influencers to break into Facebook’s systems or to employ anything that is traditionally thought of as a ‘hacker breach’. The (supposedly) Russian actors that wanted to buy influence did so using tools that Facebook offers to legitimate advertisers.

What these actors did was to use these tools for a different purpose than Facebook had intended. All technologies, bar none, will be deployed for unintended uses as the social context around them evolves. As William Gibson once wrote in his book ‘Burning Chrome’: ‘the street finds its own uses for things’. A technological artefact developed with the best intentions can and will very likely find unintended applications.

Going back to the ethical question you asked, I’d like to continue in that vein and pose a few questions in return: given your ample experience in bringing technological products to the market, I assume you have worked with a broad range of engineering and design professionals. How are these ethical questions dealt with in your professional environment? Is awareness of these issues common? What is the role of these ethical questions in the product development cycle? Have you seen any changes in these perceptions over your years of work?

Salud,
Luis


“A chip implanted in my child today can be the root of discrimination against my child twenty years from now, or the target of an attack by a hostile actor the day after tomorrow. It’s important to understand that these scenarios are not merely hypothetical. If the technology exists and the stakes are high, the technology will be weaponised.”
Luis Rodil-Fernández


Carmen van Vilsteren - Friday 6-10-2017 15:34

Dear Luis,

You want to know my opinion on the ads that were placed on Facebook and Twitter last year with the intention to influence the American presidential election. To be honest, I think you already made the perfect analysis yourself. People will indeed always find unintended ways of using technologies and other means. And then there’s the immediacy of posting on Facebook and Twitter. Things come online without delay and thus with very limited room for intervention or correction.

Perhaps that stems from the way things are generally done in the media: very limited checks before publication, but an evaluation – and taking lessons from that – afterwards. I found out about this practice when I visited the local newspaper one day, to learn how they managed to make a new product – the newspaper – every day, while it took us multiple years to develop a new x-ray machine. This approach is called ‘benchmarking best practices’.

The newspaper editors told us they worked by a set of simple rules, which everyone knew. For example: no negative publicity on the royal family. And they did not check any stories upfront for lack of time, but would discuss them the next morning instead. This is in line with Mark Zuckerberg’s quote: ‘We don’t check what people say before they say it, and frankly, I don’t think our society should want us to.’ In the case of the meddling ads the damage was done long before any evaluation took place, so it was irreversible.

Your second question, about the role ethics play in the development of new medical technology, had me thinking a bit longer. To be honest I can’t recall any deep discussions on the topic during my years in the development of new cardiovascular x-ray systems. Improving these systems usually means improving the treatment for the patient, with better images, reduced doses of x-rays, etcetera.

Patient safety is of the greatest importance during the development cycle of these imaging systems, so hazard analysis and extensive testing are always part of the process. Part of that testing is to determine how to best move around a patient and how to protect them against potential collisions during the process.

During my first project I started these tests with myself as the patient on the table. Initially some of my colleagues thought that was a bad and unsafe plan. My answer was this: if we don’t even dare to lie on that table ourselves, we can’t ask a patient to do so. So it became common practice for developers to voluntarily play the patient during some of the collision tests.

Now, at Eindhoven University of Technology, I am confronted with many more ethical questions, for instance about the development of implants like pacemakers and brain implants. People depend on these technologies and their quality of life can be at stake. One of our four faculties that are involved in these projects has its own ethics department.

During the development of new devices and apps, healthy people and patients are also ‘used’ as test subjects. There is an increasing number of regulations governing these practices in the Netherlands. Every experiment has to comply with these rules and regulations, and test subjects have to sign an agreement before taking part. All of this is also overseen by an ethics commission from the university.

Kind regards,

Carmen

Dear CEO – Harrie van de Vlag and Paulien van Slingerland (TNO) and Linnet Taylor (Gr1p)

“The one fundamental rule about new technologies is that they are subject to function creep: they will be used for other purposes than their originators intended or even imagined.” [LT]


Writers of this conversation are:

Linnet Taylor writes for the Gr1p Foundation and is a researcher. Her research focuses on the use of new types of digital data in research and policymaking around issues of development, urban planning and mobility. Her pen pals are Harrie van de Vlag and Paulien van Slingerland, both Consultants Data Science at TNO.


Harrie van de Vlag & Paulien van Slingerland - Thursday 28-09-2017 17:40

Dear Linnet,

We are writing you to discuss a new trend in data science: “affective computing”.

Emotions and relationships have long been important in our economy. People do not buy a ticket for a concert, but for an unforgettable evening with friends. People are not looking for a new job, but for a place in an organisation with a mission that suits their world views and principles.

A stronger emotional connection has a higher value, both for companies and consumers. This is why at TNO we are researching how affective states (emotions) can be interpreted, using wearables that record signals like heart rate, brain activity (EEG), skin conductance (sweat), etcetera.
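By way of illustration, the sketch below shows the simplest possible form such an interpretation could take: a rule that maps a few physiological features to a coarse arousal/valence label. The features, thresholds and labels here are illustrative assumptions only, not our actual models:

    # Illustrative sketch only: maps a few physiological features to a
    # coarse affect label. Thresholds and sign conventions are assumed
    # for illustration and do not describe TNO's actual models.
    def estimate_affect(heart_rate_bpm: float,
                        skin_conductance_us: float,
                        eeg_alpha_asymmetry: float) -> str:
        # Elevated heart rate plus elevated skin conductance is read here as high arousal.
        arousal = "high" if heart_rate_bpm > 90 and skin_conductance_us > 5.0 else "low"
        # Relative left-frontal alpha activity is sometimes associated with positive valence.
        valence = "positive" if eeg_alpha_asymmetry > 0 else "negative"
        return f"{arousal} arousal, {valence} valence"

    print(estimate_affect(102, 6.3, 0.12))  # high arousal, positive valence

In practice such mappings are learned from labelled data rather than hand-written, but the principle is the same: physiological signals in, an affect estimate out.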

The starting point for our research was a question from Arnon Grunberg, who was interested to learn how his readers felt while reading his books. For this purpose we conducted an experiment in a controlled environment with 200 voluntary participants. To bring this technology out of the lab and into the field, TNO, Effenaar Smart Venue and software developer Eagle Science are working towards new prototypes of appliances based on emotion measurements.

The first one will be demonstrated during the Dutch Design Week 2017 (October 21-29). Together with Studio Nick Verstand, we will present the audiovisual artwork AURA, an installation that displays emotions as organic pulsating light compositions, varying in form, colour and intensity.

Eventually this technology can be used, for instance, to develop new forms of market research, enabling companies to measure the emotional experience of voluntary consumers without disturbing that experience. This reveals which parts of the customer journey are perceived as positive and which as annoying. Acting on these insights allows companies to provide a better experience, for instance during shopping, while visiting a festival, or while following a training in virtual reality.

At TNO, we are well aware that emotions are closely tied to the private sphere of individuals. The question arises whether consumers need to choose between their privacy on the one hand and the comfort of personalised services on the other. The upcoming new privacy legislation (GDPR) also highlights the importance of this dilemma. This is why TNO is also researching technologies for sharing data analyses without disclosing the underlying sensitive data itself, for instance because the data remains encrypted at all times. This way, from a technical point of view, the dilemma appears to be solved and there would no longer be a need to choose between privacy and convenience.
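To give a feel for how an analysis can be shared without disclosing the data, here is a minimal sketch of one well-known technique in this family, additive secret sharing. It conveys the general idea only and is not a description of our actual system:

    # Minimal sketch of additive secret sharing: each sensitive value is
    # split into random shares, each server sees only one share, yet the
    # aggregate can still be computed. Illustration only.
    import random

    PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

    def split_into_shares(value):
        r = random.randrange(PRIME)
        return r, (value - r) % PRIME  # the two shares sum to value (mod PRIME)

    readings = [72, 88, 95]  # e.g. heart rates; never sent anywhere in the clear
    shares = [split_into_shares(v) for v in readings]

    server_a_total = sum(s[0] for s in shares) % PRIME  # server A sees only first shares
    server_b_total = sum(s[1] for s in shares) % PRIME  # server B sees only second shares

    average = ((server_a_total + server_b_total) % PRIME) / len(readings)
    print(average)  # 85.0 -- the aggregate is revealed, the individual inputs are not

Each share on its own is a uniformly random number, so neither server learns anything about an individual reading; only the published aggregate carries information.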

At the same time we expect that this can only be the case if people feel they can trust such a system, and that more is needed than just a technical solution. Therefore we are interested in your point of view. What else is needed to establish trust?

Best regards,

Paulien van Slingerland and Harrie van de Vlag
TNO
Innovators in Data Science


At TNO, we are well aware that emotions are closely tied to the private sphere of individuals. The question arises whether consumers need to choose between their privacy on the one hand and the comfort of personalised services on the other.
Harrie van de Vlag & Paulien van Slingerland


Linnet Taylor - Thursday 28-09-2017 23:07

Dear Paulien and Harrie,

I read with interest your explanation of your new project on measuring emotional experiences. It is exciting to be part of the birth of a new technology, and the wonder of innovation is clear in your AURA project, which will translate sensed emotions into light. I think this will provide new opportunities to investigate processes of human emotion, especially for the ‘quantified self’ community already engaged in measuring and tracking their own experience of the world.

I question, however, whether tracking one’s changing emotional state as one experiences media, or anything in fact, is part of a ‘customer journey’. This is not just about sensing, but about investigating the border between software and wetware – technology that aims to connect to and enhance the human brain.

It is interesting to its corporate sponsors because it promises new forms of access not to ‘the customer’ but to people, in all our idiosyncrasy and physicality. Those forms of access are not necessarily more accurate than asking people what they think, but they will be more seamless and frictionless, blending into our lives and becoming something we are rather than something we do.

You ask whether consumers need to choose between their privacy on the one hand and the comfort of personalized services on the other. I think this question may distract attention from a more central one: can we separate our existence as consumers from our existence as citizens, partners, workers, parents? Our emotions are an essential bridge between ourselves and others, and what we show or hold back determines the kinds of relationships we can form, and who we can be in relation to our social world.

The language of choice may not be the right language here: your project uses only volunteers, but is it clear what they are volunteering? Your technology has a 70-per-cent accuracy, according to test subjects. But there is profound disagreement amongst brain specialists as to what we measure when we study emotions.

William James, one of the founders of psychology, argued that our experience of emotions actually results from their physical expression: we feel sad because we cry and we feel happy because we smile, not the other way around. If this is true, the sensors you are developing will have better access to the biological content of our emotions than we will, which has implications for – among other things – our freedom to form our own identities and to experience ourselves.

I am reminded of a project of Facebook’s that was recently discussed in the media. The company’s lab is attempting to produce a brain-computer speech-to-text interface, which could enable people to post on social media directly from the speech centre of their brains – whatever this means, since there is no scientific consensus that there is such a thing as a “speech centre”.

The company’s research director claims this cannot invade people’s privacy because it merely decodes words they have already decided to share by sending them to this posited speech centre. Interestingly, the firm will not confirm that people’s thoughts, once captured, will not be used to create advertising revenue.

You ask what is needed to establish trust in such a system. This is a good question, because if trust is needed the problem is not solved. This is one of myriad initiatives where people are being asked to trust that commercial actors, if given power over them, will not exploit it for commercial purposes. Yet this is tech and media companies’ only function. If their brief was to nurture our autonomy and personhood, they would be parents, priests or primary school teachers.

The one fundamental rule about new technologies is that they are subject to function creep: they will be used for other purposes than their originators intended or even imagined. A system such as this can measure many protected classes of information, such as children’s response to advertisements, or adults’ sexual arousal during media consumption.

These sources of information are potentially far more marketable than the forms of response the technology is currently being developed to measure. How will the boundary be set and enforced between what may and may not be measured, when a technology like this could potentially be pre-loaded in every entertainment device? Now that entertainment devices include our phones, tablets and laptops, as well as televisions and film screens, how are we to decide when we want to be watched and assessed?

Monitoring technologies produce data, and data’s main characteristic is that it becomes more valuable over time. Its tendency is to replicate, to leak, and to reveal. I am not sure we should trust commercial actors whose actions we cannot verify, because trust without verification is religious faith.

Yours,

Linnet Taylor
TILT (Tilburg Institute for Law, Technology and Society)


Harrie van de Vlag & Paulien van Slingerland - Thursday 05-10-2017 14:45

Dear Linnet,

Thank you for sharing your thoughts. The topics you describe underline the importance of discussing ethics and expectations concerning new technology in general, and affective computing in particular.

You end your letter saying that ‘if trust is needed, the problem is not solved’. This is true in cases where the trust would solely be based on a promise by a company or other party. However, there are two other levels of trust to take into account: trust based on law and trust based on technical design.

To start with trust based on law: the fact that a technology opens new possibilities does not mean that these are also allowed by law. The fact that pencils can be used not only to write and draw, but also to kill someone, does not mean that the latter is also allowed by law.

The same goes for affective computing: while the possibilities of affective computing and other forms of data analytics are expanding rapidly – your examples illustrate that – the possibilities of actually applying this technology are increasingly limited by law. As a matter of fact, new privacy legislation (the GDPR) will become effective next year. Europe is significantly stricter in this respect than America (where companies like Facebook are based).

For example, as TNO is a Dutch party, we cannot collect data for our research during the AURA demonstration without the explicit consent of the voluntary participants. They have to sign a document. Moreover, we need to ensure that the data processing is adequately protected. For special categories of information, such as race, health and religion, extra strict rules apply.

Furthermore, we cannot use this data for any other purpose than the research described. For instance, VPRO was interested in our data for publication purposes. However, aside from the fact that we take the privacy of our participants very seriously, we are simply not allowed to do this by law. So TNO will not share this data with VPRO or any other party.

Altogether, applications of affective computing as well as systems for sharing analyses without disclosing data are both limited by law. We are actually developing the second category to facilitate practical implementation of the law, as the system is designed to guarantee technically that commercial companies (or anyone else, for that matter) cannot learn anything new about individuals.

This is trust by technical design, a novel concept that does not require a promise or law in order to work. At the same time, we realise that this is a new and unfamiliar way of thinking for many people. Therefore, we are interested to learn what is needed before such a system can be adopted as an acceptable solution.

To this end, let us rephrase our original question as follows: under what conditions would you recommend that people provide their data to such a system, given the technical guarantee that no company or other party would actually be able to see the data, even if they wanted to?

Best regards,

Paulien van Slingerland and Harrie van de Vlag
TNO
Innovators in Data Science


Can we separate our existence as consumers from our existence as citizens, partners, workers, parents? Our emotions are an essential bridge between ourselves and others, and what we show or hold back determines the kinds of relationships we can form, and who we can be in relation to our social world.
Linnet Taylor


Linnet Taylor - Sunday 08-10-2017 21:56

Dear Paulien and Harrie,

Your response is a useful one. It has made me consider what we mean when we talk about trust, and how the word becomes stretched across very different contexts and processes. You ask under what conditions I would recommend that people provide their data to a system that can sense their response to media content, given the technical guarantee that no company or other party would actually be able to see the data, even if they wanted to.

This is, of course, a difficult question. People should be free to adopt any technology that they find useful, necessary, interesting, stimulating. And it is likely that this sensing system will be judged all of these things. Let us be honest here, though – it is not a citizen collective that has asked us to write this exchange of letters.

We are exchanging thoughts about the future activities of media corporations, at the request of a media corporation. If the technology were going to be used exclusively in a highly bounded context where the data produced could not be shared, sold or reused in any way, I am not sure we would have been asked to have this conversation.

I think the reason we have been asked to exchange ideas is because there are huge implications to a technology that purports to allow the user to view people’s emotional processes. This technology has the potential to help media providers shape content into personal filter bubbles, like our timelines on social media.

These bubbles have their own advantages and problems. There has been much recent analysis, for example, of how the populist parties coming to power around the world have benefited hugely from digital filter bubbles where people access personalised content that aligns strongly with their own views.

It is indeed important that such a system should be used in accordance with the law. But data protection law, in this case, is a necessary but insufficient safeguard against the misuse of data. The real issue here is volume. Most people living today are producing vast quantities of digital data every moment they are engaged with the world.

These data are stored, kept, used, and eventually anonymised – at which point data protection law ceases to apply, because how can privacy relate to anonymised data? Yet the system you are developing demonstrates exactly how. It is another technology of many that will potentially make profiling easier. It will show providers our weak points, the characteristics that make it possible to sell to us – and it can do this even if we do not use it.

An example: someone wishes to live without a filter bubble and does not consent to any personalisation. But all the other data they emit in the course of their everyday life generate a commercial profile of them which is highly detailed and available on the open market. The features which make them sensitive to some types of content and not others are identifiable: they have children, they like strawberries, they are suffering domestic violence, they are made happy by images of cats. A jumble of many thousands of data points like these constitutes our digital profiles.

But it is not only our own characteristics. It is those of people around us, or like us. Knowledge about the attributes of users of a system such as yours (whose responses to content can be directly measured) can be cross-referenced with the attributes of those who do not use it. Once this happens, it becomes possible to infer that my heart will beat harder when I watch one movie than when I watch another; that I will choose to go on watching that provider’s content; that my attention will be available for sale in a particular place at a particular time.
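A toy sketch makes the mechanics plain; every attribute and number here is invented for illustration:

    # Toy illustration: responses measured on sensor-wearing users are
    # pooled per attribute profile, then projected onto a non-user who
    # happens to share that profile. All data invented.
    measured_users = [
        ({"has_children": True, "likes_cats": True}, 0.9),   # (attributes, measured response)
        ({"has_children": True, "likes_cats": True}, 0.8),
        ({"has_children": False, "likes_cats": True}, 0.4),
    ]

    def predict_response(profile):
        matching = [r for attrs, r in measured_users if attrs == profile]
        return sum(matching) / len(matching) if matching else 0.5  # uninformative fallback

    non_user = {"has_children": True, "likes_cats": True}
    print(round(predict_response(non_user), 2))  # 0.85, inferred without any sensor

The non-user never wore a sensor and never consented, yet a response is predicted for them all the same.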

In this way, consent and privacy become meaningless if there are enough data points about us all: new technologies that pinpoint our behaviour, feelings and susceptibilities are valuable not for their immediate uses but as an addition to the long-term stockpile of data on all of us – and especially useful with regard to those who do not choose personalisation and are therefore harder to pinpoint and predict.

This is why I am sceptical about invoking ‘trust’ as something that can be generated by making sure individual applications of a particular technology comply with data protection law. Data protection is a cousin to privacy, but it is not at all the same thing. We may guard data without guarding privacy, and we may certainly trust that our data is being handled compliantly with the law, while also having reservations about the bigger picture.

Things that are perfectly permissible under data protection law, yet also unfair, include: charging different people different prices for the same goods online; following users’ activities across devices to understand precisely what makes them respond to advertisements; and a company passing on our personal data to unlimited subsidiary companies. Law is no panacea, nor can it be relied upon to predict what will happen next.

I do not cite these things to argue that you should stop developing affective computing technologies for commercial use. I use them to suggest two fundamental realities: first, that we are no longer capable of understanding the long-term or collective implications of the data we emit, and second, that our consent is not meaningful in that context.

Having made my argument for these two problems, and how they relate to your work, I can pose a question in return: how can we, as developers and users of data analytics technologies, collaborate to look beyond legal compliance to the many possible futures of those technologies, and create ways to shape those futures?

Yours with best regards,

Linnet Taylor

Dear CEO – Geert-Jan Bogaerts (VPRO) and Tessel Renzenbrink (Gr1p)

“Technology is not inherently good or bad or neutral. It is what we make it.” [TR]

Writers of this conversation are:
Geert-Jan Bogaerts heads the department of digital media of Dutch broadcaster VPRO. He is responsible for digital channels, innovation and distribution strategy.

Tessel Renzenbrink is part of the network of the Gr1p Foundation. She is a freelance writer and web editor focusing on the impact of technology on society, particularly information and renewable energy technologies.


Geert-Jan Bogaerts - Sunday 17-09-2017 11:59

Dear Tessel,

It feels almost Victorian, like in an epistolary novel by Mary Shelley or Anne Brontë, to start corresponding with a complete stranger on a subject that’s apparently close to both our hearts. I’m eager to learn what themes you will provide and I look forward to discussing them with you. Of course, at the same time there’s a strange twist to it. We write to each other but we also know that our correspondence will be made public, and therefore one tends to — or rather, I tend to — put a better foot forward. I mean, this is not without obligation.

But anyhow, we are meant to begin this correspondence with a short introduction. I don’t have to mention my name as you already know it, just like my position — head of digital at VPRO. Rather than just stating the facts I think it’s more interesting to tell you how I see myself, what I identify with most. I mean, of course I am a son, a father, a brother, a husband, a friend, a workmate — these are the parts we can all more or less see ourselves in. But what makes me different? What defines my identity most of all?

The first word that springs to mind is journalist. Although nowadays I am much more a manager, a strategist and a policy maker, my background as a journalist still shines through in everything I do. It determines what questions I ask, how I view the world and which solutions I come up with for problems I encounter. Fifteen years of editorial work — as a freelancer first and later writing for de Volkskrant (business desk and correspondent in Brussels) — does shape you for life.

At the time — we’re talking the late nineties — I was stationed in Brussels and reported on the EU, NATO and Belgium, but in my own time I got involved in the online world. The strategic implications of this technological progress were far from distinct then, but it was already evident that the internet would profoundly change our trade and society in general. In 2003 I made it my profession as well, first as head of online at de Volkskrant, from 2010 as a freelance writer, advisor and teacher, and since 2014 in my present job.

How do I observe technological progress now? Not just from the strategic mission that comes with my job, but explicitly also from the impact this progress has on our culture, our coexistence, our economy, our politics, our government. I feel it is very much a key task for public broadcasters to sketch the consequences, to explain developments and to ask questions. It is from that perspective that I look at our project “We Know How You Feel”. What exactly does it mean when our thoughts and feelings will be out in the open? How does that change us? As an individual, in our relationships and in our social interactions?

I hope and expect this project will bring us interesting new insights.

Warm regards,

GJ Bogaerts
Head of digital VPRO


What exactly does it mean when our thoughts and feelings will be out in the open? How does that change us? [GJ]


Tessel Renzenbrink - Sunday 24-09-2017 23:55

Hi Geert-Jan,

I must confess that I started out as a techno-optimist. I was convinced that the liberating possibilities of information and communication technology would actually lead to the most positive outcome. These possibilities lie mainly in the fundamental shift from centralised to decentralised: from a world ruled by a small group of people in positions of power to a world in which every voice is equal. I was convinced that this leveling would erode the power of institutional strongholds.

Take the mass media for example. Newsrooms at papers and TV stations used to both determine what the news was and how it was framed. The documentary Page One relates how The New York Times saw its authority diminished when the internet surpassed the paper as an information source.

In the days of old the NYT set the agenda. What the paper wrote determined what people talked about. That fact is presented with pride and a yearning for better days. No one asks if it is at all desirable when just a handful of editors sets the public debate, day in day out.

Another example of decentralisation is the rise of cryptocurrencies like Bitcoin. They enable monetary transactions without the interference of a central authority. Banks will no longer be too big to fail when that system takes hold; they will be obsolete.

As we all know, things went differently. The internet did not decentralise the world; the world centralised the internet. Once the web became popular, it was taken over by commercial parties. Almost 80 percent of web traffic now goes through Google and Facebook.

Google’s algorithms determine which information comes up when you do a web search. Facebook has positioned itself between our personal interactions with family and friends and forces us to communicate by Facebook’s rules. It will do everything to keep us on its platform as long as possible, so it can sell our time and attention to advertisers. And, of course, both companies collect enormous amounts of data on us.

By now I see that technology does not necessarily propel us towards the most positive (or negative) outcome. Technology is not inherently good or bad or neutral. It is what we make it. That is why I got involved in the Gr1p network. The Gr1p Foundation wants to give people more grip on their digital surroundings so they can make informed choices.

Our choice of technologies and the way we use them impacts our society. But presently technological development is mainly corporate driven. That’s why both with Gr1p and in my work as a writer I strive for greater involvement of citizens in the digitization of society, so we can decide in a democratic way what kind of future we want to build using technology.

I fully agree with you that there is a task for public broadcasters here. And — more specifically about the subject of our correspondence — I find it useful that VPRO Medialab dives into emerging technologies. As a public institution you can study them from a different perspective than profit-driven companies do. My first question to you therefore concerns how you interpret that task.

If I understand correctly, every year you research a new technology and its impact on the process of media production. Last year it was virtual reality and this year it is wearables. You focus specifically on measuring emotions with wearable technology and on the role this could play in creating and consuming media.

One practical application you research is the use of emotional data by broadcasters to offer people a personal, mood-based viewing experience. With what purpose do you research that application? What kind of service would you like to offer your viewers by using wearables?

You wrote that the project aims to find out what it means when our thoughts and emotions are out there for everyone to see. How is that researched in practice? Which questions are asked, and what is being done to find the answers? And what do you think could be the distinctive role of the Medialab in the questioning of wearable technologies?

Kind regards,

Tessel Renzenbrink
Gr1p network


Geert-Jan Bogaerts - Saturday 30-09-2017 21:04

Hi Tessel,

I did not only start out as a techno-optimist, I still am one. I just never believed that technological progress in itself was sufficient for human happiness, collective or individual. It is a necessary condition though: without technological advancement we would still be subject to the whims of nature.

But indeed, ultimately it is how we apply technology that determines its quality, positive or negative. So I agree that technology in itself is neutral. It is the scientists, the artists, the designers and the storytellers who can ultimately give it direction and meaning. In my view they set a standard, a standard we in turn need in order to determine how far we can stray from it.

We can assume a critical stance towards Google, Facebook and the other data-driven companies because there is an entirely different group of people thinking about alternative approaches. They constitute the subculture of technological progress, and they never cease to ask critical questions about applications, whether these are driven by profit or by a lust for power and control (the NSAs of this world).

Anyhow, as far as I’m concerned public broadcasters will remain, for as long as possible, a safe environment where this critical questioning and free thinking is possible, where alternatives can be thought out and where experimenting with new technologies is allowed. At the VPRO we even consider that a core task.

We experiment as often as possible in our own productions, but naturally we also work under a set of rules: we must reach a minimum number of viewers, listeners or visitors, and there is a limit to what productions may cost. We set up the Medialab in Eindhoven to be a truly free environment, where we try to liberate ourselves as much as possible from all the rules we normally work by.

The Medialab is always on the lookout for relevant developments it can pick up and research, fed as much as possible by the available knowledge inside the VPRO and a wider network of artists, scientists, designers, authors and journalists.

Innovation in public broadcasting is always focused on media, both their production and their consumption. That is another reason why it is a core task: we see our audience moving away from so-called linear viewing and embracing new platforms. So we have to get to know these platforms too. We must be able to handle them and to judge whether such a platform or new technology could be of any benefit to us. By doing so we get to know these technologies and find out what their positive, and possibly negative, applications are.

We expect the influence of wearable technology on our media consumption to grow as it becomes more popular. We’ve already seen that very convincingly with the portables we now all carry: our smartphones, our tablets and our e-readers. But wearable technology is developing rapidly: from smart watches to sweatbands and underwear that can monitor our heart rate, blood pressure and body temperature. Even our sex life is not safe. Remote satisfaction no longer requires a tour de force…

Wearables can be used to produce media and to consume media. We will be able to create wonderful things using them, but we must also look at the flip side. My biggest worry concerns the data wearable technology can collect and exchange. And that is what this program predominantly focuses on.

Which personal data are we giving away without knowing it? How can we make our public conscious of that fact? What do my glance, my posture and the way I walk tell the shop where I get my daily groceries? We know that some clothing stores already experiment with personalised display advertising after a lightning-fast analysis of my personal traits.

“We Know How You Feel” aims to give the audience insight into these developments and processes. Last year we did a similar project, called “We Are Data”. The accompanying website clicklickclik.click received almost a million clicks. It is evident that the subject resonates: it is urgent and it calls out for critical questioning.

I see many similarities between the goals I mentioned above and your observations about Gr1p. My counter-question to you is: what do you see as the most effective way to reach these goals? Is it enough to make the public aware? And what is the best way to achieve that awareness?

Warm regards,

GJ Bogaerts
Head of digital VPRO


Technology is always an expression of certain norms and values. That’s why it is necessary for scientists and artists to critically question it. [TR]


Tessel Renzenbrink - Sunday 4-10-2017 22:08

Hi Geert-Jan,

Technology is neither good nor bad; on that we agree. But unlike you I don’t think technology is neutral. On the contrary: every technological artefact is an expression of a set of cultural values. Algorithms, for example, can mimic the prejudices present in a society.

To give an example: some courts in the United States use algorithms to help determine the sentence a convict will receive. Based on data about the convict, the algorithm calculates the risk that he or she will reoffend. If the score is high, the judge can decide to pass a heavier sentence.

Research shows these algorithms are biased: black people are often given higher scores than white people. Eric Holder, who served as attorney general under President Obama, spoke out against the use of such algorithms because they could ‘exacerbate unwarranted and unjust disparities that are already far too common in our criminal justice system and in our society.’
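To make that mechanism concrete, here is a minimal toy sketch in Python. It is purely illustrative and emphatically not any court’s actual system; every variable and number in it is invented. It assumes two groups with the same true reoffending risk, one of which is policed more heavily, and shows how a model trained on the resulting re-arrest records scores that group as riskier without ever seeing the group label.

# Toy model (illustrative only): a risk score trained on re-arrest data
# inherits policing bias, even when true reoffending rates are equal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)         # hypothetical protected attribute
true_risk = rng.uniform(0, 1, n)      # same distribution in both groups

# If group 1 is policed more heavily, its members are re-arrested
# more often at the same underlying risk.
policing_rate = np.where(group == 1, 0.9, 0.5)
rearrested = (rng.uniform(0, 1, n) < true_risk * policing_rate).astype(int)

# The model never sees the group label, only a proxy feature correlated
# with it (prior arrests, which also reflect policing intensity).
priors = rng.poisson(1 + 2 * group).reshape(-1, 1)
model = LogisticRegression().fit(priors, rearrested)
scores = model.predict_proba(priors)[:, 1]

for g in (0, 1):
    mask = group == g
    print(f"group {g}: mean true risk {true_risk[mask].mean():.2f}, "
          f"mean risk score {scores[mask].mean():.2f}")

Run as written, both groups come out with a mean true risk of about 0.50, while the learned scores for the heavily policed group are markedly higher: the bias sits in the training label, not in the arithmetic.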

Technology is always an expression of certain norms and values. That is why it is necessary, as you say, for scientists and artists to critically question technology, to make these implicit values visible and to discuss them where necessary. But that is not sufficient, because it is reactive. If you only react after a technology has been brought to market, you are already operating within a certain paradigm.

As people and as a society we will have to act earlier in the process, so we can determine what we want from technology. It does matter what you build, so we should think about what we want to build before we even start. And that is where values come in. Ultimately it is not about which technology you want to realise but about which values you wish to embody in that technology.

In that sense it is interesting that the Medialab does not just pose critical questions about wearables with “We Know How You Feel”, but also experiments with them, in cooperation with artist Nick Verstand and TNO. By doing so, the Medialab and its partners claim a creative role, and with it the ability to determine the values.

The research question they pose is: can we tune our media offering to your mood on the basis of emotional data? But is that an interesting question? Which underlying values are implied by such a goal, and which ones are left out?

I see how this use of emotional data could serve broadcasters. A personalised media offering might make people stay on your channel longer, which benefits the ratings. But how does it serve the public interest? To me, the hunt for clicks and eyeballs that holds so many media outlets in thrall is not a goal in itself.

And then there is of course the looming spectre of the filter bubble. Personalisation based on data, emotional or otherwise, will by definition lead to a media offering tuned to your interests and convictions. That way it will confirm and reproduce your view of the world, while I think it is a key task for a public broadcaster to make people familiar with the social environment of other groups in society.

In your letter you asked if broad awareness is enough to steer technological development in a direction that serves the public interest. Well, I don’t think it is enough, but it is a start. It is under the pressure of a collective conviction that things start to change.

Take for example that other technological revolution now in full swing: the energy transition. Through the decades, people grew more and more conscious of the fact that the economy, and the energy sector in particular, had to become sustainable.

Because of this awareness, action was taken in ever more domains. Governments introduced laws and treaties. Engineers started innovating. Tax money was made available to pay for this innovation. Consumers made different choices. Companies went green.

You asked for the best way to achieve this awareness and my answer is: alternatives. Without alternatives there is no course of action, and that leads to a certain resignation: why bother about something you cannot change anyway? Only when viable sustainable-energy technologies became available were people able to turn their worries into action.

But of course these alternatives did not come out of the blue. They were pioneered by people and institutions looking for other solutions, asking different questions because they took different values as their starting point. That’s why I want to ask you what values underlie the AURA art installation.

Kind regards,

Tessel


Geert-Jan Bogaerts - Sunday 7-10-2017 18:11

Hi Tessel,

Let me start by answering your question about the values that underlie our Medialab project. By far the most important value to me is insight. You could make a sequence, starting with data: data leads to facts, facts make you informed, and information in turn can lead to understanding or insight. None of these steps is self-evident: it takes an effort to get from data to facts, from facts to information and from information to insight.

With our project “We Know How You Feel” we aim to question the ease with which some people in the world of media seem to take the use of algorithms and data for granted. For if we don’t use them in the right way, we do indeed run the risks you described earlier: filter bubbles, the eradication of surprise and serendipity, choosing the lowest common denominator instead of finding interesting niches that can truly teach people something new.

Only when we, as a society, have a real understanding of how data and algorithms influence our lives (and will influence them even more as smart appliances take over more of our environment) can we think of alternatives. I am fortunate enough to work for a broadcaster that has a genuine interest in these alternatives and reports on them on a regular basis, most prominently in Tegenlicht/Backlight.

Imagination is the foundation of every technological innovation and every invention. The American author Neal Stephenson has teamed up with Arizona State University for an interesting collaboration, Project Hieroglyph. It connects science fiction writers to scientists and builds on a thought of Carl Sagan’s.

Sagan once said that the road to the most groundbreaking science was paved by science fiction: it is the power of imagination leading the way for science. If we had not been able to imagine that one day people would no longer die of smallpox or pneumonia, the smallpox vaccine and antibiotics would never have been invented.

Shortly after World War II, Arthur C. Clarke had the idea that global communication could be made a lot easier if we were able to launch satellites that would stay in a fixed place above the earth. Twenty years later, in the mid-1960s, the first geostationary satellite was launched, and nowadays we can no longer live without them.

On the neutrality of technology: I agree with you that we as people create technology and that we do so from our own needs and biases. In that sense technology is indeed not neutral. I think the nuance can be found in Kevin Kelly’s observations in “What Technology Wants”. Kelly states that technology has its own evolution and creates its own progress, independent of mankind. In that view technology is not influenced by human prejudice.

Let’s make this vision concrete. Google was recently blasted because its Photos app labelled pictures of black people as gorillas. Shortly afterwards, Google’s AdWords turned out to show ads for well-paid jobs more often to men than to women.

These are examples of technology being applied in a non-neutral way. But underneath these examples lies an instrument panel of statistical mathematics, with a prominent place for regression analysis and standard deviations in the programming languages Google uses. Scientists had been using this toolkit for much longer; it was only a matter of time before software developers discovered it as a way to analyse the enormous amounts of available data. That analysis in turn enables new applications. Siri and Alexa grow ever smarter, but at the same time they remain products of human imagination and consequently of human bias.

In my view the real peril of this development is not contained in the observation itself. Human progress is only possible because we have ideals that spring from our own vision of the world. The peril lies in the fact that the means to achieve this progress are in the hands of ever fewer people. Facebook, Google, Apple, Amazon and Microsoft are building our new world. It frightens me that these companies evade any form of democratic control and are judged by their shareholders on just one thing: net profit per share.

To be honest, I am not very optimistic that a ‘collective conviction’ can arise, as you wrote, to raise the pressure for change. These companies operate on a worldwide scale and there is not a trace of worldwide political consensus on how to handle them. The EU goes its own way and introduces a ‘right to be forgotten’. Within the EU, Germany is the only country that holds platforms liable for allowing hate speech. US policy, meanwhile, is aimed at safeguarding the position of these companies and introduces laws to protect them. And then we have not even started talking about the breaches of internet freedom by, for example, Russia and China.

But, entirely in line with my techno-optimistic vision, I also believe that technology will provide a solution for all of this in the end – blockchain FTW!

GJ Bogaerts
Head of digital VPRO


Tessel Renzenbrink

Hey Geert-Jan,

There are three elements in your letter I find hard to reconcile. You say technology always expresses bias because it springs from the human brain, which is never value-free. I agree with you on that point: technology is not neutral. You continue by saying that the real danger lies in the fact that technology is developed by a small tech elite: the Amazons, the Facebooks. These companies are not subject to democratic control and their main steering mechanism is financial gain. That is a scary thought indeed.

But in the end you say everything will be fine, because technology itself, in the shape of blockchain, will provide a solution: an autonomous power that will dethrone the monopolists, irrespective of what we, the people, do. That conclusion is at odds with the first two statements. Technology, after all, is always an expression of human values. When technological development lies in the hands of a small group of people, it will spread and cultivate the values of that group. That will give these people ever more control over the playing field. The technological domain will become ever more homogeneous and assume a form that serves the interests of this group.

Yet you still trust blockchain technology to develop autonomously in this environment, so that it can erode the power of the tech elite. I do not share your optimism on this. Blockchain, like any other technology, is subject to the economic, political and social structures in which it is developed. Why would the dynamic that led to the monopolisation of the internet by a few companies leave blockchain undisturbed? In the last two paragraphs of your letter you say you have little faith in social pressure bringing about change; instead, blockchain is to act as the change agent.

I have a totally opposite view, and I will tell you why by means of an example. The fascinating graph below shows the prosperity level of mankind over the last 2000 years, and it is often cited when it comes to technological progress.

[Graph: the prosperity level of mankind over the last 2000 years. © Our World in Data. Source: Max Roser (2017), “Economic Growth”. CC BY-SA licence.]

The graph shows that prosperity barely increased through the ages. Then, in the middle of the 18th century, growth suddenly became exponential. In their book “The Second Machine Age”, Erik Brynjolfsson and Andrew McAfee identify the cause of this turning point in history: the bend in the hockey-stick curve coincides with the invention of the steam engine, the start of the first industrial revolution.

Not everyone was lifted by the waves of growing prosperity. On the contrary: the transition from an agrarian to an industrial society went hand in hand with terrible injustices. There was exploitation and child labour; labourers worked fourteen hours a day and lived in extreme deprivation. Things only started changing when our ancestors demanded better living conditions en masse. That is why social involvement in technological development matters. The steam engine created an exponential increase in prosperity, but what is then done with that prosperity is not inherent to the steam engine. We, the people, decide that.

Now we are on the brink of a new industrial revolution: call it the third, Industry 4.0 or the second machine age. Whatever name we give it, we have to make sure that history does not repeat itself, and that this time around we guide the technological revolution in such a way that it benefits everyone.

I do not believe that blockchain will achieve that for us in some miraculous way. We will have to enforce it ourselves. Because that is the danger of techno-optimism: the belief that technology automatically leads to the best possible outcome, and that we therefore do not have to take responsibility.

Kind regards,

Tessel


Geert-Jan Bogaerts - Monday 16-10-2017 15:42

Hi Tessel,

Techno-optimism does not relieve us of our duty to act and to critically question!

So even though I consider technology to be both cause of and solution to many of our problems, I still think that parties like Gr1p and VPRO must promote our ongoing critical questioning of that technology.

Regards,
GJ Bogaerts


Tessel Renzenbrink - Monday 16-10-2017 17:01

Hi Geert-Jan,

Thanks for the lively correspondence. It was interesting to exchange thoughts with you.

Good to continue on the 25th!

Kind regards,
Tessel

Wouter Moraal

Wouter Moraal is a master’s student in Media Technology at Leiden University. He loves to translate complex matters into understandable articles, clips and interactive installations. He aims to help people gain more control over their own lives by enlightening them about their technological surroundings. He has actively volunteered for the Privacy Cafe, the Internetfreedom Toolbox and other projects supported by the digital human rights organisation Bits of Freedom. He also initiated the online campaign against the Dutch law on telecommunications data retention.

Wouter learned to critically evaluate the interrelations between media, technology and culture during his bachelor’s degree in Language and Culture Studies. Since then he has been particularly interested in the relations between technology, privacy, security and freedom. In his current master’s programme he engages, both practically and theoretically, with subjects at the cutting edge of science, technology and art.

Wouter Moraal on Twitter