Dear CEO – Carmen van Vilsteren (TU/e) and Luis Rodil-Fernández (Gr1p)

“What can we get from the body that we cannot get from explicit habits and behaviors that can be observed by means other than electrophysiology?” [LRF]


Writers of this conversation are:

Luis Rodil-Fernández writes for the Gr1p Foundation; he is an artist, researcher, teacher and hacker. Carmen van Vilsteren is Director of the Strategic Area Health at Eindhoven University of Technology.


Luis Rodil-Fernández - Sunday 24-09-2017 15:15

Dear Carmen et al.,

This is a novel format for me, so I hope we can find a way to communicate our arguments in depth in such a short time, using the clumsy medium that e-mail is.

My education is in the arts. I also studied computer science and have been an engineer for some years. Perhaps I should explain in more detail what my relationship is with the study of the human body through sensors. In my artistic work I use biomedical devices that interface directly with the body, so I have a certain familiarity with picking up bodily signals and using them for various purposes.

I was part of BALTAN Labs’ “Hacking the Body” program and was a resident at Philips Research, designing experiments for biosynchrony research using methods from electrophysiology. In addition to this more practical experience I am a privacy activist and a teacher. Each of my activities informs the others, of course, and although I am an avid user of technology, it is hard for me to remain uncritical once I have managed to contextualise a particular development in tech.

I understand that the context of our exchange is handed to us by the title ‘We Know How You Feel’ and that the question pivots around the work that Nick Verstand did in collaboration with TNO, which uses EEG to infer emotional states. VPRO proposes a scenario in which similar technology could be applied to market research to more precisely target media content. If I understand correctly our conversation starts from that proposed scenario.

What strikes me most in that proposed scenario is perhaps its total lack of imagination, as it consists of extending activities that already exist. It doesn’t take a great stretch to imagine how physiological data might be incorporated into the data pool that is currently being used to profile media users, by internet media anyway.

So-called ‘surveillance capitalism’ is the economic model of most media platforms on the internet. In a single technological generation our televisions have turned from bulky analog devices just a tad more sophisticated than a radio receiver, into general computing devices sitting in our living rooms, containing a wealth of sensors capable of doing their own data gathering as well.

The scenario proposed by VPRO is one in which the economic model of the internet gets extrapolated to television with a few extra channels of consumer data thrown in, namely physiological data. All of this is already happening, at least as separate streams in diverse industries. The synthesis that this project of VPRO is proposing is altogether too plausible, well within the realm of possibility, which is the reason why I find it worth exploring in greater detail.

A business area called neuromarketing already exists, and its practitioners do exactly what VPRO is proposing in this scenario: finding technical means that allow marketers to precisely target a product to an individual based on their unconscious biophysical activity. The somatic marker hypothesis formulated by Antonio Damasio states that our body is constantly producing and processing cues about our emotional state, and that our body seems to ‘know’ things even before those things are consciously known to us. Neuromarketing aims at exploiting this gap between consciousness and the creation of desire.

Now, do these techniques have a place in our consumption of media? Do they have applications beyond ‘selling more stuff’? How does the individual media consumer benefit from this scenario? I presume the reason why GR1P was brought into this conversation is to provide a counterbalance to this all too plausible scenario, and question the ethical implications of such a development.

The massive amount of data that certain companies have on us, internet users, is more substantial than most of us suspect, and already provides insight into processes we as individuals are unconscious of. Facebook, for example, makes use of so-called ‘derived qualifiers’, such as ‘likelihood of becoming addicted’, which rate a particular user’s susceptibility to falling into addiction. These inferred markers are constructed by combining various other metrics that Facebook can quantify directly, and the company uses these ‘qualifiers’ to more accurately target advertisements and content to us.
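To make that mechanism concrete, here is a minimal, purely hypothetical sketch of how a ‘derived qualifier’ might be computed: a handful of directly observable engagement metrics, combined with invented weights into a single inferred score. None of these feature names, weights or thresholds come from Facebook; they are illustrative assumptions only.

```python
# Hypothetical sketch: combining directly observable engagement metrics
# into an inferred "derived qualifier" such as susceptibility to addiction.
# Feature names and weights are invented for illustration only.

def derived_qualifier(user_metrics: dict[str, float]) -> float:
    """Combine observed engagement metrics into a single 0..1 risk score."""
    weights = {
        "late_night_sessions_per_week": 0.04,
        "avg_session_minutes": 0.002,
        "notifications_opened_ratio": 0.3,
        "days_active_per_month": 0.01,
    }
    score = sum(weights[k] * user_metrics.get(k, 0.0) for k in weights)
    return min(1.0, score)  # clamp to the range [0, 1]

profile = {
    "late_night_sessions_per_week": 9,
    "avg_session_minutes": 75,
    "notifications_opened_ratio": 0.8,
    "days_active_per_month": 28,
}
print(derived_qualifier(profile))  # -> 1.0: flagged as highly susceptible
```

The point of the sketch is not the arithmetic but the asymmetry: the inputs are mundane behavioural traces, while the output is a sensitive judgement about the person.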

Facebook can already do this without access to our physical bodies, so at this point I would like to raise a question: to what extent does having physical access to the body of the consumer matter in making these kinds of inferences possible? Have the quantitative methods used by Facebook and Google not already made this ‘access to the body’ unnecessary and obsolete? What can we get from the body that we cannot get from explicit habits and behaviors that can be observed by means other than electrophysiology?

There are many more points to discuss of course, but I hope this can be a productive introduction and that we can take it from here.

Looking forward to hearing from you.

Salud,

Luis


What can we get from the body that we cannot get from explicit habits and behaviors that can be observed by means other than electrophysiology?
Luis Rodil-Fernández


Carmen van Vilsteren - Tuesday 26-09-2017 20:40

Dear Luis,

Thanks for opening the conversation. To me the format is also very new. I am not a writer at all. I have a background as an engineer and I have been working in the health domain for most of my life, serving several larger and smaller companies. I was development manager for cardiovascular imaging systems at Philips in the nineties, and to this day, every second, somewhere a patient is treated with a system we introduced in those years.

At the moment I combine my position as director of the strategic area Health at Eindhoven University of Technology (TU/e) with that of CEO at Microsure, a startup in robotics for microsurgery. At TU/e we are working on several new technologies. In one of these, regenerative medicine, we try to entice the body to heal itself. At Microsure our ambition is to give surgeons superhuman precision.

Your letter reminds me of a project I saw some two years ago. It was called Probes and was set up by Hans Robertus. You may know him. The results were presented during the Dutch Design Week. A diverse group of students got the assignment to think about solutions for a society where people would live to be 150 years old.

One group came up with the idea of an implantable chip that records all your life and health events, to be used for instance for preventive and non-preventive treatment. So not just ‘We Know How You Feel’, but also ‘We know how you will feel in the future’.

To substantiate this idea they set up an interesting experiment. They hired an office in Strijp S (hotspot of the DDW) and bought a lifelike baby doll and some blank cards. They then offered the baby doll to people in the street, telling them that this was their so-called newborn baby and that they had to register their son or daughter at the municipal office next door.

Most people agreed to participate in the experiment and came up with a name for their ‘child’. At the office they were told about the possibility of implanting this chip. The new parents had to decide on the spot if they wanted this to be done, since it would only work if implanted during the first day.

The students expected all sorts of discussions and questions about data protection, privacy and the safety of the technology. But something they did not expect happened: all ‘parents’ opened the discussion about ethics instead. Do I want to do this to my child? Would you have had the chip implanted?

Regards,

Carmen


Luis Rodil-Fernández - Friday 29-09-2017 13:31

Hello Carmen,

To answer your question: before making that decision I would need to know a bit more about that hypothetical chip. What it does precisely, where it resides in the body, what the effects on the child are and who owns the implant. Is the chip a networked device or not? Does it perform any kind of data collection or is that data never stored? What earlier tests were done with the implant in humans? Who makes the chip? Is it a proprietary design or is it open source?

Of course I would have some serious concerns before happily implanting a technological artefact in the body of my newborn for the rest of their life. But I wouldn’t be opposed as a matter of principle. I do think that technology has a role in improving people’s lives. My reaction wouldn’t be technophobic but cautious.

The questions that you thought people would ask regarding privacy, data protection and the safety of the technology are also ethical questions by the way. To me the question ‘would I implant this in my child?’ is not the only one about the ethical implications of the proposed scenario.

There are many examples of poor data security or poor privacy protection potentially resulting in the weaponisation of a technology that at first seemed innocuous. A technological device never comes to this world in isolation. It always brings with it a part of the future. A future that becomes our present the moment we let that technology enter our lives. We can’t possibly predict how it will evolve.

A chip implanted in my child today can be the root of discrimination against my child twenty years from now, or the target for an attack by a hostile actor the day after tomorrow. It’s important to understand that these scenarios are not merely hypothetical. If the technology exists and the stakes are high, the technology will be weaponised.

I invite you to reflect on the recent revelations in the press about the role both Facebook and Twitter played in Russian meddling in the American election. To make money from advertising both companies offer sophisticated tools for targeting slices of their market. These tools enable such detailed targeting that advertisements can even be aimed at a single individual. It turns out that 100,000 dollars’ worth of strategically placed posts were shown to American voters in the run-up to the election.

In a press statement last week Mark Zuckerberg was indignant about the role Facebook had played and he admitted to not having done enough to prevent these forces from intervening in the democratic process. There was no need for the influencers to break into Facebook systems or to employ anything that is traditionally thought of as a ‘hacker breach’. The (supposedly) Russian actors that wanted to buy influence did so by using tools that Facebook offers to legitimate advertisers.

What these actors did was to use these tools for a different purpose than Facebook had intended. All technologies, bar none, will be deployed for unintended use as the social context around them evolves. As William Gibson once wrote in his book ‘Burning Chrome’: the street finds its own uses for things. A technological artefact developed with the best intentions can and will very likely find unintended applications.

Going back to the ethical question you asked, I’d like to continue in that vein and to pose a few questions in return: given your ample experience in bringing technological products to the market I assume you have worked with a broad range of engineering and design professionals. How are these ethical questions dealt with in your professional environment? Is awareness of these issues common? What is the role of these ethical questions in the product development cycle? Have you seen any changes in these perceptions in your long years of work?

Salud,
Luis


“A chip implanted in my child today can be the root of discrimination against my child twenty years from now, or the target for an attack by a hostile actor the day after tomorrow. It’s important to understand that these scenarios are not merely hypothetical. If the technology exists and the stakes are high, the technology will be weaponised.”
Luis Rodil-Fernández


Carmen van Vilsteren - Tuesday 6-10-2017 15:34

Dear Luis,

You want to know my opinion on the ads that were placed on Facebook and Twitter last year with the intention to influence the American presidential election. To be honest, I think you already made the perfect analysis yourself. People will indeed always find unintended ways of using technologies and other means. And then there’s the immediacy of posting on Facebook and Twitter. Things will come online without delay and thus with very limited room for intervention or correction.

Perhaps that stems from the way things are generally done in the media: very limited checks before publication, but an evaluation – and taking lessons from that – afterwards. I found out about this practice when I visited the local newspaper one day, to learn how they managed to make a new product – the newspaper – every day, while it took us multiple years to develop a new x-ray machine. This approach is called ‘benchmarking best practices’.

The newspaper editors told us they worked by a set of simple rules, which everyone knew. For example: no negative publicity on the royal family. And they did not check any stories upfront for lack of time, but would discuss them the next morning instead. This is in line with Mark Zuckerberg’s quote: ‘We don’t check what people say before they say it, and frankly, I don’t think our society should want us to.’ In the case of the meddling ads the damage was done long before any evaluation took place, so it was irreversible.

Your second question, about the role ethics play in the development of new medical technology, had me thinking a bit longer. To be honest I can’t recall any deep discussions on the topic during my years in the development of new cardiovascular x-ray systems. Improving these systems usually means improving the treatment for the patient with better images, reduced doses of x-rays, etcetera.

Patient safety is of the greatest importance during the development cycle of these imaging systems, so hazard analysis and extensive testing are always part of the process. Part of that testing is to determine how to best move around a patient and how to protect them against potential collisions during the process.

During my first project I started these tests with myself as the patient on the table. Initially some of my colleagues thought that was a bad and unsafe plan. My answer was this: if we don’t even dare to lie on that table ourselves we can’t ask a patient to do so. So it became common practice for developers to voluntarily play the patient during some of the collision tests.

Now, at Eindhoven University of Technology, I am confronted with many more ethical questions, for instance about the development of implants like pacemakers and brain implants. People depend on these technologies and their quality of life can be at stake. One of our four faculties that are involved in these projects has its own ethics department.

During the development of new devices and apps healthy people and patients are also ‘used’ as test subjects. There is an increasing number of regulations governing these practices in the Netherlands. Every experiment has to comply with these rules and regulations, and test subjects have to sign an agreement before joining in. All of this is also overseen by an ethics commission from the university.

Kind regards,

Carmen

Dear CEO – Tijmen Schep (Gr1p) and Sandor Gaastra (MinEZ)

“In a world where the pressure to be perfect is mounting I call privacy the right to be imperfect.” [TS]


Speaking in this conversation:

Sandor Gaastra, Director General for the Energy, Telecom and Competition department at the Ministry of Economic Affairs.

Tijmen Schep writes for the Gr1p Foundation and is a technology critic and privacy designer who helps citizens, designers and policymakers understand new opportunities and challenges in our data-driven society.


Sandor Gaastra - Letter 1

Dear Tijmen,

Since we are going to discuss privacy anyway, I might as well tell you something about myself. I am the director general of Energy, Telecom and Competition at the ministry of economic affairs. I am a civil servant: I don’t take political decisions. I prepare them, execute them and make sure they’re monitored.

Eh… just replace ‘I’ by ‘my people’. And, on second thought, leave the ‘my’ out, because I really don’t own everyone who works on for example accessible and affordable networks for electronic communication, which contribute to innovation and economic growth.

Before I get obscured by policy lingo: I was a manager at the police force once and I still am a father, a cyclist, a reader and a holiday maker, amongst other things. I find technology, policy and administration fun and important, but people even more so. And – coincidence or not – privacy finds itself where technology, information and people meet.

Once privacy was rather uncomplicated: you couldn’t spy on your neighbours, publish confidential photos or open letters that weren’t meant for you. This is neatly laid down in law, in terms like ‘protection of the personal sphere’ and ‘confidentiality of the mail’.

But times change and all of a sudden digitalisation has provided us with a lot of work. The basic rights remained untouched, but they had to be extended to the digital domain. Confidentiality of the mail for example was widened last year to also include digital communication.

Half of the Dutch people ‘pay’ for digital services by sharing private information. Social media harvest these data and ‘at the back door’ sell them on to the highest bidder. Who that is? I haven’t a clue. It happens in a split second. But in the data systems of both the platforms and the buyers ever more detailed personal profiles grow.

Everything you buy, watch, like, shout out and take photos of is in it. Where you are, where and with whom you spend the night, where you’re heading. If you’re sick (‘buys lozenges and paracetamol’) or happy (‘orders two white beers on terrace’) or in high spirits (‘listens to St Matthew Passion’).

Possible consequences: you pay too much for concert tickets, you’re not offered cheaper health insurance, or you’re blackmailed or bullied with photos on Facebook. A tough problem, because it goes – almost – unnoticed. Until things go wrong.

Every time personal data leak on a massive scale, there is a call for law making, a privacy police or harsher punishment. It’s understandable that citizens turn to their government, and of course the basic right to privacy can not be eroded. But we shouldn’t create unnecessary thresholds for the free flow of data or for corporate innovation.

That’s why we create a legal foundation to enforce that everyone handles personal data with care, and then we look at what we can achieve with less stringent means, like information, a change of behaviour and incentives for market players to be transparent about how they handle our data. Will that be enough? What do you think?

Kind regards,

Sandor

PS: Privacy is personal. If I meet you when I’m out cycling I will kindly say hello, but I am bothered by a photo of me in my cycling gear on a public website. Has someone ever crossed your privacy boundary?


Every time personal data leak on a massive scale, there is a call for law making, a privacy police or harsher punishment.
Sandor Gaastra


Tijmen Schep - Letter 1

Hey Sandor,

Great letter!

Before I get into detail: it’s cool that you want to correspond with me. When I was invited romantic images invaded my mind. I was immediately reminded of the written exchanges people like Darwin and other thinkers used to have. People who needed to hire a pricey portrait painter if they wanted to take a selfie. All of a sudden I realised that our museums are in fact full of selfies.

This week, at a conference on privacy friendly smart devices, I heard a nice anecdote about Socrates. He was of the opinion that you could only think in pairs, that thinking was always an exchange. To him, reading a book was not thinking. The dear chap therefore never put pen to paper, because he thought of the written word as a new technology that could not be trusted. For if we were no longer required to remember everything, our brains would turn lazy.

Distrust of new technology is timeless. As is unbridled optimism, of course. The challenge is to make the two meet somewhere in the moderate middle. The polder model or third way of technology policy, maybe that’s what the two of us will end up with here.

I like how you kick off your letter. Funny: because we have privacy, breaking it can create a bond. I also love cycling very much and I really like reading as well (sorry Socrates). I’m part of a walking club and I cannot possibly resist soggy cakes.

I call myself a technology critic and privacy designer – some kind of digital mythbuster actually. I am one of the founders of the SETUP media lab, a cultural organisation that employs humour to provide insight into data issues to a wider audience. My question in life is: how can I help people to look at matters of technology in a thoughtful way?

Because I studied the data industry for such a long time I am so very aware of the ways in which a beautiful thing like this correspondence – two people calmly taking the time to think together – will be guzzled up by algorithms at the same time. The more highbrow words I apply faultlessly, the higher some specialised algorithms will rate my IQ. Do I use the word ‘I’ too often? Then my team player score will possibly go down.
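A minimal, entirely hypothetical sketch of the kind of surface-level text scoring Tijmen describes here: the word list, weights and score names below are invented, and no real HR or profiling product is being quoted.

```python
# Hypothetical sketch of text-based profiling: counting surface features
# of a letter and mapping them to crude scores. The word list, weights and
# score names are invented for illustration only.

import re

HIGHBROW = {"idiosyncrasy", "epistemology", "holistic", "paradigm"}

def profile_text(text: str) -> dict[str, float]:
    words = re.findall(r"[a-z']+", text.lower())
    total = max(len(words), 1)
    highbrow_ratio = sum(w in HIGHBROW for w in words) / total
    i_ratio = words.count("i") / total
    return {
        "estimated_iq_score": 100 + 400 * highbrow_ratio,  # more rare words -> higher guess
        "team_player_score": max(0.0, 1.0 - 5 * i_ratio),  # many "I"s -> lower guess
    }

print(profile_text("I think a holistic paradigm suits me; I like cycling."))
```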

I don’t know for sure, and that – as you said yourself – is the thing: algorithms that compare millions of people can see patterns we can’t conceive of. Say 10,000 people who installed a ‘free’ diabetes app turn out to also have liked Snoop Dogg and clay modelling on Facebook relatively often. When I then like Snoop Dogg and clay modelling on Facebook, the conclusion will be: risk of diabetes + 3%.

That guess is then sold as knowledge and that’s how these ‘free’ services are in the end paid for by for instance your health insurance company (and thereby ultimately by you). Can you anticipate something like that when you click that ‘I Agree’ button on Facebook? I don’t think the average citizen can oversee the consequences. As far as I’m concerned it’s time to roll out bigger guns.
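For readers who want to see the shape of that inference, here is a toy sketch of the correlational ‘lookalike’ reasoning described above. All base rates, likes and uplift factors are invented for illustration; no real advertising or insurance model is being reproduced.

```python
# Hypothetical sketch of "lookalike" inference: if people who installed a
# diabetes app disproportionately share certain likes, anyone else with
# those likes gets a higher inferred risk. All numbers are invented.

BASE_RATE = 0.05          # assumed population rate of the condition
APP_USER_LIKES = {        # share of app installers with each like
    "snoop dogg": 0.62,
    "clay modelling": 0.41,
}
POPULATION_LIKES = {      # share of the general population with each like
    "snoop dogg": 0.30,
    "clay modelling": 0.20,
}

def inferred_risk(user_likes: set[str]) -> float:
    """Naive multiplicative uplift: each shared like scales the base rate."""
    risk = BASE_RATE
    for like in user_likes:
        if like in APP_USER_LIKES:
            risk *= APP_USER_LIKES[like] / POPULATION_LIKES[like]
    return risk

print(round(inferred_risk({"snoop dogg", "clay modelling"}), 2))  # -> 0.21, marked up from 0.05
```

The guess never has to be right for an individual; it only has to be profitable on average, which is exactly why it gets sold on as if it were knowledge.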

I imagine how in 2023 I might apply for a job at Philips because I want to create privacy friendly thermostats there. But an algorithmic HR firm stands on guard and advises against me. According to the algorithm another applicant from an area with a better post code showed an emotionally slightly more stable use of language, and his collection of Facebook likes was more in line with the corporate culture. Maybe in 2023 I’ll start a company that deals in transplants of Facebook likes.

To answer your question: in fact I feel my privacy boundary is being overstepped constantly. Does that sound weird? I think I may know just a little bit too much about this market, about this technology, and especially about the all too human desire to contain risks that drives all of this. Do you ever experience similar pressure? Or do you see it in people around you? And how would you answer your own question?

Bye!

Tijmen


My question in life is: how can I help people to look at matters of technology in a thoughtful way?
Tijmen Schep


Sandor Gaastra - Letter 2

Hello Tijmen,

Just my luck: having to communicate about complex concepts like privacy with a soggy cake junkie… But I will cast aside my resistance because you raise an important topic with that ‘invasion of privacy’. Exchanging personal information, preferably about slightly ridiculous titbits, forges a bond. And gossip is a universal human need, essential for the creation of friendship and animosity and maintaining the social order in communities. Something like monkeys grooming, behavioural scientists say.

Back in 1875 even the people at Bell Labs believed the phone would never be found in every house (let alone in everyone’s coat pocket): what would people have to tell each other? A lot, we now know. And most of it is social talk: gossip. Same thing with the internet. It started as a communications network for scientists, some user friendly applications were developed and there you are: just about every adolescent produces a constant stream of pics, vlogs and tweets. So that people will talk about them or they can gossip themselves. And because gossip feels awkward in writing, we invented the emoji that everyone sprinkles over their electronic communication nowadays.

All this online self-exposure is fun, friendly and hardly ever earth-shattering. But above all it is an ineradicable human desire. And that’s why it can turn perilous or painful: cyber bullying, private photos and videos that go worldwide, sexting, extortion, you name it. These are tough problems. We as a government have learned two hard lessons: we cannot force a change of behaviour. People will still be sloppy with passwords, their posts and whom they share their data with.

And malevolent digital actors (from adolescent bullies to genuine criminals) are notoriously hard to track down and prosecute, if only because they can operate anonymously or are far, far away. That’s why we not only try to tackle the perpetrators, but also want to empower potential victims. We do that with campaigns aimed at awareness and education. Commercials on tv, but also school materials, websites, ad campaigns, etcetera. It helps, but with this approach we only cover part of the problem – or rather: the assembly of problems we encapsulate as ‘digital privacy’. It also stretches our stamina as a civil service considerably.

I think one of the reasons why the problem is so tough is that people don’t really get the ‘mechanics’ and ‘dynamics’ of all these digital novelties. The speed with which you give away things without actually losing anything yourself. The ease with which you can copy things free of any charge and then share them with half the world, not knowing who’s out there. Can a better understanding of these mechanisms possibly make people more sensible in their dealings with data and digital services? You sparked my curiosity by telling me about SETUP and the employment of humour to provide insight into data issues to a wider audience. Could that be of any help?

Much of our energy as a government now goes to introducing the European Commission’s General Data Protection Regulation, which will come into force on May 25, 2018. With that GDPR the present EU framework, dating from the previous century, is modernised and made more suitable for the digital domain.

Companies that process data (just about any company really) must have a transparent privacy statement, make clear what they use data for, and may not demand more data than needed for their services. Companies are held accountable for their handling of data (and therefore will have to keep data on that) and everyone must be able to easily transfer their own data to another service provider (data portability).

Government (the Personal Data Authority in particular) is dealt a more prominent supervising task. All in all it’s a major operation, as we say in The Hague. Does it solve all the problems? Probably not, but we have laid down a firm guideline for the use of personal data, including selling them on. The underlying problem of course is that many people give permission for selling data, by accepting terms of use without giving them any real attention.

In short: making laws is important, but more is needed. Awareness in citizens, ethical acting by companies, market incentives for privacy friendly solutions. We’re in a transitional phase. Companies should see the importance of acting privacy friendly as a unique selling point. So, a matter of stick and carrot.

On your job application in 2023: an American company already assesses requests for loans with artificial intelligence and machine learning. They process 70,000 markers per request (yes, you read that right), including the date on which you opened your most recent bank account and your use of language in Facebook updates. In short: it’s there and it’s alive. This issue has far wider implications than privacy and poses new questions for governments.

In the future algorithms will decide on your suitability for a job or your eligibility for a mortgage, but also on the treatment of your disease. These decisions are better (theoretically, that is), but it feels awkward, and uncertainty and suspicion lurk. In practice though unwanted things won’t happen easily. For example, in the Netherlands data may not be used to exclude people from insurance, and this is not a matter for discussion either.

As to having my privacy boundary crossed, I had to think for a bit. I felt like writing: I live such an obedient life and I do so little on social media that it barely bothers me. But you mean something different, something that sounds like an invasion of your autonomy and your right to self-determination. Or am I wrong?

In ‘Privacy for homo digitalis’ the lawyers Moerel and Prins say that privacy is ‘an antechamber for other fundamental rights and freedoms of the individual, which together are in turn instrumental for the correct functioning of our democratic constitutional state.’ Do you mean something like that? However, I will get back to that.

It does surprise me though that you say your privacy boundary is overstepped all the time. I read everywhere that the young are more conscious of privacy but also have a more relaxed attitude towards it. Are you an exception or are the reports wrong?

I’d love to learn that from you soon!

Sandor


In the future algorithms will decide on your suitability for a job or your eligibility for a mortgage, but also on the treatment of your disease.
Sandor Gaastra


Tijmen Schep - Letter 2

Hey Sandor,

I totally agree with you: we are linguistic, social creatures and we comprehend the world by telling tales about it. What better than a juicy story? Whenever adventures in love were once again shamelessly blown out of proportion in my circle of friends, someone would always shout ‘best story counts!’.

I think the young are in a difficult position. They are at the forefront of technology use, but at the same time have little else than ‘streetsmarts’ (as Danah Boyd explains) to cope with it. They don’t fathom the real issues surrounding data collection and the loss of liberties. But they all feel the pressure to belong.

I think we do indeed need new words to grasp the dark side of all this chatter. It was mainly under this rising social pressure to belong that terms like ‘normcore’ and ‘basic bitch’ were invented in school yards. Both describe the trend to dress as normally and inconspicuously as possible.

In one of his last public lectures philosopher Zygmunt Bauman said: ‘Fear of exclusion is the dominant fear of our time. We are not rebelling against the overbearing state, we are rebelling today against being ignored, against being neglected, against being unseen’. Philosophers like Bauman, but also Foucault and Deleuze, described how fear of not belonging, of not being normal, is one of the most powerful human incentives.

Its immense power was beautifully exposed by some VPRO film makers, who wanted to stream their entire lives online for three weeks. The experiment was prematurely aborted: the psychological pressure of ‘having to be your best self all the time’ became too much.

At the same time China attempts to put that force to work deliberately. You probably heard of the ‘social credit’ system, which will give all Chinese citizens an ‘obedience score’ from 2020 on. If your score is low because you think or consume in too much of a Western way, you can soon wave goodbye to a government job, a loan or a visa.

I see the same effects emerging in the West, but in our case they are a side effect of market forces. We don’t see through that, blinded as we are by the ‘fairytales’ from Silicon Valley. I foresee that it could have an equally powerful chilling effect on society here. I invented a term for it: social cooling – the data version of global warming.

You might recognise this: you’re on Facebook and you run into a link, but you’re in two minds about clicking it, because you think: ‘That visit will likely be recorded and it might not reflect well on me later’. When I speak at conferences some two thirds of the audiences already recognise this example. Research points in the direction of emerging self-censorship.

Wikipedia entries on terrorism for example were visited less often after Edward Snowden’s revelations. People feared their visits would be recorded by the NSA. Last month saw Donald Trump demanding the release of data on people who had protested against his policies. Would you still feel comfortable to demonstrate then?

Living in a reputation economy does not only have implications for our democratic processes, it also seriously impacts our ability to innovate. It not only fuels self-censorship but also a culture of risk avoidance. An example: when surgeons in New York were given scores for their work, doctors who took the risk of performing complicated operations were rated lower – because more people died under their knives.

Doctors who didn’t do a thing had high scores, even though none of their patients lived any longer. The surgeons who dared to take risks felt the pressure to perform in an ‘average’ way. It’s not hard to guess the effect of such systems on entrepreneurship and innovation. That is the paradox to me: in the long term the creative industry reduces our creative powers.

That brings me back to your analysis that I consider privacy and autonomy to be the same thing. That’s right. To me the two are fundamentally connected. In a social world privacy is the right to shirk social pressure and conformism, to shape your own ideas and to escape anything mainstream or populist.

In a world where the pressure to be perfect is mounting I call privacy the right to be imperfect. Which in effect makes it the right to be human. Privacy provides the space to think different, an essential condition if we truly want to innovate and not merely copy Silicon Valley’s ‘the more data, the better’ model.

New terms like social cooling will hopefully contribute to spreading this insight. But new laws are also immensely helpful. In that aspect the GDPR is a relief, because it opens the door to finding ethical business models.

Finally, I think we need good examples to make these problems palpable and insightful. SETUP does indeed create humorous examples for a wide audience. During last year’s Dutch Design Week it presented a coffee maker that would serve good or bad coffee, depending on your post code. The lower the ‘status score’ of your neighbourhood, the more watery the coffee. It made the ever increasing influence of data on our lives concrete.

All in all I think I am not an exception, but just someone who is slightly ahead of the pack, because I get the workings and the influence of the data industry a bit better than the average Dutch person. Thankfully critical questioning is on the rise in the general population. Take for example the call for a referendum on the so-called dragnet law.

I think we will be able to recognise the dark sides of data earlier, and I hope my work will contribute to preventing crisis-driven policies made only when there is no choice left. That’s why I want to end with this question: what do you think is needed to distinguish the baby from the bath water at an earlier stage?

Waves

Soggy cake junkie

PS: I added that American company with its 70,000 markers to creepycompanies.com, a website I launched last weekend to – once again – provide examples of the dark side of data to a wider audience. Thanks for the tip 🙂


Living in a reputation economy does not only have implications for our democratic processes, it also seriously impacts our ability to innovate.
Tijmen Schep


Sandor Gaastra - Letter 3

Hello Tijmen,

Thanks for your letter. Honestly, I think you explain truly brilliantly why privacy matters. Not just because of the trouble it may get you into, but also because it is necessary to be yourself, to prevent you from being defined only by what the outside world thinks of you, preconditioned or not. It made me think of Dave Eggers’ ‘The Circle’. To me the most ominous aspect of that book is big brother sneaking in, veiled in good intentions, warm feelings and noble ideas. But the result is an oppressive world, in which conformism suppresses authenticity.

I felt myself longing for the cluttered interior – with its reindeer-antler light – of protagonist Mae Holland’s parents, even though I really do prefer clean and austere design. Eggers pictures a world in which social cooling is on the verge of turning epidemic, I see now. I will take it as a sign of my sound mental health that I reacted so strongly to it. Maybe we should discuss this further over a coffee (after I entered the post code for the Noordeinde, of course ;-))

Your statement on the creative industry also made me think: Economic Affairs is the ministry that stimulates economic growth and innovation, but at the same time protects consumers and their privacy (and many more things, from agriculture to food safety). Over the last few years we made a government wide effort to make the Netherlands and the Dutch people more innovative and enterprising.

And we appear to be successful, given the flourishing creative industry, the expanding startup culture and the innovation in fields like financial technology and agriculture (we rule the world in precision farming, my colleague in ‘agro’ told me). And now you tell me digitalisation and data driven innovation achieve the exact opposite, and put a brake on the ‘true capacity to innovate’! That would be a bad thing, not just for the economy but also for people themselves. On top of that I think it would lead to resistance in society.

By chance I recently read a newspaper article on normcore. Two almost identically dressed girls were stunned when they were told they dress in such a conformist way. They both sported black and white trainers, but one girl’s shoes had round toes and that was something the other girl would never wear. Young people conform, but at the same time still see themselves as utterly unique and authentic.

Food for thought: while China uses social credits to create obedient citizens, we employ similar mechanisms (including self-regulation) to entice providers that let illegal content like child porn and hate speech pass too often to do better. It’s much more efficient than more laws and regulations, which require a giant effort for enforcement, investigation and prosecution. It only works if the providers cooperate. Fortunately they do.

And you keep the people alert with creepycompanies.com. Good! By the way, some people feel that such an AI and big data driven credit checker is in fact a harbinger of a glorious future, in which loans are cheaper (because checks cost less and the market expands) and therefore more readily available for everyone. I can see their point, but it remains creepy that third parties collect and combine data which could be used in my favour or against me in such a way.

The brand-new coalition agreement provides a good indication of the direction the government is heading with its privacy policy. Citizens for example must remain able to communicate with the government by normal mail. The government safeguards the confidentiality of the data it has on citizens: data in general registers and other privacy sensitive information is always encrypted, and DigiD is made safer.

Citizens get more control over their own personal data. They can point out socially relevant offices and organisations which automatically receive a limited amount of personal data if required. In short, the government is looking for the balance between privacy and ease of use when it comes to the data their citizens entrusted them with.

Another consideration in the coalition agreement is even more sensitive. It’s less about the digital domain, but more about the fight against terrorism in general. In case repressive actions need to be taken ‘a critical assessment must be made every time of how far privacy and other freedoms are curtailed’, the agreement states.

That will be the balancing act in the years to come, I think. Lots of cooperation with the corporate world – not just the ‘big boys and girls’ but also innovative start-ups with brilliant ideas on privacy-proofing my phone, my smart thermostat or that discriminating coffee maker. By doing so we will be taught that lesson about the baby and the bath water. And we will certainly employ tougher means sometimes: laying down responsibilities, law making, enforcement and biting sanctions. But I mentioned that before.

I feel like writing another letter’s worth about the new European e-privacy regulation, which, together with the GDPR (Algemene Verordening Gegevensbescherming or AVG in Dutch) you so lauded, will form the policy frame for privacy and electronic communication. I won’t though. Beware of overloading your fellow humans with policy frames, because they’re even harder to digest than soggy cakes. And we’ll probably speak in person soon in Eindhoven, in de Effenaar. Won’t we?

Looking forward to it! Regards,

Sandor

PS: Speaking of ethical business models and creative start-ups: do you see opportunities to take privacy in the Dutch digital domain to the next level, using technological innovation? Or is that ‘technology fix thinking’?


Tijmen Schep - Letter 3

Hey Sandor,

Thanks man! I find it equally fascinating to learn more about how government works. As Bauman wrote about my generation: I too have a lot of faith in government. I haven’t the slightest doubt about your good intentions. So bring on these policy frames!

I think we’ve come full circle by looking at that ‘consent’ question. Our society is based on the concept that we are capable of overseeing the situation, and then making a level-headed consideration. What makes government so special in theory is that it allows its people to think about long-term questions and to involve as many stakeholders as possible. But to be honest, I see many signals that governments are not entirely capable of doing so when it comes to technology.

A writer who has fascinated me lately is C.P. Snow. He published a famous article on the ‘two cultures’. We cannot properly oversee most problems in society, he says, because the exact sciences and the humanities have grown so far apart. That is where my question about the baby and the bath water came from: when I was a student it struck me that the humanities had already understood the problems with technology, but their insights didn’t reach far. The ‘TEDx McOptimism’ goes down like cake, while the humanities’ view – it’s complicated and there are no easy solutions – is more like eating carrots and hummus. Nice as well, but not as tasty as soggy cake.

Policy makers often think they need to learn more about the workings of technology to be able to grasp it. That is an option, but there is an alternative route. We can also strive to better understand human desires and dreams, and to see how new technologies always respond to these desires. The smart part of this route: technology changes fast, but these desires have been steady as a rock for ages. Professor Rein de Wilde for example described the vision of ‘the land of milk and honey’ (see the internet of things), and Imar de Vries spoke of the dream of ‘angelic communication’: the desire for perfect mutual understanding and thus the prevention of misunderstandings.

Ethnographer Grant McCracken describes how we always safeguard our hope for a better world somewhere, far away from the messy reality of the here and now. We do that mainly in the future – things will all get better – and he calls that ‘the horizon of expectation’. Long story very short: in the past God (Middle Ages), politics (Renaissance) and (until 2008) ‘the invisible hand of the economy’ offered us a place where we dared to park our hope. Nowadays it appears to be predominantly technology that offers us this haven.

The difficult thing in my work is that I touch on something very profound in people: their hopes for a better world. It’s hard to criticise technology because at heart people don’t want the pedestal to be rocked – we all want to keep our faith in technology. For instance, I see how it is kept out of the messy human world by presenting it as ‘neutral’ (algorithms) and as some sort of inevitable force of nature, which ‘impacts society’ from the outside.

I don’t mean to say that good intentions will always fail or that hope is irrational, far from it. My point is that good intentions must go hand in hand with down-to-earth long-term thinking. We have to go from ‘best story counts’ to ‘most holistic story counts’. Maybe we can call it ‘sustainable optimism’. The good thing is, it’s not just healthier, but also more powerful. I can’t predict the future, but thanks to my baggage from the humanities I can tell you which predictions of the future are mostly thinly veiled expressions of an ideology.

Take the blockchain for example. It’s one of these technologies that make all my alarm bells ring. You can hear the engineers think: ‘Oh shit, the internet turned out to be a surveillance machine. But version 2.0, the blockchain, will set things straight. It will be incorruptible’. But because there is still too little critical awareness they create a technology with even more authoritarian potential (this is explained in depth on techologiebeleid.nl, a site I launched to provide policy makers with access to the best insights from the humanities).

And so I get to your question. Could we create a market for products that respect our human dignity? We sure can. But to do so we will first have to bridge the gap between the ‘two cultures’, and to involve researchers from the humanities: ethicists, ethnographers, sociologists. Only then will the startup scene finally start thinking about mankind in an adult manner, and hopefully stop spreading simplistic stories, which are in fact mainly made up to part investors from their money, and sometimes take on an almost religious character (singularity).

I can already see the first seeds sprouting, and also big boys like Apple – always well ahead in gauging our desires – regard privacy as a feature now. I very much hope the ministry of economic affairs will stimulate this market. I certainly see opportunities and I think much can be learned from the way organic food was turned into a flourishing trade (your colleague in ‘agro’ knows more about this).

I have no doubt this market will materialise. Privacy (read: autonomy) is such a fundamental human desire. In the next ten years people will start to see how strongly data influence their chances. That data-based credit checker will also turn down people because of their data, but that’s a part of the story they’d rather not tell. The Dutch will be Dutch though: only when we begin to feel a negative financial impact from data will we switch to that smart thermostat, city, door bell, messenger or browser whereby ‘smart’ also stands for ethical.

That leaves me with one question: will the Netherlands be a frontrunner in this?

Let’s discuss it over this coffee in Eindhoven. I hope there will be cake 🙂
Tijmen

Dear CEO – Harrie van de Vlag and Paulien van Slingerland (TNO) and Linnet Taylor (Gr1p)

“The one fundamental rule about new technologies is that they are subject to function creep: they will be used for other purposes than their originators intended or even imagined.” [LT]


Writers of this conversation are:

Linnet Taylor writes for the Gr1p Foundation and is a researcher. Her research focuses on the use of new types of digital data in research and policymaking around issues of development, urban planning and mobility. Her pen pals are Harrie van de Vlag and Paulien van Slingerland, both Data Science Consultants at TNO.


Harrie van de Vlag & Paulien van Slingerland - Thursday 28-09-2017 17:40

Dear Linnet,

We are writing you to discuss a new trend in data science: “affective computing”.

Emotions and relationships have long been important in our economy. People do not buy a ticket for a concert, but for an unforgettable evening with friends. People are not looking for a new job, but for a place in an organisation with a mission that suits their world views and principles.

A stronger emotional connection has a higher value, both for companies and consumers. This is why at TNO we are researching how affective states (emotions) can be interpreted, using wearables that record properties like heart rate, brain activity (EEG), skin conductivity (sweat), etcetera.
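To give a feel for the kind of mapping from physiology to emotion this research involves, here is a deliberately crude, hypothetical sketch. It is not TNO’s model; the features, normalisation and labels are invented for illustration only.

```python
# Hypothetical sketch of affective-state estimation from wearable signals.
# Thresholds, features and labels are invented purely to illustrate the
# general idea of mapping physiology to a coarse emotion label.

from statistics import mean

def estimate_affect(heart_rate_bpm: list[float],
                    skin_conductance_us: list[float]) -> str:
    """Map average heart rate and skin conductance to a coarse arousal label."""
    hr = mean(heart_rate_bpm)
    sc = mean(skin_conductance_us)
    arousal = (hr - 60) / 40 + (sc - 2) / 8  # crude normalisation of both signals
    if arousal > 1.0:
        return "high arousal (excited or stressed)"
    if arousal > 0.4:
        return "moderate arousal (engaged)"
    return "low arousal (calm or bored)"

print(estimate_affect([72, 75, 78], [3.1, 3.4, 3.3]))  # -> "moderate arousal (engaged)"
```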

The starting point for our research was a question by Arnon Grunberg, who was interested to learn how his readers felt while reading his books. For this purpose we have conducted an experiment in a controlled environment with 200 voluntary participants. To bring this technology out of the lab and into the field, TNO, Effenaar Smart Venue and software developer Eagle Science are working towards new prototypes of appliances based on emotion measurements.

The first one will be demonstrated during the Dutch Design Week 2017 (October 21-29). Together with Studio Nick Verstand, we will present the audiovisual artwork AURA, an installation that displays emotions as organic pulsating light compositions, varying in form, colour and intensity.

Eventually this technology can be used, for instance, to develop new forms of market research, enabling companies to measure the emotional experience of voluntary consumers without disturbing their experience. This reveals which parts of the customer journey are perceived as positive and which as annoying. Acting on these insights allows companies to provide a better experience, for instance during shopping, while visiting a festival, or when following a training course in virtual reality.

At TNO, we are well aware that emotions are closely tied to the private sphere of individuals. The question arises whether consumers need to choose between their privacy on the one hand and the comfort of personalised services on the other. The upcoming new privacy legislation (GDPR) also highlights the importance of this dilemma. This is why TNO is also researching technologies to share data analyses without disclosing the underlying sensitive data itself, for instance because the data remains encrypted at all times. This way, from a technical point of view, the dilemma appears to be solved and there would no longer be a need to choose between privacy and convenience.
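One family of techniques that fits this description is secure multi-party computation. As an illustration only – a toy sketch, not TNO’s actual design – additive secret sharing lets several servers jointly compute an aggregate while none of them ever sees an individual’s value:

```python
# Hypothetical sketch: additive secret sharing. Each participant splits a
# sensitive score into random shares held by different servers; the servers
# can jointly compute the total without any of them seeing a single value.

import random

MODULUS = 2**31 - 1  # arithmetic is done modulo a large prime

def split_into_shares(secret: int, n_servers: int = 3) -> list[int]:
    """Split a value into n random shares that sum to the value (mod MODULUS)."""
    shares = [random.randrange(MODULUS) for _ in range(n_servers - 1)]
    shares.append((secret - sum(shares)) % MODULUS)
    return shares

# Three participants' private arousal scores (never sent anywhere in the clear).
private_scores = [17, 42, 5]
per_server = list(zip(*(split_into_shares(s) for s in private_scores)))

# Each server sums only the shares it holds; no server learns any individual score.
server_subtotals = [sum(shares) % MODULUS for shares in per_server]
total = sum(server_subtotals) % MODULUS
print(total)  # 64, the aggregate result, the only value that is revealed
```

Whether such a guarantee is enough to earn trust is, of course, exactly the question raised next.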

At the same time we expect that this can only be the case if people feel they can trust such a system, and that more is needed than just a technical solution. Therefore we are interested in your point of view. What else is needed to establish trust?

Best regards,

Paulien van Slingerland and Harrie van de Vlag
TNO
Innovators in Data Science


At TNO, we are well aware that emotions are closely tied to the private sphere of individuals. The question arises whether consumers need to choose between their privacy on the one hand and the comfort of personalised services on the other.
Harrie van de Vlag & Paulien van Slingerland


Linnet Taylor - Thursday 28-09-2017 23:07

Dear Paulien and Harrie,

I read with interest your explanation of your new project on measuring emotional experiences. It is exciting to be part of the birth of a new technology, and the wonder of innovation is clear in your AURA project which will translate sensed emotions into light. I think this will provide new opportunities to investigate processes of human emotion, especially for the ‘quantified self’ community already engaged in measuring and tracking their own experience of the world.

I question, however, whether tracking one’s changing emotional state as one experiences media, or anything in fact, is part of a ‘customer journey’. This is not just about sensing, but about investigating the border between software and wetware – technology that aims to connect to and enhance the human brain.

It is interesting to its corporate sponsors because it promises new forms of access not to ‘the customer’ but to people, in all our idiosyncrasy and physicality. Those forms of access are not necessarily more accurate than asking people what they think, but they will be more seamless and frictionless, blending into our lives and becoming something we are rather than something we do.

You ask whether consumers need to choose between their privacy on the one hand and the comfort of personalized services on the other. I think this question may distract attention from a more central one: can we separate our existence as consumers from our existence as citizens, partners, workers, parents? Our emotions are an essential bridge between ourselves and others, and what we show or hold back determines the kinds of relationships we can form, and who we can be in relation to our social world.

The language of choice may not be the right language here: your project uses only volunteers, but is it clear what they are volunteering? Your technology has a 70-per-cent accuracy, according to test subjects. But there is profound disagreement amongst brain specialists as to what we measure when we study emotions.

William James, one of the founders of psychology, argued that our experience of emotions actually results from their physical expression: we feel sad because we cry and we feel happy because we smile, not the other way around. If this is true, the sensors you are developing will have better access to the biological content of our emotions than we will, which has implications for – among other things – our freedom to form our own identities and to experience ourselves.

I am reminded of a project of Facebook’s that was recently discussed in the media. The company’s lab is attempting to produce a brain-computer speech-to-text interface, which could enable people to post on social media directly from the speech centre of their brains – whatever this means, since there is no scientific consensus that there is such a thing as a “speech centre”.

The company’s research director claims this cannot invade people’s privacy because it merely decodes words they have already decided to share by sending them to this posited speech centre. Interestingly, the firm will not confirm that people’s thoughts, once captured, will not be used to create advertising revenue.

You ask what is needed to establish trust in such a system. This is a good question, because if trust is needed the problem is not solved. This is one of a myriad initiatives where people are being asked to trust that commercial actors, if given power over them, will not exploit it for commercial purposes. Yet this is tech and media companies’ only function. If their brief was to nurture our autonomy and personhood, they would be parents, priests or primary school teachers.

The one fundamental rule about new technologies is that they are subject to function creep: they will be used for other purposes than their originators intended or even imagined. A system such as this can measure many protected classes of information, such as children’s response to advertisements, or adults’ sexual arousal during media consumption.

These sources of information are potentially far more marketable than the forms of response the technology is currently being developed to measure. How will the boundary be set and enforced between what may and may not be measured, when a technology like this could potentially be pre-loaded in every entertainment device? Now that entertainment devices include our phones, tablets and laptops, as well as televisions and film screens, how are we to decide when we want to be watched and assessed?

Monitoring technologies produce data, and data’s main characteristic is that it becomes more valuable over time. Its tendency is to replicate, to leak, and to reveal. I am not sure we should trust commercial actors whose actions we cannot verify, because trust without verification is religious faith.

Yours,

Linnet Taylor
TILT (Tilburg Institute for Law, Technology and Society)


Harrie van de Vlag & Paulien van Slingerland - Thursday 05-10-2017 14:45

Dear Linnet,

Thank you for sharing your thoughts. The topics you describe underline the importance of discussing ethics and expectations concerning new technology in general, and affective computing in particular.

You end your letter saying that ‘if trust is needed, the problem is not solved’. This is true in cases where the trust would solely be based on a promise by a company or other party. However, there are two other levels of trust to take into account: trust based on law and trust based on technical design.

To start with trust based on law: the fact that a technology opens new possibilities does not mean that these are also allowed by law. The fact that pencils can be used not only to write and draw, but also to kill someone, does not mean that the latter is allowed by law.

The same goes for affective computing: while the possibilities of affective computing and other forms of data analytics are expanding rapidly – your examples illustrate that – the possibilities of actually applying this technology are increasingly limited by law. As a matter of fact, new privacy legislation (GDPR) will become effective next year. Europe is significantly stricter in this than America (where companies like Facebook are based).

For example, as TNO is a Dutch party, we cannot collect data for our research during the AURA demonstration without the explicit consent of the voluntary participants. They have to sign a document. Moreover, we need to ensure that the data processing is adequately protected. For special categories of information, such as race, health and religion, extra strict rules apply.

Furthermore, we cannot use this data for any other purpose than the research described. For instance, VPRO was interested in our data for publication purposes. However, aside from the fact that we take the privacy of our participants very seriously, we are simply not allowed to do this by law. So TNO will not share this data with VPRO or any other party.

Altogether, applications of affective computing as well as systems for sharing analyses without disclosing data are both limited by law. We are actually developing the second category to facilitate practical implementation of the law, as the system is designed to guarantee technically that commercial companies (or anyone else for that matter) cannot learn anything new about individuals.

This is trust by technical design, a novel concept that does not require a promise or law in order to work. At the same time, we realise that this is a new and unfamiliar way of thinking for many people. Therefore, we are interested to learn what is needed before such a system can be adopted as an acceptable solution.
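To give a flavour of what we mean, purely as an illustration rather than a description of our actual system: one family of techniques that fits 'sharing analyses without disclosing data' is differential-privacy-style aggregation, in which each participant's raw score stays on their own device and only a noise-protected average is ever released. The sketch below is a minimal example and every name and number in it is invented.

    # Purely illustrative sketch of one possible 'trust by technical design'
    # mechanism (an assumption for illustration, not a description of any
    # actual system): each viewer's arousal score stays local and only a
    # noise-protected average is released, in the spirit of differential privacy.
    import random

    def noisy_average(local_scores, epsilon=0.5, lower=0.0, upper=1.0):
        """Release an average with Laplace noise, so no single score is exposed."""
        clipped = [min(max(s, lower), upper) for s in local_scores]
        true_avg = sum(clipped) / len(clipped)
        sensitivity = (upper - lower) / len(clipped)  # influence any one person can have
        scale = sensitivity / epsilon
        # Laplace noise as the difference of two exponential draws
        noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
        return true_avg + noise

    # Hypothetical per-viewer arousal scores; only the noisy aggregate is shared.
    print(round(noisy_average([0.61, 0.43, 0.88, 0.52, 0.71]), 3))

The point of such a design is that even the party running the analysis only ever sees the aggregate, never the individual measurements.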

To this end, let us rephrase our original question as follows: under what conditions would you recommend that people provide their data to such a system, given the technical guarantee that no company or other party would actually be able to see the data, even if they wanted to?

Best regards,

Paulien van Slingerland and Harrie van de Vlag
TNO
Innovators in Data Science


Can we separate our existence as consumers from our existence as citizens, partners, workers, parents? Our emotions are an essential bridge between ourselves and others, and what we show or hold back determines the kinds of relationships we can form, and who we can be in relation to our social world.
Linnet Taylor


Linnet Taylor - Sunday 08-10-2017 21:56

Dear Paulien and Harrie,

Your response is a useful one. It has made me consider what we mean when we talk about trust, and how the word becomes stretched across very different contexts and processes. You ask, under what conditions would I recommend people provide their data to a system that can sense their response to media content, given the technical guarantee that no company or other party would actually be able to see the data, even if they wanted to.

This is, of course, a difficult question. People should be free to adopt any technology that they find useful, necessary, interesting, stimulating. And it is likely that this sensing system will be judged all of these things. Let us be honest here, though – it is not a citizen collective that has asked us to write this exchange of letters.

We are exchanging thoughts about the future activities of media corporations, at the request of a media corporation. If the technology were going to be used exclusively in a highly bounded context where the data produced could not be shared, sold or reused in any way, I am not sure we would have been asked to have this conversation.

I think the reason we have been asked to exchange ideas is because there are huge implications to a technology that purports to allow the user to view people’s emotional processes. This technology has the potential to help media providers shape content into personal filter bubbles, like our timelines on social media.

These bubbles have their own advantages and problems. There has been much recent analysis, for example, of how the populist parties coming to power around the world have benefited hugely from digital filter bubbles where people access personalised content that aligns strongly with their own views.

It is indeed important that such a system should be used in accordance with the law. But data protection law, in this case, is a necessary but insufficient safeguard against the misuse of data. The real issue here is volume. Most people living today are producing vast quantities of digital data every moment they are engaged with the world.

These data are stored, kept, used, and eventually anonymised – at which point data protection law ceases to apply, because how can privacy relate to anonymised data? Yet the system you are developing demonstrates exactly how. It is another technology of many that will potentially make profiling easier. It will show providers our weak points, the characteristics that make it possible to sell to us – and it can do this even if we do not use it.

An example: someone wishes to live without a filter bubble and does not consent to any personalisation. But all the other data they emit in the course of their everyday life generate a commercial profile of them which is highly detailed and available on the open market. The features which make them sensitive to some types of content and not others are identifiable: they have children, they like strawberries, they are suffering domestic violence, they are made happy by images of cats. A jumble of many thousands of data points like these constitute our digital profiles.

But it is not only our characteristics. It is those of people around us, or like us. Knowledge about the attributes of users of a system such as yours (whose response to content can be directly measured) can be cross-referenced with the attributes of those who do not use it. Once this happens, it becomes possible to infer that my heart will beat harder when I watch one movie than when I watch another; that I will choose to go on watching that provider’s content; that my attention will be available for sale in a particular place at a particular time.
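To make that inference step concrete, here is a deliberately crude sketch, with every attribute and number invented and no particular company's system implied: responses measured from people who did wear the sensors are simply carried over to the most similar person who never did.

    # Toy illustration: responses measured from consenting users, keyed to their
    # attributes, are used to predict the response of a non-user who shares
    # those attributes. Every value here is invented.
    measured_users = [
        # (has_children, likes_cats, age) -> strong physiological response to trailer X
        ((1, 1, 30), True),
        ((1, 0, 35), True),
        ((0, 1, 25), False),
        ((0, 0, 60), False),
    ]

    def predict_response(profile, observations):
        """Guess a non-user's response from the most similar measured user."""
        def distance(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        _, response = min(observations, key=lambda obs: distance(obs[0], profile))
        return response

    # Someone who never consented to emotion sensing, profiled from ordinary data exhaust:
    print(predict_response((1, 1, 32), measured_users))  # -> True: targetable anyway

The crudeness is the point: even this naive matching is enough to make a non-user targetable.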

In this way, consent and privacy become meaningless if there are enough data points about us all: new technologies that pinpoint our behaviour, feelings and susceptibilities are valuable not for their immediate uses but as an addition to the long-term stockpile of data on all of us – and especially useful with regard to those who do not choose personalisation and are therefore harder to pinpoint and predict.

This is why I am sceptical about invoking ‘trust’ as something that can be generated by making sure individual applications of a particular technology comply with data protection law. Data protection is a cousin to privacy, but it is not at all the same thing. We may guard data without guarding privacy, and we may certainly trust that our data is being handled compliantly with the law, while also having reservations about the bigger picture.

Things that are perfectly permissible under data protection law, yet are also unfair, include: charging different people different prices for the same goods online; following users' activities across devices to understand precisely what makes them respond to advertisements; and a company passing on our personal data to unlimited subsidiary companies. Law is no panacea, nor can it be relied upon to predict what will happen next.

I do not cite these things to argue that you should stop developing affective computing technologies for commercial use. I use them to suggest two fundamental realities: first, that we are no longer capable of understanding the long-term or collective implications of the data we emit, and second, that our consent is not meaningful in that context.

Having made my argument for these two problems, and how they relate to your work, I can pose a question in return: how can we, as developers and users of data analytics technologies, collaborate to look beyond legal compliance to the many possible futures of those technologies, and create ways to shape those futures?

Yours with best regards,

Linnet Taylor

Dear CEO – Geert-Jan Bogaerts (VPRO) and Tessel Renzenbrink (Gr1p)

“Technology is not inherently good or bad or neutral. It is what we make it.” [TR]

Writers of this conversation are:
Geert-Jan Bogaerts heads the department of digital media of Dutch broadcaster VPRO. He is responsible for digital channels, innovation and distribution strategy.

Tessel Renzenbrink is part of the network of the Gr1p Foundation. She is a freelance writer and web editor focusing on the impact of technology on society, particularly information and renewable energy technologies.


Geert-Jan Bogaerts - Sunday 17-09-2017 11:59

Dear Tessel,

It feels almost Victorian, like an epistolary novel by Mary Shelley or Anne Brontë, to start corresponding with a complete stranger on a subject that's apparently close to both our hearts. I'm eager to learn what themes you will raise and I look forward to discussing them with you. Of course, at the same time there's a strange twist to it. We write to each other but we also know that our correspondence will be made public, and therefore one tends to (or rather, I tend to) put one's best foot forward. I mean, this is not without obligation.

But anyhow, we are meant to begin this correspondence with a short introduction. I don't have to mention my name as you already know it, just like my position: head of digital at VPRO. Rather than just stating the facts I think it's more interesting to tell you how I see myself, what I identify with most. I mean, of course I am a son, a father, a brother, a husband, a friend, a workmate: these are the parts we can all more or less see ourselves in. But what makes me different? What defines my identity most of all?

The first word that springs to mind is journalist. Although nowadays I am much more a manager, a strategist and a policy maker, my background as a journalist still shines through in everything I do. It determines what questions I ask, how I view the world and which solutions I come up with for problems I encounter. Fifteen years of editorial work, first as a freelancer and later writing for de Volkskrant (business desk and correspondent in Brussels), does shape you for life.

At the time (we're talking the late nineties) I was stationed in Brussels and reported on the EU, NATO and Belgium, but in my own time I got involved in the online world. The strategic implications of this technological progress were far from clear then, but it was already evident that the internet would profoundly change our trade and society in general. In 2003 I made it my profession as well, first as head of online at de Volkskrant, from 2010 as a freelance writer, advisor and teacher, and since 2014 in my present job.

How do I observe technological progress now? Not just from the strategic mission that comes with my job, but explicitly also from the impact this progress has on our culture, our coexistence, our economy, our politics, our government. I feel it is very much a key task for public broadcasters to sketch the consequences, to explain developments and to ask questions. It is from that perspective that I look at our project “We Know How You Feel”. What exactly does it mean when our thoughts and feelings are out in the open? How does that change us? As an individual, in our relationships and in our social interactions?

I hope and expect this project will bring us interesting new insights.

Warm regards,

GJ Bogaerts
Head of digital VPRO


What exactly does it mean when our thoughts and feelings are out in the open? How does that change us? [GJ]


Tessel Renzenbrink - Sunday 24-09-2017 23:55

Hi Geert-Jan,

I must confess that I started out as a techno-optimist. I was convinced that the liberating possibilities of information and communication technology would actually lead to the most positive outcome. These possibilities lie mainly in the fundamental shift from centralised to decentralised. From a world ruled by a small group of people in positions of power to a world in which every voice is equal. I was convinced that this leveling would erode the power of institutional strongholds.

Take the mass media for example. Newsrooms at papers and TV stations used to both determine what the news was and how it was framed. The documentary Page One relates how The New York Times saw its authority diminished when the internet surpassed the paper as an information source.

In the days of old the NYT set the agenda. What the paper wrote determined what people talked about. That fact is presented with pride and a yearning for better days. No one asks if it is at all desirable when just a handful of editors sets the public debate, day in day out.

Another example of decentralisation is the rise of cryptocurrencies like Bitcoin. They enable monetary transactions without the interference of a central authority. Banks will no longer be too big to fail when that system takes hold, they will be obsolete.

As we all know, things went differently. The internet did not decentralise the world but the world centralised the internet. Once the web became popular, it was taken over by commercial parties. Almost 80 percent of web traffic now goes through Google and Facebook.

Google's algorithms determine which information comes up when you do a web search. Facebook has positioned itself between our personal interactions with family and friends and forces us to communicate by Facebook's rules. It will do everything to keep us on its platform as long as possible, so it can sell our time and attention to advertisers. And, of course, both companies collect enormous amounts of data on us.

By now I see that technology does not necessarily propel us towards the most positive (or negative) outcome. Technology is not inherently good or bad or neutral. It is what we make it. That is why I got involved in the Gr1p network. The Gr1p Foundation wants to give people more grip on their digital surroundings so they can make informed choices.

Our choice of technologies and the way we use them impacts our society. But presently technological development is mainly corporate driven. That’s why both with Gr1p and in my work as a writer I strive for greater involvement of citizens in the digitization of society, so we can decide in a democratic way what kind of future we want to build using technology.

I fully agree with you that there is a task for public broadcasters here. And, more specifically about the subject of our correspondence, I find it useful that VPRO Medialab dives into emerging technologies. As a public institution you can study them from a different perspective than profit-driven companies do. My first question to you therefore concerns how you interpret that task.

If I understand correctly you research a new technology and its impact on the process of media production every year. Last year it was virtual reality and this year it’s wearables. You aim specifically at measuring emotions using wearable technology and the role this could play in creating and consuming media.

A practical application you research is the use of emotional data by broadcasters to offer people a personal, mood based viewing experience. With what purpose do you research that application? What kind of service would you like to offer your viewers by using wearables?

You wrote that the project aims to find out what it means when our thoughts and emotions are out there for everyone to see. How is that researched practically? Which questions are asked and what is being done to find the answers? What do you think could be the distinctive role of the Medialab in the questioning of wearable technologies?

Kind regards,

Tessel Renzenbrink
Gr1p network


Geert-Jan Bogaerts - Saturday 30-09-2017 21:04

Hi Tessel,

I not only started out as a techno-optimist, I still am one. I just never believed that technological progress in itself was a guarantee of human happiness, collective or individual. It is a necessary condition though. Without technological advancement we would still be subject to the whims of nature.

But indeed, ultimately it's how we apply technology that determines its quality, positive or negative. So I agree that technology in itself is neutral. It's the scientists, the artists, the designers and the storytellers who can ultimately give it direction and meaning. In my view they set a standard, which we in turn need in order to determine how far we can stray from it.

We can assume a critical stance towards Google, Facebook and the other data-driven giants because there is an entirely different group of people thinking about alternative approaches. They constitute the subculture of technological progress and they never cease to ask critical questions about applications, whether these are driven by profit or by a lust for power and control (the NSAs of this world).

Anyhow, as far as I'm concerned public broadcasters will, for as long as possible, remain a safe environment where this critical questioning and free thinking is possible, where alternatives can be thought out and where experimenting with new technologies is allowed. At the VPRO we even consider that to be a core task.

We apply it as often as possible in our own productions but naturally we also apply a set of rules: we must reach a minimum number of viewers, listeners or visitors. And there is a limit to what productions can cost. We set up the Medialab in Eindhoven to be a truly free environment, where we try to liberate ourselves as much as possible from all these rules we have to work by.

The Medialab is always on the lookout for relevant developments it can pick up and research, fed as much as possible by the available knowledge inside the VPRO and a wider network of artists, scientists, designers, authors and journalists.

Innovation in public broadcasting is always focused on media, both their production and consumption. That’s another reason why it is a core task: we see our audience moving away from so-called linear viewing and embracing new platforms. So we have to get to know these platforms as well. We must be able to handle them and to judge if such a platform or new technology could be of any benefit to us. By doing so we get to know these technologies and we find out what their positive – and possible negative – applications are.

We expect the influence of wearable technology on our media consumption to grow as it becomes more popular. We’ve already seen that very convincingly with the portables we now all carry: our smartphones, our tablets and our e-readers. But wearable technology is developing rapidly: from smart watches to sweatbands and underwear that can monitor our heart rate, blood pressure and body temperature. Even our sex life is not safe. Remote satisfaction no longer requires a tour de force…

Wearables can be used to produce media and to consume media. We will be able to create wonderful things using them, but we must also look at the flip side. My biggest worry concerns the data wearable technology can collect and exchange. And that is what this program predominantly focuses on.

Which personal data are we giving away without knowing it? How can we make our public conscious of that fact? What do my glance, my posture and the way I walk tell the shop where I get my daily groceries? We know that some clothes stores already experiment with personal display-advertising after a lightning fast analysis of my personal traits.

“We Know How You Feel” aims at giving the audience insight into these developments and processes. Last year we did a similar project, called “We Are Data”. The accompanying website clicklickclik.click received almost a million clicks. It is evident that the subject resonates. It's urgent and it calls out for critical questioning.

I see many similarities between the goals I mentioned above and your observations about Gr1p. My counter question to you is: what do you see as the most effective way to reach these goals? Is it enough to make the public conscious? And what is the best way to achieve this awareness?

Warm regards,

GJ Bogaerts
head of digital VPRO


Technology is always an expression of certain norms and values. That’s why it is necessary for scientists and artists to critically question it. [TR]


Tessel Renzenbrink - Sunday 4-10-2017 22:08

Hi Geert-Jan,

Technology is neither good nor bad, on that we agree. But unlike you I don't think technology is neutral. On the contrary. Every technological artefact is an expression of a set of cultural values. Algorithms, for example, can mimic the prejudices that live in a society.

To give an example: some courts in the United States use algorithms to determine the sentence a convict will receive. Based on data, the algorithm calculates the risk that someone will reoffend in the future. If the score is high, the judge can decide to pass a heavier sentence.

Research shows these algorithms are biased: black people are often given higher scores than white people. Eric Holder, who served as attorney general under president Obama, spoke out against the use of such algorithms, because they could ‘exacerbate unwarranted and unjust disparities that are already far too common in our criminal justice system and in our society.’

Technology is always an expression of certain norms and values. That’s why it is necessary — like you say — for scientists and artists to critically question technology, to make these implicit values visible and to discuss them when necessary. But that’s not sufficient, because it is reactive. If you only react after the technology is brought to the market you already operate within a certain paradigm.

As people and as a society we will have to act earlier in the process, so we can determine what we want from tech. It does matter what you build, and we need to think about what we want to build before we even start. And that is where values come in. Ultimately it is not about which technology you want to realise but about which values you wish to embody in technology.

In that sense it’s interesting that Medialab does not just lay down critical questions about wearables with ‘We know how you feel’, but also experiments with them, in cooperation with artist Nick Verstand and TNO. By doing so Medialab & co claim a creative role and the ability to determine the values.

The research question they pose is: can we tune our media offering to your mood on the basis of emotional data? But is that an interesting question? Which underlying values do you recognise in such a goal and which ones do you leave out?

I see how this use of emotional data could serve broadcasters. A personalised media offering might make people stay on your channel longer. It benefits the ratings. But how does it serve the public interest? Because to me the hunt for clicks and eyeballs that holds so many media in thrall is not a goal in itself.

And then there is of course the life-size ghost of the filter bubble. Personalisation based on data, emotional or otherwise, will by definition lead to a media offering tuned to your interests and convictions. That way it will confirm and reproduce your view of the world, while I think it's a key task for a public broadcaster to make people familiar with the social environment of other groups in society.

In your letter you asked if broad awareness is enough to steer technological development in a direction that serves the public interest. Well, I don’t think it is enough but it’s a start. It’s under the pressure of a collective conviction that things start to change.

Take for example this other technological revolution that is in full swing now: the energy transition. Through the decades, people grew more and more conscious of the fact that the economy, and the energy sector in particular, had to become sustainable.

Because of this awareness action was taken in ever more domains. Governments introduced laws and treaties. Engineers started innovating. Tax money was made available to pay for this innovation. Consumers made different choices. Companies went green.

You asked for the best way to achieve this awareness and my answer is: alternatives. Without alternatives there is no course of action, and that leads to a certain resignation. Why bother about something you can't change anyway? Only when viable sustainable energy technologies became available were people able to turn their worries into actions.

But of course these alternatives did not come out of the blue. They were pioneered by people and institutions looking for other solutions, asking different questions because they took different values as their starting point. That’s why I want to ask you what values underlie the AURA art installation.

Kind regards,

Tessel


Geert-Jan Bogaerts - Sunday 7-10-2017 18:11

Hi Tessel,

Let me start by answering your question on the values that underlie our Medialab project. By far the most important value to me is insight. You could make a sequence, starting with data. Data leads to facts; when you have facts you are informed, and information in turn can lead to understanding or insight. None of these steps is self-evident: it takes an effort to get from data to facts, from facts to information and from information to insight.

With our project “We Know How You Feel” we aim to question the ease with which some people in the world of media seem to take the use of algorithms and data for granted. For if we don’t use them in the right way we do indeed run the risks you described earlier: filter bubbles, the eradication of surprise and serendipity, choosing the common denominator instead of finding interesting niches that can truly teach people something new.

Only when we (as a society) have a real understanding of how data and algorithms influence our lives – and will influence them even more as smart appliances take over more of our environment – can we think of alternatives. I'm fortunate enough to work for a broadcaster that has a genuine interest in these alternatives and reports on them on a regular basis, most prominently in Tegenlicht/Backlight.

Imagination is the foundation of every technological innovation and every invention. The American author Neal Stephenson has teamed up with Arizona State University for an interesting collaboration, Project Hieroglyph. It connects science fiction writers to scientists and builds on a thought by Carl Sagan.

He once said that the road to the most groundbreaking science was paved by science fiction. It's the power of imagination leading the way for science. If we hadn't been able to imagine that one day people would no longer die of smallpox or pneumonia, the smallpox vaccine and antibiotics would never have been invented.

Shortly after World War II Arthur C. Clarke had the idea that global communication could be made a lot easier if we were able to launch satellites that would stay in a fixed place above the earth. Twenty years later, in the mid-60s, the first geostationary satellite was launched and nowadays we can no longer live without them.

On the neutrality of technology: I agree with you that we as people create technology and that we do so from our own needs and biases. In that sense technology is indeed not neutral. I think the nuance can be found in Kevin Kelly's observations in 'What Technology Wants'. Kelly states that technology has its own evolution and creates its own progress, independent of mankind. In that view technology is not influenced by human prejudice.

Let’s make this vision concrete. Google was recently blasted because its Photos app labelled pictures of black people as gorillas. Shortly afterwards Google's AdWords turned out to show ads for well-paid jobs more often to men than to women.

These are examples of technology being applied in a non-neutral way. But underneath these examples lies an instrument panel of statistical mathematics, with lots of attention to regression analysis and standard deviations in the programming languages Google applies. Scientists had been using this panel for much longer and it was only a matter of time before software developers would discover it as a tool to analyse the enormous amount of available data. That analysis in turn enables new applications. Siri and Alexa grow ever smarter, but at the same time they remain products of human imagination and consequently of human bias.
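To make this point tangible with a deliberately artificial example (all numbers invented, no real advertising system implied): the statistical machinery itself is indifferent, but fit it to skewed historical decisions and it will faithfully serve the skew back as a recommendation.

    # Invented numbers: a 'neutral' model fitted to biased historical decisions
    # reproduces that bias in its recommendations.
    import statistics

    # (group, salary of the job ad that was shown in the past)
    history = [(0, 95), (0, 90), (0, 98), (1, 60), (1, 65), (1, 58)]

    def fit_group_means(data):
        """Fit the simplest possible model: the average outcome per group."""
        by_group = {}
        for group, salary in data:
            by_group.setdefault(group, []).append(salary)
        return {g: statistics.mean(vals) for g, vals in by_group.items()}

    model = fit_group_means(history)
    # The mathematics is neutral; the recommendation it produces is not.
    print(model[0], model[1])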

In my view the real peril of this development is not contained in the observation itself. Human progress is only possible because we have ideals that spring from our own vision of the world. The peril is in the fact that the means to achieve this progress are in the hands of ever fewer people. Facebook, Google, Apple, Amazon and Microsoft are building our new world. It frightens me that these are companies that evade any form of democratic control, and are judged by their shareholders on just one thing: net profit per share.

To be honest, I’m not very optimistic that a ‘collective conviction’ can arise – as you wrote – to raise the pressure for change. These companies operate on a world-wide scale and there is not a trace of world-wide political consensus on how to handle them. The EU goes its own way and introduces a ‘right to be forgotten’. Within the EU, Germany is the only country that holds platforms liable for allowing hate speech. US policy, meanwhile, is aimed at safeguarding the position of these companies and introduces laws to protect them. And then we haven’t even started talking about the breaches of internet freedom by, for example, Russia and China.

But, entirely in line with my techno-optimistic vision, I also believe that technology will provide a solution for all of this in the end – blockchain FTW!

GJ Bogaerts
head of digital VPRO


Tessel Renzenbrink - Sunday 4-10-2017 22:08

Hey Geert-Jan,

There are three elements in your letter I find hard to reconcile. You say technology always expresses bias because it springs from the human brain, which is never value-free. I agree with you on that point: technology is not neutral. You continue by saying that the real danger lies in the fact that technology is developed by a small tech elite: the Amazons, the Facebooks. These companies are not subject to democratic control and their main steering mechanism is financial gain. That is a scary thought indeed.

But in the end you say everything will be fine, because technology itself, in the shape of blockchain, will provide a solution. Like an autonomous power that will dethrone the monopolists, irrespective of what we, the people, do. That conclusion is at odds with the first two statements. Technology, after all, is always an expression of human values. When technological development lies in the hands of a small group of people, it will spread and cultivate the values of this group. That will give these people more control over the playing field. The technological domain will become ever more homogeneous and assume a form that serves the interests of this group.

Yet you still trust blockchain technology to develop autonomously in this environment, so it can erode the power of the tech elite. I don't share your optimism on this. Blockchain, like any other technology, is subject to the economic, political and social structures in which it is developed. Why would the dynamic that led to the monopolisation of the internet by a few companies leave blockchain undisturbed? In the last two paragraphs you say you have little faith in social pressure bringing about change. According to you, blockchain will act as that change agent.

I have a totally opposite view and I’ll tell you why by means of an example. This fascinating graph shows the prosperity level of mankind over the last 2000 years, and is often cited when it comes to technological progress.

© Our World in Data

Reference: Max Roser (2017) – ‘Economic Growth’. CC BY-SA licence.

The graph shows that prosperity barely increased through the ages. And then, in the middle of the 18th century, growth suddenly exploded exponentially. In their book ‘The Second Machine Age’ Erik Brynjolfsson and Andrew McAfee identify the cause of this turning point in history. The bend in the hockey stick curve coincides with the invention of the steam engine: the start of the first industrial revolution.

Not everyone was lifted by the waves of growing prosperity. On the contrary. The transition from an agrarian to an industrial society went hand in hand with terrible injustices. There was exploitation and child labour, and labourers worked fourteen hours a day while living in extreme deprivation. Things only started changing when our ancestors demanded better living conditions en masse. That is why social involvement in technological development does matter. The steam engine created an exponential increase in prosperity, but what is then done with that prosperity is not inherent to the steam engine. We, the people, decide on that.

Now we are on the brink of a third industrial revolution, Industry 4.0 or the second machine age. Whatever we wish to call it, we have to make sure that history does not repeat itself, and that this time around we guide the technological revolution in such a way that it benefits everyone.

I do not believe that blockchain will achieve that for us in some miraculous way or other. We will have to enforce it ourselves. Because that is the danger of techno-optimism: the belief that technology automatically leads to the best possible outcome and that therefore we don’t have to take responsibility.

Kind regards,

Tessel


Geert-Jan Bogaerts - Monday 16-10-2017 15:42

Hi Tessel,

Techno-optimism does not relieve us of our duty to act and to critically question!

So even though I consider technology to be both cause of and solution to many of our problems, I still think that parties like Gr1p and VPRO must promote our ongoing critical questioning of that technology.

Regards,
GJ Bogaerts


Tessel Renzenbrink - Monday 16-10-2017 17:01

Hi Geert-Jan,

Thanks for the lively correspondence. It was interesting to exchange thoughts with you.

Good to continue on the 25th!

Kind regards.
Tessel

Maxigas

Maxigas is a postdoctoral researcher in the CareNet group at the Internet Interdisciplinary Institute, Universitat Oberta de Catalunya. His current research lines are European Hacking History, the social history and contemporary use of the Internet Relay Chat protocol, and the role of classic cybernetics in shaping computing cultures.

Maxigas personal website.