“What can we get from the body that we cannot get from explicit habits and behaviors that can be observed by means other than electrophysiology?” [LRF]
Writers of this conversation are:
Luis Rodil-Fernández, an artist, researcher, teacher and hacker, writing for the Gr1p Foundation; and Carmen van Vilsteren, Director of Strategic Area Health at Eindhoven University of Technology.
Luis Rodil-Fernández - Sunday 24-09-2017 15:15
Dear Carmen et al.,
This is a novel format for me, so I hope we can find a way to communicate our arguments in depth in such a short time, using the clumsy medium that e-mail is.
My education is in the arts. I also studied computer science and have been an engineer for some years. Perhaps I should go into further detail about what my relationship with the study of the human body through sensors is. In my artistic work I use biomedical devices that interface directly with the body, so I have a certain familiarity with picking up bodily signals and using them for various purposes.
I was part of BALTAN Labs' "Hacking the Body" program and was a resident at Philips Research, designing experiments for biosynchrony research using methods from electrophysiology. In addition to this more practical experience I am a privacy activist and a teacher. Each of my activities informs the others, of course, and although I am an avid user of technology, it is hard for me to feel uncritical once I have managed to contextualise a particular development in tech.
I understand that the context of our exchange is handed to us by the title ‘We Know How You Feel’ and that the question pivots around the work that Nick Verstand did in collaboration with TNO, which uses EEG to infer emotional states. VPRO proposes a scenario in which similar technology could be applied to market research to more precisely target media content. If I understand correctly our conversation starts from that proposed scenario.
What strikes me most in that proposed scenario is perhaps its total lack of imagination, as it consists of adopting activities that already exist. It doesn’t take a great stretch to imagine how physiological data might be incorporated into the data pool that is currently being used to profile media users, by internet media anyway.
So-called ‘surveillance capitalism’ is the economic model of most media platforms on the internet. In a single technological generation our televisions have turned from bulky analog devices just a tad more sophisticated than a radio receiver, into general computing devices sitting in our living rooms, containing a wealth of sensors capable of doing their own data gathering as well.
The scenario proposed by VPRO is one in which the economic model of the internet gets extrapolated to television with a few extra channels of consumer data thrown in, namely physiological data. All of this is already happening, at least as separate streams in diverse industries. The synthesis that this project of VPRO is proposing is altogether too plausible, well within the realm of possibility, which is the reason why I find it worth exploring in greater detail.
A business area called neuromarketing already exists, and its practitioners do exactly what VPRO is proposing in this scenario: finding technical means that allow marketeers to precisely target a product to an individual based on their unconscious biophysical activity. The hypothesis of somatic markers formulated by Antonio Damasio states that our body is constantly producing and processing cues about our emotional state, and that our body seems to ‘know’ things even before those things are consciously known to us. Neuromarketing aims at exploiting this gap between consciousness and the creation of desire.
Now, do these techniques have a place in our consumption of media? Do they have applications beyond ‘selling more stuff’? How does the individual media consumer benefit from this scenario? I presume the reason why Gr1p was brought into this conversation is to provide a counterbalance to this all too plausible scenario, and question the ethical implications of such a development.
The massive amount of data that certain companies have on us, internet users, is more substantial than most of us suspect and already provides insight into processes we as individuals are unconscious of. Facebook, for example, makes use of so-called ‘derived qualifiers’, such as ‘likelihood of becoming addicted’, which rates a particular user’s susceptibility to falling into addiction. These inferred markers are constructed by combining various other metrics that Facebook can quantify directly, and they use these ‘qualifiers’ to more accurately target advertisements and content to us.
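To make the mechanism concrete: a derived qualifier is, in essence, nothing more than a score computed from directly observable signals. The sketch below is purely illustrative; the metric names, weights and the simple weighted sum are my own invention and say nothing about how Facebook actually computes such qualifiers.

```python
# Hypothetical sketch of a "derived qualifier": a single score
# combined from directly measurable metrics. All names and weights
# here are invented for illustration only.

def derived_qualifier(metrics, weights):
    """Combine observed metrics into a single score in [0, 1]."""
    score = sum(weights[name] * metrics.get(name, 0.0)
                for name in weights)
    # Clamp so the result reads as a likelihood-style rating.
    return max(0.0, min(1.0, score))

# Invented example: rating "susceptibility" from engagement signals.
observed = {
    "late_night_sessions": 0.8,   # fraction of sessions after midnight
    "scroll_time_ratio": 0.6,     # share of time spent passively scrolling
    "notification_ctr": 0.9,      # click-through rate on notifications
}
weights = {
    "late_night_sessions": 0.4,
    "scroll_time_ratio": 0.3,
    "notification_ctr": 0.3,
}

print(round(derived_qualifier(observed, weights), 2))  # 0.77
```

The point of the sketch is that no single input is sensitive on its own; the sensitivity emerges only in the combination, which is exactly what makes such inferred markers hard for the user to see or contest.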
Facebook can already do this without access to our physical bodies, so at this point I would like to raise a question: to what extent does having physical access to the body of the consumer matter in making these kinds of inferences possible? Have the quantitative methods used by Facebook and Google not already made this ‘access to the body’ unnecessary and obsolete? What can we get from the body that we cannot get from explicit habits and behaviors that can be observed by means other than electrophysiology?
There are many more points to discuss of course, but I hope this can be a productive introduction and that we can take it from here.
Looking forward to hearing from you.
“What can we get from the body that we cannot get from explicit habits and behaviors that can be observed by means other than electrophysiology?” [LRF]
Carmen van Vilsteren - Tuesday 26-09-2017 20:40
Thanks for opening the conversation. To me the format is also very new. I am not a writer at all. I have a background as an engineer and I have been working in the health domain for most of my life, serving several larger and smaller companies. I was development manager for cardiovascular imaging systems at Philips in the nineties, and to this day, every second, somewhere a patient is treated with a system we introduced in those years.
At the moment I combine my position as director of strategic area Health at Eindhoven University of Technology (TU/e) with that of CEO at Microsure, a startup in robotics for microsurgery. At TU/e we are working on several new technologies. In one of these, regenerative medicine, we try to entice the body to heal itself. At Microsure our ambition is to give surgeons superhuman precision.
Your letter reminds me of a project I saw some two years ago. It was called Probes and was set up by Hans Robertus. You may know him. The results were presented during the Dutch Design Week. A diverse group of students got the assignment to think about solutions for a society where people would live to be 150 years old.
One group came up with the idea of an implantable chip that records all your life and health events, to be used for instance for preventive and non-preventive treatment. So not just ‘We Know How You Feel’, but also ‘We know how you will feel in the future’.
To substantiate this idea they set up an interesting experiment. They hired an office in Strijp S (hotspot of the DDW) and bought a lifelike baby doll and some blank cards. They then offered the baby doll to people in the street, telling them that this was their so-called newborn baby and that they had to register their son or daughter at the municipal office next door.
Most people agreed to participate in the experiment and came up with a name for their ‘child’. At the office they were told about the possibility to implant this chip. The new parents had to decide on the spot if they wanted this to be done, since it would only work if implanted during the first day.
The students expected all sorts of discussions and questions about data protection, privacy and the safety of the technology. But something they did not expect happened: all ‘parents’ opened the discussion about ethics. Do I want to do this to my child? Would you have had the chip implanted?
Luis Rodil-Fernández - Friday 29-09-2017 13:31
To answer your question: before making that decision I would need to know a bit more about that hypothetical chip. What it does precisely, where it resides in the body, what the effects on the child are and who owns the implant. Is the chip a networked device or not? Does it perform any kind of data collection or is that data never stored? What earlier tests were done with the implant in humans? Who makes the chip? Is it a proprietary design or is it open source?
Of course I would have some serious concerns before happily implanting a technological artefact in the body of my newborn for the rest of their life. But I wouldn’t be opposed as a matter of principle. I do think that technology has a role in improving people’s lives. My reaction wouldn’t be technophobic but cautious.
The questions that you thought people would ask regarding privacy, data protection and the safety of the technology are also ethical questions by the way. To me the question ‘would I implant this in my child?’ is not the only one about the ethical implications of the proposed scenario.
There are many examples of poor data security or poor privacy protection potentially resulting in the weaponisation of a technology that at first seemed innocuous. A technological device never comes to this world in isolation. It always brings with it a part of the future. A future that becomes our present the moment we let that technology enter our lives. We can’t possibly predict how it will evolve.
A chip implanted in my child today can be the root of discrimination of my child twenty years from now, or the target for an attack by a hostile actor the day after tomorrow. It’s important to understand that these scenarios are not merely hypothetical. If the technology exists and the stakes are high, the technology will be weaponised.
I invite you to reflect on the recent revelations in the press about the role both Facebook and Twitter played in Russian meddling in the American election. To make money from advertising both companies offer sophisticated tools for targeting slices of their market. These tools enable such detailed targeting that advertisements can even be aimed at a single individual. It turns out that 100,000 dollars’ worth of strategically placed posts were shown to American voters in the run-up to the election.
In a press statement last week Mark Zuckerberg was indignant about the role Facebook had played and he admitted to not having done enough to prevent these forces from intervening in the democratic process. There was no need for the influencers to break into Facebook systems or to employ anything that is traditionally thought of as a ‘hacker breach’. The (supposedly) Russian actors that wanted to buy influence did so by using tools that Facebook offers to legitimate advertisers.
What these actors did was to use these tools for a different purpose than Facebook had intended. All technologies, bar none, will be deployed for unintended use as the social context around them evolves. As William Gibson once wrote in his book ‘Burning Chrome’: the street finds its own uses for things. A technological artefact developed with the best intentions can and will very likely find unintended applications.
Going back to the ethical question you asked, I’d like to continue in that vein and to pose a few questions in return: given your ample experience in bringing technological products to the market I assume you have worked with a broad range of engineering and design professionals. How are these ethical questions dealt with in your professional environment? Is awareness of these issues common? What is the role of these ethical questions in the product development cycle? Have you seen any changes in these perceptions in your long years of work?
“A chip implanted in my child today can be the root of discrimination of my child twenty years from now, or the target for an attack by a hostile actor the day after tomorrow. It’s important to understand that these scenarios are not merely hypothetical. If the technology exists and the stakes are high, the technology will be weaponised.”
Carmen van Vilsteren - Tuesday 6-10-2017 15:34
You want to know my opinion on the ads that were placed on Facebook and Twitter last year with the intention to influence the American presidential election. To be honest, I think you already made the perfect analysis yourself. People will indeed always find unintended ways of using technologies and other means. And then there’s the immediacy of posting on Facebook and Twitter. Things will come online without delay and thus with very limited room for intervention or correction.
Perhaps that stems from the way things are generally done in the media: very limited checks before publication, but an evaluation – and taking lessons from that – afterwards. I found out about this practice when I visited the local newspaper one day, to learn how they managed to make a new product – the newspaper – every day, while it took us multiple years to develop a new x-ray machine. This approach is called ‘benchmarking best practices’.
The newspaper editors told us they worked by a set of simple rules, which everyone knew. For example: no negative publicity on the royal family. And they did not check any stories upfront for lack of time, but would discuss them the next morning instead. This is in line with Mark Zuckerberg’s quote: ‘We don’t check what people say before they say it, and frankly, I don’t think our society should want us to.’ In the case of the meddling ads the damage was done long before any evaluation took place, so it was irreversible.
Your second question, about the role ethics play in the development of new medical technology, had me thinking a bit longer. To be honest I can’t recall any deep discussions on the topic during my years in the development of new cardiovascular x-ray systems. Improving these systems usually means improving the treatment for the patient with better images, reduced doses of x-rays, etcetera.
Patient safety is of the greatest importance during the development cycle of these imaging systems, so hazard analysis and extensive testing are always part of the process. Part of that testing is to determine how to best move around a patient and how to protect them against potential collisions during the process.
During my first project I started these tests with myself as the patient on the table. Initially some of my colleagues thought that was a bad and unsafe plan. My answer was this: if we don’t even dare to lie on that table ourselves, we can’t ask a patient to do so. So it became common practice for developers to voluntarily play the patient during some of the collision tests.
Now, at Eindhoven University of Technology, I am confronted with many more ethical questions, for instance about the development of implants like pacemakers and brain implants. People depend on these technologies and their quality of life can be at stake. One of our four faculties that are involved in these projects has its own ethics department.
During the development of new devices and apps, healthy people and patients are also ‘used’ as test subjects. There is an increasing number of regulations governing these practices in the Netherlands. Every experiment has to comply with these rules and regulations, and test subjects have to sign an agreement before taking part. All of this is also overseen by an ethics committee from the university.