Before we augment people with tech, we’ll need proper rules
New technologies – from artificial intelligence to synthetic biology – are set to alter the world, the human condition, and our very being in ways that are hard to imagine. The discussion of these developments is, as a rule, limited to individual values. But it is also crucial to talk about the collective human values that we wish to guarantee in our intimate technological society. That puts an important political question on the table: how can we develop and implement human enhancement technologies in a socially responsible way?
During the last few decades, the human being has become an increasingly acceptable object of study and technological intervention. We are an engineering project ourselves. An important engine behind this development is the combination of nano-, bio-, information, and cognitive technology. This so-called NBIC convergence is creating a new wave of applications, consisting in large part of intimate technologies capable of monitoring, analyzing, and influencing our bodies and behavior. In essence, the NBIC convergence means a steadily more profound interaction between the natural sciences (nano and info) and the life sciences (bio and cogno). This interaction leads to two megatrends: “Biology becomes technology” and “technology becomes biology.”
In the natural sciences, a revolution has occurred in the area of materials. Where in the seventies we could research and manufacture materials only at the micro-scale, we have now learned to do so at the nanoscale. A DNA strand, for example, is almost two nanometers (two-millionths of a millimeter) thick. Nanotechnology laid the groundwork for the computer revolution. In turn, those computers make it possible to create better materials and machines. In this way, nanotechnology and information technology spur each other on. Digitization makes it possible to gather large amounts of data about the material, biological, and social world, and to analyze and apply them. Consider the self-driving car, which makes use of digital maps and adds new information to those maps with every meter traveled. A cybernetic loop thus arises between the physical and digital worlds.
Living organisms, like the human body, are seen more and more as measurable, analyzable, and manufacturable
The above developments in the natural sciences stimulate the life sciences, such as genetics, medicine, and neuroscience. Modern equipment, from DNA chips to MRI scans, offers countless opportunities to investigate and intervene in body and brain. This leads to the statement that “biology is increasingly becoming technology.” That means that living organisms, like the human body, are seen more and more as measurable, analyzable, and manufacturable. Germline technology is a typical example of this trend. In the summer of 2017, an American research team succeeded for the first time in using CRISPR-Cas9 technology to repair a hereditary disorder in the DNA of a (viable) human embryo.
In turn, insights from the life sciences inspire the design of new types of devices: think of DNA computers and self-repairing materials. Simulating the workings of the brain in hardware and software is, for instance, an important goal of the large-scale European Human Brain Project, in which the European Commission is investing a billion euros over ten years. This leads to the statement that “technology is increasingly becoming biology.” Engineers increasingly attempt to build qualities typical of living creatures, such as self-healing, reproduction, and intelligence, into technology. Examples of this second trend are artificial intelligence and android social robots.
The trends “biology becomes technology” and “technology becomes biology,” when applied to the human being, mean that humans and technology are increasingly merging with each other. The Rathenau Instituut therefore speaks of an intimate technological revolution.
Consider technologies external to our bodies too
The trend “biology becomes technology” drives the debate over “human enhancement.” Traditionally, this debate focuses on invasive medical technologies that work inside the human body. Consider psycho-pharmaceuticals like methylphenidate (Ritalin), which is used to suppress powerful behavioral impulses and improve the storage capacity of our working memory, or modafinil, which can help make us more alert and thoughtful. Other commonly cited examples in discussions about human enhancement include neurotechnologies like deep brain stimulation and other brain implants, and biotechnologies like synthetic blood substitutes, artificial retinas, gene therapy, and germline modification.
What does it mean to be human in the 21st century? That question also pertains to the trend “technology becomes biology,” that is, to technologies outside the body that have an impact on people’s physical, mental, and social achievements. One example is the Tactical Assault Light Operator Suit (TALOS), an exoskeleton developed by the US Army to make soldiers stronger and less vulnerable to bullets. Beyond that, consider persuasive technology: information technology designed to influence human behavior. Think, for example, of smartphone apps that advise people on what (not) to eat, on their driving, or on how to handle social relations or money. Or a smart bracelet that monitors perspiration and heart rate and vibrates if the wearer displays aggression: having learned through a role-playing game that aggressive behavior doesn’t pay off, the wearer is expected to avoid similar behavior in the real world. Through EEG neurofeedback, people can also gain insight into their brain activity and learn to influence it in order to change their behavior.
Intimate technologies offer opportunities for human enhancement, but can also lead to essential changes in human skills and the way we communicate with one another.
The above technologies, working outside the body, raise questions about autonomy and informed consent: are people in “smart” environments really able to make informed decisions? When does the concept of technological paternalism become relevant? Can persuasive technology further weaken an already weak will? Is it morally permissible to influence people’s behavior – even for the better – without their knowledge? Just like invasive technologies, non-invasive technologies raise questions about privacy, as well as bodily and mental integrity. In the case of many persuasive technologies, you have to give away a lot of your data in order to improve yourself. Do users really remain in control of their own data? Do we have the right to remain anonymous, to opt out of being measured, analyzed, and coached? And how could we, in a world full of sensors? The rise of facial and emotion recognition, in particular, makes this a pressing question.
People can voluntarily insert the above invasive and non-invasive technologies into their bodies and lives, for instance, to become stronger or more attractive. But technology can also have unintended side-effects. Through the increasingly intensive use of technology, our abilities begin to change. We develop new competencies (a phenomenon called “reskilling” or “upskilling”), such as all kinds of digital skills. Other competencies might be reduced (“deskilling”). There is, for example, a body of research that appears to indicate that our social skills, such as empathy, are crumbling through excessive computer use. Intimate technologies, then, offer opportunities for human enhancement, but can also lead to essential changes in human skills and the way we communicate with one another. Such changes in the human condition transcend the level of the individual. They touch upon collective questions and values and demand public debate and, where necessary, political consideration.
Paying attention to collective values
The current debate on human enhancement, though, largely limits itself to individual goals. Examples of classic questions are: is human enhancement an individual right? Can people decide for themselves whether they want technological enhancements? In The Techno-Human Condition, Braden Allenby and Daniel Sarewitz argue that such an approach is inadequate. They suggest that the debate over the impact of human enhancement ought to be conducted on the following three levels of complexity:
- The direct impact of a single technology;
- The way in which technology influences a socio-technological system and the social and cultural patterns it affects;
- The impact of technology on a global level.
Take the car as an example. The car, in principle, gets you from A to B faster than a bike would (level 1 reasoning). But if many people drive cars, the bike can sometimes be the faster option in the city (level 2 reasoning). On a global scale, the rise of the car has led to a variety of important developments, such as the oil economy, Fordism (the model of mass production and consumption), and climate change. Allenby and Sarewitz posit that the current debate over human enhancement frequently remains at the instrumental level. It revolves especially around the question of whether people have the right, on the basis of free choice, to opt into technologies designed to enhance their bodies and minds. Contrary to what transhumanists often suppose, they show that – just as the car isn’t the faster choice under every circumstance – the use of human enhancement technology at the individual level doesn’t straightforwardly lead to a better individual quality of life, let alone to a better society. The application of human enhancement technology will frequently be driven by economic or military motives (level 2 reasoning). Such a scenario complicates the issue of individual free choice, because in that case, “The posthuman person is not a self-made man, but a person designed by others.”
The posthuman person is not a self-made man, but a person designed by others.
The mass deployment of human enhancement technology will also have effects – although hard to predict – on a global level. In Homo Deus, Harari sketches two (parallel) long-term scenarios: first, the arrival of the physically and mentally enhanced “superman” (Homo Deus) and a division between supermen and normal people (level 3 reasoning). According to Harari, in the long term, this could lead to the abandonment of the principle of equality that forms the basis of the Universal Declaration of Human Rights. In addition to this “biology becomes technology” scenario, Harari presents a “technology becomes biology” scenario. He anticipates the rise of “dataism,” in which humanity embeds itself in an Internet-of-All-Things and allows itself to be guided purely by AI-generated advice dispensed by computers. In this scenario, humanity has given up all its privacy, autonomy, individuality, and consequently democracy, which is based on personal political choices. Although such scenarios are speculative, they show us which important issues are at stake and show that it is important to look (far) beyond the individual, instrumental level.
The Dutch discussion of germline technology shows that this often does not happen. So far, collective interests have played a negligible role in that debate, despite the fact that CRISPR modifications to the DNA of an embryo are irreversible and inheritable by future generations. In the current debate, the pragmatic approach familiar from the medical-ethical regime still dominates. First, a lot of attention is paid to the international position of the Netherlands: the country doesn’t want to fall behind as a knowledge economy. Second, there is a special focus on the health benefits germline modification can deliver for the individual in question, with a traditional risk-benefit analysis at the center. Third, significant emphasis is placed on strengthening reproductive autonomy: the opportunity germline modification offers prospective parents with a hereditary condition to have a genetically healthy child of their own.
But germline modification also raises questions that do not fit neatly within the framework of medical-ethical principles oriented towards safety, informed consent, and reproductive autonomy. In terms of collective values and international human rights, there should also be a place in the debate for the notion that the human genome is our common heritage, and thus our collective property.
New NBIC technologies are set to alter the world, the human condition, and our very being beyond our imagination. Above, we argued that in relation to human enhancement we must consider both invasive medical technologies (the trend “biology becomes technology”) and technologies outside the body that nevertheless have an impact on people’s bodily, mental, and social performance (the trend “technology becomes biology”). Futurist thinkers from Harari to Aldous Huxley and Raymond Kurzweil show us what is potentially at stake this century: radical improvement of human capacities and choices, division between “natural” and “enhanced” humans, the abolition of the individual and, in its wake, democracy. That puts a crucial political question on the table: how can we develop and implement human enhancement technology in a societally responsible way?
Technological citizenship is the collection of rights and duties that makes it possible for citizens to profit from the blessings of technology and protects them against the attendant risks.
To give direction to that potentially radical transition, a democratic search for shared moral principles is necessary, principles that can set the fusion of human and technology off on the right track. An absolute condition for that collective search is a well-developed “technological citizenship” for all citizens. Technological citizenship is the collection of rights and duties that makes it possible for citizens to profit from the blessings of technology and protects them against the attendant risks. It means understanding how statistical results, (genetic) profiling, and self-learning algorithms work, seeing how they affect us, and being prepared to defend against unwanted influences and choose (potentially non-technological) alternatives where necessary. In addition, it is important that citizens have the option of participating in the decision-making process regarding technology at every stage of development, from research to application. Technological citizenship emancipates the regular citizen in relation to the experts and developers of technology.
The role of institutions
Education plays a central role in the promotion of technological citizenship. And that begins with primary and secondary education. Here lies a clear role for the government. In April 2017, the Dutch House of Representatives approved a curriculum revision prepared by Platform Onderwijs2032 (Education2032), which adds two new fields to the curriculum: digital literacy and citizenship. In 2018, development teams are getting started on making those fields a reality. It would be good for the two teams to work in close cooperation, given that citizenship in a technological culture only has meaning if we can engage in an informed discussion about the effect of technology on our private lives and our society.
But education is not enough. To make their citizenship a reality, people need institutions. Without suitable administrative institutions, technological citizenship is an empty shell. It must be possible for rights and duties to be democratically demanded, fixed, and implemented. Individuals, then, can only be considered true technological citizens if they know themselves to be protected by an optimally equipped system of governance. The following four components are crucial to this: 1) rights and compliance monitoring, 2) public debate, 3) political vision, and 4) socially responsible companies.
Robots should not replace human relationships but improve them, whether we are talking about care for the elderly or the upbringing of children
First, citizens must be able to appeal to fundamental human rights suited to the time we live in. At the request of the Parliamentary Assembly of the Council of Europe, the guardian of human rights in Europe, the Rathenau Instituut researched how robotization, artificial intelligence, and virtualization could challenge our current conception of human rights. The Rathenau Instituut proposed, among other things, two new human rights. The first is the right not to be measured, analyzed, or coached: people must have the right not to be surveilled or covertly influenced, and to evade continuous algorithmic analysis. The second is the right to meaningful human contact in caregiving: robots should not replace human relationships but improve them, whether we are talking about care for the elderly or the upbringing of children. Already-existing rights and duties should be put into practice in everyday life so that technological citizens can count themselves truly protected. We wonder whether the current Dutch supervisory authorities are really able to carry out their mission, and whether their mandate is truly adequate. The Netherlands Institute for Human Rights pays little attention to the question of how digitalization can place human rights under pressure, and the Dutch Data Protection Authority is given little scope to look at collective values other than privacy.
Second, a social debate over the impact of new technologies is necessary. While civil society is strongly organized to address environmental problems, the Netherlands still has few established social organizations willing to enter into a critical discussion about the new intimate technology revolution, except in relation to privacy and security. Meanwhile, we ought to be asking which collective human values we wish to guarantee in our intimately technological society. If we don’t debate these issues at this early stage, we effectively leave the course of technological advancement to the engineers, to the market, and to individual choice. Pessers warns of the collective effect of individual self-determination, which stealthily confronts society with a fait accompli, without any democratic debate. In the case of prenatal diagnostics, for example, the abortion of a small number of children with Down syndrome doesn’t change society. But if that starts to happen on a mass scale, it raises the question of whether we really want a society entirely without people with Down syndrome.
If we don’t debate these issues at this early stage, we effectively leave the course of technological advancement to the engineers, to the market, and to individual choice.
Politics and government are called upon to take the lead in the debate and the administrative handling of the intimate technology revolution. Nevertheless, there is at this moment no broad political vision addressing the impact of technology on our being, and the current political debate is driven largely by isolated incidents. For such a vision, further knowledge development is necessary. When it comes to our natural environment, the central concept is ecological sustainability; it took many years and new knowledge to give qualitative and quantitative meaning to that concept. We think that in the debate over the relation between technology and humanity, the concept of “human sustainability” must play a central role. Human sustainability means the preservation of human individuality: what aspects of humanity and our being-human do we see as malleable, and which do we want to preserve? Think, for example, of the desire to keep our empathetic capacities working at a high level, or to have children born from a real mother rather than an artificial womb. Concepts such as human dignity and human sustainability require much greater research and consideration.
Finally, citizens must be able to trust that user interests come first when businesses develop new technological products. The increasing fusion between people and technology forces us to keep in mind the values and norms that we design into products and computer code. On the subject of privacy, academics have argued for years that organizations should pay attention to privacy measures and data minimization when developing information systems. Privacy by design has become a core principle of the new European privacy regulations. Privacy-oriented technology is an example of the broader concept of value-sensitive design, which attempts to incorporate not only privacy but a broad range of relevant collective values, including basic human rights, into the development of technology.
This article is republished from NextNature by Ira van Keulen and Rinie van Est, Rathenau Instituut, under a Creative Commons license. Read the original article. This essay has been previously published by the Hans van Mierlo Foundation, a scientific think tank related to the Dutch democratic liberal party (D66).