Brain Activity & Thoughts: Should Neuro-Rights Look Beyond the Individual?
Neuro-rights may protect people from certain harms arising from advances in neurotechnology. Neurotech has the potential to improve medical treatments and revolutionize care, but there are foreseeable risks. Marcello Ienca defines neuro-rights “as the ethical, legal, social, or natural principles of freedom or entitlement related to a person’s cerebral and mental domain; that is, the fundamental normative rules for the protection and preservation of the human brain and mind.” Importantly, technological advances that use implants or devices could be evaluated using common bioethics, privacy law, and responsible-tech principles and terminology. New tech could harm people by infringing an existing right, or a right one would reasonably expect to have even if it is neither codified nor overtly recognized in the neuro arena. There is debate over whether new laws are needed, as current privacy rights likely already protect people’s mental and cognitive privacy and liberty to some degree.
Examples of neurotech devices and the conditions they might treat include devices that enable people who are paralyzed to move their limbs by thinking commands, allow a colorblind person to experience color through sound waves, alleviate symptoms of drug addiction by altering dopamine in the brain, or transcribe the thoughts of those who cannot speak into text. Beyond medicine, neurotech has commercial, enhancement, and military uses. There are also movements, operating outside the traditional medical landscape, that use the technology for enhancement through self-experimentation and biohacking. Risks arise from possible physical harm, the potential for large-scale data collection, and the power to alter society.

A Developing Field of Ethics and Law
Neuro-rights as a body of scholarship is in its infancy. The suggested rights tend to stem from well-recognized rights and ethical considerations like liberty (to have thoughts, beliefs, etc.), dignity, privacy, control or agency (including free will and autonomy), integrity, fairness in access to new technologies, and freedom from built-in bias. (The bills passed in Chile “include the rights to personal identity, free will, mental privacy, equal access to cognitive enhancement technologies, and protection against algorithmic bias.”) Yuste et al. list proposed neuro-rights including “(1) the right to identity, or the ability to control both one’s physical and mental integrity; (2) the right to agency, or the freedom of thought and free will to choose one’s own actions; (3) the right to mental privacy, or the ability to keep thoughts protected against disclosure; (4) the right to fair access to mental augmentation, or the ability to ensure that the benefits of improvements to sensory and mental capacity through neurotechnology are distributed justly in the population; and (5) the right to protection from algorithmic bias, or the ability to ensure that technologies do not insert prejudices.” Notably, these principles do not explicitly include harm prevention.
In the AI ethics arena, algorithmic bias is well recognized. Access to care varies among countries: some prefer entirely tax-supported health care, while others have limited coverage, rely on private insurers, and may be less likely to ensure equal access, especially when technologies are new and not yet the standard of care.

Privacy, integrity, and liberty may be reshaped as mental privacy, integrity of thought (with agency over how thoughts are transcribed and conveyed), and cognitive liberty, but they stem from familiar terms with extensive bodies of work supporting their virtue and application.
Whose Rights? Other Individuals and Society May Need Protections
I would argue that the rights-based framework could overlook the broader issues of freedom and the degree to which one person’s freedom might impact another person or society. That is, the rights of the person engaging in the neuroactivity are the subject of proposed laws and resolutions, but the rights of people who do not wish to go along are not as well protected. A changing tide that brings a new social norm carries dangers that may not be addressed in a rights-based framework that fails to account for the rights of others in society, or for circumstances when different parties’ rights conflict. (For example, if I have the right to enhance my vision to the extreme of seeing through distant windows, then you can no longer exercise your right to privacy, or it becomes more difficult for you to do so. In enhancement, from intellect to strength, bioethicists similarly recognize that those who wish to go about their business as usual may have no right to a non-enhanced peer group or populace.) The choice to participate in or opt out of modern technology will likely become difficult, and may eventually not seem like a choice at all if neurotech that sets a new societal standard becomes widespread. When seen in the context of the debate between transhumanists and bioconservatives, or from the realistic center position that much of technology and medicine lies between treatment and enhancement, neurotech will require an analysis of what makes human beings special, unique, and valued, and of how changing that calculus could impact society. Emotions like compassion, empathy, and pity, and the impetus to help the downtrodden, may diminish as more people come to see cognitive, physical, and mental states as a choice rather than a result of genetics, effort, and environment. Rights-based considerations may protect individuals yet ignore the implications for global society.
The use of neurotech for enhancement is part of the biohacking and cyborg movement, which evades the requirement of IRB approval for neurotech experimentation. Furthermore, neurotech could eventually become a norm by which everyday tasks are accomplished. For example, if I think that I want the subway turnstile to unlock, it may open and know whose account to charge, but what will that mean for someone who wishes tokens were still accepted?
Privacy Laws and Principles: Informed Consent for Use of Cognitive Data
Informed consent is unlikely to protect people in the way they may wish. For now, neurotech devices require an implant, making consent to the device and a certain amount of data collection a reasonable part of the initial process. But I would argue, as I often do about informed consent’s inadequacy in the big data setting, that informed consent will be inadequate to protect people once excess information is collected, stored, and subject to possible breaches, malware, or inappropriate uses. Informed consent does not address societal vulnerabilities. Technology that can translate, record, or map thoughts would “mark a radical departure from conventional accounts of one’s mind as accessible only to oneself.”
Other laws would need to address some of the new problems of neurological and cognitive data, recorded thoughts, and thought patterns. Collection and use limitation principles should apply. A different angle, such as making the derived data un-subpoena-able or inadmissible in court, could ensure fairness in countries that value constitutional and human rights and require prosecutors to prove a case using well-established rules of evidence, but such legal measures would not help in countries where individual liberties, the right to a fair trial, and a prohibition on unreasonable searches and seizures do not exist. And, while it is unlikely, especially in the US where free speech rights have gone corporate, advertising laws could prohibit the use of certain data for targeted advertising.

The jump from wearables to implants is arguably small. Neurostimulator implants for epilepsy already record data. Cognitive data is collected in many ways; test scores and implants may divulge similar personal data. To the extent that medical implants that stimulate the brain collect data similar to what is already gathered by other means, existing rules likely already govern the privacy space. Addressing those rules’ insufficiency might require laws in other arenas or explicit collection and use prohibitions in privacy laws. Yet researchers advocate for more accessible deidentified or anonymized data, arguing that promoting scientific discovery outweighs privacy concerns.
Neurotech’s data could lead to an intrusion on privacy that is different in both type and degree. A new element of privacy for technology that reads and records thoughts, “mental privacy” (as protected by Chile’s new constitutional amendment), will likely be limited to instances where there is a reasonable expectation of privacy. Thoughts are predictable and may be estimated from pictures that show facial expressions, and private thoughts may corroborate thoughts willingly posted or stated in public or on social media. People may also agree to share neurotech data for the sake of using it in day-to-day activities. For example, if thinking is a way to change the television channel, then a corporate entity will likely abscond with the thoughts as part of the use pact (I know the words television and channel are likely to be outdated any minute…).

Collection limitation, use limitation, proportionality, harm prevention, and the ability to access and fix information (possibly even to correct data used out of context) are important considerations. Many data laws promote ease of cross-border data transfer or facilitate medical research rather than having privacy as their singular goal. Something as personal as a thought deserves significantly more privacy protection. Use prohibitions would do more for privacy than informed consent, as data uses may prove dangerous in unexpected ways. For example, foreign bad actors could hack stored data, revealing the thoughts, impulses, and cognitive abilities of populations. The many sector-specific US privacy laws and state laws may already cover some neurotech data, and it is difficult to see whether more sector-based privacy laws would add much other than confusion.
Privacy law and policy become increasingly less important as people willingly divulge data, undermining the confidentiality aspect of privacy. The biggest privacy issue is protection from surveillance, bad actors, and discrimination, which can be achieved by looking to other bodies of law like criminal law and civil rights law.
Unethical Uses
The use of neurotech to exert control over another person against that person’s will seems far-fetched, but there are many criminal justice and military scenarios where the temptation to force truth-telling is so strong that a neuro-device might be considered. The UN Declaration on the Protection of All Persons from Being Subjected to Torture and Other Cruel, Inhuman or Degrading Treatment or Punishment would not seem to apply definitively, as implanting a device, if anything, could be argued to make physical torture obsolete. The US narrowed its view of which actions constitute torture during the “war on terror,” and its privacy frameworks would be unlikely to protect military prisoners’ thoughts without special statutory provisions.
Data-generating patents will also be problematic, as neurotech, much like wearables and biological data collection devices, may yield a wealth of data that gives certain corporate actors an advantage over their competition (much as the owners of data-generating patents for BRCA genetic detection held an advantage that, combined with trade secret laws, led to a chokehold over that particular data market).
Treatment or Enhancement: Will Use Matter?
The ethical justifications and considerations surrounding neurotech advances that bring people to a physical or mental norm should differ from those used for enhancement. The role of this distinction is unclear in the brand-new vernacular of neuro-rights: neuro-rights may extend the transhumanist school of thought and encourage biohacking or non-IRB-approved or non-FDA-approved implants. There are people who describe themselves as “cyborgs,” having engaged in neurotech, usually for enhancement and often performed by biohackers rather than IRB-approved researchers or doctors. Identifying which uses fall within the traditional ethos of medicine requires absorbing and understanding the ethics driving the multitude of views on the goals and limitations of medicine. We may consider neurotech discoveries science, but not necessarily medicine, making the intersection of tech ethics and medical ethics the best combined starting point. There is already a widespread ethics debate over genetic alteration and enhancement, with a body of ethics literature applicable to modern technologies. And tech ethics already speaks to complex issues like robotics and the future of work.
The Big Picture
In addition to the neuro-rights framework and its effort to shore up individual rights, it may be imperative to turn our attention to societal issues. If neurotech has a solution for Alzheimer’s, can its use be controlled before it is used to enhance almost everyone’s memory? If it allows those with paralysis to move about, can it avoid tempting those who are already fast to use the tech to become Olympic-caliber? A rights-based, privacy, and agency model does not offer society a say in such rapid change, and may presume acceptance. If rapid societal change continues, an ethics analysis would do well to consider the role of neurotech in society and its potential for good and bad, and it must go beyond the privacy issue by envisioning the use of thought and neuro signals as a method of participation in common daily actions, as cell phones arguably are now. Laws that prevent surveillance and impose collection and use limitations would protect society while recognizing autonomy and the power to participate in, as well as opt out of, approved, well-considered neurotech implants.
Encouraging and incentivizing research and development of neurotech that is within the ethos of medicine is a worthy priority that may lead to a beneficial period of rapid change and provide a roadmap for resolving complex medical problems.
Thoughts & Questions
If drugs and technologies vie for sales in the neurocognitive arena, would US pharmaceutical companies lobby against devices to protect their market share? There is an inherent, noteworthy competition between tech and pharmaceuticals.
Eavesdropping laws like the Wiretap Act could expand beyond the “oral” to include thoughts and cognitive data that is not expressed orally.
Can the processes embedded in deliberative democracy allow for referendums or other public participation in the approval of neurotech for uses beyond medical treatment? The group in favor of keeping the status quo would not have rights under the neuro-rights framework. (Anti-progress is not really a right…)
If privacy’s two-pronged nature breaks apart, will protection from government (and corporate) intrusion be possible without confidentiality?
Neuro-rights leads to the separate categorization of data derived from thoughts and brain activity, but that data could instead be viewed like any other personal data, subject to existing frameworks and laws.
Similarly, protection from bodily intrusion should protect people from neuro-intrusion, or the wrongful use of neurotech to brainwash, shape thinking, or alter neuro-activity. (For example, if there is a cure for addiction, it is unlikely that it could be imposed against someone’s will.)
While arguably controversial, presuming data is owned by the data subject is a better starting point for all personal data and would promote fairness, compensation, and autonomy. Data ownership and control are increasingly important as data is so available and often changes hands without the knowledge or consent of the data subject.
Photo 132826092 / Brain © Blackboard373 | Dreamstime.com