If you’ve ever wished your brain was more user-friendly, neurotechnology might seem like a dream come true. It’s all about offering you ways to hack your brain, getting it to do more of what you want and less of what you don’t want.
There are “nootropics” — also known as “smart drugs” or “cognitive enhancers” — pills that supposedly give your brain a boost. There’s neurofeedback, a tool for training yourself to regulate your brain waves; research has shown it has the potential to help people struggling with conditions like ADHD and PTSD. There’s brain stimulation, which uses electric currents to directly target certain brain areas and change their behavior; it’s shown promise in treating severe depression by disrupting depression-linked neural activity.
Oh, and Elon Musk and Mark Zuckerberg are working on brain-computer interfaces that could pick up thoughts directly from your neurons and translate them into words in real time, which could one day allow you to control your phone or computer with just your thoughts.
Some of these technologies can offer very valuable help to people who need it. Brain-computer interfaces, for example, are already helping some paralyzed people.
But neurotechnology can also seriously threaten privacy and freedom of thought. In China, the government is mining data from some employees’ brains by having them wear caps that scan their brainwaves for anxiety, rage, or fatigue.
Lest you think other countries are above this kind of mind-reading, police worldwide have been exploring “brain-fingerprinting” technology, which analyzes automatic responses that occur in our brains when we encounter stimuli we recognize. The claim is that this could enable police to interrogate a suspect’s brain; his brain responses would be more negative for faces or phrases he doesn’t recognize than for faces or phrases he does recognize. The tech is scientifically questionable, yet India’s police have used it since 2003, Singapore’s police bought it in 2013, and the Florida State Police signed a contract to use it in 2014.
All these developments worry Nita Farahany, an ethicist and lawyer at Duke University and the author of a new book, The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology. As an Iranian American, she’s particularly scared of a future where governments read minds and punish people for thinking about, say, organizing to overthrow an authoritarian regime. “Will George Orwell’s dystopian vision of thoughtcrime become a modern-day reality?” she writes.
Yet Farahany is no Luddite: She believes we should be free to embrace neurotechnology if we choose — but only if we also update our laws so we can reap its benefits without courting its risks. She argues that we need to revamp human rights law with a focus on protecting our cognitive liberty — the right to self-determination over our brains, our thoughts, our inner world.
I talked to Farahany about the ethical dilemmas raised by emerging neurotechnologies. Should you have the right to enhance your brain however you want? What about erasing painful memories, à la Eternal Sunshine of the Spotless Mind? A transcript of our conversation, condensed and edited for clarity, follows.
Neurotechnology seems like it’s on a collision course with freedom of thought. Do you think that huge risk is counterbalanced by the benefits we stand to reap?
The risks are profound. And the gaps in our existing rights are deeply problematic. So, where do I come out on the balance? I’m a little bit of a tech inevitabilist. I think the idea that you can somehow stop the train and say, “On balance, maybe this isn’t better for humanity and therefore we shouldn’t introduce it” — I just don’t see it working.
Maybe people will say, “My brain is too sacred and the risks are so profound that I’m not willing to do it myself,” but with the ways that people unwittingly give up information all the time and the benefits that are promised to them, I think that’s unlikely. I think we’ve got to carve out a different approach.
I hear the idea that maybe we can’t or don’t want to ban the tech wholesale, but I do want to push back a bit on this idea of tech inevitability. That strikes me as a myth that the tech world likes to tell itself and all of us. History is full of examples of technologies that we’ve either decided not to build or that we’ve built but placed very tight restrictions on — nuclear weapons, genetic engineering.
I tend to think more in terms of, how can we shape the incentive structure so that companies or governments will be less likely to roll out certain technologies? And of course, part of the incentive structure has to be law.
Let me respond to [the idea of placing] tight regulations around it. Here’s the thing that keeps me from going there: We have an unbelievable burden of neurological disease and mental illness worldwide. Even as our physical health overall improves, our mental health is deteriorating, and depression rates are skyrocketing.
I think we need urgently to address that. And part of the reason we haven’t is that we haven’t invested in brain health and wellness, or put it on the same level as the rest of our physical health. I think empowering people with information to take their mental health and brain health into their own hands could be transformational for those trends. My hope is to find some way to make that possible.
The dystopian possibilities of this technology are off the charts, but so is the possibility of finally claiming cognitive freedom in the sense of true mental health and well-being.
What exactly is cognitive freedom or cognitive liberty to you?
It’s a right from and a right to. Overall, I define it as the right to self-determination over our brains and mental experiences. That means a right from interference, and a right to access, change, and improve our own brains. That’s maybe why I come out differently than some people who might just say, let’s tightly regulate this or just ban it.
In terms of a freedom to, there are all kinds of cognitive enhancements that people might be interested in. I’m thinking of nootropics or smart drugs, but there are also other types of neurotechnology that people could potentially use — neurofeedback, brain stimulation.
Even if we imagine that we’re in a world where these technologies are equally accessible to all, I still wonder: Should workers actually be forbidden from using cognitive enhancements because doing so creates a norm that others might then feel subject to? Will the pressure to enhance become coercive, so that people end up using smart drugs or devices even though they didn’t want to?
It’s a good question. That especially becomes problematic if we’re talking about drugs that are unhealthy, right? Part of the reason that we ban steroids in sports is because we want to protect players in a kind of paternalistic way … because that can have serious health consequences.
But I want you to imagine if there aren’t health consequences. Let’s not talk about methamphetamines; let’s talk about drugs that have very clean safety profiles. Then ask the same question of, if everybody feels pressure because everybody else has improved their health and well-being or their cognitive abilities, what’s wrong with that world?
And if what’s wrong with that world is that we feel we’ve intensified the rat race and made everyone feel like they have to be more productive all the time, then what we’re complaining about is the structures and underlying forces in society, not the drugs.
I think the issue would be, who gets to decide what counts as improvement? I was once having a conversation with some folks in the Bay Area. We were talking about smart drugs, and everyone at the table was saying, “If you put a pill in front of me right now that could bump my IQ from, let’s say, 100 to 150, I’d want to take that!” I was an outlier saying, “Actually, I don’t necessarily want to be smarter. Smarter is not necessarily happier or wiser. And I’m also worried about the implicit coercion thing.”
For me, it all comes back to the same question: Do you have a right to self-determination over your own brain? So to your question, “Who gets to decide?” — I think you get to decide. I think you should be the one who decides whether you enhance your brain, slow it down, or do none of those things at all.
I’m writing against the grain, right? There is what I think is a very strong paternalistic drive when it comes to health, even in mainstream academia and bioethics, where people are, for the most part, extremely liberal. And I come out differently. I come out believing that giving people autonomy over their brains and mental experiences is critical.
There is truth to that, but at the same time, I think you’re writing very much with the grain in the sense that the dominant mode of thinking since the Enlightenment is that the individual is the proper seat of autonomy and decision-making. And you’re very much arguing for individual autonomy.
I classically think of myself as someone who is very ardently pro that! But I’m also aware that even people like John Stuart Mill, who was really harping on liberty and the individual, were simultaneously acknowledging that we’ve got to have liberty, but only up to the point where it hits upon society’s interests and maybe harms others.
So far we’ve mostly been talking about enhancing the brain, but there’s this question about whether cognitive liberty means I should also be allowed to diminish my brain. I right away think of Eternal Sunshine of the Spotless Mind and the ability to erase painful memories.
In your book, you talk about this specific neurotech technique, DecNef, that can potentially be used to process traumatic memory. A person sits inside a scanner and recalls a traumatic memory. Machine learning algorithms map the areas of the brain that the memory activates, and then the person basically erases those memories through a process of neurofeedback. So the idea is that neurotech may offer hope for healing traumatic memory, or maybe even preventing it from getting established in the brain to begin with.
Yeah, I write about this because it’s very personal to me. … I give the example of our second daughter, Callista, who died. And our experience of being in the hospital with her and how traumatic that was and the PTSD that I suffered for years as a result afterwards. And I tried therapy. I tried the drugs [like propranolol, a medication usually prescribed for high blood pressure that was studied — in vain, it turned out — to see if it could prevent PTSD by disrupting memory consolidation]. I have not yet tried DecNef, but I would if I had the opportunity to and was still suffering from PTSD.
It works the same way as in intractable depression: when you are most symptomatic, you have a particular pattern of neurons firing in your brain — and through implicit reactivation of those same pathways, you can rewire the brain by training it, over and over again, toward a different outcome. The precision with which you can see the activation patterns and then use that information to rewire the brain is profound.
It was really striking to me that you wrote that you would try DecNef if given the chance. That set me off wondering for myself personally. On the one hand, it sounds amazing, this idea of neurotech healing traumatic memory or even preventing it from getting established in the brain to begin with.
On the other hand, I was thinking about how my dad passed away about a year ago. In the last year of his life, I was caring for him and it was really intense. I think probably there was some kind of trauma incurred there. And as a result, the past year has been one of the hardest years of my life.
If you’d asked me earlier whether I wanted to sign up for something that would prevent that mental anguish, I might have been tempted. But a year later, having gone through that suffering, I actually think there was a lot of growth that thankfully I was able to come out of it with: more self-compassion, and more compassion for others. It reminds me of this concept of post-traumatic growth, where people come out of an experience with new capacities — the flip side of PTSD. And in the book you also write that as a result of your experience, you feel like you came out with more compassion and became a stronger ethicist.
Yeah, I don’t think I would’ve used DecNef ex ante. There is something really important about suffering. It has been core to the human condition. It helps us to prevail. So much poetry and music and everything else comes from suffering.
I say I would have used it because the trauma echoed for years and I couldn’t sleep, and it was vivid in ways that… I couldn’t function. I would never want to forget Callista or what we went through with Callista. But living through it — from the emotional power of it, to the fear, to the smells, to the echoes of the sounds in my brain — I did not need it at that level.
And so if DecNef could help turn it down so that when I remembered it, I could remember as I do now, with fondness … but not literally relive it — I would, I would do that. I would regain that time to not relive that over and over again.
Absolutely. That makes a ton of sense. This is something that I was genuinely struggling with while reading, because on the one hand I felt this sense of, I don’t want to cheat myself out of an opportunity for potential post-traumatic growth, but also, I think there really is such a thing as too much suffering.
The Buddhist teacher Thich Nhat Hanh has a phrase I really like: “No mud, no lotus.” Meaning, some suffering can be fertile ground for growth. But when he was presented with the question of how much we should suffer, he said, “Not too much!” Because that can just be like a landslide that we don’t know how to pull ourselves out of.
I think that’s right. I hope that people’s choices are to not eliminate experiencing sadness and suffering. I don’t want that. I don’t think that’s good for humanity. I also don’t think it’s up to me to decide for individuals what suffering they do and don’t want to go through.
Absolutely. And I want to underline that treating PTSD or depression is not the same as eliminating suffering. We should absolutely treat things like PTSD or depression. But I’m really not sure about the quest to eliminate suffering, as some people want to do in the transhumanist movement — the movement that’s all about using tech to usher in a new phase of human evolution.
You ask in your book: “If your brain had a switch to turn off suffering, would you use it?” I wouldn’t.
I wouldn’t. But I would turn down the volume for the years that followed [with PTSD], because I didn’t need it at that volume.