Author Talks: Can you use your ‘brainpower’ to defend cognitive liberty?

In this edition of Author Talks, McKinsey Global Publishing’s Raju Narisetti chats with Duke University professor of law and philosophy Nita Farahany about her new book, The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology ‎(St. Martin’s Press, March 2023). Farahany offers insight into the burgeoning discipline of neurotechnology and how its unregulated advancement could compromise mental privacy. An edited version of the conversation follows.

Define neurotechnology for your readers.

I’m a professor of law and philosophy at Duke University, where I direct the Duke Initiative for Science & Society. I’m the author of the new book, The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology.

This book talks about the promises and the perils of the coming age of neurotechnology, an era in which neurotechnology will become the universal controller for all of the rest of our technology. It offers great promise to society, but without putting safeguards into place now, it also introduces great risk.

Neurotechnology is a collection of devices with sensors that can pick up our brain activity, including the electrical impulses that occur as neurons fire in your brain. What does that mean? Anytime you have a thought, whether it be a math calculation or an emotion, neurons are firing in your brain.

They fire in characteristic patterns, and when hundreds of thousands of them fire at once, they discharge tiny electrical impulses. Those impulses can be picked up by sensors worn on the skin or implanted inside the brain.

Wearable devices have sensors worn on the skin, behind the ear, in earbuds, or in headphones; they pick up that electrical activity in the brain. There are other devices that pick up light as it travels through brain tissue or that measure blood oxygenation levels.

But the ones that I primarily focus on are [those that track] the electrical activity picked up through these sensors. That collection of devices comes in all forms, from small tattoos worn behind the ear to earbuds, headphones, little straps worn across the head, or even sensors embedded inside a baseball cap or your everyday hat.

What about this neurotechnology moment makes your book timely?

I’ve been following neurotechnology for over a decade. It couldn’t be any timelier given all of the technologies that are launching in 2023.

What was remarkable about the technology was the form factor. Most of the neurotechnology that I’d been following came in the form of headsets or other devices that were unlikely to become part of our everyday lives.

What was different was that it could be integrated into a watch or another wearable device with multiuse functionality, not for niche applications such as meditation but as a controller for the rest of our technology.

Major tech companies have made huge investments to make this a mainstream technology that will be integrated into all of the things we wear, from headphones to earbuds.

When those sensors are embedded into all the rest of our technology, it suddenly becomes an unprecedented way to gain access to our brains, which has great, really exciting potential. But it also has the most significant potential to oppress us if not implemented in a way that recognizes the safeguards we need for society.

Aren’t our thoughts, unexpressed, still free and ours alone?

I think of our brains as our last bastion of freedom. In many other ways, it may be difficult, if not impossible, to keep data private. In the US, it seems like we’ve all but given up on nearly every other form of data privacy.

So we recognize that when we use search engines, the information we put in is tracked. If we go to the doctor, that information goes into our electronic health record, which might have greater protections. Beyond that, a whole bunch of other information about us is up for grabs: our financial transactions, our GPS location, even our faces.

But our brains have always been special. They’ve been a place of mental reprieve, a place for truly private thoughts. Those range from the little white lies you tell your partner—“Oh, yes, I love that dress on you. It’s very flattering”—to the lie you tell when you walk into your friend’s house, see their hideous new couch, and say, “Oh, it’s a delightful couch.” But it’s not just white lies that we keep private. We also keep private a new idea that we’re not quite sure has legs yet but that we start to turn over in our mind.

[We also keep private] the personal change we want to undertake and the seeds of romance and interest we have in other people. [We even do that with] our self-identity, as we try to figure out who we are and what we want to be, and our sexual orientation before we want to reveal it to others.

Much of that information can soon be decoded with neurotechnology—particularly, continuous neurotechnology. When I say that even our thoughts may be at risk, it’s not that the thought bubbles that pop into our heads can be accurately decoded with today’s technology. But our preferences, our desires, our emotional reactions, even simple words, numbers, and images can be decoded using these consumer wearable neurotechnology devices.

Again, that gives us a great opportunity to develop insights about health, wellness, and well-being. It allows us to unlock the black box of the brain for ourselves and to discover more about our brains, just as we’ve been able to quantify the rest of our bodies. But in the wrong hands, and used in the wrong ways, that last bastion of freedom is at risk, which is why we need to put safeguards in place to maximize the benefits of this technology while minimizing the risks.

How can we secure rights and remedies against the misuse of this information?

In general, our approach to privacy has been a failing, and also misguided, attempt to stem the flow of information. As a lot of people have said, information wants to be free. And there are reasons to let it flow: if data, even neural data, could be shared more freely, it could be mined—especially as we start to correlate the everyday activities of people with healthy brains with their brain data—to understand when cognitive decline has begun, when depression has taken its earliest seeds, even to decode our everyday experiences. The benefits to humanity could be extraordinary.

But the risks are also profound. Those risks are primarily risks of misuse of data: using it to make discriminatory choices; using it to invade a person’s thoughts and preferences in the workplace in ways that go well beyond any bona fide interest an employer might have; using it to figure out a person’s political persuasions and discriminate against them as a result; identifying people who are neuroatypical and discriminating against them in a hiring context.

It’s the misuse of data that we need to be most concerned about, and we need to give people rights and remedies against that misuse. Sometimes that will mean limiting the kinds of information that certain actors can and cannot have access to. A right to cognitive liberty would require, for example, that if an employer wants access to data, the default is that the employee has mental privacy, and only if there’s a legitimate interest—for example, to track fatigue levels—could the employer have access to that information.

But in other contexts, it’s really misuse of information we should be safeguarding against. [Examples are] using it for discriminatory purposes, using it for government interference with individuals’ freedom of thought.

If we do that, we can maximize the possibilities and the benefits of people being able to confidently share their data and learn these tremendous insights, while minimizing the harm they should really be concerned about: their data being misused against them.

Don’t the US Constitution’s First and Fifth Amendments already give us protections?

I wish our existing Constitution, at least in the United States, protected us against the misuse of data. A lot of my early legal scholarship focused on marching through all of the possible constitutional protections that exist within the United States and could protect us.

Take the Fourth Amendment, which protects us against unreasonable searches and seizures, or the Fifth Amendment, which protects us against self-incrimination, or even the First Amendment, which protects freedom of speech. What I found as I explored each of those doctrines is, first, that the protections they offer are quite narrow.

Second, there’s a doctrine that pervades most of the law that treats real evidence taken from the body as different from evidence taken, for example, from papers or things. So, for example, you can be required to give a handwriting sample, a voice print, or a blood sample to see whether or not you have alcohol in your bloodstream at the time you’re driving.

Brain waves and functional brain data are just electrical activity or blood flow in the brain. It’s hard to see how that would truly be treated as protected, testimonial, uncompelled information. There are categories, like memories, that wouldn’t be evoked in response to government action.

So there are limitations to these doctrines, which made me believe that we have real gaps, that they’re unlikely to protect us because of the distinction we’ve always drawn between evidence from the body and evidence we think of as testimonial. In certain situations, they may provide some protection, but certainly not enough for the kinds of intrusions that I envision and talk about in the book.

Will a right to cognitive liberty survive commercial neurotech products and services?

A lot of what’s going to happen in neurotechnology is going to happen in the private sector. That includes private companies using neural data for many of the same kinds of purposes for which they have used other data, such as targeted advertisements.

If Meta, for example, has access to our brain data together with all the rest of our social-media data, it can serve us much more targeted advertisements. Or if an employer asks an employee to accept fatigue monitoring and fatigue management in the workplace, or to have their brain wave activity tracked so the workplace can adapt to their cognitive load, that employer will have access to a lot of brain data.

Pragmatically, the question is how a human right, which is what I’m advocating for with a right to cognitive liberty, gets implemented in the corporate setting. The first way that happens is by changing the default rules.

That is, an international human right to cognitive liberty would, as I describe it, include an update to our existing right to privacy under international human rights law: a right to mental privacy.

The second is a right to freedom of thought, which would encompass not just religious freedom but also the freedom of private thoughts. And the third is a right to self-determination over your brain’s mental experiences. What that means is that the default rule is protection of the employee.

The default rule is protection of the consumer. The exception then has to be legally carved out. There are examples of how this works within the workplace setting, but it requires, for example, specific data minimization and a legal, bona fide reason for an employer to have access to the data.

Then it requires that they limit the kind of data they get access to. So, if you are able to read my brain waves, you get a whole lot of information, but an algorithm can pick out just, for example, my fatigue levels. There may be a bona fide legal interest in having access to that if I’m a commercial driver or a pilot.

My right to mental privacy will not outweigh society’s interest in being protected against my mental fatigue. But that doesn’t give the employer, or the corporation that is otherwise gathering brain data, the right to have access to my feelings, my general thoughts, or a romantic interest I may have in the office.

Pragmatically, what it does is set the floor. It then creates mechanisms that exist within international human rights laws as they’re implemented within countries to create exceptions where those exceptions are valid and necessary for society.

How should organizations use neurotechnology with their employees, if at all?

There are incredible benefits that we can expect from neurotechnology. Those incredible benefits are ones that we should want to embrace, and there are ways that we can do so by adopting some best practices, for example, within corporations.

First and foremost, I believe we need to advocate for a right to cognitive liberty and then recognize that existing privacy laws and regulations may have some implications for corporate workplace use of these technologies. Biometric legislation, for example, within Illinois or Washington State or Texas will have implications for how we think about implementing this.

At a corporate level, I think it’s essential that when the technology is adopted, say, when employers decide to issue workplace earbuds with brain-sensing technology for employees to use, it is clearly disclosed in the corporate handbook exactly what data is being tracked and exactly how that data will be used.

For example, will it be used for an evaluative purpose? Or is it purely data that lives on the devices employees are given, data they are empowered to use to help their focus or decrease their stress levels, which can substantially improve workplace morale and productivity?

The terms of use should be designed to empower employees rather than to serve punitive or evaluative purposes. That is, give the tools to individuals so that they can use them to hone their attention and decrease their stress levels.

If it’s being used in an adaptive workplace, for example, where AI working together with human employees reads brain wave activity and calibrates workloads to decrease an individual’s cognitive load, make sure that employees have clear, transparent information about how the data is being collected and whether it’s being stored in the cloud, stored on the device, [or] overwritten over time.

So disclosures are essential, as are clear policies about how brain data is stored over time and how that data is used. With transparency and with a policy that’s really designed to empower employees, this technology can substantially enhance the workplace by making it safer and less stressful and by giving employees more tools to focus on the things they want to focus on.

As an Iranian American, this is also a deeply personal book for you.

I put my heart and soul into this book. That’s because neurotechnology has had a very personal role in my life. It starts with my perspective. I’m Iranian American; my parents grew up in Iran. I grew up with a lot of conversations in our home that left me much more skeptical and suspicious about the ways in which technology could be used not just to empower people but to oppress them.

With that lens and that perspective, [I have an] understanding that technology can be used for revolutions and to unite people, but it can also be used to monitor and surveil people and to disenfranchise them. I think I’ve approached the technology with perhaps a more skeptical or concerned lens than some people otherwise would, and I think that’s been really valuable.

I don’t look at technology just as an early tech adopter who’s excited about the upside. I really think about the global ways in which technology could be used or misused. But I also bring a personal lens to it.

I’m somebody who, from a very young age, has suffered from the neurological impairment of debilitating migraines, and I have tried everything. I’ve tried every kind of medication, oftentimes in off-label uses that neurologists have prescribed.

I’ve tried every single device, from transcranial direct-current stimulation to transcranial magnetic stimulation to EEG devices, fNIRS [functional near-infrared spectroscopy] devices. I’ve had fMRI [functional magnetic resonance imaging] scans. I’ve really had an intimate look at a lot of the technologies, which makes me realize it’s incredibly valuable.

It’s a field that we should be embracing, and we should find ways to help its innovation go forward. Because of that, I recognize the importance of the right to self-determination and the right to self-access: the right to access this technology and harness it for good, for individual use.

That goes for drugs as well as devices, and I’ve used them in very personal ways. I have a chapter in the book [where] I talk about breaking the brain and how we really shouldn’t think differently about enhancing our brains and slowing them down.

If we have the ability to precisely erase memories, and in the book I talk about a very personal and very painful way in which I tried to do so, I think we ought to have the right to do so. Thinking about cognitive liberty through a personal lens, a cultural-heritage lens, a global perspective, and a societal perspective, and as somebody who is a lawyer, a philosopher, and a neuroethicist, has given me, I think, a very nuanced way to approach the issue.

I think it’s a very balanced perspective. Oftentimes books about technology are dystopian. I really wanted to make a balanced argument, to say, “We have a choice to make. That choice is still in our hands as this technology is about to explode and change the ways in which we interact with our everyday technology.”

The choice is what direction it will go: the technology can empower us or oppress us. But it’s still a choice for us as a society to make. I believe that with the right choices now, this is technology that could be transformational in ways that we will want to embrace over time.
