Why Your Brain Fights You
An argument between two scientists, forty years ago, sheds light on the divisions within us
I. Introduction
Psychology has a history. Some academics I know don’t appreciate this until, late in their careers, they realize with a start that “hey, things are different now…” A chilling clarity descends on them: most of what they felt, for decades, was the present is suddenly history, and always was.
One way to vaccinate yourself against this inevitable epistemic shock is to just get comfortable with the idea that your field has a history and you are the inheritor of it. Psychology is not a morass of free-floating ideas—they connect in a temporal sequence and encode the concerns of the past within them. The web of scholarly research is more than just a bucket of Legos that you grab by the fistful to construct arguments that you want to make. It has stories to tell. It can whisper to you.
One particularly pleasant thing you can do as a researcher, then, is to get arms deep into the history of your interests. When you do that? It stops being isolated pieces of information and becomes stories. Stories are a comforting and intimate way to organize information. They lift you up. They remind you that you are not alone. And they point the way for you.
In the spirit of waypointing I want to introduce you to a paper by Richard Lazarus that lies at the nexus of a meaningful moment in psychological history. It was written in 1982, at a time when our understanding of the mind was changing, radically.
We refer to this period as the Cognitive Revolution, and textbooks record it as the point where psychology pivoted from trying to define everything in terms of behavior to acknowledging that it was both possible and necessary to study cognition.
This was supposedly a major victory in the field. I have my reservations: About 40% of Psychology got smarter after we moved past behaviorism. The other 60% got dumber. But in any case, less discussed is the massive upheaval our understanding of cognition went through at that time, as well as its effects.
Lazarus’ paper, Thoughts on the Relations Between Cognition and Emotion, is a great artifact of that time. If you’re keen on it, here’s a link to a copy of the original, housed at the website for Dr. Jane Gruber’s Positive Emotions and Psychology Lab. Or, alternately, here’s the APA record. And don’t worry—if it’s too dry for you, I’m going to unpack it right now.
II. Lazarus (1982): Thoughts on the Relations Between Cognition and Emotion
I first encountered Richard Lazarus and his work midway through graduate school when I decided to focus my research on negative emotion and how it cripples productivity. One of the first questions you have to address, to understand negative emotion, is something like “So how do I know to be scared, in the first place?”
The normal pass-the-bong level answer to that question is something like “Oh, that’s simple. There’s a tiger chasing you and you don’t want to die” which is, of course, true. But that’s a description of what’s going on at a really high level of cognitive abstraction. Psychologists love that level just as much as (if not more than) the others, but if you want to address those weird edge cases (“Johnny jumps right out of his skin every time he sees a tiger plushie”) you need to get deep down into the mechanics of things.1
From an evolutionary perspective “I’m terrified of my upcoming term paper” is one of those weird edge cases. So, mechanics it was, for me. And just one level of abstraction down, you bump into Richard Lazarus, who is sort of the gateway theorist for understanding emotion.
Base-level emotion research from the 80’s and 90’s usually starts by trying to identify three properties of emotions:
What does a person have to interpret in the environment around them (e.g. draw conclusions about what they see or experience) in order for the emotion to be triggered? These are called appraisals.
What changes physically when a person experiences the emotion? How does it alter their body? How does it alter their face?
What does the emotion push a person to do? How does the emotion “want” them to act? These are called action tendencies.
Lazarus is well known for his work on the first part, appraisals2, and here? He was defending the importance of those appraisals against an eminent scientist, Robert Zajonc, who, two years earlier, had published an article titled Feeling and Thinking: Preferences Need No Inferences—a twenty-two page manifesto aimed with millimeter focus at a single goal: dismantling the argument that emotions (specifically preferences—think liking and disliking) are the product of thinking. The empirical and societal contradictions that cropped up because of this belief (I address some of these below) had frustrated Zajonc, and he wanted to correct the record, since a new, uppity branch of psychology had appeared that was once again arguing that cognition preceded emotion.
Lazarus’ response was much shorter—only six pages. And rather than a direct refutation of Zajonc, he made a simple argument that can be summed up in three points:
Zajonc assumed (incorrectly) that “appraisals” referred to a form of thinking that was willful, rational, and self-aware.
Zajonc assumed (incorrectly) that Lazarus and his colleagues were claiming that emotions were the end point of a long chain of cognitive meaning-making, starting from raw, meaning-less sensory input, and ending with a conclusion.
If readers understood (correctly) that Zajonc was actually talking about one small part of cognition—late-stage, rational, conscious thinking—then he and cognitive psychologists actually agreed with each other. The issue at hand was that Zajonc was ignoring the rest of cognition—the brain appeared to be doing quite a bit of cognitive work before conscious thinking emerged. Their disagreement wasn’t about emotion vs. thinking—it was about what to define as “thinking.”
If anything, Lazarus undersold his point: He suggested that the brain assembled interpretations from partial information, but research since then suggests that much more is happening than that. The work of the brain is only “partial” if you think of rational, conscious building blocks as the end-point of mental activity. In any other context it looks like the brain is doing a lot more cognitive work before you become aware of it. Most of this is structural—you can think of it as the pyramid base that the pointy capstone of rational thinking sits on top of.
This paper, then, is a snapshot of the transition from a model of mind that partitioned reason from passion—where the work of form and classification was done entirely by the conscious, thinking part of a human—to a model of mind that blended them. In this new model, form, connection, and meaning happened long before we became aware of them. Long before we thought, in any sense of the word that people would have recognized prior to the 20th century.
It was a strange new atlas of human meaning—turtles all the way down3. And accepting it has changed us.
III. A War of Titans
This whole paper must sound incredibly dry. I hope that it doesn’t, but I get insecure about these papers, because they sound like the things that eggheads argue about when they’re tucked safely away in their carton in the back of the fridge.
Let me try to communicate to you why I like these so much.
The big reason is that papers like these are not isolated from the world. They influence it, but also, they reflect its depths. Academic arguments form like bubbles atop the froth of human concern. For each major preoccupation of an era there is a small crowd of bright and inquisitive people who feel something about that preoccupation and say “I want to understand that better than anyone else alive.” And then? They actually try!
And then they argue about it! And in their arguing they recapitulate their roots. Or grapple with them. Or betray them altogether. But in any case, holy hell, what a drama!
Those arguments, in turn, reveal strange things to the rest of us. Not all of them, of course. Some of them really are just carton-level egghead debate. But, more often than you’d think, you have episodes like the argument between Zajonc and Lazarus.
That argument? It came at the end of a massive, century-long fight between titans—two enormous bodies of philosophy that wrestled for decades, jaws around each others’ throats, over a single question:
What does it mean that I am?
Each titan was an answer to that question. On one side, the clockwork automaton of the old Cartesians: “I think, therefore I am.” In that philosophy the human mind formed a nice, tight lattice, each mortise and tenon joined flush around a sturdy core: “I am the part of me that thinks, reasons, judges, and wills. The rest of me provides sense, and urge, but is subject to me.” This view was embodied in movements like rational actor economics, which posited that people act in their rational self-interest, and the laws governing that interest can be understood by eggheads.
The other titan was… weird. Lovecraftian, even—made up of fuzzy edges and unsettling angles. Stare too long at it and its contradictions could wreck you and refashion you in their image, so that you could no longer talk to normal people anymore—they’d sense the madness on you and leave you standing at the punch bowl, drinking alone. If the first titan said things like “I am reason. I am will. I am choice!” the second replied “Yes, yes, so are we. But we are also a trillion flickers of lightning, and only a few of those bother with reason.”
When these two titans fought? The first one broke. It was brittle. It splintered at the joints, collapsed inward into a heap of its own contradictions. We have been raised in its debris, wandering among clockwork gears that still fit together and function a little bit, but not with the same titanic force that used to animate them.
What does this look like? Well, break away from the titan metaphor, and think of these instead as titanic meme-plexes made out of ideas, and it looks like regular encounters with old ideas that have lost the vital core that once made them formidable. Some examples:
For much of the 1970’s and 1980’s a large number of people were preoccupied with the idea that not only were humans selfish, but that it was impossible for them to be anything else. The chain of logic was clear: If everything passed through your rational mind, and your rational mind acted to maximize its own interest, then didn’t that mean that even apparently “selfless” acts must have been chosen for personal benefit?
For much of the early 1900s it was common to blame people for their own pain. Again, the chain of logic was clear: If something horrible happened but that experience was processed through your rational mind, shouldn’t you be able to control your emotions by controlling your interpretation of the event? And if you didn’t… what did that say about your character? This chain of thinking is part of the reason that the armed forces took so long to accept PTSD. In one of the most famous incidents, General George S. Patton slapped the helmet right off of a soldier who had been hospitalized for severe PTSD, accusing him of cowardice.4
These ideas lack teeth today. It’s not that the old world lacked compassion, or that the modern world disparages willpower. It’s that, until about 1970, we lacked a detailed, compelling neurological account of why normal human beings are internally divided, and why you can’t simply clamp down on and control your mind on a whim. Absent that, propositions like “soldiers with PTSD suffer from a medical condition, not a lack of character” or “compassion is a beneficial evolutionary adaptation but is not calculated to maximize personal benefit” were defensible, not victorious. The difference between those two in shaping institutions, and intuitions, is immense.
Without that body of knowledge, you could argue that nerves overwhelmed will, or that emotion was more responsible for altruism than calculation. But your opponent could also respond “lies; you acted that way because you considered it in your best interest” and the implication just hung there because it was too plausible to refute.
Now? We’ve accumulated such a massive body of neurological evidence for the primacy of non-conscious processing that willpower looks more like a brake than a steering wheel.5
This argument between Lazarus and Zajonc happened right at the tipping point of that transition. Here, in this article by Lazarus? You get to watch two guys bickering at each other as an entire tradition of human philosophy gets flipped on its head.
That’s why I love this stuff.
Want to read more? You can find my article on empathy and altruism at this link.
IV. Understanding Work
I’ve gone long, here, but I’m going to try to bring this home for you in a way that is pertinent to the research on work that I’m doing for Moonshots this year. This fight between titans also influences how we think about work.
So, let me start with something that you know, deep down, but that you may not have thought about formally. You must know, on some level, that doing work you love is psychologically different from doing work you hate. They don’t just feel different, emotionally. The two forms of work have different psychological textures. They involve different actions. They have a different cadence, different aims, different outcomes, and draw upon different wells of energy:
Love-work is spontaneous and exploratory. It often unfolds without a plan and doesn’t require a top-down imposition of clarity or form in order to make it more manageable. It delights in tangles and puzzles. It is not subject to the same rules of fatigue that other forms of work are, or the same rules of habit. It selects and discards goals rapidly, on a whim, and yet strangely manages to progress in spite of this. It is often reflected in the quality of a final product, and leads to outcomes that are reflective of the creator’s personality and idiosyncrasies.
Hate-work is effortful and unpleasant. Unlike love-work, it often requires a plan, since some measure of clarity and partitioning is required in order to break a hated task into manageable doses. Hate-work follows a classic fatigue curve, since it requires a lot of top-down monitoring to push through the unpleasantness. It proceeds by measured goals because it requires clear external signals of progress in order to feel worthwhile. It is also reflected in the quality of the final product, which is often curiously devoid of personality and emotion. Done poorly, it’s possible to spend days on it without feeling like any progress is being made at all.
So, one question that should pop immediately to mind when you think about work this way is, well, what if I take hate-work and do it in the way I would normally do love-work? Like, what if I did tasks I hate in an open, exploratory, spontaneous way? Couldn’t that make them more tractable? Even a little bit more enjoyable? And the answer is absolutely, it can.6
But also, you can put in a good faith effort to do this and find that you can never really sink completely into a piece of hate-work the same way that you do with love-work. Why?
Well, the answer to that goes back to Lazarus and Zajonc and the strange and wild things we discovered about the human brain during the 20th century. As it turns out, a lot of the decisions that your brain makes about a task are decided by parts of you that are outside of your immediate conscious control.
These parts of you aren’t reptile-brain parts. They’re all you, all modern: there was no paperwork on the ancient savannah, so your interpretations of that office report are probably governed by phylogenetically recent systems of cognition that can weigh modern life. But they still happen early in your cognitive loop, deep in your brain, on a level that you don’t usually work through consciously to draw conclusions.
And therefore, if you are facing major work motivation problems? You have to contend with the fact that there’s a second character in your head making decisions before they get to you.
That character isn’t quite the same genre that you yourself are. Talking to it is not the same as geeking out with a friend over Discord. But it’s a being-made-from-brains, and you can find common ground with it—and, in fact, you have to in order to move forward. After all, it’s made from your brain: it’s the trillion flickers of lightning that the “I am reason! I am will! I am choice!” part of you ignores. It’s the second titan.
This is, at the root of it, the part that I try to address when I talk about things like toxic preconditions. One of the things that I found genuinely amazing in the response to my article on toxic preconditions is the way that so many people commented to let me know that it was causing them to re-think parts of their attitudes that had kept them paralyzed for a long time. These things, once brought to the surface, were perfectly legible—the kind of thing that could be thought about, and addressed, and changed. But also, these were almost certainly attitudes that were formed early, that were formed deep, separate from the light of conscious awareness.
And that, for me, is the most interesting thing. Thus far I’ve discussed these two titans as if they were fundamentally different creatures. But, of course, they’re not. They overlap—and that means that it is possible for at least some of those pre-attentional parts to be surfaced so that you can dialogue with them, and re-think them.
In a nutshell, I’m inclined to think that is the difference between Zajonc and Lazarus. You might think that, since they were on opposite sides of this debate, one of them represented the old clockwork titan and the other represented its challenger. But that’s not the case. Both men were squarely on the side of the new understanding of mind. They were just quibbling over particulars.
But what wild particulars! Making allowance for the fact that I don’t know Lazarus and Zajonc—that I am taking two scholars more intelligent than myself and simplifying them to fit an argument—I imagine them saying something like this:
On the one side, Zajonc argues: There is our rational, calculating brain, and underneath it is a different system. An “affect” system. It is more primitive and basic, it responds to the environment with feeling according to patterns it recognizes within the environment. And these patterns are almost mechanical—not penetrable to our cognition. We’re stuck accepting their output and working with it at a later phase of cognition.
And on the other side, I imagine that Lazarus says: No, no, that’s not the case at all. The truth is weirder! Yes, whatever is down there has its own rules. Yes, we must accept them sometimes. But also? It’s a brain! It’s cognition, all the way down. We don’t know what we’ll find there, but we know that at least some of it is something that we can talk to, and isn’t that crazy?
And it is. It’s crazy, wild, and oh-so-very human. Because it’s a brain. Our brain.
Thoughts? Comments? Leave them here! I’m trying to strike up an ongoing dialogue with the people who follow my writing, so please share your thoughts if you’re inclined.
A final housekeeping note. I’ve been trying to figure out how to say thank you to my paid subscribers while staying true to my commitment to keep these stories free. And then I realized that paywalls hide footnotes. Soo… I’ve paywalled my personal commentary on this article. In future articles I will paywall footnotes and the occasional bonus section as a kind of thank-you to those who are helping support this work—all ten of you. ;-)
If you’re interested in supporting this work too, I’m changing and updating the structure of my work to include additional gifts for paid supporters. More on that over the coming weeks. For now, peace and blessings to you — the subscribe button is below if you’re keen on following.
James
FOOTNOTES
When I talk about levels of abstraction, what I mean is this—everything you see starts as a blur of light hitting cells in your retina. From this blur? Your brain learns that some parts of the blur are one continuous thing. If I look at a green light, then look to the left of it, the light is hitting different cells in my retina, but my brain interprets it as the same light.
From that level our brain moves up to inferring features, and then patterns, then concepts, etc…—at some point it then starts to interpret those concepts in terms of what they mean for things like not dying from tiger claws.
Why? When? How do you move from “experiencing an orange blur in my retina” to “seeing a mortal threat?” Those are the things you tackle as you move from a “motorist mode” of understanding psychology, where you interpret people in terms of narrative, causality, and benefit, toward a “mechanic mode” of understanding, where you look under the hood.
Just some background. It’s not strictly necessary, but you might find it interesting. Lazarus’ basic theory is that there are “phases” of appraisal. The first happens instantly and can be boiled down to the following:
Is this relevant to me?
Is this good or bad?
K how much tho?
Some of the more primitive emotions branch off at this phase—basic preferences like liking, disliking, and really ancient emotions like fear, anger, and want. These emotions can be modified by basic cognition but they’re old and simple from an evolutionary perspective.
After this comes phase-two appraisals which are much more nuanced. They also result in much more nuanced and complex emotions: pride, shame, guilt and others. These phases show up in surprising ways in everyday life. Phase two emotions? They show up later in childhood development; babies don’t appear to learn pride, shame, or guilt until later, after they develop deeper and more self-reflective cognition. Also, phase one emotions? We can trace them much further back on the evolutionary chain.
This is a fun metaphor, but is probably also overextending. Take it in the sense that it was meant; cognition, long thought to be the “top” function of the brain, appeared to be happening a layer below where we normally expected to see it. And a layer below that as well. And a layer below that. And so on. It changes at each level: The rules are different. The old metaphors we used to describe things are often wholly inadequate. But it’s there. Wow. Most exciting is this—we have so much more to learn.
This story is wild. Patton was visiting hospitalized soldiers behind the front lines in Italy, commending the wounded. Eventually he came upon a soldier with severe shell-shock—such a bad case that he was hallucinating the sound of phantom mortar attacks. Patton didn’t believe in shell-shock, so when the soldier explained his situation, sheepishly suggesting that it might be his nerves? Patton got pissed, and backhanded him so hard that it knocked off his helmet lining. He stormed off and a few minutes later came back around and saw the soldier sobbing from the first strike, which pissed him off more, so he strolled up and slapped him again. A nurse tried to intervene after the first strike, lunging at Patton, but was restrained and led away, weeping.
Was this overkill? Clearly. But… if you start from the understanding that Patton really didn’t believe that shell-shock could happen? What must that soldier have looked like? Well, he’d have looked like a coward who told a dumbass lie about ghost sounds so that he could abandon his friends at the front, knowing that they would have to carry his share of the work, and be in more danger for his absence. I’d have been pissed too, if that’s what I saw.
But that’s not what we see. We know better. But back in 1943 they were learning better and that’s not the same thing. I share this as an example of how dramatic an effect these differing views of mind and will can have on how we interpret what’s right in front of us. Eisenhower, for his part, threatened Patton with removal if he ever did anything similar again, but kept him at his post since, well, he was Patton. Under orders, Patton apologized to everyone involved.
I don’t want to overplay the power of pre-conscious thought here, or to disparage the importance of choice and will. Clearly personal agency matters, a lot! But one way you can think about it is that over time we have shifted from a conception of personal agency as a force imposed on an irrational body, to personal agency as skill at piloting a complex system. That is, we no longer assume that agency should work unchecked. There are limits that require skillful navigation.
Yes of course I’ll be writing about love-work and hate-work in the future. Not sure when. But I’ll be touching on it at some point.