The What and How of Knowing
This book is making the case that “knowing” is the primary way we have been socialized to engage the world. Knowing basically relates to “what is.” In the book, I’m also inviting you to consider that this way of relating keeps us seeing things in the same old ways that aren’t working so well—and that it stifles our capacity to create new possibilities and continuously develop ourselves, our communities and, indeed, the world. I’ve introduced “non-knowing growing” as a qualitatively different way of engaging the world, one that relates not to “what is” but to “what is” and “what is becoming” at the same time. I’m aware that the idea that there could be something other than knowing as a way to be in the world is weird. After all, aren’t we called homo sapiens (Latin for “wise man”) because knowing is natural and innate to human beings?
So let’s take a look at the what and how of knowing. What does it mean to know these days? And how does that lock us into “what is”? What is it about knowing that keeps us dumb—given that we know so much? In this chapter, we’ll take a look at some of the foundations of knowing, what it means to know philosophically speaking. That shouldn’t scare you away, because I’m going to show you the way some highfalutin philosophical conceptions get played out as we go about the business of living every single day. They actually shape how we do our thinking.
The “thought shapers” for this chapter, in philosophical language, are dualisms and dichotomies, and causality and linearity. They are ways to place things in time and space. They are ways we’ve come to characterize “what is” while ignoring or denying “what is becoming.” They’re how come our brains are so heavy.
Dualisms and Dichotomies
We’ll begin with dualism and dichotomy. These are ways of understanding the world by dividing it in two. They’re the philosophical underpinnings of “either-or” —and they cause some of the mental cramps Wittgenstein was trying to relieve. Dualism is the philosophical term for binary opposition. Dichotomy is the philosophical term for splitting something whole into two parts. Making a distinction between dualism and dichotomy is really hard to do even in philosophy. And for our purpose, either dualism or dichotomy will do. I introduce both terms so that you’re familiar with them.
Our everyday thinking and speaking are dualistic and dichotomous whenever we think, feel, see and relate in either-or terms, when we believe or are told that something is “this” or “that” but that it cannot be “both this and that.” And if it can’t be both this and that, then it can’t be both being (what it is) and becoming (what it’s becoming). It has to be “what is.”
Most of us think, see, feel and relate this way many times a day! You wake up with a headache and lie in bed not wanting to get up. You must, though, so you say to yourself, “Mind over matter” or whatever your version of that maxim (which separates mind and body) is. You turn on the TV and hear that scientists claim to have discovered that shyness is genetic, not environmental. You take your child to school and learn that recess is cancelled because the children need to learn to read, not play. At work you and your office mate get into a fight about whether Israel has a right to defend itself or the Palestinians have a right to a state of their own.
Underlying these everyday conversations are the really big dualistic and dichotomous thought-shapers—good and evil, mind and body, nature and nurture and, of course, right and wrong. Let’s call them the Big Four. They’ve been with us for centuries in myth, religion and philosophy. And for the last 500 years or so, they’ve been part of modern science, including social science. To see and think scientifically has meant to do so dualistically—animal or vegetable, stillness or motion, mind or body, subjective or objective, etc. (Developments in theoretical physics and what are called “postmodern” social sciences are rare exceptions.) Try as it might to distance itself from myth, religion and philosophy, most science is steeped in these same dichotomies. And since knowing scientifically is what all of us are supposed to learn to do in order to be considered smart and become successful, our everyday thinking is shaped toward seeing things as good or evil, right or wrong, mental or physical, and innate or environmental. [See Note 1 for where to read more about how Fred Newman and I see the “marriage” of science, myth and philosophy.]
As influential as these four dualisms and dichotomies are, they’re not the only ones. A lot of what we assume in life are versions of the Big Four. Let’s look at some others you might not be aware of that also shape the ways we live our lives, from the most mundane and workaday to the most unusual and awesome aspects of human life.
Let’s begin with how we make ourselves and our environments two separate things. By environment, I mean more than the physical environment. I mean the social, cultural and historical environment as well.
Human beings take things apart and then (try to) put them back together. We explore. We tinker. We play. We discover what something’s made of and how it works. We fix things that are broken. It’s wonderful that we do. We might still all be hunting and gathering if we didn’t. We wouldn’t be able to plow fields, make cars and ships and planes, create and read books, see through telescopes and microscopes, build bridges, make medicines, do surgery, and much, much more.
But like a lot of our human capacities, this one can lead us into a trap if we misuse it. Because not everything is better understood by taking it apart, not by a long shot. Some things just can’t be understood if we try to take them apart, because then they’re not what we thought they were before we tried to take them apart! Sometimes, we destroy the very thing we treasure or want to understand better.
The primary and most serious example of misunderstanding something by taking it apart is human beings. There are two very common ways that scientists and social scientists do this. First, they take us apart, literally and figuratively. They speak of and study our personality, our perceptual system, our cognitive skills, our IQ, our aggressive tendencies, etc. They assume that if they could understand how each part of us works, then they would understand how the whole thing works. Unfortunately, human beings don’t “work” that way. Everything about us works together. You can’t add up all the ways you are and get YOU. You are not equal to the sum of your parts. A person is a different kind of entity from what it’s made of.
Second, the experts typically talk about, relate to and try to understand people as if we exist apart from and independent from our environment. Even our organs—about which science and medicine have made amazing and amazingly important discoveries by looking at them separate from each other and separate from our bodies—become different things from what they are in the environment and overall activity of our bodies-in-the-world. So, to speak about our hearts and brains and kidneys as if they lived independent of our bodies, to describe genes as if they had minds and bodies of their own, instead of existing inseparable from our minds and bodies—these ways of seeing, thinking and talking can be very misleading.
Never has there been a heart, brain, kidney, gene, personality or person that isn’t somewhere. Nothing exists independent of environment. We are never environment-free.
When we relate to people as if they are independent of whatever environments they’re in, when we relate to them as if they’re “nowhere,” we are distorting what it is to be human—which is to always be (and become) “environmentally” (physically, socially, culturally, historically). When we separate ourselves out like that, we begin to think either-or and say things like, “There’s something seriously wrong with that child. Is it him or are his friends a bad influence?” And, “His father was an alcoholic and his sons are too. It’s in their genes.”
I hope you see that these are versions of “nature-nurture” dualism, a false dichotomy that leads people to wonder, “Is it nature or nurture, inside or outside, biology or environment?” and “If it’s both, how much is biological (nature) and how much is environmental/experiential/cultural (nurture)?” Most people believe that these questions are worth arguing heatedly about and worth trying to find the answers to. They’re not. Nature and nurture are at play all the time, in complex and different ways, depending. That’s because we’re never without either of them.
One of the most destructive nature-nurture debates is about intelligence and race—are group differences in intelligence due to genetic factors or to environmental factors (such as quality of schools, family life, economics, “culture,” etc.)? The claim that intelligence is genetic and thereby passed on within families and ethnic and racial groupings is part of America’s shameful history (and that of other countries too). Today we’re familiar with the debate from all the talk about the “achievement gap” between African American and white children and efforts to “explain” why African American children tend to score lower than white children on IQ tests. But this destructive dualism, and the false scientific claims it led to, were used before. They rationalized discriminatory policies and justified laws that violated the most basic of human rights. It was legal to sterilize criminals, the “feeble-minded,” people with epilepsy, etc. throughout most of the 20th century. People labeled with these characteristics were barred from entering the US in the first two decades of the 20th century, and then, under the Immigration Restriction Act of 1924, the number of European Jews and Italians who were allowed to immigrate was greatly reduced—justified by the claim that these peoples had defective genes. [See Note 2 for more on this topic.]
Just like we’re never without “nurture,” we’re never without “nature” either. But how could anyone even think we are, what with a news story every day about the latest genetic discovery? Just this week I’ve read reports in science magazines that genes contribute to how religious a person is, what their political views are, and how readily they can learn tonal languages like Chinese. The information is presented as if we now know something new, surprising and really important. But do we really? Genes, after all, need environmental input in order to be activated! We need visual stimuli in order to activate our genetic disposition for sight—meaning, we have to have things to see if we’re going to see! So if a person is never exposed to religion, politics or a tonal language, what then? What’s more, once genes are activated by environmental stimuli, they themselves can transform. They become! But dualistic understanding (genetic or environmental, nature or nurture) is all about what is. It doesn’t allow for becoming.
All of this is NOT to say scientists should stop doing genetic research. Not at all. But this research is being done in a particular environment (a social-cultural-historical environment, Vygotsky told us), and it is within this totality (the research and the environment) that its meaning is created. It’s an environment obsessed with knowing that is powered by seeing and thinking and speaking dualistically and dichotomously. For what typically happens when something whole is split apart is that one side of the dichotomy becomes dominant and gets the attention, and the other one fades in importance and is often ignored. Look at this list and see if you agree.
- Product and Process (or What and How)
- Cognition and Emotion (or Thinking and Feeling)
- Science and Art
- Work and Play
If you’re having trouble seeing this, you only have to think of our educational system. What’s valued? Product. Output. Test Scores. Thinking and problem solving skills. Where does the money get spent and what dominates policy conversations? Science. STEM subjects (Science, Technology, Engineering, and Mathematics). What gets cut from the school day? Art. Drama. Music. Recess. Play.
Such misguided and destructive policies are the result, at least in part, of setting up oppositions like these, which lock us into a not-very-smart way of thinking about something we care deeply about, like education. There’s no product without a process that produced it. There’s no thinking that happens without feeling. There’s art in science and science in art. Much play is very workaday and much work has characteristics of play. I devote an entire chapter later on to examples of how these unfortunate dichotomies get played out in education—wasting energy, getting us upset, keeping us blind to becoming and selling our children short.
Causality and Linearity
Why is the sky blue? Why does snow melt? Why do people die? Why is that man sleeping on the street? Why can’t I (have ice cream, play video games, go out and play)? Why do I have to (go to bed, do my homework, clean up my room)?
Young children are full of questions like these. They’ve learned from us that people ask why. A la Wittgenstein, they’ve learned to play a language game. And through playing this game over and over and over, they come to see and experience things and events causally and to expect that everything they encounter in the world is either cause or effect of something else.
At first, parents delight in the “wondering whys” (“Why is the sky blue?”), proud of their child’s intelligence and curiosity. Sooner or later, though, most parents tire of their children’s barrage of questions (many or most of which they have no clue how to answer). And adults don’t much like the “whiny whys” (“Why can’t I…?”). That’s a different language game, whose next move is often the parental, “Because (I said so).”
The ability to ask why is taken as a stepping-stone to causal understanding. It’s considered a milestone of cognitive development, an indication that children are on their way to being knowers. Unfortunately, the ability to participate in question-asking conversations and to appreciate the growth that comes from the asking itself is never even mentioned, let alone considered a developmental milestone. Asking is seen as a means to an end, and the end is knowing. It’s another example of ignoring the process and focusing on the product.
Causality is one of the ways we know. It’s one of the twelve categories of thought identified by the 18th century German philosopher Immanuel Kant. (Centuries earlier, Aristotle had identified ten.) For Kant, causality and the other categories (such as reality and necessity) correspond to the forms of understanding that are the foundations of our conceptual knowledge. These categorical ways of thinking are a priori, meaning independent of experience—they’re the innate structures of the human mind. It’s these categories, the story goes, that then shape our experiences. The short version of Kant, in some phrases from our day, goes something like this—“We’re programmed that way.” “That’s how our brains work.” “It’s in our DNA.”
I’ve been fascinated by the role causality plays in our lives since my graduate school days. Back then I wanted to discover what cause-effect talk was like for children who were just beginning to speak, and to see how the activities they were involved in with others played a role in their coming to see the world causally. I didn’t buy the idea that our minds are structured to see and experience causality, a la Kant. I believed, instead, that seeing and talking about causes and effects is a social-cultural-historical creation. (So too, of course, is Kant’s notion that cause is a fundamental category of thought.)
At the time, people who studied children’s language development did so mostly by setting up experimental situations instead of observing or talking to children in their everyday life environments like their homes or on playgrounds. Then the researchers made sweeping generalizations not only about how children talked and what they knew about language but also about how they thought. For example, in studying causality, linguists and psychologists showed sequences of pictures to children and asked them “Why’d the man fall off his bike?” Most two- to four-year-olds would say, “Cause he broke his arm” (focusing on what happened next—that is, the “consequence”) while older children would say things like, “Cause he rode into a tree” (focusing on what happened before falling off the bike—that is, the “cause”). The experts concluded from these answers to experimental questions that children didn’t have a concept of cause until they were about five years old.
This seemed like bad science to me! So, for more than a year, my colleagues and I played with toddlers (and sometimes their parents) in their homes, and got to participate in conversation with them. We found that how toddlers did causal talk in their everyday lives had everything to do with what they were doing (playing with trucks, doing a puzzle, dressing a doll, for example) and with the ways their mothers did causal talk. We were able to see first-hand the ways that they learned from and with others to play the language game of causality and begin to experience the world in terms of causes and effects—one of the entries into the “knowing” culture.
Another area that piqued my interest in causality was psychotherapy. The little I knew about it in my teens and 20s was that it was all about finding the cause for how a person was feeling. The idea, supposedly, was that if you could identify what was causing depression or anger or whatever (the cause usually being something in one’s childhood), that would change it, clear everything up, make the person better. I was very skeptical. But I was fascinated by the huge assumption of a causal relationship between past events and current emotions, and how unexamined this assumption was, given that it was the foundation of a profession dedicated to helping people experiencing emotional distress.
Nowadays, I see how obsessed we all are with cause in everyday life. I know some people who truly believe that every single thing and event has some other thing or event that caused it to be, and they won’t rest until they believe they’ve pinpointed it. Some friends of mine do this with all things medical (as much or more concerned with identifying the cause of a mysterious rash as with getting rid of it, for example), others with relationships (she must be mad at me—that’s why she didn’t say hello when we passed each other in the hall, for example). Other friends worry incessantly about what will be the effects of something they do. And quite a few friends do both!
The hold that causality has on us was a favorite topic of conversation for Fred Newman and me. Fred’s understanding of the history of the notion of cause within philosophy and his appreciation of the role causality has played in science, industry and technology were invaluable. They helped me to see how misguided it is to apply causality to the psychological realm, to insist that all of human thought and action is best understood in terms of cause and effect. [See Note 3 for ways to read and hear Fred perform philosophy.]
We’d talk for hours about how quickly little children become socialized to be causal knowers and how traditional therapy reinforces a causal view of the world. Fred believed that much of people’s emotional pain comes from our thinking causally, and we were finding more and more evidence from his social therapy practice that challenging this way of thinking was extremely helpful to people. Here’s a classic example: Client says, “I stayed in bed all day because I was depressed.” Therapist says, “How do you know that? Maybe you were depressed because you stayed in bed all day. Or maybe one thing has nothing to do with the other.” By suggesting other ways of looking at the situation, the therapist opens the possibility for a new kind of therapeutic conversation—more a creative journey they’ll take together than a sharing of information for the therapist to come up with the correct cause-effect explanation.
It was within this decades-long conversation that Fred introduced Wittgenstein to me. We especially liked how he described causal language and thinking as one of the ways we get ourselves in intellectual-emotional muddles and give ourselves “mental cramps.” Over the years, Fred and I wrote about how we understood Wittgenstein’s philosophy and how useful it was to the development of our own work. Here’s a favorite passage of ours (from Wittgenstein’s book Zettel):
I saw this man years ago: now I have seen him again, I recognize him, I remember his name. And why does there have to be a cause of this remembering in my nervous system? Why must something or other, whatever it may be, be stored-up there in any form? Why must a trace have been left behind? Why should there not be a psychological regularity to which no physiological regularity corresponds? If this upsets our concepts of causality then it is high time they were upset. [Note 4]
Is it upsetting you to even entertain the possibility that there might not be a cause of a memory? Have your concepts of causality been upset? I hope so! Because as upsetting as it might be, it’s actually good for us to see, feel and imagine in new ways. That’s what Wittgenstein is inviting us to do with his example. He’s showing how normal and pervasive it is to make causal connections, and then he takes apart the connection. He isn’t denying that neurological, cognitive and physiological processes are going on. They go on whenever we do anything, so of course they’re going on when we recognize or remember. But, he provokes us with his questions (“And why does there have to be a cause of this remembering in my nervous system?” for one). He invites us to join him in questioning our assumption that there is a causal connection or correspondence between these processes and what we’re recognizing or remembering or, for that matter, the human activity of recognizing or remembering. “Why does there have to be a cause?” is one of my favorite questions! I hope it becomes one of yours.
Well, OK (you might be thinking), I’m ready to at least consider the possibility that everything isn’t causal. But everything is linear, right? One thing comes after another, right? This happens and then that happens. Since everything exists in time and space, how else can it be?
There are arguments against linearity in philosophy, computer science, physics, history and elsewhere, arguments that seem to me to require a fairly good background in these fields to appreciate—something I don’t have. So I won’t go into them here (but I invite those who have some expertise to share it with us in a Comment). Instead, I offer a fictionalized illustration of how absurd it is to hold fast to “this, and then that, and then that.” The illustration is an imaginary meeting between Lev Vygotsky and Jean Piaget, the world-renowned Swiss psychologist, when they were both 19 years old and at the very beginning of developing their different ways of thinking about human development. [See Note 5 for more on Piaget, his theory and influence.] It’s the last scene in a play Fred Newman wrote entitled Life Upon the Wicked Stage. Piaget and Vygotsky are sitting in a café talking about stages—which is how Piaget sees things—and zones—which is how Vygotsky sees them. They discover that they both love to tap dance, and they get up and dance together for a minute or two. They sit down again and the conversation continues.
Vygotsky: Now, tell me, Piaget. What have we just done? Let us study the relationship between what we have just done and the characterization of what we have just done.
Piaget: I have actually thought about this often…My understanding is that tapping begins in the feet. The feet move first and the rest of the body follows.
Vygotsky: Aha! To me nothing moves first. Everything moves at once; the body—not just the feet—taps. Our obsession with stages—with what comes first—distorts history where there is no beginning and no end. A zone, it seems to me, is a methodological construct for examining the processes of life and history as process. We must not make things stand still in order that they might be studied. [Note 6]
Vygotsky disagrees with Piaget about tap dancing and a lot more. He sees history. He sees process. He sees life as process and history as process. He doesn’t see linearity, beginnings and endings. He doesn’t divide the process of life or the process of history into discrete stages that temporally or chronologically follow each other. He doesn’t see “products” without seeing them as inseparable from the processes of which they are a part.
What Vygotsky doesn’t see, Piaget sees. And Piaget’s stage theory of development won the day. Our culture is, indeed, “obsessed with stages.”
At the beginning Vygotsky says to Piaget, “Let us study the relationship between what we have just done and the characterization of what we have just done.” He doesn’t say, “Let’s see which of us knows what happened. Let’s see who’s right about this.” He’s inviting Piaget to explore with him how each of them sees/understands/characterizes what they did and what they might discover from doing this exploration. I think it’s a lovely invitation, and one that should be made often in everyday life and not just on the theatrical stage.
Notes to Help You Go Deeper and Broader
Note 1. The marriage of philosophy and psychology is the subject of Fred’s and my book, Unscientific Psychology: A Cultural-Performatory Approach to Understanding Human Life. Although we wrote the book twenty years ago, everything in it is as relevant today as when we wrote it. We argue there, in historical and philosophical terms, that psychology is a pseudo-scientific myth and, worse, a scam. It’s a challenging read, but if you’re truly intrigued I invite you to try it.
Note 2. Eugenics, the term for attempts to “improve” the human race through selective breeding, has been part of psychology since the field’s beginnings in the late 19th century. While the American Psychological Association refers to eugenics and racial bias as part of its “dark past,” they are hardly past; they are alive and well. If you’re interested in the history of racism in psychology through most of the last century, a good source is the 1987 book, Even the Rat Was White: A Historical View of Psychology. Written by Robert Guthrie (who later was a founder of the Association of Black Psychologists), it’s an exposé of how psychology was used to legitimize the oppression of African Americans and promote the idea of black inferiority.
Note 3. In a way, Fred performed philosophy in whatever he did, but here are some recommendations to get you started.
To read (found at http://eastsideinstitute.org/resources/chapters-articles-presentations/):
A Therapeutic Deconstruction of the Illusion of Self
Where is the Magic in Cognitive Therapy? (A philo/psychological investigation).
Vygotsky’s Place in the History of Science
To listen (found at http://eastsideinstitute.org/resources/multimedia/audio/):
Can There Be Honesty in a World Without Truth?
Note 4. This passage is from Wittgenstein’s Zettel, a collection of remarks on philosophy and philosophical psychology. It appears as paragraph 610 on page 160 in the 1967 edition by G. E. M. Anscombe and Georg Henrik von Wright.
Note 5. Jean Piaget was a Swiss biologist, psychologist and philosopher who studied children’s cognitive development for much of the 20th century. He broke new ground in showing that very young children aren’t inferior thinkers; they just don’t think the way adults do. He studied how children come to know the world (a field he called genetic epistemology) and proposed that they pass through specific stages of development. His studies of children’s conceptions of time, space and causality, among others, have shaped early childhood education the world over. So too has his belief that human beings are initially individual and gradually become social. The Wikipedia entry for Piaget is quite extensive should you want details.
Note 6. The entire script of Fred’s play, Life Upon the Wicked Stage, can be found in my edited book, Performing Psychology: A Postmodern Culture of the Mind. It was first performed in 1997 at the American Psychological Association annual convention. In addition to the fantastical meet-ups of Piaget and Vygotsky, you’ll have the treat of listening in on conversations between Freud and Wittgenstein, Trotsky and Lenin, and Freud and Kafka.