Sunday, March 24, 2013

in the belly of the bell curve, or how to disappear completely

What defines normal? According to studies that collect and analyze demographic data, statistics define norms. And statistics presume, and find, that many groups of data follow a normal distribution, one that can be represented visually by the bell curve. Normal, whatever that may suggest, is represented in the area around the second quartile. This is where the median exists. And in a normal curve the median, mode, and mean coincide at the same location on the curve and are represented by the same number. Visually, the bell curve rises evenly from an asymptote along the x-axis to a peak and descends to an asymptote on the other side. Bisecting this curve at the mean-median-mode point creates a bilaterally symmetrical curve. One side mirrors the other.
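The coincidence of mean, median, and mode in a symmetric distribution can be checked directly. Here is a minimal Python sketch using only the standard library and a small, hypothetical symmetric sample standing in for a normal distribution:

```python
import statistics

# A small, symmetric sample standing in for a normal distribution.
# In a perfectly symmetric sample the mean, median, and mode coincide.
data = [1, 2, 2, 3, 3, 3, 4, 4, 5]

mean = statistics.mean(data)      # arithmetic average
median = statistics.median(data)  # middle value when sorted
mode = statistics.mode(data)      # most frequent value

print(mean, median, mode)  # all three equal 3
```

Skew the sample (say, replace the 5 with a 50) and the three measures pull apart, which is exactly why their coincidence is a signature of the symmetric bell curve.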

About this mean-median-mode point resides the densest region of data. The point itself is the 50th percentile, and roughly half of the data points collected fall in the middle region around it. An interesting component of parametric statistics is that the collected and analyzed data get plotted onto a graphical space, and the area under the curve is what gets analyzed for the probability that one or more variables will coincide. I repeat: data points are fit into a graphical space with an x-axis and a y-axis (at least), and the line that connects these data points establishes an 'upper bound' under which exists a graphical region, i.e., the area under the curve. To put it bluntly, data about people, for example, are transformed into a curve on a graph whose area is measured to assess probabilities, which are at the heart of this form of analysis.
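That "area under the curve equals probability" equivalence can be made concrete. A minimal sketch, using the standard normal curve and Python's built-in error function to compute its cumulative distribution (no external libraries assumed):

```python
from math import erf, sqrt

def normal_cdf(x, mu=0.0, sigma=1.0):
    """Area under the normal curve to the left of x,
    computed from the error function."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

# Probability IS area: the area between two points on the x-axis.
# The middle 50% of the data (the interquartile range) lies within
# about 0.6745 standard deviations of the mean; about 68% lies
# within one standard deviation.
middle_half = normal_cdf(0.6745) - normal_cdf(-0.6745)
one_sigma = normal_cdf(1) - normal_cdf(-1)

print(round(middle_half, 2))  # 0.5
print(round(one_sigma, 2))    # 0.68
```

Every probabilistic claim drawn from the curve is, mechanically, a computation like this: a slice of area under a fitted shape.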

I am no statistician, and I have probably mangled some aspects of this discussion thus far. But I get the sense that while statistics is a powerful tool for measuring and analyzing data for activities such as policy making, the very thing studied is simply a shape. Whatever enters this statistical abstraction must coincide with some area under a graphed shape. A curious thing occurs. Statistics is a powerful tool for measuring things, but it requires that whatever is studied be transformed into data that fits the tool. To sum up what one writer said about the difference between quantitative and qualitative data collection: the quantitative sacrifices the anecdotal, the story, the social meaning, for impersonal 'data.' If you've ever answered a data collection questionnaire, you've probably had to make an executive decision about how strongly you felt about a particular statement on a Likert-type scale. And that executive decision consisted of you reading the question, allowing it to hail some aspect of your experience as representative of that question, and assigning it a value, represented by a number. How you interpret the question, and the relative conviction you hold about the experience you reference to answer it, present some of the ambiguous and potentially problematic aspects of the interface between experiences had and data collected. The human narrative is left out.
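The flattening step itself is almost trivially simple, which is part of the point. A hypothetical sketch of Likert-style coding (the labels and mapping here are illustrative, not from any particular instrument):

```python
# A hypothetical five-point Likert mapping: whatever narrative led
# the respondent to an answer is discarded at this step, and only
# the integer survives into the data set.
LIKERT = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

def code_response(answer: str) -> int:
    """Map a verbal response to its numeric code."""
    return LIKERT[answer.lower()]

print(code_response("Agree"))  # 4
```

Two respondents who chose "Agree" for entirely different reasons, with entirely different experiences in mind, are identical in the resulting data set: both are simply 4.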

Now I want to present an anecdote that at least shows how statistics isn't necessarily an objective tool for analysis that presumes to reflect the 'real' world. Each of us occupies a 'real' world that markedly differs in content and meaning from others, and our social interactions only multiply the meanings and contents of our worlds. None of them necessarily concretizes a reality that is shared. That's a counterfactual claim that communication, as a pragmatic tool for 'making common,' asserts as a goal inherent to interaction. Let's assume that we agree, for our own reasons, that we've reached this goal, and that's simply all that matters. I won't attempt some wry French existentialist language to wax upon this topic any more. Let's get to that anecdote.

In my first semester of my doctoral program, we had a course that introduced us to the discipline and to the faculty in our department who represented aspects of it: namely its topical areas and its methodologies. One of the faculty, John Jackson, focused his research on race in social science. He shared with us a letter he received from a Western Ontario university faculty member defending his group's statistics-based inquiry into the connections between IQ and race. Without trotting out the boogeyman that is race, let me stick to the details. The faculty member, who was presumably white, asserted that the research was not biased or racist because: first, it was rigorous statistical research; and second, it found that whites fell in the middle of the distribution, Asians on the higher end, and Africans/blacks on the lower end. Since whites weren't 'the smartest,' this showed that the test wasn't biased.

It took me a while to come to this conclusion, but instead of quibbling with this Canadian researcher's data I simply want to point out who is conducting it, and what is presumed normal. IQ tests are historically white tests. They were created by whites (i.e., Europeans) and administered by whites in order to slot people into a society conceived, in its intentional or planned aspects, by whites. That a white researcher could find his group in the middle of the test measures only its internal validity. The results show that the test has been normed to the white IQ distribution, and so the white IQ distribution falls around the 50th percentile. And the test segregates Asians and Africans into their respective distributions. That the test simply gives scientific teeth to a rather stereotypical observation is not a coincidence. The test adds a layer of scientific reality to what many white lay-observers see in their worlds: Asians outsmarting them and Blacks playing their intellectual foils. And they are right in the middle with their observation, just as they are in this test, and just as they are observing their intellectually framed world through this test.
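The norming mechanism is worth seeing plainly. Modern IQ-style scores are conventionally scaled so that the norming group's mean maps to 100 and each standard deviation to 15 points (Wechsler-style scaling); the raw scores below are hypothetical, chosen only to illustrate the arithmetic:

```python
import statistics

def iq_score(raw, norm_sample):
    """Convert a raw test score to an IQ-style score normed to a
    reference group: the group's mean maps to 100, each standard
    deviation to 15 points. The norming sample itself defines
    'normal' here -- whoever it is, their average scores 100."""
    mu = statistics.mean(norm_sample)
    sigma = statistics.stdev(norm_sample)
    z = (raw - mu) / sigma
    return 100 + 15 * z

# Illustrative (hypothetical) raw scores for the norming group.
reference_group = [40, 45, 50, 55, 60]
print(iq_score(50, reference_group))  # the group's mean scores exactly 100.0
```

Whichever population supplies `norm_sample` is, by construction, centered on the curve. Finding the norming group in the middle confirms the arithmetic, not the neutrality of the instrument.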

This is an anecdotal observation on the nature of being an observer, both of others and of oneself. If one wants to disappear completely, one simply needs to normalize the distribution of data points around one's self. The observer disappears by being the commonest data point and, more perniciously, by becoming the baseline for judging others. Observation from the commonest data point allows others with operationally defined differences to become outliers and, by extension, outstanding. I would grant that the test measures differences that everyone perceives to exist, but whether these are accurate depictions of differences is a question too far removed from the final product, the normal curve. Conforming the world to this curve requires taking any number of delineated categories of difference, their carefully operationalized component variables, and performing a fine-toothed analysis of semantic categories that have been mapped onto different entities in the world and brought into the test as data sets. In the process of data collection, the words chosen and the context in which they are gathered shade the reflexive strategies of the test participant. The bell curve and its upper bound represent an impermeable membrane between researcher interests and the world: those interests form the most prominent protuberance along that skin of research, while the world is reduced to stringently codified variables, denatured into remotely observed data sets beneath that skin. The research sets the answers to the research questions it poses by predefining their status as variables, testing specifically for them in participant responses, and then repurposing those predefined model conditions as measures of the test's validity and, subsequently, as accurate depictions of the phenomena being studied as they exist au milieu.

Now let us return to two questions: what is normal? And what constitutes privilege? I ask these questions because they insinuate themselves into the test described above and into other types of tests used to abstract people and groups as data and extrapolate about them. First, normal, as was discussed, is the most populous region of a statistical or normal distribution; "normal" hangs out around the 50th percentile--the camel's hump in the curve. Normal is a loaded term, and while statistics has reduced that baggage to something explicitly statistical in nature, it still leads us to question how persuasion and the insidious reaches of power influence what is normal. Between statistics' use of 'normal' and the lay person's use of it we have some metaphorical cross-pollination in a very specific way. Statistics finds normal in its largest data set. People find normal in a world view that they share with their social network, their family's ethnic groups, their generational cohort. From that they often presume that because they share this world view with others, most everyone shares it as well. Normal then becomes, if not universal, the presumptive backdrop from which spring one's thoughts and actions. One's very identity is couched in a self-normative worldview, no matter how tenuous it may be.

As was discussed with the statistics-based questionnaire, people must self-report their answers. Normally, a simple sentence is all that goads them to be truthful. But a sometimes tricky intervention happens with people, owing to their reflexivity and to the things that influence how they perceive themselves among others. People can and will self-report what they consider to be more beneficially representative of themselves or their member group. Likewise, we cannot discount the ways that these studies get influenced, at the questionnaire-creation level and at the funding level, by the modes of inquiry and content areas encouraged by society at large. Funding streams all over academia tend to gorge some forms of inquiry while starving others. This leads to what Heidegger, in his essay "The Age of the World Picture," calls 'mere research.' Mere research, as Heidegger claims, is caught in the cycle of inquiry set into motion by the institutionalization of science in academe and in institutes. Organizations, by their nature, place a premium on their own survival over the long term. Organizations, like fields of study, can and will set into place the structures that both encourage and facilitate the modes of inquiry that they know will get funded. And so very little, if anything, happens outside that mode of inquiry. So, in both the example of the self-report and in the study itself, presumptions about what is important and what is normal bleed in, because that which is outstanding is outside the norm and often worthy of study. This framing occurs both among us, interpersonally, as lay observers and as researchers using parametric statistics, which establishes a numerically delineated norm as the basis of its function as an analytical tool.
One need look no further than research into the genetic basis of addiction, gambling, homosexuality, and any number of socially framed behaviors to recognize a robust stream of funding setting the tone for what's worthy of study and the conclusions drawn. All of these influences shape how one pursues normal, which gets translated into the 50th percentile of the normal distribution that is also presumed for the sake of population-based statistics.

I've gone a bit afield of my question, but let us look at it another way. How would the 50th percentile of attitudes concerning Jews have differed in, say, Bonn, Germany and New York City, USA during the 1930s? Power and influence in those distinct geographical locations say something important about normal. It isn't universal. Yet that which is normal insinuates itself into the design of the study by setting itself up along the 50th percentile. The studies, their methodology, and their results function as an epistemological argot for "authoritative" and "scientific." Decontextualized from the framing of the questions, the design of the study, and the calibration of norms, which spring directly from the stance and affect of the researcher him- or herself, these studies and their results become free-floating facts signifying social concerns and forming the basis for action. Functional magnetic resonance imaging studies of adolescent decision making demonstrate one way that powerful interests mobilize science to legitimize a worldview. The results of this research reached the highest court in the U.S., which determined that adolescent brains are different, so their sentencing should follow suit. It also lays bare the non-statistical fact that money streams fund research, which generates data used to create facts that are then supportive or unsupportive of a set of attitudes and actions towards a grouping of people. Statistics are about data sets, data sets are groups, and the most functional yet pernicious category groups descriptive of society pertain to race, gender, or ethnicity. What gets lost between being a Latino and the 'attitudes and behaviors of Latinos toward political issues' should clearly demonstrate how one living fact about the world--one's cultural beliefs and identity--becomes a blank, functionalist data category for analysis.
At the core of that cultural identity is a power to name, and when used in the name of family legacy and cultural heritage the power can be both benign, yet specific to keeping alive a provisional set of ideas and behaviors that demarcate what makes one a member of a specific group. Naming is power. The name your parents gave you, when uttered, forces a specific action out of you. You turn, you listen, and you abide. This leads me to a rather quick conclusion about what constitutes privilege.

In science and in society, what constitutes privilege is having one's views taken as 'the common sense.' Normal is what the privileged group believes; all the other, more marginal groups must live with their 'reality.' Likewise, privilege remains invisible in institutions and institutional discourses because it's the 'default setting.' This becomes clearer as we move from society at large to the World Wide Web. In her seminal book on race on the Web, Lisa Nakamura recognized something about websites that organize discussions based upon race. While these were early studies of fly-by-night websites by current standards, something became apparent in her participation on them: "White" wasn't a choice; all other races were. Simply being "white" was the default or presumed racial category of the web, and from this and other examples Nakamura coined the term cybertyping, the way that technologies shape our views of and participation with and as race. Being something of a white tool, the web had white identity built into it as a default layer ... at that time.

Another anecdote: "Blacks don't surf. Whites do." A couple of early black Internet entrepreneurs recognized this and sold the web to their constituents as 'cruising the web.' Another simple yet pervasive metaphor couched in one world view to the exclusion of another. It would be like replacing the mouse with chopsticks as your interface to the graphical space of your computer. Many uninitiated Westerners would fumble with it.

So the next time that someone who identifies as 'white' complains to you about the existence of the Black Miss America pageant or some other, by title, all-black competition or institution be prepared to ask this question.

"How has the existence of "X" affected your opportunities as a white person?"

An all-black college, an all-black scholarship, or an all-black beauty pageant has very little to do with a white person's ability to participate in society. But it does lead to another observation that stems from the existence of an all-whatever institution or competition. Normally, people exclaim that "we can't have an all-white 'X.'"

Yes, and no.

Addressing this observation is a matter of framing. It falls upon the theme of whiteness as default. The answer becomes simple.

"Miss America is the white pageant." "X College is the white college." And so on. White is default; it goes without being explicitly stated. When it is, it is explicit about race as a privilege it intends to hold on to. White power is best left as the sub rosa element, the unstated prefix. The non-white women who compete to become the next Miss America win on their merits of being a member of the dominant worldview and, more importantly, conforming to its notion of beauty and femininity.

I'll leave you with this anecdote, which answers the question about normal and privilege. I was in line at a local grocery store when I spotted a magazine called "Black Hair." And not one woman on that cover had 'black hair.'
