Journal of Applied Missiology, Volume 7, Number 1


Warren Roane
Montevideo, Uruguay

This paper seeks to show the importance of appropriately creating and using surveys, a task often desirable and sometimes required (Mathews 1992:31; Shipp 1990:12). However, let the reader be at ease; no statistical jargon is used here; a future paper will discuss statistical terms and issues. Four errors are examined that occur when creating, using, and interpreting surveys.

Four Errors of Surveys

It has been said that research by missionaries is "essential" (Mathews 1992:31), even essential to evangelism (Shipp 1990:12). However, when faced with information gathered from diverse sources such as chambers of commerce, government officials, and census bureaus, one should exercise caution. In the classic and very readable "How to Lie with Statistics," the example is given of two surveys conducted in a region of China. In the first census, taken for tax and military purposes, the population was 28 million; in the second, taken for famine relief, the population soared to 105 million (Huff 1954:133-134). Most statistical discrepancies are not this obvious, but there are four errors that even the non-mathematically inclined can avoid.

Error 1: Take Data at Face Value.

The most common error is thinking that the question asked is the same as the question answered. Often, a survey does nothing but produce a report full of what Huff termed "semi-detached figures" (Huff 1954:3). A number is thrown out that is meant to impress the audience but has very little meaning. Or a chart of numbers is generated, but it does not really tell you what you want to know. Two examples illustrate the point.

On November 16, 1995, it was reported on the local news that one-fifth of the world had no food (Canal 4, 1995). As no reports of massive starvation have occurred as of this writing, I can only conclude that the news was wrong. What was meant (I assume) is that a large portion of the world's population has very little food.

A leading pollster in Montevideo recently revealed how sloppily data were collected on poverty levels in Paraguay. He was hired to conduct surveys for the World Bank and other organizations in Asuncion. As time ran out for the project, his supervisor instructed him to come up to the roof of a tall building; from that vantage point the two counted the number of tin roofs they could see (Entrevista 1995:1-3). The survey assumed that tin roofs equaled poverty, when it is quite likely the opposite is true; those living inside at least enjoyed some protection from the elements.

Of course, who conducts the survey can influence the outcome. A survey by the Catholic Church found 76 percent of Uruguayans to be Catholic, while a survey by a government entity found the figure to be only 36 percent (Rama 1964:12-13).

Error 2: If it is in Print, it is Scientific.

Huff mentions several of the classic tricks used to deceive the public, including the "gee-whiz graph," a device used to impress the reader by distorting figures to fit an objective (Huff 1954:3). A tall graph of yearly conversions looks more impressive than a short, squatty graph with the same information. A recent U.S. News and World Report article (Consulting 1995:52-58) lists modern tricks: slanted questions, double negatives, and forcing you to give an opinion (even if you do not have one). How a question is framed plays a big role in how someone responds to it (Paulos 1988:87). A six percent tax increase sounds better than a $91 million increase. Semantics also gives rise to confusion, e.g., the recent controversy about the estimated number of homosexuals in the U.S. population (1.4% or 10%), which grows out of different political, social, and religious orientations.

There are two checks to see if a survey is scientific: 1) does it measure what it says it does? and 2) can it consistently measure what it says it does? So there are some questions we should ask before we conduct a survey or rely on survey results: Does the survey find out what we want to know, or is it merely the best thing we happen to have at hand? Will this survey work in a different culture, just by translating it? We will explore these issues in the last section of this paper.

Error 3: Data Collected Scientifically are Accurate.

Now that we have acquired a scientific, reliable, accurate survey, have we fool-proofed our effort? No; we still have to conduct the survey responsibly and scientifically. For example, responses to written surveys (or newspaper ads) may reflect literacy rates rather than attitudes about the Gospel. As mentioned above, the way a question is worded makes a world of difference. One example: "Do you consider yourself a homosexual?" is different from "Have you ever had an attraction for someone of the same sex?" Wording probably accounts for the 1948 Kinsey survey result of 10% incidence of homosexuality cited above. Better wording in recent surveys (as well as the use of scientific methods) indicates that the true rate is 1.4 to 2.8% (Sex 1996:3).

Error 4: Once True, Always True.

Data are not static. As part of my dissertation, I conducted a survey of college professors and asked them about their jobs (percent of time allocated to teaching, conducting research, etc.). Because I asked them to reply, and then respond again later in the semester, I discovered that most of their answers changed over time. In fact, one could conclude that only name, rank, and serial number were reported consistently over time. I conclude that this type of survey, last given at the national level in 1987, was not reliable enough on which to base institutional policy (Roane, 1993a). Despite the "obviousness" of this conclusion, most surveys are revered as true, once for all, given for all time.

To put it another way: if you ask the respondent the same question two weeks from now, will the answer be the same, or does the response depend on time? Perhaps church surveys are like a snapshot, never to be repeated or duplicated (and thus no basis on which to establish policy).

What This Means for the Missionary

Here are three suggestions that can help missionaries avoid the four errors listed above.

1. Check official data with informal methods.

For example, Mexico is 88% Catholic, while Uruguay is 76%, according to one "official" survey (Rama 1964:12). Counting people who genuflect on a bus may not be scientific, but it may give an indication of whether the official figures are accurate. An informal bus survey I conducted (Mexico 1988 and Uruguay 1993) indicates that while most Mexicans (90%) genuflect, few Uruguayans (10%) do. This does not mean the official figures are wrong; it just makes me want to investigate further, perhaps with my own scientific survey.

2. Examine data carefully.

My own informal word count for the book of Jonah shows that "salvation" or "forgiveness" occurs five times, while "destroy" or "die" occurs 13 times. Does this mean that the theme of Jonah is the destruction of the wicked, or that God is unmerciful? Take Huff's suggestions on "how to talk back to a statistic": ask who says so, how he knows, what is missing, and whether it makes sense (Huff 1954:3).
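An informal tally like the one above is easy to reproduce. The following is a minimal sketch; the sample text is invented and merely stands in for the book of Jonah, since actual counts depend on which translation is searched:

```python
# Sketch of an informal word-frequency tally of the sort described
# above. The sample text is made up; real counts would come from
# the full text of Jonah in a particular translation.
import re
from collections import Counter

text = """The city shall be destroyed, yet the LORD grants
forgiveness; those who repent shall not die but find salvation,
for He will not destroy the penitent."""

# Split on letters only, ignoring case and punctuation.
words = re.findall(r"[a-z]+", text.lower())
counts = Counter(words)

for theme in ("salvation", "forgiveness", "destroy", "die"):
    print(theme, counts.get(theme, 0))
```

Note that a raw tally counts only exact forms: "destroyed" is not matched by "destroy," which is one more reason such counts should be examined carefully before a theme is inferred from them.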

3. Be careful in preparing your own surveys.

a) Scenario one: borrowing an existing survey in English.
Translation of "perfect" surveys from English to another language does not make the new survey perfect! Culture, language, and the psychology of testing all play a role in creating a survey. I recently showed an American teacher survey to the director of the Uruguayan teachers' institute. On the "easy" question of race/ethnicity he was puzzled: should he mark "White" or "American Indian" (because his grandmother was half-Indian)? It never occurred to him to mark "Hispanic," although his mother is from Spain, he has a Spanish surname, and the only language he speaks is Spanish (Roane 1993b). Once translated and corrected for cultural differences, the survey has become a different entity, basically your own (see below for caveats).

b) Scenario two: creating your own instrument from scratch.
If you design your own survey instrument, you should perform statistical tests of reliability (does it give consistent results?) and validity (does it measure what it claims to measure?). You should also field test it, to see what problems you might encounter during the actual survey. You may even need to give pre- and post-surveys to measure the effects of time on the results. Above all, the person who collects the data should be trained and have experience with that particular survey instrument.
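To give a concrete (if simplified) picture of the reliability check described above, a test-retest comparison can be sketched as follows. The data are hypothetical, and the correlation is computed from scratch so that no statistical software is assumed; a high correlation between the two administrations suggests the item is reliable, while a low one suggests the answers drift over time:

```python
# Sketch of a test-retest reliability check on hypothetical data.
# The same question (1-5 scale) is asked twice, some weeks apart.

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical answers from ten respondents: the first
# administration and a repeat two weeks later.
first  = [4, 5, 3, 4, 2, 5, 3, 4, 1, 2]
second = [4, 4, 3, 5, 2, 5, 2, 4, 2, 2]

r = pearson(first, second)
print(round(r, 2))  # prints 0.87
```

A real reliability study would involve more respondents and established coefficients, but the idea is the same: if answers to the same question do not agree with themselves over time, the survey is a snapshot, not a basis for policy.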

Despite these warnings, it is possible to create, conduct, and analyze a survey. It is not an easy task, nor is it an easy solution to any problem. Each survey best answers the questions it was created to answer, and it requires time to think up the questions, and time to get the answers. But once conducted, a good survey can be a rich source of information to target a people group to evangelize or to decide what to teach in Sunday school--but not both!



HUFF, Darrell. 1954. How to Lie with Statistics.
PAULOS, John Allen. 1988. Innumeracy.
RAMA, Carlos. 1964.
ROANE, Warren. 1993.
SHIPP, Glover. 1990.
