The American Indian population was one of the first subpopulations of the United States recognized by the federal government as vulnerable and in need of state management [1]. From a pre-contact population variously estimated at between one and ten million, the American Indian population in the coterminous United States declined to approximately 600,000 in 1800—when estimates become more reliable—and continued its rapid decline in the nineteenth century, reaching a nadir of 237,000 in the decade 1890-1900 before recovering in the twentieth century (Thornton, 1987, 32). Resisting the widespread belief that American Indians were doomed to extinction, nineteenth-century reformers successfully pressed the government to take an active role in assisting the population. Reformers believed that severing tribal bonds and promoting the private ownership of land would give Indians an incentive to work and, ultimately, save the population from continued decline. Under their urging, federal policy shifted from military subjugation, land cession, and removal efforts to policies promoting acculturation and assimilation. The apogee of assimilationist policies was reached in the late nineteenth century, when Congress passed legislation that allotted reservation land in severalty to individual Indians, promoted Indian citizenship in the United States, and enrolled Indian children in boarding and reservation day schools, where they were taught English and vocational skills (Dippie, 1982; Hoxie, 1984; Prucha, 1976, 1979).

Despite what might have been good intentions on the part of some reformers, the campaign to assimilate Indians resulted in substantial economic and cultural costs for American Indians [2]. Between 1887, when the government passed the sweeping General Allotment Act (or Dawes Act), and 1934, when the Indian Reorganization Act (or Wheeler-Howard Act) ended allotment, the amount of land owned by American Indians declined by 62 percent. According to historian Janet McDonnell, two-thirds of American Indians in 1934 remained “either landless or did not own enough land to make a subsistence living” (McDonnell, 1991, 121). Allotment also failed in its goal of converting substantial numbers of Indians into self-supporting farmers or ranchers (Prucha, 1985, 48). The reformers could point to some signs of “success,” however. Between 1887 and 1920, the percentage of Indians wearing “citizen’s clothing” increased from 24 to 59 and the percentage speaking English increased from 10 to almost 40 (U.S. Interior Department, 1887, 1920). In addition, the percentage of Indians “Taxed” (i.e., recognized as citizens of the United States and thus liable to taxation) increased from 23.7 in 1890 to 72.9 in 1910 (U.S. Bureau of the Census, 1915, 284).

Although several historians have examined the impact of removal policies on the Indian population (e.g., Thornton, 1984; Campbell, 1989), remarkably few researchers have studied the impact of late nineteenth-century assimilationist policies. When the Wheeler-Howard Act ended allotment, John Collier, Commissioner of Indian Affairs in the Roosevelt Administration, argued that in addition to being the cause of Indian spiritual and material decline, allotment was responsible for an Indian death rate twice that of the white population (Dippie, 1982, 308). Collier’s inference that mortality differentials between the Indian and white population resulted from allotment, however, ignored the multiple environmental, economic, and social factors affecting mortality and long-standing “race” differentials in mortality. In his population history of American Indians, American Indian Holocaust and Survival, Russell Thornton notes that allotment produced a “further deterioration of American Indian economies, societies, and cultures,” but he does not otherwise connect the policy with Indian demographic change (1987, 102). More recently, Nancy Shoemaker’s study of twentieth-century population recovery among the Cherokee, Yakima, Seneca, Red Lake Ojibway, and Navajo suggests that reasons for population stabilization and recovery varied by tribe, with no discernible relationship between allotment, education, and demography. The Cherokee and Yakima Nations, both in the process of being allotted in the late nineteenth century, exhibited dramatically different demographic patterns, while the Red Lake Ojibway, the Seneca Nation, and the Navajo Reservation were never allotted. Likewise, the Cherokee and Seneca had the highest percentage of students enrolled in school and the highest percentage of individuals with the ability to speak English, but had dramatically different fertility levels (Shoemaker, 1999, 59-62). Shoemaker’s sample, however, in addition to being limited to five tribes, was not large enough to analyze the impact of allotment, education, and English-speaking ability at the individual level. Thus, the effect assimilationist policies may have had on Indian population stabilization and recovery remains an open and intriguing question.

This paper examines the impact of federal policy on American Indian mortality with a new source: a 1-in-5 public use microdata sample of the 1900 census of American Indians. The sample, part of the Integrated Public Use Microdata Series (IPUMS), was created at the Minnesota Population Center and publicly released in June 2005 (Ruggles et al., 2004). The census included several questions useful in analyzing the impact of federal assimilationist policy on the population, including each individual’s degree of “white blood,” tribal affiliation, whether he or she spoke English, lived in a movable or fixed dwelling, was a citizen by allotment, and lived in a polygamous marriage. In addition, the census included questions common to the enumeration of the non-Indian population, including occupation, literacy, marital status, children ever born, and children surviving, which allow detailed description and analysis of mortality and fertility. Although these data were collected by the Census Bureau, they were never analyzed or published. Thus, the public use sample of the 1900 census of American Indians represents an important new source for the study of American Indian demography at the population nadir.

Background

Pre-contact estimates of the American Indian population—constructed by applying depopulation ratios to the population nadir figure, extrapolating the nineteenth-century rate of decline to 1492, or estimating the region’s carrying capacity—are at best rough guesses. Over the course of the twentieth century, researchers have estimated the Indian population of the coterminous United States as low as 720,000 (Kroeber, 1939) and as high (for all of North America) as 18 million (Dobyns, 1983). Most estimates fall in the range of 2-7 million, implying a population loss between 1492 and 1900 in excess of 85 percent. The American Indian population stabilized in the late nineteenth century, experienced modest growth in the early twentieth century, and very rapid growth in the last few decades. Much of the recent growth, however, stems from changes in self-identification in the census. A large proportion of those identifying themselves as “American Indians” are not enrolled in American Indian tribes (Thornton, 1997).

As Thornton notes in his population history, all reasons for American Indian population decline stem in part from European contact and colonization, including introduced disease, warfare and genocide, geographical removal and relocation, and destruction of ways of life (Thornton, 1987, 43-4). Most scholars agree that diseases introduced from the Eastern Hemisphere, including smallpox, measles, and influenza, were the overwhelming cause of population decline (Cook, 1998). The relationship between epidemic disease and American Indian population decline is relatively well documented in the nineteenth century. Qualitative evidence points to at least 27 epidemics among American Indians, including 13 epidemics of smallpox (two of which were major pandemics), five of measles, and two of influenza (Dobyns, 1983). Smallpox was especially destructive. The 1801-02 pandemic all but destroyed the Omaha, the Ponca, the Oto, and the Iowa, and killed a large percentage of the Arikara, the Gros Ventre, the Mandan, the Crow, and the Sioux. The 1836-40 smallpox pandemic may have been the most severe episode of disease experienced by North American Indians, killing 10,000 American Indians on the upper Missouri in a few weeks, including virtually all the Mandans, and one-half of the Arikara, the Minnetaree, and the Assiniboin (Thornton, 1987, 91-92, 94-95).

Warfare and genocide were much less important reasons for population loss, although wars had a large impact on some tribes. With some notable exceptions, such as the Creek (1813-14) and Seminole Wars (1835-42), most nineteenth-century Indian wars were fought west of the Mississippi. The Cherokee, who formally sided with the Confederacy in the American Civil War (1861-65) but contributed soldiers to both sides, may have lost as much as one-third of their population during the war. The Plains Indians and the United States were engaged in nearly 50 years of constant warfare, culminating in the military subjugation of the Sioux and Cheyenne late in the century and the massacre of several hundred Indians at Wounded Knee Creek, South Dakota in 1890. Removal and relocation policies, especially after Congress passed the Indian Removal Act in 1830, also led to the deaths of thousands of American Indians. The removal and relocation of the “Five Civilized Tribes” of the American Southeast—the Cherokee, Chickasaw, Seminole, Creek, and Choctaw—are perhaps the best known, but most tribes in North America experienced removal and relocation at some point in their history. Justified as the only means to protect Indians from encroaching whites (while securing valuable land for white settlement), removal often resulted in substantial population loss. Thornton estimates that the Choctaws lost 15 percent of their population during removal, the Creeks and Seminoles lost about 50 percent of their populations as a combined result of war and removal, and the Cherokee—along the infamous “Trail of Tears”—lost an estimated 4,000 out of 16,000 individuals in their relocation to Indian Territory in the late 1830s. When indirect losses are included, Thornton estimates that the Cherokee suffered a net loss of 10,000 individuals (Thornton, 1984, 1987).

Finally, changes in ways of life contributed to Indian population decline. Loss of wild game, relocation, and confinement on reservations resulted in abrupt changes in traditional means of subsistence, leading to poverty, malnourishment, and greater susceptibility to disease. The near total destruction of the nation’s buffalo in the late nineteenth century resulted in widespread starvation of many Plains Indians. Over the course of the nineteenth century, the number of buffalo declined from approximately 40 million to less than one thousand (Walker, 1983, 1255). On the other hand, some changes in ways of life may have been positive. The Navajo, for example, benefited from rapidly expanding herds of government-provided sheep and goats and from the sale of blankets and jewelry to tourists. Despite the inhospitable climate of the Navajo reservation, population growth was positive after the 1868 U.S.-Navajo treaty established the Navajo Reservation. Today the Navajos are the largest tribe in the United States (Jones, 2005, 174; Shoemaker, 1999, 32-5).

Federal policy, while intermittently focused on the military subjugation of various tribes, the cession of tribal lands for acquisition by non-Indian settlers, and the removal of Indians to areas in the west outside the path of the expanding non-Indian population, shifted in a more humanitarian direction as the Indian population declined. The devastation inflicted by smallpox in the 1801-02 pandemic prompted Thomas Jefferson to have a delegation of Indians visiting Washington, D.C., vaccinated. Congress first allocated funds for vaccination and health care in 1832. Despite initial resistance on the part of some Indians, failures of the vaccine, neglect of Indians beyond the Mississippi River, and a lack of interest on the part of many Indian officials, smallpox vaccination was extensive enough in the late nineteenth century to reduce the severity of epidemics (Jones, 2004, 113-118; Thornton, 1987, 100-01, 172). The rise of intemperance and venereal diseases among Indians in the mid-nineteenth century convinced the government to expand its efforts at Indian health care beyond combating epidemic disease (Massing, 1994). Funds for Indian health care, however, were extremely limited until 1955, when the government transferred responsibility for Indian health care from the Bureau of Indian Affairs to the Department of Health, Education, and Welfare (Thornton, 1987, 237).

In 1865, a joint special committee of Congress was called to investigate Indian depopulation. Its report, while placing most of the blame for Indian depopulation on the expanding non-Indian population, concluded that the Indian population must be “civilized” or ultimately disappear (Thornton, 1987, 133). Although the report’s recommendations for expanded inspections were largely ineffective, historian Francis Prucha argues that the committee’s report marked “a beginning of a new approach to Indian affairs emphasizing peace and justice, that was strikingly in contrast with the demands of some military men for the rapid subjugation of the tribes and military control of the reservations” (1976, 15-16). The two decades following the Civil War witnessed an upsurge in humanitarian concern for American Indians. Although some reformers fully expected continued Indian population decline and the eventual extinction of Indians—and were thus only committed to palliative care—others were convinced that population decline could be halted through “amicable relations” and assimilation policies (Jones, 2004, 138-40). Reformers, many of whom were Protestant Christians, believed that “civilization was impossible without the incentive to work that came only with the ownership of a piece of property” (Prucha, 1975, 228). Achieving that civilization, in their view, required severing tribal bonds and eliminating the communal ownership of land. Thanks in part to their urging, the great majority of treaties enacted after mid-century included clauses allotting land in severalty to individual Indians. Despite mixed results, reformers were unsatisfied with this piecemeal approach and pressed for a general act.

Congress responded in 1887, when it passed the General Allotment Act (commonly known as the Dawes Act after its sponsor, Massachusetts Senator Henry L. Dawes). The Dawes Act authorized the President to allot 160 acres of collectively-owned tribal land to each family head, 80 acres to each single person over 18 years of age, and 40 acres to each single person less than 18 years of age. To protect Indians during the period in which they were learning to be self-supporting, the government was to hold title to the allotment in trust for 25 years, after which ownership of the land would be transferred to the individual or his or her heir in the form of a fee patent. Indians accepting an allotment were made citizens of the United States. A second provision of the Dawes Act made “surplus” land available for non-Indian purchasers. Shortly after the Dawes Act was passed, it was amended to allow Indians to lease their land to non-Indians. The 25-year trust period was effectively ended with the Burke Act of 1906, which allowed Indians judged “competent and capable of managing his or her affairs” to sell their land (Hoxie, 1984, 165).

According to Leonard A. Carlson, reformers “hoped the Dawes Act would accomplish at least six specific things: break up the tribe as a social unit, encourage individual initiative, further the progress of Indian farmers, reduce the cost of Indian administration, secure at least part of the reservation as Indian land, and open unused lands to white settlers” (1981, 79). Or, more succinctly, in the words of the Commissioner of Indian Affairs, the act was intended to turn “the American Indian” into “the Indian American” (quoted in Washburn, 1975, 242). By some measures, in 1900 allotment was a moderate success. Between 1890 and 1900 the total number of allotments increased from 15,166 (5,554 families) to 58,594 (10,835 families) and the total number of acres cultivated by Indians increased from 288,613 to 343,351. The increase in acreage cultivated was not commensurate with the increase in the number of allotments, however, and was down from the 369,974 acres cultivated in 1895 despite an additional 19,000 allotments made in the period 1895-1900 (Otis [1934], 1973, 139).

Eventually, allotment was recognized as failing to achieve most of its objectives. Many Indians sold or leased their land and never turned to farming or ranching. In his study of the impact of the Dawes Act on Indian farming, Leonard Carlson bluntly observes that “allotment as a means of promoting self-sufficient farming among Indians was a failure” (1981, 159). Explanations for its failure are varied, but include the unsuitability of much allotment land for farming or ranching; the lack of capital assistance given to Indians to support the purchase of farm machinery and needed irrigation projects; ineptitude, corruption, and mismanagement by the Bureau of Indian Affairs; and the cultural resistance of many Indian men to farming. In its goal of opening unused land to white settlement and use, however, the Dawes Act was a success. In the ten allotted states, Indian land in trust declined from 82 million acres in 1881 to 16.8 million acres in 1933. In Oklahoma during the same period, land in trust fell from 41 million to 2.9 million acres. Most of this land passed altogether out of Indian ownership (Carlson, 1981, 170).

Data

We rely on the public use microdata sample of the 1900 U.S. Census of American Indians (Ruggles et al., 2004) to estimate American Indian childhood mortality in the late nineteenth century and to assess the impact of federal assimilationist policies on mortality. Because the U.S. Constitution mandated that only “Taxed” Indians (i.e., Indians severing tribal relations and living among the general population) counted toward congressional representation, censuses prior to 1890 had excluded the vast majority of “Non-Tax” Indians. Growing interest in management of the American Indian population, however, resulted in Congress approving resources to enumerate all Indians on special forms in the 1890, 1900, and 1910 censuses. Although the Census Office published basic population data collected in the 1890 enumeration—along with ethnographic studies of various Indian Nations—the original census manuscript records of the 1890 census were destroyed by fire. Interestingly, although the 1900 census of Indian inhabitants was collected, the data were never analyzed or published. The 1900 census is the first surviving census to enumerate all American Indians (Jobe, 2004) [3].

Nancy Shoemaker has noted that the enumeration of American Indians in 1890, 1900, and 1910 created special challenges and potential methodological problems related to two causes: cultural differences and misunderstandings between enumerators and Indians, and the politics and bureaucracy of colonization (1992). In addition to potential language difficulties, one likely area of misunderstanding was household structure. Shoemaker notes that in many Indian societies, a child’s “father” was what Euro Americans would call an uncle, his or her “mothers”—of which there could be many—what Euro Americans would call aunts, and many of his or her various other kin relations would be recognized by Euro Americans as unrelated or “fictive kin.” Although it may have been the case that Indians and well-trained enumerators recognized these cultural differences and accounted for them on the census forms, we cannot be entirely sure. Moreover, census instructions to follow the traditional Euro American patriarchal family structure by entering the household head first and wife second were at odds with the matrilocal structure of many tribes. Other potential problems arose from the politics and bureaucracy of colonization. Shoemaker judges the census question on polygamy as useless, for example, because Indians knew that the Bureau of Indian Affairs (BIA) took a dim view of the practice, sometimes punishing offenders and making them give up all but one of their wives. Indian parents might also fear losing their children to government-run boarding schools—giving them a potential incentive to hide children—while those eligible for allotment might have counted deceased children and pregnancies in the hope of acquiring additional acreage. Finally, if an enumerator happened to be an employee of the BIA, he or she may have relied on the agency’s population listings to help conduct the census. These listings relied on a more restricted definition of family than the residential household.

Despite these problems, we have no other source on the American Indian population comparable in coverage and scope to the 1900 and 1910 censuses. For the most part, enumerators were chosen for their familiarity with particular tribes and appear to have been diligent in their efforts. Johansson and Preston, for example, noted that of the seven enumerators hired to count the Apache and Navajo, four were teachers at local Indian schools. They concluded that “enumerators were a reasonably accomplished and mature group of men and women whose occupations gave them some familiarity with the Indians they were interviewing” (1978, 7). In her study of the Cherokee, Yakima, Seneca, Navajo, and Red Lake Ojibway in 1900, Shoemaker noted that enumerators were usually mixed-blood Indians, white men married to Indian women, or BIA employees who were familiar with the language and the culture of the groups they enumerated, thus minimizing the potential for misunderstandings and error (1999, 108). She also noted that although there is evidence that some enumerators tried to fit Indians into Euro American family patterns, the census still managed to capture distinctive residence patterns for cultural groups. Shoemaker concluded that the “census is a reliable source so long as users are aware of potential biases” (1992, 11).

The 1900 Indian IPUMS sample is a 1-in-5 sample of all households in the Indian Census. All individuals living in sampled households—whether Indian or non-Indian—are included, and data are maintained at the individual level. Altogether the sample includes 45,651 individuals identified as members of 226 unique tribal groups. Some of the groups are quite small and are represented by a small number of individuals in the sample (e.g., the sample includes just 2 men and 3 women identified as members of the Clatsop tribe). To work with these data, we followed the general classification scheme used by the IPUMS project, together with the number of cases, to reclassify tribal affiliation into 21 general tribal groups and one group for all other tribes. For example, we considered individuals identified as “Apache,” “Jicarilla Apache,” “Lipan Apache,” “Mescalero Apache,” “Payson Apache,” and “White Mountain Apache” as members of the general group “Apache.” Table 1 tabulates the sample by general tribal affiliation and sex. The Cherokee and Sioux Nations had the most members, each representing about 11 percent of the sampled population.
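For readers who want to replicate this grouping step, the following is a minimal sketch of the reclassification, assuming the IPUMS extract has been loaded into a pandas DataFrame with a detailed tribal-affiliation string column; the column names and the truncated mapping are illustrative, not the actual IPUMS variable names.

```python
import pandas as pd

# Illustrative subset of the mapping from detailed tribe names to the
# 21 general groups plus an "Other" residual; the full mapping follows the
# IPUMS classification scheme described in the text.
GENERAL_GROUP = {
    "Apache": "Apache",
    "Jicarilla Apache": "Apache",
    "Lipan Apache": "Apache",
    "Mescalero Apache": "Apache",
    "Payson Apache": "Apache",
    "White Mountain Apache": "Apache",
    "Cherokee": "Cherokee",
    "Sioux": "Sioux",
    # ... remaining detailed tribes map to their general groups ...
}

def reclassify(sample: pd.DataFrame) -> pd.DataFrame:
    """Add a 'tribe_general' column; unmapped tribes fall into 'Other'."""
    sample = sample.copy()
    sample["tribe_general"] = sample["tribe_detailed"].map(GENERAL_GROUP).fillna("Other")
    return sample

# Tabulation analogous to Table 1:
# counts = reclassify(sample).groupby(["tribe_general", "sex"]).size()
```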

Tab. 1

Number of American Indians in 1900 Indian IPUMS Sample, by Sex and General Tribal Group


Mortality Estimates

The United States Census of 1900 asked each ever-married woman the number of live births she had had in her life (i.e., parity or children ever born) and how many of those children were still living (i.e., children surviving). In addition to the standard questions on sex, age at last birthday, and marital status, the census also asked about the number of years in the current marriage (i.e., duration of marriage), although unfortunately it did not ask about the number of times married. Together, this information can be used to make indirect estimates of childhood mortality (United Nations, 1983, ch. 3). A study of mortality in the United States in 1900 employed these methods and the original public use sample of the manuscripts of the 1900 census (of approximately 100,000 individuals) as the source of data (Preston and Haines, 1991).

The indirect estimates make use of the proportion dead among the children ever born to women in different age or marriage duration categories. These proportions are transformed into a standard life table parameter, q(x), the probability of dying before reaching age x. The methods use model life tables as the standard; in this case, Coale and Demeny (1966) Model West level 13, which has an expectation of life at birth of 48.5 years for both sexes combined, was used. There are three approaches: one uses women by age group (the “age model”), one uses women by marriage duration group (the “duration model”), and one uses the age distribution of surviving own children (the “surviving children method”).
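As a rough illustration of the age-model mechanics, the sketch below tabulates the proportion of children dead by mother's age group and multiplies it by a model-based multiplier to obtain q(x). The column names are hypothetical, and the multipliers are placeholders only; the actual West-model coefficients must be taken from the published tables (United Nations, 1983, ch. 3).

```python
import pandas as pd

AGE_GROUPS = [(15, 19), (20, 24), (25, 29), (30, 34), (35, 39), (40, 44), (45, 49)]
Q_AGES = [1, 2, 3, 5, 10, 15, 20]   # age x in q(x) conventionally paired with each group
K_WEST = [1.0] * 7                  # placeholder multipliers -- NOT the real coefficients

def age_model_qx(women: pd.DataFrame) -> pd.DataFrame:
    """Tabulate proportions of children dead by mother's age group and
    convert them to q(x) with the supplied multipliers."""
    rows = []
    for (lo, hi), x, k in zip(AGE_GROUPS, Q_AGES, K_WEST):
        grp = women[(women["age"] >= lo) & (women["age"] <= hi)]
        ceb = grp["children_ever_born"].sum()
        surv = grp["children_surviving"].sum()
        prop_dead = (ceb - surv) / ceb if ceb > 0 else float("nan")
        rows.append({"age_group": f"{lo}-{hi}", "x": x,
                     "children_ever_born": ceb, "q_x": k * prop_dead})
    return pd.DataFrame(rows)
```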

The results from these methods applied to the original 1900 public use sample and to the new American Indian sample are given in Table 2. The table gives the relevant q(x) value for each age and marriage duration category, the number of children ever born used to make the estimate, the relevant date in the past to which the estimate applies, and the expectation of life at birth (e(0)) indicated by that level of child mortality in the West Model life table system. The West Model was chosen because it fit the American experience in 1900 very well (Preston and Haines, 1991, ch. 2). For the surviving children method, there is only one life table that fits the data, so the q(x) values and the e(0) for that life table are given. The surviving children estimates are only given for the total, white, and black populations because, unfortunately, the estimating procedure would not converge on a solution for the American Indian population in the computer program designed for this purpose. The cause is likely age misstatement among children. The use of the duration model presented some serious problems for the American Indian population (and for the black population as well). The use of marriage duration as a proxy for the exposure to risk of childbearing assumes, first, that marriage is the appropriate situation in which almost all childbearing occurs and, second, that remarriage is not common. The first assumption is reasonable for the United States in 1900, but the second assumption creates problems when there is no question on the number of times married. In particular, when mortality is high, there is a good deal of widowhood and potential remarriage of widows. Thus older women who have had more children and a longer period of exposure to risk of child death would be included in the shorter marriage durations. This problem was noticeable in the American Indian population for adult women. A partial solution for the problem in the duration model was to select women who were younger than age 35 at the estimated time of marriage (age minus duration of current marriage), which is why estimates for the longer marriage durations are not included in Table 2.
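The selection rule just described is simple to apply in practice; a minimal sketch (again with hypothetical column names) restricts the sample and tabulates the duration-model inputs as follows.

```python
import pandas as pd

DURATION_GROUPS = [(0, 4), (5, 9), (10, 14)]   # longer durations dropped, as in Table 2

def duration_model_inputs(women: pd.DataFrame) -> pd.DataFrame:
    """Restrict to women under age 35 at estimated marriage and tabulate
    proportions of children dead by marriage-duration group."""
    est_age_at_marriage = women["age"] - women["marriage_duration"]
    kept = women[est_age_at_marriage < 35]
    rows = []
    for lo, hi in DURATION_GROUPS:
        grp = kept[(kept["marriage_duration"] >= lo) & (kept["marriage_duration"] <= hi)]
        ceb = grp["children_ever_born"].sum()
        surv = grp["children_surviving"].sum()
        rows.append({"duration": f"{lo}-{hi}",
                     "children_ever_born": ceb,
                     "prop_dead": (ceb - surv) / ceb if ceb > 0 else float("nan")})
    return pd.DataFrame(rows)
```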

Tab. 2

Estimates of Child Mortality in the Late Nineteenth-Century United States by “Race” Using the Age, Marriage Duration, and Surviving Children Estimation Methods


It is probably best to focus on the value of q(5), the probability of dying before reaching age 5, since that appears to be the most stable. This estimate applies on average to about 1893 or 1894. It implies an expectation of life at birth for the American Indian population overall of 39-41 years, whereas it implies an e(0) of 50-51 years for the white population and of about 42 years for the black population. Thus the American Indian population was at a very serious mortality disadvantage relative to the majority white population and at even a slight disadvantage relative to the black population.

In the age model, there is some evidence for all groups that the mortality level was higher for dates further back in the nineteenth century, at least beyond the estimate for q(3). This is consistent with a situation of improving mortality over the last two decades prior to the census. This was true for the American Indian population as well. Overall, however, the mortality situation for the American Indian population was serious, with about 30 percent of Indian children dying before reaching age 5 (in contrast to about 17 percent for the white population) and with an implied expectation of life at birth approximately 10 years shorter than that for the white population.

Mortality Estimates by Group

In order to simplify the presentation of the mortality estimates by group, a mortality index has been created (Preston and Haines, 1991, 88-90; Haines and Preston, 1997). A second purpose of the index is to provide a variable suitable for multivariate analysis at the micro level. The index combines the childhood mortality experience of women of marriage durations 0-24 years. It consists of the ratio of actual to expected child deaths and can be calculated either for individuals or for groups of women. Actual child deaths are available directly from the census. Expected child deaths are calculated by multiplying the children ever born of each woman or group of women by the expected proportion dead for the duration group of the woman or group of women (that is, marital durations 0-4, 5-9, 10-14, 15-19, and 20-24). The expected proportion dead is calculated from a standard model life table, in this case Coale and Demeny (1966) West Model level 13, which has an expectation of life at birth of 48.5 years for both sexes combined. The procedure involves taking the appropriate q(a) for each duration group (q(2) for women married 0-4 years, q(3) for durations 5-9 years, q(5) for durations 10-14 years, q(10) for durations 15-19 years, and q(15) for durations 20-24 years) and converting it to an expected proportion dead by rearranging the equations which are used to estimate q(a)’s from actual proportions dead and average numbers of children ever born. The procedure allows for differences in the pace of fertility among the women (United Nations, 1983, 82). The details of the creation of the mortality index may be found in Haines and Preston (1997). West Model mortality is an appropriate standard for the whole American population since American data were used in the construction of the original model and since the West Model replicates the experience of the 1900-02 Death Registration Area quite closely (Coale and Demeny, 1966, 14; Preston and Haines, 1991, ch. 2).
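To make the construction concrete, here is a minimal sketch of the index for individual women, assuming hypothetical column names; the expected proportions dead shown are illustrative placeholders, not the actual values derived from West Model level 13.

```python
import pandas as pd

# Placeholder expected proportions dead by marriage-duration group; the real
# values are derived from Coale-Demeny West Model level 13 (e(0) = 48.5) by
# inverting the estimation equations (United Nations, 1983, 82; Haines and
# Preston, 1997).  The numbers below are illustrative only.
EXPECTED_PROP_DEAD = {"0-4": 0.10, "5-9": 0.14, "10-14": 0.17,
                      "15-19": 0.20, "20-24": 0.22}

def add_mortality_index(women: pd.DataFrame) -> pd.DataFrame:
    """Compute the ratio of actual to expected child deaths for each woman
    (women with zero children ever born should be excluded beforehand)."""
    women = women.copy()
    women["dur_group"] = pd.cut(women["marriage_duration"],
                                bins=[0, 5, 10, 15, 20, 25],
                                labels=["0-4", "5-9", "10-14", "15-19", "20-24"],
                                right=False)
    expected_share = women["dur_group"].astype(str).map(EXPECTED_PROP_DEAD)
    women["actual_deaths"] = women["children_ever_born"] - women["children_surviving"]
    women["expected_deaths"] = women["children_ever_born"] * expected_share
    women["mortality_index"] = women["actual_deaths"] / women["expected_deaths"]
    return women
```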

The index has the advantage of summarizing into one number the child mortality experience of a whole group of women of varying ages, marital durations, and parities. It has been investigated elsewhere and found to be robust and econometrically well-behaved when used as a dependent variable in a regression model (Trussell and Preston, 1982). It is not sensitive to a situation in which fertility has been declining in the recent past, and it is readily interpretable. A value of unity means that the woman (or group of women) was experiencing child mortality at about the national average, while values above or below unity mean that the woman (or a group of women) was experiencing child mortality worse than or better than the national average, respectively. The disadvantage is that, if mortality was declining in the past, the index will give a weighted average of the past mortality regimes, with the weights depending on the marital duration composition of the group in question. Since groups may not be homogeneous with respect to marital duration composition, this can lead to some bias. Overall, however, the index seems robust. Further, any mortality parameter desired can be obtained by multiplying the index by q(5) in the standard table and then finding the appropriate table in the West Model life table system.
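As a worked illustration of that last conversion (with stand-in values, since the exact West level 13 q(5) and group index are not reproduced here), multiplying a group's index by the standard q(5) yields that group's implied q(5).

```python
# Illustrative conversion of the mortality index into an implied q(5).
# Both numbers below are placeholders: the standard q(5) should be read from
# Coale and Demeny (1966), and the index value is a hypothetical group value.
Q5_STANDARD = 0.19          # placeholder standard q(5) for West level 13
group_index = 1.6           # hypothetical mortality index for a group of women

implied_q5 = group_index * Q5_STANDARD
print(f"Implied q(5): {implied_q5:.2f}")   # roughly 0.30, i.e. ~30% dying before age 5
```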

Table 3 presents the mortality index along the dimensions of “race,” rural-urban residence, census region, literacy of husband and wife, English language ability of husband and wife, occupation of husband (using the 1950 basic occupational stratification scheme), woman’s labor force status, husband’s unemployment during the previous year, home and farm ownership, and (for the American Indian population) tribal group and citizenship status. As was seen in the previous table, the mortality index shows that the American Indian population had childhood mortality substantially higher than either the black or the white populations in 1900, although it was approximately equal to that of the Hispanic population in 1910. Although the American Indian population was overwhelmingly rural in 1900 (98.2 percent rural in the sample), this did not seem to operate to its advantage. For the overall American population, rural mortality was about 18 percent lower than urban mortality; it was actually higher for the American Indian population (although the sample of American Indian urban dwellers was very small). Although there were regional variations, only the South Atlantic region exhibited any particular advantage. Mortality was basically high throughout the nation, regardless of region. Within the American Indian population, both literacy and English language ability of either the husband or wife did convey some advantage, as was true of the American population in general. Despite the fact that the American Indian population was almost entirely rural and predominantly agricultural (as indicated by the occupation of the husband), no particular advantage was attached to being a farmer (“Agricultural (excluding laborers)”), in contrast to the population in general. In this respect, American Indians were similar to the black population in 1900 (Preston and Haines, 1991, Table 3.7). The experience of the husband with unemployment at some time in the twelve months prior to the census also seemed to have little relation to the child mortality of the American Indians, unlike the experience of the overall population. Another difference is that a woman’s presence in the labor force was an advantage among American Indians, while it was a decided disadvantage to child survival among mothers in the population in general. Ownership of a home or farm also conveyed little edge in terms of child survival to American Indian mothers, while it did for the majority population. The American Indian population was almost all native born (98.8 percent), and little can be inferred from nativity. Among the white population, the native born had a child mortality advantage of about 23 percent over the foreign-born population. Finally, known citizenship by allotment of either the husband or wife was associated with higher childhood mortality, although status as a “taxed” Indian did not appear to be associated with higher or lower mortality.

Tab. 3

Child Mortality by Race, Residence, Region, Literacy, Occupation of Husband, Labor Force Status of Wife, Husband’s Unemployment, Farm and Homeownership, and Nativity, Currently Married Women Married 0-24 Years, United States, 1900 (a)


The final panel of Table 3 presents the tribal groups of the women used to estimate childhood mortality. It is evident that there were substantial differences in the sample sizes from different groups, with the Sioux, Pueblo, Navajo, Cherokee, Chippewa, and Choctaw being more substantially represented. There seemed to be advantages for the Cherokee and the Chickasaw, who had been longer settled in the Indian Territory (later part of the state of Oklahoma), and a real disadvantage for the Sioux, more recently placed on reservations. The very low index values for the Navajo and Kiowa appear unrealistic and may be due to data problems [4]. Overall, the heterogeneity of experience among the tribal groups is intriguing and merits much further study.

Multivariate Analysis

Group differences in child mortality noted above, of course, may reflect other factors. American Indians who spoke English, for example, may have had higher socioeconomic status than Indians with no ability to speak English. Lower mortality among the English-speaking group, therefore, may simply reflect socioeconomic status and be otherwise unrelated to language ability. On the other hand, the ability to speak English may have facilitated, and been associated with, a greater willingness to accept non-Indian medical care and medicine, public health measures, and changing standards of personal hygiene. To distinguish the relative importance of social, economic, and residential factors on Indian mortality, we employ a multivariate analysis with the mortality index described above as the dependent variable. Following Preston and Haines (1991), we weight the regression by children ever born. Inclusion of variables unique to the Indian Census in the multivariate model also allows us to evaluate the impact of federal assimilationist policies on child mortality. Dummy variables for general tribal group allow us to control for potential unobserved covariates of child mortality unique to tribal affiliation, such as each group’s economy, interaction with non-Indian peoples, particular process of allotment, and history of removal and relocation.
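A minimal sketch of this estimation setup, assuming the analysis file has been prepared with the (hypothetical) variable names below, might look like the following; statsmodels is used purely for illustration, and the Cherokee serve as the reference tribal group, as in the discussion of Table 5.

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_mortality_model(analysis_sample: pd.DataFrame):
    """Weighted least squares of the mortality index on individual and
    household characteristics plus general tribal-group dummies, weighted
    by children ever born (variable names are hypothetical)."""
    formula = (
        "mortality_index ~ mother_literate + mother_speaks_english + urban "
        "+ mother_in_labor_force + spouse_unemployed + citizen_by_allotment "
        "+ mother_age + C(tribe_general, Treatment(reference='Cherokee'))"
    )
    return smf.wls(formula, data=analysis_sample,
                   weights=analysis_sample["children_ever_born"]).fit()

# Example usage (assuming `women` has been prepared as sketched above):
# results = fit_mortality_model(women)
# print(results.summary())
```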

Table 4 shows the means of variables used in the analysis—weighted by children ever born—by general tribal affiliation. The mortality index ranges from 0.52 for the Navajo (an unrealistically low value, implying half the childhood mortality of the non-Indian population) to 2.23 for the Sioux and 2.38 for the Blackfoot. Values for the independent variables vary widely among tribes. Mother’s literacy ranged from an average of 0 to 71 percent, ability to speak English from 0 to 84 percent, “full-blooded” Indians from 40 to 100 percent, and polygamous unions from 0 to 13 percent. Citizenship by allotment in the sample ranged from a low of 0 percent among the Apache, Blackfoot, Cherokee, Iroquois, Kiowa, Navajo, Osage, Pima, and Seminole to a high of 43 percent for the Chippewa (Ojibwe) and 52 percent for the Pottawatomie. Overall, 16 percent of households contained a mother or spouse who was a citizen by allotment in 1900. Because of the potential of remarriage to bias construction of the mortality index, we include mother’s age as a control variable in the model.

Tab. 4

Weighted Means of Variables Used in Regression Model: American Indian Women in 1900 (a)


All else being equal, we would expect that mother’s literacy and the ability to speak English would confer advantages in child survival, while urban residence, participation in the paid labor force, and spouse’s unemployment would increase child mortality. Plains Indians suffering from recent wars, confinement on reservations, and the near total destruction of the buffalo are expected to have higher child mortality (see e.g., Jones, 2004). In contrast, tribes that had little contact with non-Indian populations or had long been settled on reservations, such as the Navajo, Cherokee, and Chickasaw, are expected to have lower mortality. Given the large differentials in child mortality between the white and Indian populations observed earlier and the suspected differences in socioeconomic conditions between full-blood and mixed-blood Indians, mothers with higher percentages of “white blood” and mothers married to white and mixed-blood men are expected to have lower child mortality than the reference groups of full-blood mothers and full-blood spouses.

Despite the rhetoric of nineteenth-century humanitarian reformers, we have no expectations for the impact of allotment on child survival. Ordinarily, ownership of land would confer an advantage in child survival. In addition, there is evidence that some tribes, such as the Comanche, tended to disperse from traditional bands after being allotted, potentially decreasing the risk of contracting infectious diseases (Kavanagh, 1989). On the other hand, dispersion of the population may have disrupted traditional social support networks, potentially resulting in poorer childcare and reduced capacity to manage economic stress. It is also important to emphasize that allotment was not a uniform process. The selection of allotments among the Crow and Cheyenne suggests that preservation of kin connections was an important goal (Hoxie, 1997; Moore, 1987). Among the Osage, full-blooded Indians tended to select allotments clustered near traditional Osage villages with little farming potential, while mixed-blood Indians selected allotments based on soil fertility and crop potential (Vehik, 1989). At White Earth Reservation in Minnesota, the allotment process was characterized by massive land fraud and limited choice of allotments by the Anishinaabeg, resulting in dispossession and poverty (Meyer, 1991). Widespread evidence of allotment’s ultimate failure to convert a substantial number of Indians to farming and ranching suggests that allotment for most American Indians was associated with economic hardship, and thus higher child mortality. Interestingly, frontier farming also was associated with higher child mortality for the white population, perhaps reflecting limited food supplies and the hardships of clearing new land and constructing new homes (Steckel, 1988).

Results of the multivariate analysis are shown in Table 5. Model 1 includes all Indian women reporting one or more children ever born, married less than 25 years, with a spouse present in the household. Because of probable data quality problems with the Navajo and Kiowa enumerations, Model 2 excludes women from those tribes. The results indicate that some of our expectations were met while others were not. Mother’s literacy, labor force participation, and urban residence proved to be unrelated to child mortality. As expected, however, the ability of mothers to speak English was associated with lower child mortality. The most significant factor appears to have been mother’s and husband’s percentage of white blood. Indian mothers with 50 percent or more white blood were associated with a .326 lower child mortality index and spouses with 50 percent or more white blood were associated with a .333 lower index. In contrast to the non-Indian population, there appears to be no penalty associated with spouse’s unemployment.

Tab. 5

Weighted O.L.S. Regression Model of Actual to Expected Child Mortality, American Indian Population of the United States in 1900


All else being equal, children of Sioux mothers suffered higher levels of mortality than the control group of children of Cherokee mothers. The Blackfoot and Seminole also experienced significantly higher child mortality than the Cherokee, while the Kiowa, Navajo, Paiute, and Pima experienced lower child mortality. While it is likely that the Navajo enjoyed relatively low mortality, the results are so extreme as to suggest a probable quality problem with the data. As noted above, Johansson and Preston contend that Navajos may have underreported infant deaths for cultural reasons (1978). The results for the Kiowa—which are based on the reported number of children ever born and children surviving by just 32 Kiowa women—also appear unrealistic. If these data are unreliable, it is possible that the inclusion of Navajos and Kiowas in the model biases the overall results. Model 2 in Table 5, however, which excludes the Navajo and Kiowa populations, returns similar coefficients for each parameter. Finally, the model suggests that Indian households receiving allotments experienced significantly higher child mortality. Children of parents receiving an allotment suffered child mortality over 20 percent higher than other children, all else being equal. Thus, in addition to having a significant economic and cultural cost, allotment had a significant demographic cost as well.

Unfortunately, the results in Table 5 are biased to some extent by a lag between the measurement of the dependent and independent variables. Although child mortality occurs in the years preceding the 1900 census—centered in Model 1 in 1889—most of the independent variables are measured only at the time of the census (the exception is the allotment variable, which provides the year citizenship was awarded). Although some of the variables in the model are time invariant (e.g., tribal group, percentage of white blood), others may have changed (e.g., labor force participation, literacy). Model 3 attempts to reduce this probable bias by restricting the universe to women with marital durations of less than 15 years. Mortality is therefore estimated to center around a reference year of 1894.2, approximately six years preceding the census. Although based on a much smaller number of cases, the results of Model 3 are similar to Model 2.

So what was the net impact of assimilation and federal assimilationist policies on American Indian child mortality? Perhaps the most intuitive way to evaluate their impact is to use the model to predict the result of a 45.6 percentage-point increase in the share of Indian mothers able to speak English (the mean value for mothers in the model) and a 14.8 percentage-point increase in the share of Indians receiving citizenship by allotment. Combined, the results (using Model 1 in Table 5) suggest a very modest 0.06 decrease in the child mortality index, equivalent to a decline in mortality of approximately 4 percent. We therefore conclude that as of 1900, the government campaign to assimilate Indians had not resulted in a significant decline in Indian mortality. Assimilation policies, however, continued for another 33 years. Though unlikely, it is possible that results from the 1910 Indian IPUMS sample—due for public release in 2006—will suggest that assimilation policies were more beneficial in the period after 1900.
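The arithmetic behind this back-of-the-envelope prediction is straightforward; the sketch below reproduces it with stand-in coefficients, since the Table 5 estimates themselves are not reproduced in the text.

```python
# Back-of-the-envelope policy-impact calculation described in the text.
# The two coefficients and the baseline index are hypothetical stand-ins for
# the Model 1 estimates in Table 5; only the 45.6 and 14.8 percent shares
# come from the text.
coef_english = -0.20        # hypothetical: English-speaking mother -> lower index
coef_allotment = 0.15       # hypothetical: citizenship by allotment -> higher index

delta_index = coef_english * 0.456 + coef_allotment * 0.148
baseline_index = 1.6        # hypothetical American Indian index value
pct_change = 100 * delta_index / baseline_index

print(f"Predicted change in the child mortality index: {delta_index:+.3f}")
print(f"Approximate percent change in child mortality: {pct_change:+.1f}%")
```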

In addition to the ability to speak English and the achievement of citizenship by allotment, the increasing rate of Indian intermarriage with the white and black populations can be seen as an indirect result of assimilation. As shown in Table 5 above, children of Indians who intermarried with whites enjoyed significantly lower mortality. Unions between “full-blood” Indians and “mixed-blood” Indians or whites also produced more children (U.S. Census Bureau, 1915). Although intermarriage is clearly an index of de facto assimilation, it was not in itself encouraged by federal policy. In fact, cross-tabulating the percentage of the population recorded as “full blood” Indians by age suggests that intermarriage between the white and Indian populations had been increasing steadily throughout the nineteenth century, well before the assimilationist campaigns of the late nineteenth century. While we therefore do not attribute its impact to federal policy, we know that increasing intermarriage led to significantly lower child mortality. A more generous definition of what constitutes the results of federal assimilation policy, one that included the impact of intermarriage, would suggest a more substantial, though still modest, 13 percent reduction in childhood mortality.

Notes

  • [1]

    A version of this paper was first presented at the IUSSP conference on Vulnerable Populations in Paris, France on 16 July 2005. We would like to thank Patrice Bourdelais for organizing the conference and participants for helpful comments. The paper also benefited from suggestions from Nancy Shoemaker, Melissa L. Meyer, Chad Ronnander, John W. W. Mann, and David S. Jones. Other nineteenth-century U.S. populations deemed “vulnerable” and deserving special attention from the federal government included widows and orphans of Civil War soldiers (Skocpol, 1995) and recently freed slaves (Cimbala and Miller, 1999).

  • [2]

    Recent scholarship has been critical of reformers’ stated concerns for native suffering. The campaign to assimilate Indians was consistent with contemporary racial attitudes and ultimately served the state’s need to manage its colonized peoples while simultaneously freeing Indian land for white development (Hoxie, 1984).

  • [3]

    The Bureau of Indian Affairs collected various data on the Indian population by reservation or agency in the nineteenth century, but the coverage and quality of these data varies enormously (Jones, 2004). A sample of the 1910 Census of American Indians is currently under construction at the Minnesota Population Center.

  • [4]

    Sheila Johansson and Samuel Preston (1978) suggest that a deep cultural reluctance to speak of the dead may have biased Navajo responses to the children ever born and children surviving questions. See also Clyde Kluckhohn, who cites a “peculiarly morbid Navajo fear of the dead” (1944, 242) and Gary Witherspoon, who contends that while the Navajo did not fear the experience of death, contact with the dead was to be avoided “in order to prevent unnatural illness and premature death” (1983, 571). The Kiowa results are derived from a small sample—32 mothers of 107 children—and appear unrealistic. Although the results for the Navajo also appear unrealistic, childhood mortality rates were no doubt low. The Navajo experienced a period of relative prosperity in the late nineteenth and early twentieth centuries, a low incidence of tuberculosis, and a positive rate of population growth from as early as 1868.
