The dawn of the twentieth century found Native North Americans on the verge of extinction, the population reduced to fewer than a quarter of a million in the U.S. portion of the continent. What remained of the Natives' land base was rapidly dwindling under the impact of the General Allotment Act, their jurisdiction was usurped by the Major Crimes Act, their spiritual practices largely prohibited by federal regulations, and their children increasingly removed from their families, communities, and societies for placement in government-run residential schools. There they were systematically stripped of their cultural identity and indoctrinated to view the world—and themselves—in terms preferred by the Euro-Americans who had subjugated their peoples.
All of this occurred as part of an officially proclaimed policy of "assimilation," which took as its goal the elimination of all American Indians, culturally recognizable as such, by some point in the late 1930s. In his first annual message (1901), President Theodore Roosevelt, to the enthusiastic applause of his Indian Commissioner, Francis Leupp, touted assimilationist initiatives, allotment and compulsory "education" in particular, as "a mighty pulverizing engine to break up the tribal mass" (Dippie, 1982, 244; Leupp, 1910, 79–95). Small wonder that the literature of the period was suffused with references to American Indians as a "vanishing race."
Two years after Roosevelt's message, the U.S. Supreme Court, by declaring federal authority over Indians to be "plenary in nature"—i.e., full, complete, and judicially unchallengeable—in its 1903 Lone Wolf v. Hitchcock opinion, affirmed the government's prerogative to "pulverize" native cultures in virtually any manner it saw fit. At a stroke, the Lone Wolf opinion formalized the status of indigenous peoples in the United States as essentially interchangeable with that imposed on the populations of Hawaii, Puerto Rico, Guam, the Philippines, and other newly acquired U.S. colonies abroad.
Thus licensed by the high court, both Congress and the officials charged with implementing whatever legislation it enacted displayed less concern than ever for Indian rights. On the reservations, the already "near-dictatorial powers" delegated by the secretary of interior to his agents in the field (Deloria and Lytle, 1983, 10) were substantially reinforced. Under such conditions, the crushing weight previously placed on Native cultures increased dramatically. While there are any number of additional lenses through which the effects can be assessed, the following focal points seem especially useful.
Loss of Land through the "Alchemy of Race"
Between 1887, when the General Allotment Act (25 U.S.C.A. § 331) was implemented, and 1934, when it was finally rescinded, the aggregate landholdings of American Indians were diminished from some 138 million acres to barely 52 million (McDonnell, 1991, 121). Of the residue, "nearly 20 million [acres] were desert or semiarid and virtually useless for any kind of . . . farming ventures" (Deloria and Lytle, 1983, 10). Of the 86 million acres lost, 60 million had been declared "surplus" under a provision of the Act by which the government had empowered itself to take such action, once every Indian on any given reservation had received his or her individual parcel of land (McDonnell, 1991, 2, 121; Washburn, 1986, 68–69). Originally, the parcels allotted averaged 160 acres apiece, though the size was reduced to 80 acres in some locales by amendatory legislation passed by Congress in 1891 (26 Stat. 794); each parcel was to be held in trust by the government for twenty-five years after allotment.
A key question with regard to the whole process is how the government determined who qualified as an "Indian" for purposes of allotment, thereby becoming "eligible" to receive a land parcel. On this, the Act itself is mute, stating only that the policy would be carried out by "special agents appointed by the President for such purpose, and the agents in charge of the respective reservations on which the allotments are directed to be made, under such rules and regulations as the Secretary of Interior may from time to time prescribe" (Washburn, 1986, 69–70). Nor, to date, has any analyst adequately addressed the issue, most either leaving it unmentioned or expressly declining to engage it: "[w]hat it means to be an 'Indian,' legally and culturally, is and has always been a complicated question, but is a question I do not address here" (Banner, 1981, 8).
Plainly, the matter must be addressed, since it has a direct bearing on the number of parcels allotted and therefore on the quantity of "surplus" acreage remaining for non-Indian acquisition once the allotment process was declared to be complete on each reservation. In this regard, the fewer Indians there were, the better; such, unquestionably, was the perspective of many—perhaps most—of those, in government and out, who supported the Act in 1887. Indeed, a systematic reduction of the Native land base appealed to settlers, railroad magnates, merchants, populists, southerners, apostles of western expansion, and even the Indians' new "friends." More pointedly:
The most powerful business interests in the West were becoming convinced that they had a direct interest in the expansion of agricultural production. As a result they were increasingly interested in both gaining entry to and reducing the size of tribal holdings (Hoxie, 1984, 46).
Such desires coincided quite neatly with the fact that the sponsor of the Allotment Act, Massachusetts Senator Henry L. Dawes, was heavily influenced in his views on Indians by John Wesley Powell, founding director of the U.S. Bureau of Ethnology, as well as the ethnologist Alice Fletcher. Dawes, together with most of his "loyalists" in the Senate, also belonged to the so-called Friends of the Indian, a Christian reform group subscribing to the social evolutionary theories set forth by the anthropologist Lewis Henry Morgan and embraced by both Powell and Fletcher, as well as such prominent ethnographers as James Mooney and George Bird Grinnell.
Although it has been argued that, as a social evolutionist, Morgan "explicitly rejected racial classifications" (Hoxie, 1984, 116), quite the opposite is true. An "emphasis on blood and its use as a vehicle of inheritance of [cultural] traits . . . continued to be an important theme in Morgan's . . . writings," even in such late works as Systems of Consanguinity and Affinity of the Human Family (1871) and Ancient Society, or Researches in the Lines of Human Progress from Savagery through Barbarism to Civilization (1877). In substance, "Morgan believed that culture was hereditary" and therefore that the best—or at least most humane—"solution to the Indian problem" would be to simply "breed" the race out of existence (Bieder, 1986, 223, 226, 233; Hilden, 1995, 149–150).
Indeed, Morgan held that a dash of Indian "blood" would be of benefit to those he described as "Anglo-Saxons," serving to "improve and toughen our race" (Morgan, 1959, 55). The commonality of such thinking with that of the eugenics movement, then gathering momentum in the United States and destined to reach its ugly culmination in Nazi Germany during the 1930s and 1940s, is obvious. Although Powell refined Morgan's theory by adding a heavy dose of environmental conditioning, it is instructive that he "considered Ancient Society so valuable that, as director of the American Bureau of Ethnology, he made it required reading for fieldworkers of the bureau" (Bieder, 1986, 243).
Through selective breeding, Morgan thought, Indians could be absorbed into the white population with little or no negative effect [upon the latter]. Morgan concluded from his observations that the half blood was inferior to the pureblood Indian both physically and mentally, "but the second cross, giving three-quarters Indian, is an advance upon the native; and giving three fourths white is a still greater advance" resulting in near equality with the white ancestor. "With the white carried still further, full equality is reached, tending to show that Indian blood can be taken up without physical or intellectual detriment" (Bieder, 1986, 231; Morgan, 1871, 207).
While it may be that "the impact of anthropologists on Indian policy making was cumulative" rather than direct, they nonetheless "set the terms for informed discussions of Indian affairs in the 1880s" (Dippie, 1982, 167; Hoxie, 1984, 28, 23).
What is most important in the connection at hand is that Morgan, Powell, and their colleagues advocated an Indian policy "built on the science of ethnology" (Bieder, 1986, 241). It follows that they adhered to an ethnological understanding of who should be classified as Indians and that their views were shared both by the legislators who formulated federal Indian policy and, in some ways more importantly, by the bureaucrats who implemented it.
The implications are obvious. As Felix Cohen observed in his magisterial Handbook of Federal Indian Law, "If a person is three-fourths Caucasian and one-fourth Indian [by 'blood'], that person would not ordinarily be considered an Indian for ethnological purposes" (Cohen, 1942, 19). Although Cohen goes on to explain that "[r]acial composition is not always dispositive in determining who are Indians for purposes of Indian law" and that a number of nonethnological legal definitions of "Indianness" have in fact been effected over the years, it will be recalled that no such definition was advanced in the Dawes Act, as the General Allotment Act is often called.
It is well-established that traditional Native North American methods of determining group membership were usually kinship-based and quite inclusive; "naturalization" by marriage, adoption, and other such means was rather common, irrespective of racial pedigree. In fact, as the noted Santee author and activist Charles Eastman observed in 1919, since no concept of "race" was present in any American Indian tradition, Indians were traditionally "color blind," exhibiting "no racial prejudices" at all (Hertzberg, 1971, 186). It follows, quite apart from a considerable "cross-pollination" between indigenous peoples themselves over untold generations, that both whites and African Americans had been steadily incorporated into many Native societies for two centuries or more by the time allotment became an issue.
Put bluntly, Indians were given to understand that "Congress could make or unmake an Indian, regardless of genealogy, ethnological data, Treaty commitments, or tribal preference. So could an employee of the Indian Bureau, acting under interpretation of federal law or the directive of an administrative superior" (Unrau, 1989, 3). It was abundantly obvious, however, that both Congress and the federal executive would rely on ethnological concepts for this purpose, as they had since the 1817 Treaty with the Wyandots (9 Stat. 904), the eighth article of which specified that persons "connected with said Indians, by blood" would be treated differently than "the Indians" themselves. Variations on the theme had been reiterated in fifty-three treaties before treaty making with Indians was ended through an 1871 initiative spearheaded by Dawes (Churchill, 2003, 212; Hoxie, 1984, 32).
The Curtis and Burke Acts
Ultimately, it was not until passage of the Curtis Act (30 Stat. 495–519) in 1898 that the entitlement of "mixed-bloods"—as opposed to "half-breeds"—to receive allotments was legally acknowledged. Here, the focus was on the "Cherokees, Creeks, Choctaws, Chickasaws, Seminoles, and Osage, Miamies [sic] and Peorias, and Sacs and Foxes, in the Indian Territory" of Oklahoma (Washburn, 1986, 72), peoples who had been specifically exempted from the Dawes Act itself because they consisted in large part of persons of less than one-half degree of Indian blood and there was an at least tacit understanding that they would be subject to subsequent legislation (Hoxie, 1984, 154; Unrau, 1989, 22).1
Allotment of mixed-bloods under the Curtis Act was, however, attended by a quid pro quo amounting to the dissolution of the so-called Indian Territory, as well as the indigenous nations situated therein. Hence it was strenuously resisted by traditionalist "full-bloods," especially the Keetoowah NightHawks led by RedBird Smith among the Cherokees and Chitto Harjo's Crazy Snakes among the Creeks, exacerbating divisions within the native polities affected that continue to this day. Nonetheless, the job was completed by 1907, at which point the supposedly "permanent" Indian Territory became the state of Oklahoma.
This proved such a boon to white homesteaders and more especially to a range of corporate interests that in May 1906, even before Oklahoma statehood was effected, the Dawes Act itself was amended in such a way as to remove federal trust protection from many of the allotments already made on reservations outside Indian Territory. Championed by South Dakota Representative Charles Burke, the amendment (34 Stat. 182)—usually referred to as the Burke Act, in honor of its sponsor—"authorized the Secretary of Interior to issue a fee patent to an allottee at any time, upon determination that the individual was 'competent and capable of managing his or her affairs.' Upon issuance of one of these premature patents, the land was [removed from trust and thereby rendered] expressly subject to alienation, encumbrance, and taxation," while U.S. citizenship was conferred by the same transaction upon the Indian involved (Deloria and Lytle, 1983, 10).
Though Federal jurisdiction over the Indians and their [individually-allotted] lands was reserved, the destruction of tribal governments and the aggressive actions of white Oklahomans resulted in the passage of an act of May 27, 1908 [35 Stat. 312–316], repealing the restrictions on the sale of classes of land hitherto protected by the Federal Indian relationship, and imposing taxes on such lands (Washburn, 1971, 136).
A year later, Congress went further still, amending the Burke Act with a statute (34 Stat. 1018) granting Indian Commissioner Leupp the power to sell allotments belonging to "noncompetent" Indians, and in 1908 he was authorized through another pair of amendatory statutes (34 Stat. 1015–34; 35 Stat. 70) to begin the long-term leasing of such allotments as might still be nominally "owned" by allottees. In any event, of the 2,744 premature fee patents issued between 1907 and 1910, the land covered by some two-thirds had passed from Indian ownership; by 1917, the number of such patents had climbed to more than 8,000, with a loss rate of roughly 90 percent (McDonnell, 1991, 89–90, 100, 110).
Despite these devastating statistics, newly installed Indian Commissioner Cato Sells announced in 1917 that federal policy would be "liberalized" by issuing fee patents to "all able-bodied adult Indians of less than one-half Indian blood," while relaxing still further the manner in which fee-patented land could be acquired by non-Indians, even when such land was situated within ostensible reservation boundaries (Commissioner of Indian Affairs, 1975, 214). In March 1919, Sells "liberalized" the rules still further, including "half-breeds" themselves among those subject to receiving "forced-fee" patents to their allotments (McDonnell, 1991, 107). The racial bar was thus raised: while land parcels were typically allotted only to those of at least "one-half degree of Indian blood" under the provisions of the 1887 Act, by 1920 a blood quantum of more than one-half was required for an Indian to stand a reasonable prospect of retaining his or her property (Hoxie, 1984, 182).
Although the latter group as a whole was legally defined as being "incompetent" to manage its own affairs, individuals within it could nonetheless be declared competent by the Indian Commissioner for purposes of receiving a fee patent. Commissioner Sells therefore "liberalized" the procedures employed by the so-called competency commissions created by his predecessor, Robert G. Valentine, to make case-by-case assessments (McDonnell, 1991, 90, 94, 98–102). Using Sells' tidy race-based formula, the commissions issued 17,176 fee patents from early 1917—about 10,000 of them on a forced-fee basis, over the objections of the Indian owners—through the end of 1920, more than twice the number issued over the preceding decade (Hoxie, 1984, 182; McDonnell, 1991, 110, 116).
More than a million acres were involved, and everywhere the results were the same (Hoxie, 1984, 183). On the Crow Reservation in Montana, as many as 95 percent of all patentees had lost or mortgaged their land by the end of the decade, at interest rates running as high as 12 percent (McDonnell, 1991, 106–107). At Fort Peck, the figure was 90 percent; at Flathead, 75 percent; at Winnebago and Blackfeet, much the same. And on the Sioux complex of reservations in the Dakotas, 75 percent of the land subject to fee patents was lost at Cheyenne River, and even more on Standing Rock and Rosebud (McDonnell, 1991, 106–107, 114).
While Pine Ridge may in many respects be emblematic of the whole, undoubtedly the most egregiously racist of all the various competency proceedings occurred on the White Earth Reservation in Minnesota, following a 1914 Supreme Court holding that the issuance of fee patents and consequent sale of land parcels allotted to "full bloods" had been fraudulent and that land thus lost to allottees would have to be restored (U.S. v. Nichols-Chisolm Lumber).2 This raised the questions of who the full-bloods were and whether/how persons claiming such status could "prove" their status. The quandary was "resolved" when assistance was offered by "Dr. Ales Hrdlicka, director of anthropology at the Smithsonian Institution, and Dr. Albert E. Jenks, an anthropologist at the University of Minnesota, [who, together] claimed to have devised certain scientific tests capable of distinguishing between mixed-bloods and full-bloods" (Meyer, 1994, 168).
[On the] Pine Ridge reservation . . . most of the Natives were full bloods, unable to read, write, or speak English, who could in no way be classified as competent. Yet they received fee patents and soon sold their land. The proceeds often went for provisions and "worthless trinkets." Many patentees got a loan or mortgage on their land, and without any business experience or understanding of the need to pay taxes and interest on loans, they were often swindled out of it when white lenders foreclosed (McDonnell, 1991, 106).
"Pioneer[s in] the field of eugenics" and believing they could make a "determination of blood status on distinguishing physical characteristics," Hrdlicka and Jenks, assisted by Hal Downey of the University of Minnesota's Department of Animal Biology, set to work on the "White Earth Case" in 1916 (Meyer, 1994, 168–169). In short order, they commenced to examine their subjects' physiognomies using methods virtually interchangeable with those employed by German "racial hygienists" only a few years later. Diameters of hairs were measured—ten per person were required to establish a "scientific standard"—as were foot sizes, nose and cranial proportions. Skin was scratched, blood was typed, ear wax texture was assessed, urine was analyzed, and skin tone and amounts of body hair were catalogued.
All told, Hrdlicka and his colleagues examined 696 of about 800 Indians claiming to be full-bloods, verifying the "pedigree" of only 126. Although subsequent analysis revealed that the experts' application of the latest techniques in "racial science" had led them to attribute "full-blood" children to "mixed-blood" parents and to identify siblings born of exactly the same parents as being of different "racial compositions," the results were employed not only in "settling" the land fraud issue but in compiling White Earth's so-called Blood Roll, approved by a federal district court in 1920 (Meyer, 1994, 170–171). By the latter year, roughly 90 percent of all White Earth patentees had been dispossessed (Banner, 1981, 284; McDonnell, 1991, 106).
Somewhat ironically, Charles Burke, who was appointed Indian Commissioner in 1921, finally put a stop to Sells' wholesale rush to separate American Indians from what little remained of their 1887 estate, in no small part because those rendered landless, destitute, and for the most part unemployable had been applying for welfare assistance at a dramatically increasing rate. The toll taken on Indian land was by then staggering, however. In all, the issuance of fee patents under the provisions of the Curtis and Burke Acts had resulted in the loss of some 23 million acres of the land remaining to Native people after the reservation "surpluses" had been stripped away (Banner, 1981, 282–283; McDonnell, 1991, 121). Additionally, Commissioners Leupp and Sells in particular had forced the sale of some 3.4 million acres of land allotted to "noncompetent" Indians. In all, some 100,000 Native people had been left landless by the time Burke took office (McDonnell, 1991, 121).
The competency commissions were abolished, along with the policy of issuing forced-fee patents to all allottees evidencing "one-half or less Indian blood," and Burke "became the first commissioner to cancel a fee patent" (McDonnell, 1991, 111, 112). In 1927, after he had facilitated a judicial finding that fee patents "issued during the trust period without the application or consent of the allottee" were illegal (U.S. v. Benewah County), he was instrumental in convincing Congress to enact a statute (45 Stat. 1247) authorizing cancellation of all forced-fee patents, "provided the Indian owner had not sold or mortgaged any part of the patent" (McDonnell, 1991, 117).
Such gestures were mainly for show, of course: under the 1927 statute, in combination with a 1931 follow-up (46 Stat. 1205), passed after Burke left office and authorizing the cancellation of patents "on any portion of an Indian's land that had not been sold and was not presently encumbered by mortgage [emphasis added]," only about 470 forced-fee patents were ever actually cancelled (McDonnell, 1991, 117–118). The main effect of Burke's posture was thus not to restore land to Indians from whom it had been wrongfully taken, but rather to radically scale back the rate of loss. By 1930, when Charles Rhoads replaced Burke as commissioner of Indian Affairs, the number of fee patents issued for the year had dropped to 113, and in 1933 Rhoads' successor, John Collier, effectively halted their issuance altogether (McDonnell, 1991, 120).
Of the allottees who retained their property by that point—a total of 246,579 people, holding 40,848,172 acres on a hundred reservations (Kickingbird and Ducheneaux, 1973, 23; McDonnell, 1991, 121)—very few enjoyed the benefit thereof. "World War I [had afforded] the Indian Office an excellent excuse to expand its leasing policy" established by Commissioner Leupp under the earlier-mentioned 1908 amendments to the Burke Act, in the name of the war effort (McDonnell, 1991, 47).
Actually, the policy was expanded considerably after the war, largely on the basis of a 1919 statute (39 Stat. 128) allowing both allotted and unallotted irrigable reservation lands to be leased, essentially at the discretion of the Indian commissioner, for as long as ten years. Thus empowered, in 1920 Commissioner Sells approved approximately 40,000 leases, covering some 4.5 million acres of the best remaining reservation land (McDonnell, 1991, 47–48). This entrenched a situation, created in large part by chronic undercapitalization, wherein fewer than 20 percent of the Indians still owning allotments on most reservations were able to use their land for agricultural or grazing purposes (Berthrong, 35; McDonnell, 1991, 46, 64–65, 123). Further, since the leases were steeply discounted—annual rates ran as low as 8 cents per acre—and the Bureau of Indian Affairs (BIA) made little effort to collect even that pittance, many leaseholders never bothered to pay at all; few, if any, allottees could live on their "rental income." Hence, many of them ended up working as hired hands on their own property simply to survive (McDonnell, 1991, 60–70; Moore, 1996, 131–136).
During the war the Indian Office leased huge tracts of land to individuals and corporations . . . without the consent of the Indians and then closed its eyes as lessors violated the terms of those leases. . . . The push for leasing was an important part of the general push by whites for control of Indian land during the war. In the name of wartime emergency the Indian Office approved leases to large cattle companies and sugar beet companies that violated Indian rights. Moreover, the leases, often forced on Indians as essential war measures, were left in place after the war ended (McDonnell, 1991, 62, 47).
Commissioner Burke not only did nothing to halt Sells' leasing policy—of which he heartily approved—he streamlined it in 1921 by authorizing the superintendent on each reservation to approve farming and grazing leases without referral to Washington. By 1925, the fourth year of Burke's tenure, the number of such leases was slightly higher than the benchmark set by Sells in 1920, although the total allotment acreage involved had declined to "only" 4 million (McDonnell, 1991, 48–50). As the extent of mineral deposits on many reservations became increasingly apparent over the course of the decade, moreover, the leasing of huge tracts for "resource development activities"—i.e., mineral extraction—became increasingly frequent (Ambler, 1990, 44–46; Hoxie, 1984, 186–187).
Commissioner Burke's leasing policy was further reinforced by passage of the General Mineral Lands Leasing Act of 1920 (41 Stat. 437), the Indian Oil and Gas Leasing Act of 1924 (43 Stat. 244), and a 1927 statute (44 Stat. 1347) specifying that reservations or portions of reservations established by executive order rather than by treaty should be handled like any other "public lands" for purposes of resource development (Ambler, 1990, 39–41). Plainly, there was a campaign afoot to "integrate native resources into the American economy" (Hoxie, 1984, 187) and to do so in a manner both conceptually and structurally indistinguishable from the forms of internal colonialism imposed on indigenous peoples by European immigrant states the world over (Nietschmann, 1994, passim; Thomas, 1966–1967, passim). Just as plainly, this implied a fundamental change in federal Indian policy.
Indian-owned mineral lands were developed without the Indians' permission and with little economic benefit to them. . . . The formal policy of leasing mineral lands began in 1891 when Congress authorized mining leases on treaty reservations for up to ten years [26 Stat. 794 § 3]. As the demand for oil and other resources grew in the early twentieth century and as new mineral deposits were uncovered in the Southwest, the call for more liberal leasing legislation grew louder, [resulting in passage of] a law on June 30, 1919 [41 Stat. 3] authorizing the secretary of interior to lease reservation land in Arizona, California, Idaho, Montana, Nevada, New Mexico, Oregon, Washington, and Wyoming for the mining of metalliferous minerals [for terms of up to twenty years] (Ambler, 1990, 40; Hoxie, 1984, 186; McDonnell, 1991, 50–51).
For the mineral wealth of Native North America to be exploited in the most efficient manner, so the thinking went, it had to be administered in trust, subject to the centralized authority of federal economic planners. Since what remained of reservation lands already occupied the necessary trust status—a matter confirmed in the Lone Wolf opinion as deriving from the government's "plenary power" over Indians (Ambler, 1990, 44–48; Clark, 1999, 97; Coulter and Tullberg, 1984, 198–203)—it was suddenly imperative that they not be dissolved, at least not until the extent of their mineral assets could be fully assessed.
From this standpoint, Commissioner Burke's policy of abruptly curtailing his predecessors' enthusiastic liquidation of Indian landholdings, while placing a steadily increasing emphasis on large-scale mineral leasing, makes perfect sense. By 1928, the government's longstanding goal of forcing Indians and their reservations to "vanish" had been quietly abandoned in favor of "reorganizing" them for a sustained existence as little more than "domestic" resource feeders contributing to the profitability of U.S. state and corporate enterprise.
While it is true that the vast erosion of the American Indian estate precipitated by allotment was effectively halted by 1930 and was to some extent reversed over the next several decades, this in itself did not imply a corresponding restoration of use and benefit of the remainder to Native people. To the contrary, one of the legacies of the allotment era is that the massive sell-off of forced-fee patents left many reservations so checkerboarded—honeycombed, really—with non-Indian property holdings that it has proven all but impossible to consolidate the land necessary to attain anything resembling economic self-sufficiency. The problem is compounded by the fact that, on some reservations, all irrigable land was acquired by whites before the process was halted.
Serious jurisdictional problems also attend non-Indian ownership of alienated fee patent allotments within reservation boundaries.
It is exceedingly difficult to create . . . grazing or farming units on allotted reservations because quite often there are not enough allotments contiguous to one another to make up an economically feasible block of land for [such] use [and the] tribe has to overcome previous allocations by the courts in order to begin new development using their [reserved] water rights (Deloria and Lytle, 1983, 255).
Even where allotments remain in trust status, the checkerboard effect has been exacerbated by the increasingly fractionated ownership of individual parcels, otherwise known as the heirship problem. The problem derives from the fact that the whole thrust of allotment was, from the outset, to strip away as much reservation land as possible, as rapidly as possible, under the premise that Indians, as such, would soon be "extinct." With no consideration given to the prospect that the Native population might eventually rebound, no land was reserved to accommodate future increases in the number of Indians. Hence, the descendants of the original allottees were consigned to receive "equitable shares" in their ancestors' land parcels, a practice conforming to the requirements of Anglo-American rather than indigenous law (Kickingbird and Ducheneaux, 1973, 24).
Civil and criminal jurisdiction depends upon the existence of trust lands. Whenever an allotment goes out of trust, the tribe loses jurisdiction over that area and must rely on negotiated agreements with state and county governments in order [to regain it]. Zoning for economic development and housing and enforcement of hunting and fishing codes is exceedingly difficult when the area under consideration is not wholly trust land (Deloria and Lytle, 1983, 255–256).
The upshot is that it has become necessary to employ "complicated computations involving common denominators as large as 54 trillion" to decipher the individual interests in a given allotment. With hundreds of heirs now holding interest in a single piece of property, it is all but impossible to obtain agreement among the owners concerning its use or disposition. By 1970, nearly six million of the roughly 10 million remaining acres of allotted lands were tied up in this fashion, with at least one-half of a million acres standing idle, and another 1.5 million leased to whites (Deloria and Lytle, 1983, 255–256; Kickingbird and Ducheneaux, 1973, 24–30; Washburn, 1971, 150–152).
Overall, consolidation of land on the reservations—that is, a reversal of the effects of allotment— remains "the major unsolved economic problem of Indian tribes" in the first years of the twenty-first century, just as it was at the beginning of the twentieth. "Until tribes are able to own their lands in one solid block," Vine Deloria, Jr., observed during the early 1980s, "they cannot reasonably make plans for use or development of their resources." Since then, however, "the Indian land situation [has grown] increasingly more serious, with no prospect of relief" (Deloria and Lytle, 1983, 255–256).
Both the magnitude and the urgency of this allotment-induced circumstance are readily apparent in the sheer depth of the poverty afflicting Indian Country today. The data are more reflective of conditions customarily associated with the Third World than with any area of a country boasting "the world's most developed economy." At the end of the twentieth century, American Indians comprised by far the poorest population category recorded in the U.S. Census, with per-capita income on some reservations averaging about $3,000 per year, unemployment rates running as high as 80 percent, and approximately the same percentage of houses classed as "substandard" or "uninhabitable" (Strickland, 1997, 52–53; Wilkins, 2002, 158–159; Wilkinson, 2005, 22, 348–349).
Indians also suffer infant mortality and die of malnutrition, exposure, and "accidents" at rates far beyond those evidenced by non-Indians. Reservation Indians have a life expectancy of barely fifty years, a noticeable increase from the forty-two years an Indian could expect to live during the early 1960s, perhaps, but still a lifespan one-third shorter on average than that enjoyed by the general population (Steiner, 1968, 197–200; Strickland, 1997, 53; Wilkinson, 2005, 22).
The Indian health level is [also] the lowest and the disease rate the highest of all population groups in the United States. The incidence of tuberculosis is over 400 percent higher than the national average. Similar statistics show that the incidence of strep infections is 1,000 percent higher, meningitis is 2,000 percent higher, and dysentery is 10,000 percent higher. Death rates from disease are shocking when Indian and non-Indian populations are compared. Influenza and pneumonia are 300 percent greater killers among Indians. Diseases such as hepatitis are at epidemic proportions. Diabetes is almost a plague. And the suicide rate for Indian youths ranges from 1,000 to 10,000 [percent] higher than for non-Indian youths; Indian suicide is epidemic (Strickland, 1997, 53).
It must be borne in mind that these conditions prevail despite the fact that, by 1990, the unavailability of land, in combination with the extremity of their peoples' destitution, had displaced 56 percent of all federally recognized Indians from their reservation homelands (Indian Health Service, 2006). Had they asserted their right to remain, conditions would presumably be worse still. Such data lend substance to Sartre's contention that colonialism is inherently genocidal (Sartre, 1968, 63; Sartre, 1964, 30–47).
Creation of the Internal Colonial System
Notwithstanding the foregoing realities, it is currently the official position of the U.S. government that American Indians exercise a form of "internal self-determination," established under the 1934 Indian Reorganization Act (or IRA, ch. 576, 48 Stat. 984) and subsequently reinforced through legislation such as the Indian Self-Determination and Education Assistance Act of 1975 (88 Stat. 2203), which not only fulfills their rights under international law but presents a model appropriate for emulation on a global basis (National Security Council, 430). Such claims bear scrutiny.
As stated in the United Nations' 1960 Declaration on the Granting of Independence to Colonial Countries and Peoples (U.N.G.A. Res. 1514 [XV]), and elsewhere, "All peoples have the right to self-determination; by virtue of that right, they freely determine their political status and freely pursue their economic, social, and cultural development." The 1966 International Covenant on Economic, Social and Cultural Rights (U.N.G.A. Res. 2200 [XXI]), as well as the International Covenant on Civil and Political Rights (U.N.G.A. Res. 2200 [XXI]), also set forth in 1966, not only reiterate the 1960 Declaration's legal definition, but offer further clarification.
Both of the 1966 covenants go on to require that "All States Parties to the Present Covenant, including those with responsibility for the administration of Non-Self-Governing and Trust Territories, shall promote the right of self-determination, and shall respect that right [as defined above], in conformity with the provisions of the Charter of the United Nations." The 1960 Declaration adds that the "pretext" of purported concern over the "inadequacy of political, economic, social and cultural development" is legally invalid as a basis for denying, delaying, or qualifying any people's right to self-determination (Weston, Falk, and D'Amato, 1990, 344, 371, 376).
All peoples may, for their own ends, freely dispose of their natural wealth and resources without prejudice to any obligations arising out of international economic co-operation, based upon the principle of mutual benefit, and international law. In no case may a people be deprived of its own means of subsistence.
The Indian Reorganization Act
The template on which the IRA appears to have been constructed was the Interior Department's creation of what it called the Navajo Grand Council in 1923. This was done after the previous council, organic to the Navajos themselves, declined to approve an exploratory lease of a 4,000-acre tract on the reservation by a subsidiary of the Standard Oil Corporation. Having unilaterally devised "a new form of government" for the Navajos, chartered it, vetted its members, and required that representatives of the Indian Bureau be present at all meetings, Indian Commissioner Charles Burke was able to pronounce the problem solved: The Grand Council simply delegated authority to sign oil leases on its behalf to a federal commissioner.
Also during 1923, Interior Secretary Hubert Work commissioned a National Advisory Committee, usually called the Committee of One Hundred because of the number of prominent business and church leaders involved, to reexamine the "Indian Question" and make policy recommendations. The committee's report, delivered in January 1924, urged that a high priority be placed on conferring U.S. citizenship on all Indians who had not been previously made citizens by other means, including those who had refused to accept it. The Indian Citizenship Act (8 U.S.C.A. § 1401 [a]) was effected shortly thereafter, entrenching a tension over the question of national allegiance among many native people that has persisted into the present.
Other items on the committee's agenda, overlapping as they did with those advanced in several ambitious studies completed by the Bureau of Indian Affairs itself over the next several years, would figure significantly in the formulation of the IRA. The best-known of the latter efforts, The Problem of Indian Administration—published in 1928 and usually referred to as The Meriam Report after its principal author, Lewis B. Meriam—catalogued the ugly panorama of material degradation to which Indians were being subjected and called for a range of policy reforms, especially with regard to the manner and environments in which native children were being "educated."
Many of the recommendations made by Meriam and his colleagues were ultimately included in the IRA. However, future Indian Commissioner John Collier, who would be the legislation's moving force, seems to have been influenced most powerfully by a proposal advanced before a Senate committee during hearings conducted in 1930. This involved a scheme to convert the Klamath Reservation in southern Oregon into a federally chartered Klamath Indian Corporation, in which tribal members would be "stockholders." The resulting enterprise, devoted mostly to timber harvesting, would be "governed" under federal supervision by what amounted to a corporate board elected by—and from among—the stockholders (Deloria and Lytle, 1983, 49–50, 144).
Although he helped scuttle the plan for technical reasons, it is instructive that Collier described the underlying idea as "the most important new step in Indian affairs since the general allotment act" and asked the senators gathered in consideration of it to "imagine the Menominees, and again the Chippewas of Minnesota, and again the Navajos [as] incorporated tribes." In Collier's view, the major difficulty with the concept—apart from "qualifying . . . tribal members to vote in corporate elections"—was deciding how "some Federal agency"—by which he obviously meant the BIA—might be restructured in such a way as to assure efficient "supervision" of as many as two hundred and fifty disparate entities of the sort proposed (Deloria and Lytle, 1983, 50–51; Taylor, 1980, 63–91).
While the IRA is routinely touted as having been a New Deal for Indians, ushering in a renewal of indigenous sovereignty, self-governance, and economic revitalization, even the brief summary of the ongoing effects of federal allotment policy offered above suggests that precisely the opposite has proven true. Examining certain of the Act's provisions in more detail will reveal why (Taylor, ix–xiii, 92–118, passim).
Illusions of Democracy
A standard myth is that the Native peoples who underwent federal reorganization did so voluntarily rather than under compulsion. As an "exercise in the transplantation of democracy," however, the IRA was in its very conception a travesty. Indeed, its first presumption was that the governments of every American Indian people would be reorganized on a common model involving the adoption of a constitution, bylaws, and electoral procedures, unless a given people explicitly refused to do so (and some Native nations did exercise this right). This, of course, placed the burden of action on those who sought to preserve or restore their own forms of governance, a polar reversal of standard democratic procedure (Deloria and Lytle, 1983, 141, 171).
Since the only form of refusal accepted as valid by federal authorities was a "majority vote" of "eligible voters," moreover, opponents of reorganization were placed in a contradictory and degrading Catch-22 position, having no viable means of expressing their opposition other than by participating in the very process they opposed—i.e., voting against it—an act many of the peoples considered antithetical to the traditionally consensus-based modes of governance they were seeking to preserve.
This was so because Collier, although certainly aware that abstention was the typically polite way of saying "no" in many native societies, opted for the most part simply to ignore the implications attending mass abstentions. A classic illustration is the referendum conducted at Hopi in 1936, when roughly 85 percent of the eligible voters, deeply committed to maintaining their time-honored Kikmongwe form of government, actively boycotted the referendum, only to have Collier falsify the tribal census as a means of casting the impression that reorganization had been decisively approved (Tullberg, 35–37). More egregious still was Collier's practice of counting abstentions as affirmative votes to foster such illusions.
Topping it all off was the fact that those deemed eligible to vote in the referenda were the persons and/or descendants of persons inscribed on tribal "base rolls" by federal authorities as a concomitant to determining allotment eligibility. With the parameters of the native polity thus preestablished in accordance with federal rather than native criteria, officials were able to utilize other techniques—imposing or removing residency requirements, for example—to manipulate the outcomes of various tribal referenda (Barsh, 1982, 45; Deloria and Lytle, 1983, 141, 164–165). The deck was thus stacked in every possible manner to ensure that Indians "voluntarily" reorganized. Under such circumstances, the mystery is not why 181 peoples ended up doing so, but that 77 managed to make their rejections stick (Deloria and Lytle, 1983, 172).
For seventeen tribes, comprising a total population of 5,334, this [practice] reversed an otherwise negative vote. That is, in each instance the actual vote cast indicated that the majority of those Indians who participated in the [referendum] had opted to reject the act, but when the votes of the Indians who did not participate were added in favor of adoption, the act was construed as having been accepted. On the Santa Ysabel Reservation in California, for instance, 43 Indians voted against the Indian Reorganization Act and only 9 voted to accept it. Still, the Santa Ysabel tribe came under the act because 62 eligible tribal members who did not vote were counted as being in favor of adoption. Hence, the final tabulation was viewed as 71 in favor of adoption, 43 opposed (Deloria and Lytle, 1983, 172).
On the Matter of "Self-Governance"
Claims that the IRA imbued native peoples with a genuine form of "democratic self-governance" usually begin with the argument that the governing bodies created under the Act were and are "constitutionally-based." Such notions are readily dispelled by the nature of the constitutions themselves, however. While the Act set forth the principle that each people was entitled to write its own constitution, each such document was subject to approval by the secretary of interior (although, as always, secretarial authority was delegated to the commissioner of Indian Affairs). Those, like the Yankton Sioux, who tried to exercise their prerogatives in this respect found themselves checkmated by an inability to secure approval.
This led to a series of "tribal constitutions," which were remarkably similar to one another and in many ways interchangeable for the simple reason that they were not written by the people to be governed under them, but instead "by attorneys within the Department of Interior" (Deloria and Lytle, 1983, 173). Quite predictably, the resulting boilerplate "foundational documents" empowered the governments based on them to do very little without approval of the Indian Commissioner (O'Brien, 1989, 83).
The question thus becomes, as Vine Deloria, Jr., once posed it, "If these powers could be delegated or withheld from tribal governments completely at the discretion of the secretary, what true authority was left to the tribe?" (Deloria and Lytle, 1983, 143). The answer, of course, is virtually none. Clear indication that the tribal governments created under the IRA were never really conceived as exercising actual governing authority will be found in Section 16 of the Act, which requires that each people adopting a constitution simultaneously adopt a set of bylaws. The phrase "constitution and bylaws" is associated more with corporate than with sovereign governmental entities (one will search in vain for the governmental bylaws of any United Nations member state).
Given this, it is unsurprising that the tribal governance provisions found in Section 16 of the IRA are followed in the very next section by provisions for chartering each native people adopting a constitution as "a defined class of Federal corporations, to wit, incorporated Indian tribes or communities" (Deloria and Lytle, 1983, 156). In at least one instance, the Interior Department held that a corporate charter might be considered "equivalent" to a tribal constitution, a tribal business committee the equivalent of a tribal government. In effect, then, the intended function of IRA "governments" was to serve essentially as corporate boards, the authority of which was directly subordinated to that of the Interior Department.
Bluntly put, the main purpose of the "reorganized" governments was to sign off on leases and other contracts, thereby signifying "tribal consent" to various "economic development" ventures established "in their behalf" by the federal government (Robbins, 1979, 132; Ortiz, 1979, 70).
A Question of Trust
As an enticement to convince Native peoples to reorganize themselves on the federally preferred corporate footing, the IRA provided that those who did would be able to draw on a revolving credit fund to underwrite the start-up of enterprises designed to revitalize tribal economies. An initial sum of $10 million was allocated to this purpose in the Act's Section 10, while a further $2 million per year was promised in Section 5 to assist reorganized peoples to reacquire the acreage necessary to consolidate their landholdings and to establish reservations for several peoples left landless by allotment, and yet another half million per year was designated to subsidize education, vocational training, and the like. Congress never delivered on these commitments, however.
By 1945, a full decade after the Act was implemented, only $5,245,000, just over half the promised up-front capitalization, had been paid into the credit fund (Kelly, 1976, 311). Thus starved for liquidity, most of the eagerly anticipated tribal enterprises foundered or never really got off the ground, and the Indians were left with little alternative but to watch as huge swaths of land they'd hoped to use for other purposes were leased to mining corporations and the like as a means of generating at least some income, thereby establishing certain of the more mineral-rich areas as veritable "resource colonies."
The House Subcommittee on Interior Appropriations, led by Congressman Jed Johnson of Oklahoma, provided only about one-quarter of the $12.5 million authorized by the IRA. In the following years, Johnson was instrumental in cutting the revolving credit fund to $2.5 million, the annual land purchase fund to $1 million, the funds allocated for tribal organization to $150,000, and education loans to $175,000 (Deloria and Lytle, 1983, 174).
This drift toward enforced dependency was sometimes augmented by large-scale federal impoundments of Indian livestock, devastating what remained of many indigenous subsistence economies. While such programs were ostensibly carried out as a means of curtailing environmental damage caused by overgrazing, the alternate use to which the land was put by authorities turned out in some cases—on the Navajo Reservation, for example—to be the vastly more destructive practice of strip-mining coal and other minerals.
The terms of the leases negotiated by federal officials to allow the extraction of reservation minerals by major U.S. corporations are telling. Pursuant to the acts of 1920, 1924, and 1927, as well as an Omnibus Tribal Lands Leasing Act passed in 1938 (52 Stat. 347), such contracts "ensured that [tribal] revenues generated from these nonrenewable resources [were] only a fraction of what it should have been," with royalty rates often pegged at less than 20 percent—sometimes as low as 2 percent—of market norms.
The already inordinate profitability of non-Indian mining operations in Indian Country accruing from such steeply discounted rates was, by the end of the 1960s, enhanced dramatically by the federal practice of releasing the corporations involved from meeting the occupational safety and environmental protection standards applicable in other locales, permitting often huge savings in overhead expenses.
As it turns out, Indians were never allowed to use their own money to better their circumstances. Instead, recent litigation has revealed that somewhere "between 300,000 and 500,000 Indians have been deprived of between ten and forty billion dollars as a result of over one hundred years of trust fund mismanagement by the federal government" [emphasis added] (Bowman, 2004, 543–544). Among other things, it has been shown that, beginning at least as early as 1887, the government has consistently "lost, dissipated, or converted to the United States' own use the money of [Indian] trust beneficiaries" and has "destroyed records bearing upon [its] breaches of trust" [emphasis added] (Bowman, 2004, n10; Cobell v. Babbitt, 1999; Cobell v. Norton, 2004; Cobell v. Kempthorne, 2006).
This concerns only the trust accounts assigned to individual Indians. If the accounts of the peoples ostensibly "compensated" for land losses since the early nineteenth century were subjected to the same scrutiny, along with lease and royalty payments accruing from the 1890s onward—and especially since passage of the IRA—the total would likely be far greater, well over $100 billion by some estimates (Heilprin, 2006; Schneider, passim).
The Continuation of Racial Alchemy
Perhaps the most insidious of all the IRA's lingering effects has been its entrenchment of the racial definition(s) of Indianness, first applied in a comprehensive fashion under provisions of the General Allotment Act in 1887 and thereafter solidified under the Curtis and Burke Acts. While the 1934 Act provides that all persons appearing on the federally created base roll of each Native people, as well as their descendants, "regardless of [their] degree of blood," could be entered on the membership roll of each people as it reorganized, the reality was that blood quantum criteria had been employed in establishing the base rolls themselves (Deloria and Lytle, 1983, 150–151).
The Interior Department attorneys who wrote the boilerplate constitutions, through which the ground rules of reorganization were spelled out to Native peoples, were, moreover, advised by a team of anthropologists steeped in ethnological methods. It is thus not especially mysterious why so many of the peoples on whom a constitution was bestowed discovered—often to their surprise—that their own traditions required that members meet Indian Commissioner Collier's preferred standard of at least one-quarter-degree Indian blood. Any change to this baseline racial requirement for tribal enrollment could be made only if approved by the Secretary of Interior.
Thus, a survey of enrollment criteria pertaining to 162 of 306 federally recognized tribes conducted during the late 1980s revealed that 131 required a blood quantum of one-quarter or more. Of these, one required that enrollees be of at least "three-eighths Indian blood," while seventeen would enroll no one of less than half-blood, even the children of a duly enrolled parent (Snipp, 1989, 362–365). A survey of 302 tribes, conducted during the mid-1990s, recorded 204 as asserting such requirements (Thornton, 1997, 37). In many cases, it was also required that the applicant's quantum accrue from the specific people with whom she or he sought to enroll (Snipp, 1989, 312; Thornton, 1987, 190–191), a stipulation known to produce bizarre results.
Were the situation not vexing enough, the racial definition of Indians advanced in the IRA has since multiplied into at least thirty-three, and perhaps as many as eighty, different—and often conflicting— definitions of Indianness in U.S. law, all of them formulated for the convenience of federal authorities rather than Indians (O'Brien, 1991, 1481).
[Consider the situation of a child] who is one-eighth Lower Brule Sioux, one-eighth Cheyenne-Arapaho, one-eighth Blackfoot, and one-eighth Turtle Mountain Chippewa. She is . . . one-half Indian. But each tribe of her ancestry requires its citizens to document a one-quarter blood degree from that tribe only. From the perspective of each of her tribes, therefore, this child is ineligible for citizenship; she is simply a non-Indian. . . . Indeed, even children of exclusively Indian ancestry can find themselves denied tribal citizenship due to similar circumstances (Garroutte, 2003, 19–20).
Since this welter of statutory definitions affects everything from eligibility for federal health and education services to the exercise of rights under statutes such as the Indian Child Welfare Act (92 Stat. 3069) and the American Indian Religious Freedom Act (92 Stat. 469), the matter is hardly insignificant. While tribes have for some time been technically free to abandon blood quantum requirements—and a minority of them have—the government's ongoing imposition of a "quarter-blood minimum" as the normative requirement for receipt of federal services presents an all but insurmountable barrier to their doing so. Since funding is allocated on the basis of a per-capita computation wherein only those meeting the quarter-blood standard are included, provision of medical and other services to tribal citizens failing to meet federal requirements can only be underwritten at the expense of those who do.
Ultimately, the present drift toward redefining Native identity exclusively in terms of enrollment in a federally sanctioned tribe reduces to little more than a reassertion—or continuation—of the allotment-era agenda wherein the smallest possible number of Indians would be recognized. In this regard, we would do well to recall that the goal of federal policy during that era was to make Indians, as such, ultimately "vanish" altogether. Here, the implications of the normative "quarter-blood standard" of Indianness now enforced by federal and tribal officials alike must be considered.
By the year 2080, should long-standing trends continue, "persons with [half] or more Indian blood quantums" are projected to comprise only 8 percent of the identified Native population—as compared to 87 percent in 1980—while one-third will fall somewhere between quarter- and half-blood. The remaining 59 percent will be of less than one-quarter Indian blood, while full-bloods will have all but disappeared (Snipp, 1989, 166–167; Thornton, 1987, 237).
Arguably, then, should appreciable sectors of Native North Americans continue to embrace the "alchemy of race and rights" concocted by federal authorities during the nineteenth century, they will have engaged, however unwittingly, in a form of "autogenocide." As Cherokee demographer Russell Thornton explained nearly twenty years ago, given the ever-increasing proportion of the Native population displaced from the reservations to urban locales:
Intermarriage will further reduce the relative numbers of American Indians by reducing the blood quantum of further generations. This [will] likely increase intermarriage rates, since there will be fewer potential American Indian mates. It may [thus] be that the demographic effects of less natural increase, more intermarriage, and less tribalism will ultimately eliminate American Indians as a distinct population, whereas 400 years of population decimation after European contact did not (Thornton, 1987, 239).
In effect, such trends project "a scenario in which tribes will find themselves redefined as technically 'extinct,' even when they continue to exist as functioning social, cultural, political, linguistic or residential groupings" (Garroutte, 2003, 58). Put another way, insistence on tribal enrollment as the sine qua non of Indianness, along with quarter-blood quantum as the normative requirement for a place on the rolls, leads unerringly to the prospect that "American Indians as Indians may eventually end, in the words of T. S. Eliot, 'not with a bang but a whimper' " (Thornton, 1987, 239).
1Although Euro-American racial vernacular was widely adopted by American Indians during the late nineteenth century, it is important to note that they generally employed it in a very different manner. The terms "full-blood" and "mixed-blood," for example, have tended to be used much more as cultural than as biological signifiers. Thus, a person of racially mixed ancestry is often referred to as a "full-blood" if she or he holds to a traditional outlook, while a person embracing Euro-American values is referred to as a "mixed-blood," even if of "pure" Indian lineage. Such usage continues at present, although more conventional—i.e., explicitly racial—definitions have attained a steadily increasing traction in native discourse since the 1930s, with the result that both sets of connotations are simultaneously at play when such terms are used by Indians (Fowler, 311–352; Harmon, 12–13; Meyer, 1994, 118–122; Pickering, 82–83; Sturm, 56–57, 72).
2The situation resulted not from the Burke Act, but from the so-called Clapp Act (U.S. Statutes at Large, Vol. 34 at 353, 1034), thus named for its sponsor, Minnesota Senator Moses Clapp, passed as a rider to the 1906 Indian Appropriations Act and applicable only to White Earth. Clapp's initiative prefigured Indian Commissioner Sells' 1917 racial definition of "competency" by declaring that all "mixed-bloods" on the reservation would henceforth be considered legally competent and therefore issued fee patents to their allotments, whether they wanted them or not. The only problem was that he neglected to explain what was meant by the term "mixed-blood." On judicial challenge, the federal district court ruled, rather arbitrarily, that "at least one-eighth degree white blood" was required. The ruling was appealed and overruled by the circuit court, which held that any admixture of white "blood" was sufficient. The latter view was upheld by the Supreme Court on June 8, 1914 (Meyer, 1994, 153, 167).