Better transportation and triage built on systems dating to World War I. By 1939, military surgeons recognized the importance of early management of battlefield trauma. Medics trained and equipped to control blood loss and administer analgesics at the initial site of injury were attached to individual combat units. Injectable morphine dispensed on the battlefield lessened both pain and shock during evacuation. Following the model developed by the Royal Army Medical Corps, transport from the field was organized in stages, with each stage assigned a specific range of duties.
The first stop after field stabilization was the regimental collecting station, where hemostatic bandages and splints could be applied. The collecting stations also had plasma or blood available to manage shock, and staff there could secure adequate ventilation, including tracheostomies if needed. From the collecting station, men could be moved to semimobile field hospitals equipped to perform emergency surgery. The field hospitals were the first stage fully staffed with physicians. From the field hospital, men could be moved to fixed-station or general hospitals, where more complex procedures (neurosurgery, chest surgery, orthopedic reconstructions, and the like) could be performed and where men expected to return to duty could convalesce. General and rehabilitation hospitals in the zone of the interior could provide major reconstruction (predominantly orthopedic and plastic surgical procedures) and long-term rehabilitation if necessary. Depending on the severity of the injury, a wounded man could exit the system and return to duty at any point in the chain.
Semimobile medical units were linear descendants of the French auto-chir, a World War I attempt to create motorized field hospitals. A high level of mobility was less important in that relatively stationary war, and development of the freely movable hospital only blossomed in World War II. These "auxiliary surgical teams" remained somewhat difficult to transport and were limited in the services they could provide, but they were an important interim step toward the mobile army surgical hospitals (MASH units) of Korea and Vietnam.
Aeromedical evacuation, although tried in a crude way in World War I, came into its own in World War II. The process largely involved the use of fixed-wing aircraft (for U.S. forces, mostly C-47s, C-54s, and C-54As) to move men from field or general hospitals to facilities in the zone of the interior. Medical air transport became especially sophisticated in the long distances of the Pacific war, where the U.S. Army Nurse Corps developed the expertise in managing patients during prolonged transit that engendered current civilian and military flight nurses.
The island war in the Pacific presented unique problems in medical evacuation: the distances were inordinately long, and there was almost never an accessible general hospital to augment basic field hospital care. Although hospital ships had been used since the mid-1800s and had reached a high degree of sophistication under the Japanese in their war with Russia, the U.S. Navy employed them to unprecedented advantage in World War II. At the beginning of the war, the navy had only two hospital ships (the Relief and the Solace), and only one of these was in the Pacific. During the war, the United States commissioned an additional eight hospital ships and developed an entire class of troop transports equipped to provide limited hospital services. The navy also deployed a series of adapted landing craft—LST(H)s (landing ships, tank [hospital])—manned with 4 surgeons and 27 corpsmen and capable of serving as a field hospital for up to 350 wounded. Hospital ships, required by the navy to be held well back from areas of direct combat, served essentially the same role as land-based general hospitals.
Although some new surgical techniques, particularly in vascular surgery, were developed during the war, the primary advances were in the early management and treatment of physiologic effects of trauma. As noted, medics made almost immediate hemostasis and pain management a standard. Understanding of the mechanisms and treatment of shock came early in the war. Shock is clinically characterized by a fall in blood pressure, a rise in the pulse, coolness and discoloration of poorly perfused extremities, and mental changes ranging from anxiety through confusion to coma. The syndrome's common denominator is failure of the heart and circulatory system to supply adequate blood to the body's organs. In the early years of the war, clinicians first realized that poor perfusion was the common factor in various types of shock. Blood loss, loss of body fluid such as that caused by weeping burn wounds, sepsis with its toxic bacteriologic by-products, and extreme cold can all cause the circulatory system to fail. The physiologic effects of that failure can (at least temporarily) be ameliorated by increasing the amount of fluid in the system.
Soviet scientists in the 1930s had shown that plasma—blood with the red cells removed—could be effectively used to treat shock. Plasma had two signal advantages: unlike whole blood, it was not type-specific, and it could be readily stored for long periods. Plasma could be started by medics at the front, and it was widely administered from the early days of the war.
Recognizing the need for blood and plasma, the British started a national blood-banking program early in the war, a collection and storage system the Americans later adopted, enlarged, and improved. As the war progressed, plasma's limitations as a replacement for lost blood became evident, and the use of whole blood to treat shock became more prevalent. Although civilian donors played a major role in supplying the blood banks, most donations came from combatants themselves, with medics providing a disproportionate share. In addition to whole blood, military surgeons had cardiac stimulants and vasoconstrictors, such as adrenaline and ephedrine, to augment perfusion.
Military medicine also saw significant advances in the management of infectious diseases during World War II. These improvements primarily involved the treatment and control of tropical diseases, control of diseases resulting from poor sanitation aggravated by dietary deficiency, and chemical treatment of infections.
The Pacific war forced Japanese and Allied soldiers to fight in areas where tropical infections, especially malaria, were endemic. In the latter part of 1942 and early 1943, American soldiers in the Solomons were hospitalized for malaria at a rate of 970 per 1,000 per year, with 100,000 men ultimately contracting the disease. The unacceptable loss of fighting men led General Douglas MacArthur to form the Combined Advisory Committee on Tropical Medicine, Hygiene, and Sanitation. The committee used preventive measures to bring an 85 percent decrease in the hospitalization rate within six months.
When the war started, quinine was the agent of choice in treating malaria, but the Japanese captured the drug's major sources of supply. The antimalarial Atabrine was developed as a synthetic substitute, and although soldiers had to be forced to take it because of its bitter taste and tendency to turn the skin yellow, 2.5 billion doses had been dispensed by war's end.
Vigorous efforts were used to control the anopheles mosquito that carried the disease, including oil coating of breeding ponds and spraying with the newly developed chemical insecticide DDT. As a result of these preventive measures, less than 1 percent of hospitalized American personnel had malaria by the end of 1943. The Japanese did not do as well. They had access to quinine but were ineffective in distributing it. In addition, their soldiers were often underfed, and their rice-based diet resulted in vitamin deficiencies—particularly B1 deficiency or beriberi—that increased susceptibility to infectious disease.
Besides malaria, soldiers in the Pacific Theater suffered from dengue fever (an untreatable, incapacitating, but usually self-limited viral disease), various forms of infectious diarrhea, and fungal skin diseases (collectively termed "jungle rot").
Typhus, a louse-borne rickettsial disease, was the most threatening infectious disease in the European Theater. When the Western Allies landed in French North Africa in November 1942, the area was in the midst of a typhus epidemic that ultimately infected over 500,000 civilians. The U.S. Army received a vaccine mass-produced by a process developed in the U.S. Department of Agriculture, and only 11 men from a force of nearly half a million contracted typhus. Allied troops arrived in Europe vaccinated against typhus, typhoid, paratyphoid, and smallpox, but malaria remained a significant problem, especially in Italy, because Allied soldiers resisted taking Atabrine.
In addition to preventive vaccination, DDT was used to kill the body lice that carried the infection. The new chemical agent stayed in clothes, so people could be dusted without removing their garments (an important factor in the North African Muslim culture), and the clothing no longer had to be sterilized or destroyed. Effective delousing was also important in limiting typhus outbreaks in the USSR, eastern Europe, and Germany as the war ended; it was especially useful in controlling epidemics among concentration camp survivors when those facilities were liberated.
One of the most important advances in the treatment of wartime trauma was the use of antibiotics. German scientist Gerhard Domagk, working at I. G. Farbenindustrie, had demonstrated the antibacterial effect of Prontosil in 1935, and scientists at the Pasteur Institute in Paris had adapted that chemical to the more effective sulfanilamide, an antimicrobial that could be applied directly to a wound or taken orally. Sulfa drugs were used prophylactically and therapeutically from the first months of the war. Sir Alexander Fleming had accidentally discovered the antibacterial properties of penicillin in 1928, but the drug was difficult and expensive to make and was not widely used until 1940, when scientists at Oxford produced a concentrated form suitable for clinical application. The drug remained so expensive that it was routinely recovered from patients' urine, purified, and reused.
Improvements in aviation and submarine technology outstripped the ability of humans to adapt to newly accessible environments. Frenchman Paul Bert had described the physiologic effects of extreme altitude on balloonists in the previous century, but the extremes of temperature, pressure, and oxygen tension became acute concerns as great depths and nearly stratospheric altitudes were reached. Warplanes had service ceilings well beyond the survival capabilities of humans without artificial pressurization. Supplemental oxygen was required above 10,000 feet, and daytime bombing missions at altitudes in excess of 25,000 feet were often the rule. At these altitudes, temperatures ranged as low as 50 degrees below zero, posing significant risk of frostbite or even hypothermic shock. A dive-bomber could descend at 30,000 feet per minute (compared with a commercial airliner's usual rate of 400 feet per minute), a rate that introduced serious risk of barotrauma to the middle ear, sinuses, or intestines. Pooling of blood in the extremities due to extreme gravitational forces from rapid acceleration caused loss of vision and unconsciousness. Allied pilots taped their legs and abdomens to protect against blackouts, and the Germans manufactured the first pressurized body suit with the same goal. Rapid ascent, whether from the ocean depths to the surface or from the surface to high altitudes, produces intravascular nitrogen bubbles, causing the joint pain and stroke-like neurologic symptoms collectively referred to as the bends.
Human experimentation carried out by the German and Japanese military medical establishments led to permanent changes in standards for scientific research. The Japanese used Soviet and Chinese prisoners of war and civilians in biowarfare experiments in Manchuria and performed anatomy experiments on living American prisoners in Japan. The Germans used concentration camp inmates to perform experiments on pressure and cold tolerance that, although undeniably inhumane, provided information that remains unique. Allied military surgeons were called on to help assess the Axis physicians' behavior after the war, and the resulting Nuremberg Code still sets the standards for ethically appropriate human experimentation.

Jack McCallum