Category Archives: energy

Chornobyl, 988-1986

From A Journey into Russia, by Jens Mühling (Armchair Traveller series; Haus, 2015), Kindle Loc. 705ff:

The Yakushins were a family of priests. Nicholas’s great-grandfather had served in the Church of Saint Ilya, Nicholas’s grandfather as well. Then the Bolsheviks came. They hammered on the church door and cried: stop praying, Father; man has no soul. The grandfather did not agree: man, he said, most certainly has a soul, and it is immortal. The Bolsheviks detained the grandfather. When he was released, he was old. That was his good luck. He died early enough to escape Stalin’s terror, which hardly any clerics survived. The grandfather’s son, Nicholas Yakushin’s father, did not become a priest. The times were not right.

Nicholas was nevertheless baptised, secretly, at home, the way most Orthodox were. Those who baptised their children in the church had to reckon with work-related harassment. When Nicholas was born, shortly after the end of the war, the church was closed anyway; the local kolkhoz used it as a grain silo. Thus Nicholas got to know his forefathers’ church: filled to the dome with wheat. On the ceiling a besieged Christ faded away, his hands spread over the grain as if in self-protection, not in blessing.

The town of Chernobyl, or Chornobyl in Ukrainian, is old, ancient even, if it does not look it anymore. None of the original buildings are left. First the Mongols razed the city; later came Lithuanians, Poles, Bolsheviks, finally the Germans. Today there are only a few wooden houses standing between the concrete blocks, none of them older than two centuries. But Chernobyl was founded at the same time as Kiev, and when Prince Vladimir had his subjects baptised in the year 988, the citizens of Chernobyl were amongst the first Christians of the Slavic world.

To those for whom this past was still present – despite the futurist ecstasy of the Soviet period – it was no surprise that here, in Chernobyl, 1000 years after the Slavs’ baptism, time should come to an end, just as it had been proclaimed in the Book of Revelation:

The third angel sounded his trumpet, and a great star, blazing like a torch, fell from the sky on a third of the rivers and on the springs of water. The name of the star is Wormwood. And a third of the waters turned bitter, and many people died from the waters, because they had been made bitter.

This John wrote in Chapter 8, verses 10 and 11. But in Ukrainian ‘wormwood’ means: Chornobyl [lit. ‘black stalk’, Artemisia vulgaris ‘common mugwort, wormwood’, to distinguish it from the lighter-stemmed wormwood A. absinthium].

Filed under energy, language, religion, Russia, Ukraine, USSR

Romania’s Energy Crisis, 1984-85

From The New York Review of Books, 23 October 1986, by “a writer who frequently travels in Eastern Europe and whose name must be withheld”:

When it comes to political independence, the Romanians find out about it through rumors. They can judge the country’s energy independence from what they see. When darkness falls, the cities are plunged into shadow—paradise for burglars—and in the daytime, in some cities, buses run only between 6-8 AM and 3-5 PM. Electric energy and water services are interrupted daily, at irregular intervals and for periods that can exceed four hours. As a result, refrigerators defrost in the summer, and in every season residents of Bucharest avoid using elevators so they won’t be caught between floors: elderly people laden with packages, old grandmothers, crying babies prepare for the return to their homes on the tenth or eleventh floors as though for a mountain climb. The strongest light bulb is forty watts, and it is illegal to use more than one lamp per room; television programming has been cut back to two hours during the working day; each official organization is allowed to use only a limited number of the cars assigned to it (of course, there are exceptions, but not in favor of emergency hospital ambulances).

The private use of cars is now banned for the winter months and during the remaining nine months the lines to buy limited amounts of gasoline can last from twenty-four to forty-eight hours. The procession that crosses the city four times a day as the president moves between the presidential palace and the one in which he works is made up of nine cars, not to mention the unknown number of automobiles not officially part of the retinue but assigned to protect it. (A doctor I talked to said that if the presidential cortege were to be cut back to four automobiles and the gasoline thereby saved turned over to ambulances, dozens of people might be spared death each week.)

Some bolder citizens, I was told, began to complain and to say that the vaunted energy independence had gone far enough. They were wrong. The proof came with the polar temperatures of the winter of 1984-1985, when heat was virtually cut off in every city. At twenty below zero people were freezing at home and in theaters, and, most of all, in hospitals. Schools were closed; women who had to go to work in the morning learned to do their cooking after midnight, when the power would occasionally be turned back on for one or two hours, on forbidden electric plates: the fine for doing so is five thousand lei, equivalent to the average salary for two months.

The regime tried to alleviate the situation. By the late 1970s it had become clear that the replacement of the easygoing Shah by the inflexible Ayatollah in Iran would require, in Romania, the replacement of the easy flow of Iranian oil by something more dependable. Here the president’s philosophy—that history cannot really be changed without also changing geography—came into play. Ceausescu had earlier ordered construction of the canal from the Danube to the Black Sea, which soaked up immense sums of money but which foreign ships still refuse to use. He ordered the demolition of a third of Bucharest in order to build a new presidential palace flanked by a triumphal boulevard cutting across the entire city (2.5 million inhabitants). Now, in the same intrepid spirit, he issued the order that Romania was to become a great coal producer. In the country’s principal coal-producing region more than thirty thousand miners went on strike.

A new decree announced that henceforth the principal coal-producing regions would be elsewhere, nearer the president’s native village, where the local coal, according to experts I talked to, had a caloric-energy content below the economically or technologically tolerable limits. Although they did not go out on strike, the new miners did not prove to be up to the tasks assigned them. Thus the first version of the plan had called for a production level of 86 million tons of coal by 1985, its second version set a goal of 64 million, whereas the reported actual production was 44 million. In the end, energy independence based on Romanian coal turned out to be not all that different from energy independence based on Iranian oil.

One might think that the Romanian energy shortage is the worst on the Continent because its energy production is the lowest. Nothing could be more erroneous. During the late 1970s, when they were still obtainable, official statistical data showed that at that time Romania’s electrical energy output per capita—2,764 kilowatt hours—was nearly equal to that of Italy, greater than that of Hungary (2,196 kilowatt hours), Spain, and Yugoslavia, twice that of Portugal, etc. If, notwithstanding, such signs of extreme energy shortage were not being observed in Lisbon, but were all too evident in Bucharest, this is because Romania, instead of squandering its electrical energy on the needs of its people, was allocating it to industries that consume large amounts of energy.

Given its mineral resources, Romania’s iron and steel industry had never been very efficient. In 1965, when Ceausescu came to power, it already had a remarkable steel-production rate of 180 kilos per capita per year. Under the new leader, that figure took a jump in fifteen years that few economies have ever managed to duplicate: over 600 kilos of steel per capita in 1980—in other words, more than France, Great Britain, East Germany, or the United States.

Unfortunately, however, the rapid expansion of the Romanian steel industry occurred at a time when established Western iron and steel industries were sharply cutting their production and the international steel market was collapsing. As a result, today Romania is suffering from an imbalance between its capacity to produce steel and its ability to make use of it. A newcomer has a hard time finding a place for itself on the market when even old-timers are overproducing. To do so successfully, there is not much choice: one either relies on technology to improve the quality of the product or one relies on economic measures to bring about a substantial reduction in price. Thus it was hardly surprising to see Romanian producers being accused of dumping steel on the market, and the American market at that. The Romanian iron and steel industry went right on producing mountains of steel that its domestic industries were unable to digest and that the international market did not seem keen to acquire.

The Far Outliers spent the grim winter of 1983-84 in Romania, and that was bad enough. We had a 4-burner gas stove that only supplied enough gas to keep one burner lit at a time. Hot water hours were limited to two in the evening and one in the morning. And our radiators were barely warm. Romania seems only to have gotten worse after we left.

Filed under economics, energy, industry, nationalism, Romania

Civil War “West”: Rivers and Rails

From The Civil War at Sea, by Craig L. Symonds (Praeger, 2009), Kindle Locs. 1304-1325:

It is important to acknowledge that during the Civil War, “the West” referred not to places like Arizona and New Mexico, or even Texas and Arkansas, which constituted the “trans-Mississippi West,” but instead to the expanse of territory between the Appalachian Mountains and the Mississippi River. The Ohio River marked its northern boundary, and the Gulf of Mexico its southern, and it encompassed all or part of six states: Louisiana, Mississippi, Alabama, Tennessee, Kentucky, and Georgia. It may seem odd to think of Georgia as part of the West since it borders the Atlantic Ocean, but strategically much of Georgia—especially Atlanta—was more closely tied to the West than the East. This vast western area got less public attention than the epic battles in Virginia both at the time and subsequently, and until recently Civil War literature tended to treat it as a secondary theater, though a good argument can be made that this expansive region was the decisive theater of the war.

Moreover, there were important differences in the way the war was fought in the West. First of all, the western theater was simply much larger. In the East, which contained both of the national capitals, most of the headline-grabbing battles took place in an area bounded by the Allegheny Mountains to the west and the Chesapeake Bay to the east. Gettysburg marked its northern limit and Petersburg its southern. Though it seemed enormous to the soldiers who had to march across it from place to place, it was a relatively small area, roughly the size of Massachusetts. By contrast, the war in the West ebbed and flowed in an area nearly 20 times as large. Given those dimensions, railroads were critical. Confederate General Braxton Bragg moved his army over 1,000 miles by rail to outflank a Union army in 1862; James Longstreet took two divisions across five states by rail to reinforce the Confederate army on the eve of Chickamauga in 1863; and Joseph E. Johnston and William T. Sherman fought an entire campaign over control of the Western & Atlantic Railroad in 1864 in what may have been the decisive campaign of the war.

Even more critical, however, were the rivers. The rivers in the West were essential not only to the movement of armies, but also to the transport of the supplies necessary to sustain those armies. Transport ships could carry more men and goods, and do so more quickly and efficiently than railroads. And while rampaging cavalry might be able to interrupt railroad traffic by tearing up rails and burning bridges, they could not stop the flow of the rivers. Of course, transports could be ambushed by parties on shore, such as the battery the rebels briefly established at Commerce, Missouri, and for that reason, gunboats were necessary to escort the transports and keep the rivers secure.

In addition, the rivers were geographical realities that affected the strategic planning of both sides. In the East, where the main field armies of both sides slugged it out between Richmond and Washington, the rivers ran mostly west to east—that is horizontally as they appear on a map—athwart any potential Union line of advance, making them defensive barriers that worked to the South’s advantage. Civil War scholar Daniel Sutherland has named the Rappahannock-Rapidan River line in Virginia the “dare mark” beyond which Union armies advanced only at their peril. But with the exception of the Ohio River, the principal rivers in the West ran either north-to-south, like the Mississippi, or south-to-north, like the Cumberland and Tennessee Rivers—that is, vertically as they appear on a map. Consequently they served not as barriers to a Union attack, but as avenues along which Union armies, supported by river gunboats, could advance. For these reasons, Union planners began to consider a river gunboat flotilla from almost the first days of the war.

Filed under economics, energy, migration, military, nationalism, travel, U.S., war

Perils of Civil War Blockades

From The Civil War at Sea, by Craig L. Symonds (Praeger, 2009), Kindle Locs. 669-691:

Night was the most dangerous time, for that was when the blockade runners were most likely to attempt a run in or out of port. In the middle of a moonless night, perhaps in a misting rain, a slightly darker shadow amid the blackness might be perceived creeping through the anchored ships of the squadron. Wary of firing into a friend, the officer of the deck might order that the night signal be made asking “friend or foe?” At this order, the signal officer might fumble in the dark for Coston flares, and put up the required combination of red or white flares. If the appropriate response was not forthcoming, a rocket might be fired into the dark sky to alert the rest of the squadron. Feet would pound on the ladders and decks as men tumbled up from below to cast loose the big guns and train them out into the darkness seeking the shadowy outline of the blockade runner going past at 10, 12, or even 14 knots. Muzzle flashes would light up the night, temporarily blinding the gunners. Some ships would slip their anchors and set out in pursuit. Then it was over, more often than not with the runner escaped, the men angry about their missed opportunity, and the officers frustrated.

A typical encounter took place off Charleston on June 23, 1862. At 3:00 A.M., in the pitch black of the pre-dawn darkness, the deck watch on the wooden side-wheel steamer USS Keystone State spotted an unidentified steamer coming out of Charleston and making for open water. The watch officer fired a gun, slipped the anchor cable, and set out in pursuit. Thus alerted, the USS Alabama and the USS James Adger, flagship of the squadron, joined the chase, and all three Union warships set out at full speed after the illicit vessel. After three hours and more than 40 miles, the Alabama and James Adger found themselves falling further and further behind and they gave up the chase to the swifter Keystone State, which had a reputation as the fastest ship in the squadron. When the sun rose, the commander of the Keystone State, William LeRoy, identified the chase as the Nashville, a notorious blockade runner recently renamed the Thomas L. Wragg. LeRoy ordered the coal heavers to redouble their efforts. To lighten ship and gain speed, he ordered the ship’s drinking water pumped over the side and jettisoned several lengths of anchor chain. Slowly but steadily the Keystone State began to gain on its quarry.

On board the fleeing Nashville, the officers and crew grew desperate. They threw their entire cargo, cotton valued at more than a million dollars, overboard, and then they began tearing apart the deck cabins to burn the wood and raise more steam. The Nashville pulled ahead again. For more than 300 miles, the two ships raced across the ocean at full speed, heading southeast. Finally after an all-day chase, with each ship squeezing every ounce of speed out of its engine, the Nashville slipped into a squall and disappeared. Eventually it reached Abaco in the British Bahamas. LeRoy, vastly disappointed, returned to resume the interminable blockade of Charleston. Statistically this went into the books as the successful escape of a blockade runner, though of course the loss of the Nashville’s cargo meant that it resulted in no benefit to the Confederacy.

But there was more. When the James Adger returned to the blockade squadron off Charleston after its 80-mile roundtrip pursuit of the Nashville, its commander, John B. Marchand, learned that during his absence another notorious blockade runner, the Memphis, had slipped into the harbor past the blockading squadron and was aground on the beach at Sullivan’s Island under the guns of Fort Moultrie. The Confederates were already at work removing its cargo by lighters. It was mortifying to Marchand to report to Du Pont at Port Royal that two ships had successfully violated the blockade.

Filed under economics, energy, military, nationalism, travel, U.S., war

Logistics of Early Sidewheel Steamships

From The Civil War at Sea, by Craig L. Symonds (Praeger, 2009), Kindle Locs. 78-97:

Despite their self-evident logistical limitations, the tactical superiority of paddle steamers in the Mexican War led Congress in 1847 to approve three new side-wheel steamers (the Susquehanna, the Powhatan, and the Saranac), and one with a screw propeller (the San Jacinto), all of which would play prominent roles in the Civil War. Like all steamers of that era, these ships carried a full suite of masts and spars and were labeled “auxiliary steamers” because they were expected to navigate under sail at least as often as they did under steam. They were, in fact, transitional vessels that straddled the age of sail and the age of steam. The principal reason for including the San Jacinto in the program was to compare a screw-driven vessel against a paddle-wheel vessel, a comparison that was marred by the fact that the San Jacinto had a number of engineering flaws, including a propeller shaft that was 20 inches off the centerline.

Despite that, it very soon became evident that the side-wheel steamers were inferior to screw steamers. When the Susquehanna was dispatched to the Far East by way of Capetown and the Indian Ocean in 1851, it took eight months to steam 18,500 miles, and it burned 2,500 tons of coal en route. Simple division shows that this yielded an average of 7.4 miles of forward progress for each ton of coal burned. Because coal cost an average of about $10 a ton in 1851, it cost the government about $1.35 (more than a full day’s pay) for every mile that passed under the Susquehanna’s keel. Moreover, the lengthy transit time was a product not only of its relatively slow speed (8-10 knots) but also of the fact that the Susquehanna had to stop eight times en route to refuel, spending 54 days in port recoaling. Finally, all of those coaling stops were necessarily at foreign ports because the United States had no overseas bases in the mid-19th century. Even after the Susquehanna arrived, finally, on station at Hong Kong, it remained dependent on foreign sources of fuel to stay there. Obviously, for a navy with far-flung responsibilities and no overseas coaling bases, steam power continued to have significant limitations.

A second problem with side-wheel steamers like the Susquehanna was that those enormous paddle wheels on each side obscured much of the ships’ broadsides, thus limiting the number of guns they could carry, and those big paddle wheels made very inviting targets. If one of the paddle wheels was damaged by enemy fire, the ship’s mobility would be dramatically affected, and the helmsman would need great skill to prevent the ship from yawing off course or even steaming in a circle. Navy Lieutenant W. W. Hunter suggested that the solution was to turn the paddle wheels on their sides and place them below the water line, thus putting them out of the line of fire and restoring an uninterrupted broadside. Dubbed the Hunter’s Wheel, this seemed to offer a technological and tactical solution. But in practice the Hunter’s Wheel proved stunningly inefficient. In 1842 the USS Union was engineered to operate with Hunter’s Wheels, but while they dramatically churned up the water and burned extravagant amounts of coal, the ship made no better than five knots, and in 1848 its engines were removed and it was employed as a receiving ship. In the end, the best solution proved, after all, to be Ericsson’s screw propeller, and in the mid-1850s, during a burst of naval expansion, the U.S. Navy returned to it for a new generation of warships.
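
Symonds’s coal arithmetic checks out; here is a quick back-of-the-envelope verification as a Python sketch, using only the figures quoted in the passage above:

```python
# Back-of-the-envelope check of the Susquehanna's 1851 coal economics,
# using only the figures quoted in the passage above.
distance_miles = 18_500      # voyage to the Far East via Capetown, 1851
coal_tons = 2_500            # coal burned en route
price_per_ton = 10.00        # average 1851 coal price, in dollars

miles_per_ton = distance_miles / coal_tons       # 7.4 miles per ton
cost_per_mile = price_per_ton / miles_per_ton    # about $1.35 per mile

print(f"{miles_per_ton:.1f} miles per ton of coal")  # 7.4
print(f"${cost_per_mile:.2f} per mile steamed")      # $1.35
```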

The Powhatan and Susquehanna were among the “black ships” in Commodore Perry’s expeditions to Japan.

Filed under economics, energy, Mexico, migration, military, nationalism, travel, U.S., war

U-Boats Off U.S. Coastline, 1942

From World War II at Sea, by Craig L. Symonds (Oxford U. Press, 2018), Kindle pp. 251-255:

On December 9, 1941, the day Hitler unleashed the U-boats for use against American shipping, Dönitz asked OKW to release twelve of them for a campaign in American waters. The German high command allotted him only six, keeping the rest for service off Gibraltar, further annoying an already disgusted Dönitz. Moreover, one of the six boats developed an oil leak, so that in the end, only five of them departed in December to take up positions off the eastern coast of the United States. Dönitz also sent ten of the smaller Type VII boats, packed with extra fuel and supplies, to the waters off Nova Scotia, which was just within their operational range. Those fifteen boats represented a substantial portion of his entire U-boat flotilla.

Crossing the Atlantic in a surfaced U-boat was harrowing. Peter-Erich Cremer, skipper of the U-333, recalled that “the waves were as high as houses.” The boats pitched wildly, banging down on each successive wave with a jarring thump, often knocking crewmen off their feet. They also rolled side to side by as much as 120 degrees. When the seas became so violent as to threaten the safety of the boat, the captain could submerge into the relatively calm waters below the raging surface, but that reduced the boat’s speed to about five knots, which dramatically lengthened the transit time and used up precious fuel, food, and water supplies. Dönitz wanted all of the boats to begin simultaneous attacks on January 13, and running submerged for any length of time jeopardized meeting that deadline.

While the British and Americans squabbled, Operation Paukenschlag [Drumbeat] got under way, though not quite with the kind of devastating impact Dönitz had envisioned. Mainly this was because the five Type IX U-boats did not all manage to get into position by the target date of January 13. Hardegen’s U-123 sank the Panamanian tanker Norness off Long Island on the fourteenth, but the last of the five boats did not arrive at its assigned position off Cape Hatteras, North Carolina, until the eighteenth.

The Carolina capes constituted a critical choke point for American coastwise trade. In January 1942, 95 percent of the oil pumped from the Louisiana and Texas oil fields made its way to the Eastern Seaboard in tanker ships that necessarily had to pass around Cape Hatteras, where the shoals narrowed the shipping channel to a mere thirty miles. Eventually the United States would shift much of its domestic oil transport to rail cars and pipelines, but when Dönitz’s U-boats arrived off Hatteras on January 18, the shipping there was so abundant that upon surfacing, Hardegen was astonished to see “no fewer than twenty steamers, some with their lights on.” That night he sank four of them.

In accordance with Dönitz’s suggested protocols, the U-boats lay quietly on the bottom of the continental shelf during the daylight hours, surfacing at night to look for passing freighters, and especially tankers. Not only did the targeted ships proceed independently, but many, as Hardegen noted, still had their running lights on, making them irresistible targets. Even those ships proceeding blacked out were often starkly silhouetted against the lights that were still burning on shore, since most cities from Miami to New York did not enforce nighttime blackouts. German U-boat skippers, who had been at war for more than two years, were dumbfounded by such carelessness, and bemused by the sight of car headlights passing along the coastal roads. Peter Cremer, commanding the U-333, recalled that “through the night glasses we could distinguish equally the big hotels and the cheap dives, and read the flickering neon signs.” Peering into New York harbor through his binoculars, Hardegen jokingly told his crew that he could see dancers atop the Empire State Building. In such an environment, the U-boats, few as they were, had a field day. In the last two weeks of January, they sank twenty-three ships, thirteen of them tankers. Counting the ships sunk in Canadian waters by the smaller Type VIIs, the U-boats of Operation Paukenschlag dispatched forty-one Allied ships displacing 236,000 tons in just two weeks. The losses were shocking, all the more so in that many of them occurred within sight of the American coastline.

Filed under Britain, Canada, energy, Germany, military, travel, U.S., war

Latin American Debt Crisis, 1980s

From The Penguin History Of Latin America, by Edwin Williamson (Penguin, 2003), Kindle pp. 364-367:

The mounting problems caused by the economic distortions of import-substituting industrialization [= ISI] and the associated weakening of the state came to a head in the 1980s. The crisis had been deferred in the 1960s by strong world growth, and in the 1970s, when international demand was slack, by foreign loans. But a sudden change in the world financial system effectively cut off the flow of capital to Latin America.

In August of 1982 the Mexican government announced that it was unable to pay the interest on its debt to foreign banks. Mexico was followed shortly by virtually all the Latin American countries, including Cuba. (Suspension of debt payments occurred also in African and Asian countries, but the sheer size of the Latin American debt focused international attention on the continent.) The total outstanding Latin American debt in 1982 was estimated at $315.3 billion, although over $270 billion was owed by just five countries – precisely those which had undergone the fastest ISI growth in the 1960s and 1970s. Brazil was the largest debtor, owing $87.5 billion; Mexico owed $85.5 billion, Argentina $43.6 billion, Venezuela $31 billion and Chile $17 billion.

What had caused the crash? The immediate factor was the steep rise in US interest rates in 1979–82. This was a response to the high rates of inflation and the consequent weakness of the dollar caused by the producers’ cartel, OPEC, sharply raising the price of oil in 1973 and again in 1979. A world recession followed, which had a disastrous effect on the economies of Latin America: commodity prices started to fall on world markets just when higher export earnings were needed to cope with sharply rising interest rates on the foreign debt.

The bonanza of lending and borrowing that Latin American governments and Western banks had indulged in throughout the 1970s had its origins in the very phenomenon that would cause it to come to an abrupt end a decade later: the OPEC cartel’s oil-price rises of 1973 and 1979. High oil prices allowed producer countries, especially the Middle Eastern Arab states, to build up huge surpluses on their balance of payments. Profits from oil exports were too large to be fully absorbed by investment in their domestic economies, and so these OPEC countries deposited vast sums of money in European and North American banks. Western bankers then set about looking for ways of getting a good return on these windfall deposits, and their most willing clients were the developing countries of the Third World, who were hungry as always for development capital.

Latin America was especially susceptible to the blandishments of the Western banks, for in the early 1970s, as we have seen, the most advanced of the industrializing countries in the region had come to the limit of the ‘hard’ phase of import-substitution; the process of state-subsidized inward-looking development could be kept going only by borrowing abroad to cover the yawning deficits between national income and expenditure. There followed a mad spiral of irresponsible, profit-driven lending and unwise borrowing, in which Western bankers as much as Latin American officials appeared to overlook the implications of taking out huge loans on ‘floating’ instead of fixed interest rates. However, after the shock of the second oil-price rise in 1979, conservative administrations in the USA and other industrial countries like Britain decided to bring their domestic inflation under control by restricting the supply of money and credit; this economic policy choked off demand in the West and produced a worldwide recession. International interest rates on foreign debt suddenly started to ‘float’ ever upwards until by the middle of 1982 most Third World countries found it impossible to meet their interest payments.

Indebtedness and high inflation were not, therefore, peculiar to Latin America. In fact, most governments in the industrial countries had been running up debts during the 1970s. The US budget deficit in 1982 was actually larger than that of the worst Latin American debtors, and throughout the 1980s the Reagan administration, for fear of electoral unpopularity, was unwilling to cut it by raising taxes or reducing imports. Yet it was the Latin American debt and not the US deficit which caused international alarm, because a country’s economic health was judged according to its perceived ability to overcome its financial difficulties, a factor expressed in terms of the ratio of interest payments to export earnings. Latin American countries scored badly here, given their relative neglect of the export sector in the pursuit of import-substitution. In 1982 most had ratios in excess of 20 per cent of interest payments to exports; Brazil and Argentina came off worst with ratios of 57.1 per cent and 54.6 per cent respectively, while Mexico, despite being a major oil exporter, had a ratio of 39.9 per cent. In other words, the economies that had grown fastest in the 1970s were the most deeply indebted in the 1980s.

What had gone wrong with ISI development? In essence, it had failed to cure the underlying malaise which had begun to show itself as early as the 1920s – lack of productivity. With the aim of achieving self-sufficiency, economic planners had concentrated on substituting industrial imports by setting up national industries and protecting them behind high tariff walls to the general detriment of agriculture and the export sector. (Brazil was a partial exception since from the mid-1970s it had begun to subsidize industrial exports – an expensive exercise that did not tackle the underlying problem of productive efficiency.) National industry had been overprotected for too long and had failed to become efficient and competitive: the price of its manufactures was often up to three times the world price. Latin American economies therefore ended up with not only an unproductive export sector, dominated still by low-value primary commodities, but also an unproductive industrial sector, which nevertheless consumed expensive imports of technology. The chronic shortfall between exports and imports resulted in high inflation and mounting debts.

To make matters worse, the debt problem had been badly aggravated by the financial instability caused by hyperinflation in the 1970s. As confidence in the economy evaporated in the late 1970s, there occurred massive capital flight. Instead of investing their money at home – where the currency was virtually worthless and industries regularly made losses – rich Latin Americans put it into real estate abroad or deposited it in the very banks that were issuing loans to their own governments and companies. Huge sums were taken out of these countries: the World Bank estimated that between 1979 and 1982, $27 billion left Mexico, nearly a third of its foreign debt in 1982, and $19 billion left Argentina, whose debt in 1982 was $43.6 billion. (Brazil and Colombia were relatively unaffected because of their sustained growth and high domestic interest rates.) US and European bankers colluded fully in this crazy financial cycle, pressing high-yield loans on Latin American governments while turning a blind eye to the lucrative deposits coming in from private Latin American sources (which were more often than not the indirect recipients of those very loans).

When the crash finally came, the wage-earners and the poor felt it most: inflation soared even higher in the 1980s than in the 1970s, real wages fell, and government spending on food subsidies, transport, health and education was slashed. In 1980–84 overall growth in Latin America fell by nearly 9 per cent. Consumption per capita dropped by 17 per cent in Argentina and Chile, by 14 per cent in Peru, by 8 per cent in Mexico and Brazil. Urban unemployment doubled in Argentina, Uruguay and Venezuela between 1979 and 1984, reaching unprecedented proportions everywhere else.
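
The solvency measure the passage leans on, the ratio of interest payments to export earnings, is straightforward to compute. A minimal illustrative sketch in Python (the helper function and the example figures are mine, chosen only to reproduce Brazil’s quoted 1982 ratio):

```python
# Debt-burden measure described above: interest payments as a
# percentage of export earnings. The example figures are hypothetical,
# picked to reproduce Brazil's reported 1982 ratio of 57.1 percent.
def interest_to_exports(interest_payments: float, export_earnings: float) -> float:
    """Return interest payments as a percentage of export earnings."""
    return 100.0 * interest_payments / export_earnings

# e.g., $5.71 billion a year in interest against $10 billion in exports
print(f"{interest_to_exports(5.71, 10.0):.1f}%")  # 57.1%
```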

Filed under economics, energy, industry, Latin America, Middle East, U.S.

Japan’s Home Front, 1941

From Storm Clouds over the Pacific, 1931–1941, by Peter Harmsen (War in the Far East, Book 1; Casemate, 2018), Kindle pp. 253-256:

What kind of nation was Japan in 1941? Who were the 73 million people that would soon find themselves in the most devastating war in their island nation’s long history? Foreign affairs writer Henry C. Wolfe visited Tokyo in the fall of 1941 and was shocked by the gloom and dreariness of life in the once vibrant city of 6.5 million inhabitants. Four years of war and accompanying austerity had turned it into a “capital of shadows” with long lines of customers waiting in front of stores selling low-quality products made from ersatz material. Shoes of real leather could not be found. Clothes were made from a little cotton mixed with bark and wood pulp and ripped easily. Wolfe described what happened when an American diner at a restaurant asked for a second helping of pudding, the only part of his meal that was somewhat palatable. The head waiter replied, “Do you want me to go to jail!”

Wartime regulations had started out in a small way. Local governments had introduced rationing of sugar and matches in 1939, and it had become a national policy in 1940. Since then official controls had exploded, and by the fall of 1941 more than 100,000 goods and services were being regulated. Energy shortages were particularly conspicuous. Many vehicles were converted to run on charcoal, although that fuel was also in short supply. Police were soon forced to stop all public vehicles from running between midnight and 5 am. Adding to the woes, trams and trains were overloaded with people, since cars that had broken down could not be repaired due to a lack of spare parts.

The American trade curbs worsened an already steep decline in the standard of living, but they did not cause it. The tougher conditions faced by the average Japanese were equally due to the priorities of the Japanese rulers, who allocated ever larger resources to military purposes, leaving the civilians to pay. The war in China had taken its toll. In 1931, military expenditures had taken up 31.2 percent of the government budget, but a decade later that share had increased to a staggering 75.6 percent. Average wages dropped by more than 20 percent from the mid-1930s until 1941. Meanwhile, there was less and less to be had for the shrinking incomes. The light industrial sector, where consumer products were manufactured, saw its share of overall production drop precipitously over the same period.

The finer things in life were, of course, virtually non-existent. Dance halls had been prohibited, despite their immense popularity, along with most jazz performances. Foreign movies were strictly limited, and Japanese cinemagoers, who were once among the most ardent foreign fans of Hollywood and even copied manners and slang from major American releases, were now limited to grim German propaganda fare with titles such as Victory in the West. The lights were out, also, in a quite literal sense. In Tokyo’s Ginza shopping district, the famous glittering neon signs had been turned off to save electricity. Five-star hotels, too, were wrapped in gloom after they were urged to keep lighting at a minimum.

Miyamoto Takenosuke, vice director of the Planning Board, argued that “the people should be satisfied with the lowest standard of living.” He went on: “The craving for a life of luxury must be abandoned. At this time, when the nation is risking its fate, there is no individual any more. What remains is the nation and the nation alone. The storm of economic warfare will become more furious. Come rain! Blow wind! We are firmly determined to fight against the storm.” Japan’s largest candy maker, Meijing [sic] Confectionary Company, chimed in with an ad campaign featuring the slogan “Luxury is the Enemy!” The National Defense Women’s Association also did its part in imposing wartime rigor, posting members on street corners to stop women who were dressed too extravagantly, passing them handbills with stern admonitions about the need for thrift in light of the national emergency.

At the same time, a thriving black market for regulated goods had emerged almost immediately, and a special economic police set up to rein in the activities made more than two million arrests within just 15 months. The vigorous law enforcement did not curb the illegal transactions, but simply encouraged them to be carried out in more ingenious ways. A modern historian gives an example of how it remained possible to trade coal at the black-market price of 1300 yen, well above the official 1000 yen price tag: “To secure the additional 300-yen profit without running afoul of the law, a vendor, for example, might arrange for a customer to ‘accidentally’ drop 3000 yen next to the vendor’s stall. He would then take the money to the nearest official who would instruct the buyer to pay ten percent in thank-you money (300 yen) to the vendor.”

Despite the hardship, the Japanese government pretended it was in a position to care not only for its own population but for the peoples of all Asia.

Filed under China, disease, economics, energy, Germany, industry, Japan, labor, migration, military, nationalism, publishing, U.S., war

Origin of North Korea’s Nuclear Program

From The Great Successor: The Divinely Perfect Destiny of Brilliant Comrade Kim Jong Un, by Anna Fifield (PublicAffairs, 2019), Kindle pp. 232-234:

In 1962, the Soviet Union and the United States were locked in a thirteen-day standoff over the installation of nuclear-armed Soviet missiles in Cuba, less than one hundred miles from the US coastline. For those two weeks, the world teetered on the edge of nuclear war. But the conflict was resolved diplomatically when Soviet leader Nikita Khrushchev agreed to remove the missiles as long as President John Kennedy agreed not to invade Cuba. A deal was done.

Kim Il Sung viewed this deal as a capitulation by the Soviet Union to the United States, a sign that Moscow was willing to sell out an ally for the sake of its own security. The Great Leader apparently learned from this that North Korea should never entrust its national security to any other government. This injected new momentum into his drive for nuclear independence. Within a few months, Kim Il Sung’s regime had started to explore the possibility of developing a nuclear deterrent of its own. The leader who had espoused a need for a stronger agricultural policy was soon standing before the cadres in Pyongyang to hammer home the importance of putting equal emphasis on economic growth and national defense. This was the first “simultaneous push” policy. The proportion of the national budget devoted to defense rose from only 4.3 percent in 1956 to almost 30 percent within a decade.

The nuclear scientists who returned home from the Soviet Union set about building, about sixty miles northeast of Pyongyang, a similar complex to the one they’d worked at in Dubna. This would eventually become the Yongbyon Nuclear Research Complex.

More impetus came in the early 1970s, when it emerged that North Korea’s other main ally, China, had secretly started to forge relations with the United States, an effort that led to President Richard Nixon’s historic visit to Beijing in 1972.

Meanwhile, in South Korea, the strongman Park Chung-hee, a general who’d seized the presidency through a military coup, was secretly pursuing nuclear weapons of his own. When this news emerged, it was an unbearable blow to Kim Il Sung’s personal vanity and sense of national pride.

Another key factor that must have been weighing on Kim Il Sung’s mind was his own mortality. He was in his sixties by this time and was starting to prepare his son to take over. He thought that having nuclear weapons would make it easier for his son to keep a grip on the state. In lieu of charisma, Kim Jong Il should at least have nukes.

From the late 1970s onward, the North Koreans built more than one hundred nuclear facilities at Yongbyon alone. American intelligence agencies were alarmed. In the space of about six years, a country with no previous experience had built a functioning nuclear reactor. Three years later came unambiguous proof that the reactor’s purpose was military, not civilian; the country had built a major reprocessing facility that would enable it to turn the fuel from the reactor into fissile material.

But its efforts were not going unnoticed among allies either. The Soviet Union pressured Kim Il Sung into signing the Nuclear Non-proliferation Treaty at the end of 1985. It took seven years for North Korea to allow in the inspectors required under that treaty, and when they got in, they found numerous signs that the regime was secretly working on the very kind of nuclear program it had pledged against. In 1993, Kim Il Sung threatened to withdraw from the treaty, triggering an alarming standoff. North Korea and the United States came the closest to war in forty years.

Talks to resolve the impasse were ongoing when Kim Il Sung suddenly died in the summer of 1994, propelling both sides into unknown territory. They did, however, manage to sign a landmark nuclear disarmament deal called the Agreed Framework, under which North Korea agreed to freeze and eventually dismantle its nuclear weapons program and a US-led coalition agreed to build two civilian nuclear reactors that could be used to generate electricity for the energy-starved country.

Pyongyang had no intention of abiding by this agreement either. Signing the deal was all about buying the Kim regime time to work on its program while maintaining the appearance of cooperating.

North Korea had developed a close relationship with Pakistani nuclear scientist Abdul Qadeer Khan. In the 1990s, while North Koreans were dying of starvation and while Kim Jong Un was watching Jackie Chan movies in Switzerland, the regime was building a uranium-enrichment program. Uranium enrichment wasn’t technically covered under the Agreed Framework. And North Korea loves technicalities.

Filed under China, energy, Korea, military, nationalism, Pakistan, science, U.S., USSR, war

Negative Human Development in Resource States

From The Looting Machine: Warlords, Oligarchs, Corporations, Smugglers, and the Theft of Africa’s Wealth, by Tom Burgis (PublicAffairs, 2016), Kindle pp. 211-212:

In 1970 (the year in which the Olympic movement expelled South Africa, the government passed legislation formally stripping blacks of their citizenship and restricting them to destitute “homelands,” and the authorities appointed a barbaric new commanding officer at Robben Island prison to watch over Mandela and his fellow inmates), South Africa produced some 62 percent of the gold mined worldwide. From the early 1970s to 1993 gold, diamonds, and other minerals accounted for between half and two-thirds of South Africa’s exports annually.

South Africa’s gold and diamonds provided the financial means for apartheid to exist. In that sense white rule was an extreme manifestation of the resource state: the harnessing of a national endowment of mineral wealth to ensure the power and prosperity of the few while the rest are cast into penury and impotence. None of Africa’s resource states today come close to the level of orchestrated subjugation of the majority that the apartheid regime achieved. Neither do they employ apartheid’s racial creed, even if ethnicity has combined poisonously with the struggle to capture resource rent in Nigeria, Angola, Guinea, and elsewhere. But as their rulers, in concert with the multinational corporations of the resource industry, hoard the fruits of their nations’ oil and minerals, Africa’s resource states have come to bear a troubling resemblance to the divisions of apartheid.

While the children of eastern Congo, northern Nigeria, Guinea, and Niger waste away, the beneficiaries of the looting machine grow fat. Amartya Sen, the Nobel Prize–winning Indian economist who has examined with great insight why mass starvation occurs, writes, “The sense of distance between the ruler and the ruled—between ‘us’ and ‘them’—is a crucial feature of famines.” That same reasoning could be applied to the provision of other basic needs, including clean water and schooling. And rarely is the distance Sen describes as wide as in Africa’s resource states.

Many of Africa’s resource states experienced very high rates of economic growth during the commodity boom of the past decade. The usual measure of average incomes—GDP per head—has risen. But on closer examination such is the concentration of wealth in the hands of the ruling class that that growth has predominantly benefited those who were already rich and powerful, rendering the increase in GDP per head misleading. A more revealing picture comes from a different calculation. Each year the United Nations ranks all the countries for which it can gather sufficient data (186 in 2012) by their level of human development, things like rates of infant mortality and years of schooling. It also ranks them by GDP per head. If you subtract a country’s rank on the human development index from its rank on the GDP per head index, you get an indication of the extent to which economic growth is actually bettering the lot of the average person in that country. In countries that score zero—as Congo, Rwanda, Russia, and Portugal did in 2012—living standards are roughly where you might expect them to be, given that country’s GDP per head. People in countries with positive scores enjoy disproportionately pleasant living conditions relative to income—Cuba, Georgia, and Samoa top the table with scores of 44, 37, and 28, respectively. A negative score indicates a failure to turn national income into longer lives, better health, and more years of education for the population at large. Of the ten countries that come out worst, five are African resource states: Angola (–35), Gabon (–40), South Africa (–42), Botswana (–55), and Equatorial Guinea.

Equatorial Guinea’s score (–97), comfortably the worst in the world, is all the more remarkable because its GDP per head is close to $30,000 a year, not far below the level of Spain or New Zealand and seventy times that of Congo.
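
Burgis’s indicator is simple enough to spell out: subtract a country’s human-development rank from its GDP-per-head rank. A minimal Python sketch follows; the ranks below are hypothetical, chosen only to reproduce the scores quoted above, not the UN’s actual 2012 rankings:

```python
# Burgis's indicator: GDP-per-head rank minus human-development rank
# (rank 1 = best of the 186 countries ranked in 2012). Negative scores
# suggest national income is not being turned into health and schooling.
# These ranks are hypothetical, picked only to match the quoted scores.
hypothetical_ranks = {
    # country: (GDP-per-head rank, human-development rank)
    "Equatorial Guinea": (40, 137),
    "Cuba": (103, 59),
    "Portugal": (43, 43),
}

for country, (gdp_rank, hdi_rank) in hypothetical_ranks.items():
    score = gdp_rank - hdi_rank
    print(f"{country}: {score:+d}")  # -97, +44, +0
```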

Filed under Africa, Angola, Congo, democracy, economics, energy, Equatorial Guinea, industry, labor, nationalism, Nigeria, South Africa