Site Admin
Posts: 71949
Joined: Thu Aug 30, 2018 1:40 p


Post by thelivyjr »


Later in the nineteenth century, the disastrous harvest of 1879, another cold year, with a mean summer temperature of 13.7°C in central England and nearly twice the normal rainfall — all the other seasons of that year were also cold — cut the wheat harvest by half.

With similar effects in other European countries it precipitated a change that would otherwise have come gradually in any case, the large-scale importation of cheap wheat from the North American prairies where the beginnings of mechanization had already appeared.

There had been wet summers in three of the preceding four years and the run continued unbroken to 1882.

With this England’s agriculture went into crisis and a decline that continued for fifty years.

Other European countries protected their small peasant farmers by tariffs on imports, but Britain went on with the policy of free trade which had helped build up her manufacturing industries.

The result was a drift of population from the land to the industrial areas and to the British colonies overseas.

The area on which corn was grown shrank by a quarter in the last thirty years of the century, and the rural population declined by some hundreds of thousands.

The near-immunity to food shortages which came about in the present century in North America, Europe and other advanced countries, and has come to be taken for granted as a benefit of modern scientific know-how, can rightly be attributed to the advances of science and technology.

However, as McQuigg and others have pointed out, the doubling of wheat and corn yields in the states of the United States Middle West over the period 1955–73 not only was achieved by technological innovations but owed a good deal to the long run of benign, drought-free years (see fig. 105).

Similarly agriculture and husbandry in western Europe gained from the warmth maximum of the period from 1933 to the 1950s.

The growing season in Ireland, which had averaged eight months of temperatures above 6°C around 1900, increased to almost nine months, resulting in a 20 per cent reduction in the season of winter cattle feeding: the growing season has since shortened again in Ireland, as in England, by about two weeks.

With the development of new high-yielding strains of rice and other crops and the ability to ship food in bulk, and in emergency quickly, around the world, the benefit has increasingly been spread to much of the Third World also.

But the increases of crop yield won by improved scientific knowledge cannot for ever be followed up by further increases, and already there are some poor countries in Africa and south Asia where population growth has been outstripping the increase in production of food.

The whole favourable development is, in fact, threatened on a world scale by the growth of population.

And since the best land for agriculture has already been taken into use, one must expect lower returns from any further increases of the acreage sown.

In this, one finds that man's vulnerability to climatic fluctuations is bound up with, and intensified by, the population explosion.

My colleagues Jean Palutikof and Graham Farmer in the Climatic Research Unit at the University of East Anglia have recently pointed out that this is now seen in a particularly stark form in the drought-prone areas of East Africa.

Traditionally the population of these areas guarded against disaster by planting a wide variety of crops, spread out over many weeks, so that at least some were likely to survive any drought periods and come to fruition.

For the same reason they kept great numbers of cattle, and they could roam over an extensive area for whatever nourishment was to be had.

The increase of population and modern political and organizational developments since 1950 have made these safeguards largely impossible.

Governments prefer to organize cash cropping, taking in larger areas which were formerly used for grazing, and concentrating on a few varieties, if not a single crop.

And national and other boundaries now restrict migration.

Some aspects of the threatening situation are well illustrated by Kenya, which has been one of the most stable countries in post-imperial Africa.

Kenya, with a high birth rate and falling death rate, now has the most rapidly rising population in the world, the first nation in recorded history to achieve an annual increase of 4 per cent.

The total population was 10,943,000 in 1969 and over 15 million in 1979.

In some rural areas the population density already doubled in seven years during the 1960s, and at this rate Kenya may be trying to feed a population of 50–60 million by the year 2000.

With a run of three or four good rainfall years up to the time of writing, the rural population must have begun to make use of, and perhaps settle in, areas where the same rainfall cannot be expected to continue.

If a good deal of the food security of earlier decades of the present century was due to the combination of some already improved scientific knowledge in agriculture with a still diversified husbandry in the advanced countries, and a lower density of population in the Third World, the more recent development of rationalization on a world scale — with concentration on just one or two crops in each extensive region where they are supposed to grow best — constitutes a threat to this security.

Monoculture was at the root of most of the great famines of the past.

And it should be noted that the selection of areas where each crop grows best implies a forecast of no climatic change, indeed of no fluctuation beyond a certain expected range.

This increase of vulnerability is real and, together with the increased pressure on grain supplies arising from the growth of world population and the demand for a rising standard of living everywhere, it must give rise to serious anxieties about the future — indeed about the not distant future.

The remedy seems to lie in deliberate choices of more diversification (rather than planning maximization of a single crop) and a curb on the feeding of grain to animals for meat production for the wealthier countries, for in that way seven times as much grain is needed for the same amount of protein production for human food as when the grain is consumed for itself.
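
The seven-to-one ratio just quoted is simple to express as arithmetic; a minimal sketch (the factor of 7 is from the text, the function name and figures fed to it are mine, for illustration only):

```python
GRAIN_TO_MEAT_PROTEIN_FACTOR = 7  # ratio quoted in the text

def grain_needed(protein_units, via_meat):
    """Grain required to supply a given amount of protein for human
    food, either eaten directly or fed to animals for meat first."""
    return protein_units * (GRAIN_TO_MEAT_PROTEIN_FACTOR if via_meat else 1)

# Feeding grain to animals multiplies the grain bill sevenfold
# for the same protein delivered to human mouths.
print(grain_needed(100, via_meat=True))   # 700
print(grain_needed(100, via_meat=False))  # 100
```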

Moreover, the risk is not limited to the case of a greater than expected climatic deviation alone: monoculture and specialization of crops over great areas must increase the probable scale of the disaster if any new crop disease or mutant of a known disease should be spread by winds or other weather conditions which are themselves within the expected range.

This seems to have been an important aspect of the Irish potato famine of the 1840s.

The success of the potato in the moist climate of the Atlantic fringe of Europe, and the growth of population in Ireland in the eighteenth and early nineteenth centuries, had meant that this was the one crop which could produce enough food for a family on the very small farms, many of which were of only one hectare.

So the potato had become the only crop grown in much of the country.

And when the previously unknown blight appeared, and was quickly spread by the winds of the autumn of 1845 and the moist summer of 1846, all was lost.

It may be useful to review more recent developments in the light shed from this background.

In 1913 Imperial Russia was still the main producer and exporter of surplus grain, especially wheat.

And up to the 1930s many other countries, including some in Europe, produced surpluses for export.

But despite very great increases in production world-wide, population and consumption have increased so much that now only North America produces substantial surpluses of grain for export.

But if the United States should follow Brazil’s lead in growing crops on a large scale to produce liquid fuel and lubricants (oils, alcohol or methanol), there might soon be no surplus even there.

The North American surplus may disappear in any case within about a decade because of its own growing population.

Since 1960, and even since 1970 despite the development of high-yielding crops in the Green Revolution, the world's total production of grains has barely, and certainly not consistently, kept pace with the growth of world population.

In 1960–1 the end of season world stocks of grain were estimated at 222 million tonnes, representing 26 per cent of a year’s requirement.

Ten years later, in 1970–1, the figures were 166 million tonnes or 15 per cent and in 1974–5 and 1975–6 the end of season reserves, amounting to only 131 and 138 million tonnes respectively, were reckoned as 11 per cent of a year’s requirement.

Of course, changing policy decisions in some countries had affected the issue, and in 1979–80 the figures had recovered to 195 million tonnes end of season stocks or 14 per cent of a year’s requirement.
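
The stock figures just quoted imply an annual world grain requirement that can be back-calculated from each tonnage and its stated percentage; a minimal sketch (the tonnages and percentages are from the text, the derived requirement is my own arithmetic):

```python
# End-of-season world grain stocks quoted in the text, as
# (million tonnes, stated percentage of a year's requirement).
stocks = {
    "1960-61": (222, 26),
    "1970-71": (166, 15),
    "1974-75": (131, 11),
    "1975-76": (138, 11),
    "1979-80": (195, 14),
}

for season, (tonnes, pct) in stocks.items():
    implied_requirement = tonnes / (pct / 100)  # million tonnes per year
    print(f"{season}: stocks {tonnes} Mt ({pct}%) "
          f"-> implied annual requirement ~ {implied_requirement:.0f} Mt")
# e.g. 1960-61: 222 / 0.26 ~ 854 Mt, against roughly 1,393 Mt by 1979-80,
# showing how far consumption outran the rebuilt reserves.
```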

In the 1970s the Soviet Union, although still the world’s second biggest grain producer, had become a net importer of grain.

Table 5 gives a brief survey of the changed world situation.

The relative bulk of the different chief crops is given in table 6.

Weather has come into the situation chiefly through the year-to-year variability, which as we have seen seems to have increased in some regions, although the Canadian wheat crop has been affected by recurrent earlier autumn frosts and crops in Africa south of the Sahara by the increased incidence of drought in the 1970s.

Fig. 106 shows the variations of the Soviet grain harvest from 1960 to 1980.

In all the years with major short-falls substantial grain purchases were made from the West.

In 1980 another shortfall of well over 20 per cent — the second year running — has been reported; and this time political actions make it unlikely that purchases can be made on the same scale.

In some ways the most serious case was in 1972, when there were also other severe weather-induced shortages in food production elsewhere around the world.

The Soviet Union seems to be afflicted by the nature of climate in ways that cause the year-to-year variability of the total harvest, surveyed over many years, to be about twice as great as for North America.

Looked at in another way, the percentage probabilities of a single poor wheat harvest or of two or three poor harvest years in succession — poor harvests being defined by production more than 10 per cent below expectation — over the longest spans of years in the present century examined were as shown in table 7.

It will be observed that the overall variability of the total Soviet crop is somewhat less than for either the spring or winter-sown crops singly, as there is some apparent compensation in the variations in the different seasons.

The vast west-to-east extent of the plains of Eurasia and of the mountain massifs on their southern border means that, when blocking situations occur, the stationary anticyclones may be elongated in such a way as to bring drought (or severe cold weather in winter) over a very wide sector and yet cover a somewhat different span of longitudes in another year.

Indeed the anticyclones are generally differently located in winter, spring and summer of the same year.

The frequencies of two or three years in a row of poor harvests indicated in table 7, being relatively rare (though very serious) events, are undoubtedly affected by sampling problems — i.e. the erratic results usual in small samples.

The figures must have been raised in the United States by the historic drought in the ‘Dust-Bowl’ years in the 1930s.

In order to eliminate the political effects on the magnitude of the Soviet harvests in the revolutionary years, the figures used for those years have been derived by simulation using a meteorological model, i.e. a meteorological fiction rather than the reality of those harvests.

The events of 1972, briefly referred to above, caused a great deal of concern about tendencies of the climate that had escaped notice, or not been much thought about, until then.

In that same year, with its extraordinary heat and drought in Russia (see pp. 277–8), when the Soviet grain harvest was about 13 per cent short of expectation, the drought belt continued eastwards in such a way that the Chinese harvest was also described as disastrously short and in northern India there was a deficient monsoon with a similar result.

The drought, already then prolonged over several years, in another belt along the southern fringe of the desert zone also reached a climax in 1972 and 1973, with the result that an estimated 100,000 to 200,000 people and perhaps four million cattle died in the zone that stretches across Africa from the Sahel to Ethiopia (fig. 107).

There was also a mass migration of people leaving their homes and accustomed land southwards, in some cases crossing the frontiers which are an awkward legacy of the former European imperial administrations of the region.

The coffee harvest in Ethiopia, Kenya and the Ivory Coast and the ground nuts, sorghum and rice in Nigeria were also sharply reduced.

And, to complete the picture, the Australian wheat crop in 1972–3, owing to drought there, was also more than 25 per cent below the previous five-year average; and an irregular fluctuation (known as El Niño) of the ocean currents off Peru and Ecuador ruined the usually abundant anchovy fishery there.

The net effect was that the world’s total food production in 1972, although the second greatest ever achieved, fell nearly 2 per cent below the 1971 achievement.

This was the first drop that had occurred in any year in the period of technological advance since 1945.

(Over many of those years up to 1972 world production had been increasing by about 3 per cent a year.)

And there was a scramble among the countries most directly hit to purchase food from the American reserves, a scramble in which Russia was able to buy up a quarter of the United States wheat crop of that year as well as buying elsewhere in the West, with the result that the world price of wheat doubled within a few months and the difficulties increased for the poorest countries suffering shortage.

The 1972 case had other repercussions.

Most directly, the stresses arising from the famine seem to have triggered the revolution which toppled the age-old imperial regime in Ethiopia.

Those developing countries which have oil but a climate unsuited to agriculture developed a new appreciation of the climatic and other threats to the world's food supply and of the necessity of using their dwindling resources to diversify and strengthen their economies against the time when their oil is exhausted.

And in the leading scientific, technical and administrative institutions in the advanced countries, there was some confusion about how to interpret the climatic event and revise attitudes to climate, even before the anxieties aroused by the unprecedented international economic crisis, which began to develop with the first (fourfold) oil price increase in 1973–4.

Most immediately, the hopes that had been raised by the Green Revolution of being able to meet indefinitely the food demands of the world's rising population were seen to have been unduly optimistic, particularly since the high-yielding new varieties of rice and other ‘wonder crops’ were often more sensitive than the traditional varieties to deviations from the expected climatic conditions (see fig. 108).

There was also no ready means of being sure which aspects of the 1972 world-wide climatic anomaly pattern would prove to be a short-term fluctuation and which should be recognized as part of a probably much longer-term trend.

Voices among the ‘experts’ in the scientific world ranged from alarm — exaggerated by distortions in some sections of the sensationalist press — to extreme complacency.

To some extent this is still the case nine years afterwards — an unhappy position for science, which only emphasizes the need for more firmly based knowledge and understanding of climate, so long taken for granted and ignored.

The basic lesson was, of course, learnt that despite the enormous technological advances in agriculture and average yields per acre that are in many areas over double what they were before 1945, the production of sufficient food for the present population cannot be guaranteed in every year.

And it can be foreseen that even without climatic fluctuations or change, but with the further increase of the world’s population that is inevitable, at least over the next few decades, some reduction in the feeding of grain to animals will be needed in order to reserve more for human mouths.

Moreover, even the biggest countries are not immune to the direct effects of weather-induced variations of the harvest.

Mistakes have been made, and have continued to be made, through over-confidence in the amount of security that has been won at any given time by advancing technology and larger-scale organization.

Indeed, these very factors probably induce a tendency to press the rationalization of food production to extremes, both in respect of monoculture and by taking in marginal land, and relying on technology to secure the expected yield whatever the weather does.

This policy is reminiscent of that ruling in the Soviet Union in the enthusiasm of the first two or three decades after the revolution.

The work of the Russian statisticians V.M. Obukhov and N.S. Chetverikov, which had shown in 1927–8, from data for the pre-revolutionary years, that the variations of the country’s overall grain production depended largely on the weather and on violent fluctuations in a few key provinces in the south of the Union, was effectively ignored.

And when the first five-year plan was inaugurated in 1929, the variant which had allowed for possible difficulties due to the weather was dropped.

In an official publication the editor wrote ‘The question of the yield can be resolved only by Marxist dialectics… and…is closely related to the industrialization of our country….’

'The yield will become the object of the planned action of the productive forces of the Socialist state.’

A slow change of attitude began with the results of the 1936 harvest, officially described as ‘satisfactory…if you take into account the complicated climatic conditions’ and ‘very similar to 1891’, which was the year of a great famine!

Similarly, the successes achieved over many years since the late nineteenth century with the increasing scale of the mechanized farming operations in the United States Middle West, taking in more and more of the great grasslands, gradually caused the earlier droughts to be forgotten, until disaster struck in the 1930s.

Drought had occurred about every twenty to twenty-two years since the first European penetration of the region, but this one was more serious than any drought until then known.

Whatever causes the apparent roughly twenty-year cycle in mid-western droughts, this time the climax nearly coincided with the climax of that longer-term variation, which we have noticed, that brought the mean position of the subtropical anticyclone belt a degree or two farther north and also showed in a more regular performance of the westerly winds in middle latitudes.

Successive summers between 1932 and 1937 brought repetitions of hot, dry winds from over the Rockies which parched both vegetation and soil in the Middle West.

Previously the native grasses of the region, when so parched, had produced a tough dried-up mat that protected the soil.

Now the crops were killed, and the soil that had been disturbed by the plough just blew away.

In 1933 and 1934 the wind-blown dust was readily traced to the east coast.

On 12 May 1934 the New York Times reported that the cloud of dust coming from the ‘drought-ridden states as far west as Montana, 1500 miles away, filtered the rays of the sun for five hours yesterday’.

New York was in a half-light like conditions in an eclipse of the sun, and the dust-cloud was thousands of feet high.

Thousands of farmers were ruined in those infamous years when the Middle West became a ‘Dust Bowl’ (fig. 109), many families migrated to seek a new living near the west coast, and farms inland ‘that never should have known the plough’ were abandoned.

Soil rehabilitation programmes had to be instituted by the federal government, involving returning much of the land to pasture and planting trees as windbreaks.

A partly similar mistake, or misjudgement, had been made in the Sahel in the time of more abundant rains in the 1950s and early 1960s.

International aid for the developing countries in the zone was used to drill deep wells in order to use (and ultimately use up) the great reserve of subterranean water — sometimes known as ‘fossil water’— which accumulated in different climatic regimes thousands of years ago.

This introduced a kind of short-lived prosperity to the region with greatly increased cattle herds and growth of the human population (the latter thanks also to the beginnings of satisfactory health services).

The sparse vegetation was soon over-grazed, resulting in a spread of the desert.

And this, it seems, certainly introduced meteorological self-reinforcing mechanisms, which help to maintain the drought.

Through the greater reflection of the solar radiation by bare soil, the total energy absorbed in the ground and lower atmosphere is reduced and an anticyclonic tendency with dry air subsiding from aloft is introduced.

At the same time there would be even less moisture than before stored in any vegetation and available for recycling.

In these ways the whole region became more vulnerable than before to the next down-turn of the natural rainfall, which duly came from the mid-1960s onwards.

And now we learn that, because the later years have shown the meagre recovery of rainfall in the Sahel from its 1971–3 minimum seen in fig. 99 (p. 276), and because some meteorological advice takes the complacent view that the recent extreme stress was only a random short-term variation, resettlement of the displaced population and rehabilitation of their cattle stocks are under way.

Underlying the events reviewed in this chapter there seems to be a sort of historical cycle, whereby human populations expand in periods of benign climate and occupy with increasing density lands which sooner or later fail to support the numbers by then dwelling in them.

Similar expansions of population are seen to be introduced by advances in technology.

When the bad years come, the population has always in the past been reduced or disappeared, partly through migration and partly through undernourishment, disease and death.

The situation is doubtless characteristically compounded by the inflexible attitudes developed by the sufferers.

Confronted by similar threats to the greatly inflated, and still fast growing, population of many regions in the world today, some take the view that humanity has coped with all the climatic changes of the past, including the ice age to post-glacial change, and will doubtless do so whatever may befall in the future.

This view overlooks the enormous human sufferings involved, ranging from the difficult lot of the immigrant and slave labourer to mass epidemics of disease and death.

It is surely our duty, and a wise precaution of the advanced and more fortunately situated countries too, to do whatever can be devised to minimize such troubles by preparedness based on a more realistic understanding of climate.

Having dwelt so long in this chapter on the impacts of climatic variation on food supply, as the most basic aspect, we have no space here to go into many of the other types of impact on human society mentioned at the beginning of the chapter.

The most serious among these in terms of death toll are those that occur through epidemics of disease and through episodes of flooding of extensive, heavily populated lowlands, whether by rivers or by the sea.

The latter have commonly been followed by disease epidemics, though modern advances in protective medicine can now be expected to contain the situation — so long as there is no breakdown of organization and the scale of the disaster is manageable.

We have noticed in earlier chapters of this book how in some degree climatic fluctuations seem to have been involved in the disastrous plague that swept the Roman world in Justinian’s time and in the Black Death in the Middle Ages, as well as in the great cholera epidemic that started in Bengal in 1816–17.

It was well known, too, centuries ago in Europe that the recurrent outbreaks of plague seemed to be affected by the weather, flaring up in hot, dry summers and tending to die out in long, severe winters.

It is not the intention of this book to present a theory of climate and history, nor to pretend that the linkages are simpler than they are.

But in the case of links between climatic fluctuations and major outbreaks of disease, listing a few broad categories may help understanding.

The circumstances conducive to such situations may be classed as:

1 events, such as some of the greatest droughts and floods, which cause a breakdown of sanitation and hygiene;

2 weather conditions exceptionally conducive to the breeding of certain insects and other disease organisms and vectors, of the hosts of various sickness organisms, and/or conditions which extend their geographical range;

3 weather conditions, and any weather-induced failures of the food and water supply, which lower the resistance of human populations to sickness and disease.

We may note in passing that the cases classed under 2 also apply to the diseases of animals and must include the breeding of insects, such as locusts, and parasites, blights, etc. which damage crops and other elements of the vegetation on which the human economy depends.

Cases to be classed under 3 seem often to involve wet winter conditions (in almost any latitude) and the common infections which are among the first results of them.

In a great number of the phenomena included under 2 in the above list certain combinations of warmth, though not excessive heat, and a moist environment, or enough humidity to ensure at least some locally moist micro-environments, seem to be necessary.

As examples of the weather-dependence of the abundance of insects associated with (a) human illness, (b) a devastating sickness of both animals and men, and (c) the large-scale destruction of crops, we may briefly consider the following:

1 The flea that transmits bubonic plague (Xenopsylla cheopis) undergoes a speeding up of its life-cycle as temperatures rise in the range 20–32°C (68–90°F).

Breeding is speeded up, but the death of each generation of the insect also comes sooner, the higher the temperature.

At relative humidities below 30 per cent of saturation the life of the flea is reduced to a quarter of what it is in near-saturated air.

The malaria-bearing mosquito (Anopheles) does not breed at temperatures below 16°C (61°F) or with relative humidity below 63 per cent, and like all mosquitoes thrives in moist environments with stagnant water bodies.

From time to time this sickness has been introduced by home-coming travellers (the mercenary soldiers of former centuries seem to have provided examples) to the mosquitoes of northern Europe — Oliver Cromwell died of the ague, as it was called, caught from an English mosquito in the Fenland, and cases are known to have occurred as far north as Sweden — but it has always died out within a few years in conditions that were presumably too cold or too dry for it at some critical stage of the mosquito’s life-cycle.

Rather similarly observation has shown that the average life of the yellow fever mosquito (Aëdes aegypti) is reduced from 7.0 days in near-saturated air to 4.5 days in dry air at 20°C with humidity below 48 per cent, and to about 2.0 days if the temperature is 26°C, thereby reducing the opportunity for reproduction.

Different varieties of mosquito seem to be capable of transmitting yellow fever infection in rural conditions in Africa and South America, but their climatic preferences are evidently similar.

2 The tse-tse fly (Glossina), whose bite spreads the deadly sleeping sickness, similarly requires enough, but not excessive, warmth and humidity.

There are several varieties of this fly involved in transmitting the sickness, all of which require some shade from the sun, though in differing degrees.

Most of them therefore thrive best in areas with plenty of trees or shrub vegetation, often near rivers and lakes.

Their biting and bloodsucking are done in bright conditions in the daytime, but activity ceases at temperatures below about 15.5°C (60.0°F).

Drought is very damaging to the fly populations in the larval and pupal stages.

Hence their range is confined by the desert zone to low latitudes, and some areas have been improved for habitation by clearing of the vegetation.

The particularly deadly time in West Africa in the 1860s and 1870s, which earned the region the name of ‘the White Man's Grave’, when the average expectation of life of a European going there was six months, seems to have been a period when the equatorial rains were peculiarly active over Africa and the lakes were rising strongly.

(Despite this evil health record at that time the Jesuit and Methodist missionary societies never had fewer than twelve volunteers waiting in London to go out to the mission field to replace those who died.)

A little earlier, in the days of the old Danish colony in Ghana in the 1820s and 1830s, when lake levels in equatorial Africa were lower, and other evidence suggests that climates in this part of Africa were drier, the incidence of the sickness seems to have been by no means so bad.

It also eased off after the 1870s–80s despite the fact that there had been no real advance of medical technique, but perhaps connected with the decline of the rains which became sharp in or around the 1890s.

3 The desert locust (Schistocerca gregaria) also needs moist periods, after rains in the desert or desert fringe, to multiply.

It is most active in temperatures between 25 and 35°C (77–95°F).

At temperatures below 15°C it is lethargic, and temperatures above 50°C (122°F) seem to be lethal for it.
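
The locust thresholds just listed can be expressed as a simple classifier; a hedged sketch using only the figures given in the text (the function and category names are my own):

```python
def locust_activity(temp_c):
    """Rough activity class for the desert locust (Schistocerca
    gregaria), using the thresholds quoted in the text:
    lethargic below 15 C, most active at 25-35 C, lethal above 50 C."""
    if temp_c > 50:
        return "lethal"
    if 25 <= temp_c <= 35:
        return "most active"
    if temp_c < 15:
        return "lethargic"
    return "intermediate"

print(locust_activity(30))  # most active
print(locust_activity(10))  # lethargic
```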

In all these cases, therefore, the insects multiply when they find themselves in the optimal weather conditions mentioned, and are then liable to be spread by whatever winds blow.

Locusts have from time to time turned up in many parts of Europe but very seldom in enough numbers to do significant damage.

But there are many records of crop disasters and dearth caused by locust swarms in hotter countries in earlier times.

The total area that has experienced invasions of desert locust swarms in recent times amounts to about thirty million square kilometres, and over an important part of the area the breeding of locust swarms is observed on average every second year.

In all the cases mentioned modern control methods take advantage of the known environmental and weather dependence of the species concerned.

In the case of locusts, the international anti-locust organization monitors the situation by continual mapping, including weather mapping, of the whole zone where the insects breed.

Control spraying of insecticides can be guided to the actual breeding areas and to where the locust swarms on the wing are concentrated into a narrow zone by the meeting of air currents from both hemispheres at the Intertropical Front or along lines of convergence within the Intertropical Convergence Zone.

There has been clear evidence of success of the control campaigns launched in the 1960s and after, though it is acknowledged that the overall decline of the locust menace in these decades may be in large part attributable to weather less favourable to the insects.

Similarly, specific weather conditions promoting the development of potato blight (periods of forty-eight hours or more with temperatures continuously above 10°C and humidity above 90 per cent saturation), or favouring cattle diseases in temperate countries such as liver-fluke and gastro-enteritis (and their hosts or vectors at some vital stage of the development cycle), can be defined and are used to issue warnings and initiate preventive measures.
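
The potato-blight criterion just stated (forty-eight hours or more continuously above 10°C and above 90 per cent humidity) is exactly the kind of rule a warning service can automate. A minimal sketch, assuming an hourly series of (temperature, relative humidity) pairs; the function name and parameters are mine, not from the text:

```python
def blight_warning(hourly, min_hours=48, temp_thresh=10.0, rh_thresh=90.0):
    """Return True if the hourly (temp_C, rh_percent) series contains a run
    of at least `min_hours` consecutive hours with temperature above the
    threshold and humidity above the threshold -- the blight-favouring
    spell described in the text."""
    run = 0
    for temp, rh in hourly:
        if temp > temp_thresh and rh > rh_thresh:
            run += 1
            if run >= min_hours:
                return True
        else:
            run = 0  # the spell must be continuous, so any break resets it
    return False
```

A single hour of drier or cooler air resets the count, reflecting the requirement that the conditions hold continuously.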

The only weather conditions which in the worst cases have directly caused within a single year or less disasters to humanity on a scale comparable with those occurring through starvation and disease have been, as stated earlier, vast flooding:

1 River floods, such as those of the Yangtse-kiang and Yellow River (Huangho) in Honan, China in June 1931, when more than one million drowned, and the similar disaster in the Yellow River valley in September-October 1887 when 900,000 were reported drowned.

The river floods and subsequent disease in China in 1332–3 are said to have taken seven million human lives, with long-lasting devastation in parts of the country and destruction of many settlements; and it has been suggested that this may have been the starting point of the plague which swept the world as the ‘Black Death’.

2 Coastal flooding by the sea in storm surges, propelled either by tropical cyclones and typhoons or by the cyclonic storms of middle latitudes.

On 12–13 November 1970 Bangladesh was visited by a flood of this kind, due to a cyclone in the Bay of Bengal, which submerged a large fraction of the country.

The death toll, originally estimated at 300,000, was finally put at about three-quarters of a million by the authorities.

Many similar disasters were recorded on the low-lying coastlands around the North Sea in the Middle Ages and after, particularly on the continental side, with estimated death tolls from 100,000 to 400,000.

Their non-occurrence in recent times is a tribute to the effectiveness of the sea defences that have been built over the last three hundred years.

The number of human lives lost in the worst phase of the drought in 1972 and 1973 in the Sahel-Ethiopian zone of Africa certainly came within the latter range.

The bitterest winters in Europe and North America seem never to have produced deaths on any such scale.

Despite much misery and privation to the poor and the old, and numbers of people reported frozen to death on the roads, buried by snow in the countryside, and dying in the streets of the cities, the severest impact was usually quite localized or even just on scattered individuals; the scale was probably never worse than in the famous disasters to Napoleon’s and Hitler’s armies exposed on the plains of Russia in 1812 and 1942.

There was, however, sometimes a more widespread indirect impact through food shortages and bread prices, etc., which we have noticed in the case of the French Revolution after the winter of 1788–9 and in some much earlier cases where both trouble with wolves and the eating of small children were reported.

And in our own day, of course, important economic losses can arise through severe winter weather, particularly in regions where this is somewhat exceptional; and there is much room for economic (and military) gain in careful planning to optimize capability in such weather and in the economics of provision for it.

The cost of providing powerful snow-clearing equipment for highways (snow-blowers), of heating certain stretches of road surface and railway junctions against frost, of holding fuel reserves and standby services (helicopters, etc.) for emergency food drops and rescue work — much of which may only be needed in a small minority of winters — has to be weighed against the losses incurred in such winters.

Even in countries where cold winters are common, severe losses can arise.

The city of Buffalo in New York State was brought to a standstill for many days in the winter of 1976–7 when snow 3–4 m deep blocked the streets.

And in the winters of the late 1970s in south Norway flat roofs on modern buildings collapsed under the weight of depths of snow which had not been experienced in recent times.

In the winter of 1978–9 in England, when snow lay for about forty days over much of the low-lying parts of the country with depths ranging up to 50 cm, where the recent long-term averages have been seven to fourteen days, the cost of road clearing in one county in the eastern Midlands (Nottinghamshire) was 3.3 times the average; and over the whole United Kingdom the extra costs were estimated to total £500 million (or about £10 for every man, woman and child), most of which represented the 9 per cent greater than usual fuel consumption.
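
The £500 million and £10-per-head figures quoted above can be checked against each other. A quick consistency sketch (the variable names are mine; the implied population of about 50 million matches the United Kingdom of the late 1970s):

```python
total_cost = 500e6   # pounds: estimated extra costs, UK winter 1978-9
per_head = 10.0      # pounds per person, as quoted in the text

# Dividing the total by the per-head figure recovers the implied population
implied_population = total_cost / per_head  # 50 million
```
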

Evidently the extra costs of a year like 1740 in England, which was cold in every season and reasonably judged to lead to 50–70 per cent additional fuel demand, could run to a much bigger figure.

Possibly it is because of the excessive concentration of much writing about climatic fluctuations upon exceptional winter snow and ice and the advances of glaciers, and the heart-rending accounts of individual suffering in such circumstances, which — unlike harvest failures — are nonetheless marginal to the wider community, that some historians have proclaimed that the history of Europe over the last thousand years would not have been much different if the climate had remained constant.

What view of history is that at bottom?

A political or constitutional historian's view?

Certainly one biased towards the more secure and sheltered parts of Europe in the west and south: for in the north even the political and constitutional divisions and alignments may be held to have been affected.

And certainly it is a view which ignores much that concerned the health, lives and happiness of Europe’s people.

For each development mentioned in this book, for instance, in the social history of Europe, and even in the outlying colonies in Iceland and Greenland during the late medieval decline, it is possible to suggest other, non-climatic causes.

Certainly, all the other stresses involved need to be established.

But, although we cannot, or cannot yet, establish in all necessary detail the reason (or the chain of events) whereby pestilence or social unrest broke out just when and where it did, there can be little doubt that the climatic shift that was going on in the late Middle Ages occupies a central place in the simplest explanation of the whole complex of events.

And in some details, such as the epidemics of ergotism (see pp. 199–200) and the cessation of communication with Greenland when the sea approaches were usually blocked by ice, there can be no doubt at all that the development of the climate was crucial.

In other parts of the world, too, on the arid fringe, the abruptness of some of the decreases of ancient populations may prompt reasonable questions about the possible involvement of climatic disasters or more indirect climatic pressures.

A possible example of the latter affecting, at least to some extent, an invader and his victims in different ways may be the great massacre of Baghdad's population by Mongol invaders, which ended the city's medieval greatness in AD 1258, after Iraq's agriculture had long been in decline.

About the same time the Mongol homelands in central Asia seem likely to have been thriving — and to have become overpopulated — under a moister than usual climatic regime.

Some developments were apparently caused by the mistakes made by man in what were probably deliberately attempted adaptations to changed climatic circumstances or to the prevailing impression of them.

Thus, the adoption of sheep rearing in parts of Denmark and in the Breckland of East Anglia during the colder centuries of the past millennium on lands that had been tilled in the high Middle Ages did not turn out well.

In the always rather dry, windy climates of East Anglia and Jutland the vegetation cover did not stand up to the grazing, and the land deteriorated to a sandy waste only reclaimed by planting trees to provide shelter belts, and later afforestation, in more recent times.

Even today, when our perception of, and ability to cope with, short-term disasters by mustering relief supplies and first aid from all over the world is so impressively improved, it is doubtful whether our ability to absorb long-term changes is significantly better than it ever was.

We are clearly hindered by too much rigidity of planning, for instance in allocating quotas of essential agricultural products to be met by monoculture to supply the world economy, and by the rigidity of national frontiers when the need for human migration arises.

Even the effects of the rather noteworthy tendency to clustering of two, three or four years of similarly anomalous weather could impose (almost?) unmanageable stresses.

One of the lessons from our summary in this chapter of the impacts of climate on human society must surely be that in the modern world the effects of climatic difficulties and disasters, and particularly the stringency imposed by harvest shortfalls, in any one region reverberate around the world and are liable at least to affect prices ruling in the whole world’s economy.

Some of the side-issues can be quite interesting and may acquire an important cumulative effect with time.

Thus, it was suggested by Dr E.J. Moynahan at the International Climate and History Conference held at the University of East Anglia, Norwich, in 1979 that through the famines of the Middle Ages and after there may have been a natural selection operating in favour of fat people, who would be better able to survive than their leaner fellows.

Indeed, the selective advantage of fatness may have operated on human populations from the earliest times.

It is only with the much longer expectation of life in European and other populations in recent times that the advantage has gone to the leaner physical constitutions which place less burden on the heart.

Some have objected to the term Little Ice Age to describe the colder parts of the last millennium on the ground that different writers have placed the limits of it at different times, e.g. 1300–1900, 1430–1850, 1550–1700 or about 1800, and so on.

But these differences are only a matter of which of the more sudden developments in the onset and recovery stages passed particular significant thresholds for the subject or region of interest.

It has also sometimes been objected that the greatest growth of population, improvement of general health, and advances in industrial technology and agriculture, and in the extension of civilization to the whole globe, took place ‘precisely’ during the Little Ice Age, between 1700 and 1900.

But this is to ignore the fact that all these developments took place during the long drawn-out, and erratic, recovery from the depths of the Little Ice Age regime.

The ‘parallelism of the climatic and cultural curves’ is, in fact, remarkable and calls for some consideration.

Possible future climatic changes in marginal areas may also easily come to affect the whole world's economy.

A conference on the World Food Supply in Changing Climate, held at the Sterling Forest Center in New York in 1974, estimated that the grain growing area on the Canadian prairies would be reduced by about 1 per cent by a 1°C fall in the long-term average temperatures and production would fall by a similar amount; but the effect on production would become much greater with any further cooling.

A 10 per cent decrease of the rainfall would lower production by several per cent; a 10 per cent increase of rainfall would increase the wheat production by a few per cent but have little effect on the oats and barley.
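
The conference estimates above amount to a rough first-order sensitivity model. The sketch below illustrates that reading; the rainfall coefficient (0.3 per cent of production per 1 per cent of rainfall) is an assumed illustrative value consistent with "a few per cent per 10 per cent", not a figure from the text, and the text warns that the response steepens with any further cooling, so linearity holds only near present conditions:

```python
def prairie_production_change(dT_c, dP_pct):
    """First-order sketch of the quoted sensitivities for Canadian prairie
    grain: roughly -1% production per 1 degree C of cooling, and a few per
    cent per 10% change in rainfall.  RAIN_COEFF is an assumption."""
    TEMP_COEFF = 1.0   # % production change per +1 degree C (from the text)
    RAIN_COEFF = 0.3   # % production change per +1% rainfall (assumed)
    return TEMP_COEFF * dT_c + RAIN_COEFF * dP_pct

# A 1 degree C cooling alone gives about -1 per cent;
# a 10 per cent rainfall increase alone gives about +3 per cent.
```
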

In sum, the impact of climatic fluctuations and change on history, and on human affairs today and in the times with which our future planning must be concerned, can best be seen as a destabilizing influence and catalyst of change.

At the worst, we see reactions by human society which have amounted to shifting or concentrating the burdens of suffering on to the weakest members of the national and international community.

This may be appreciated perhaps best when we consider the ugliness of the extreme case, the reported developments of cannibalism.

If we are to develop any sound scientific and reliable system of forecasting or even just advice on future climate, we must first understand the causes of climatic fluctuations and change.

Without knowledge of the processes involved in the development of climate’s variations, their normal time-scales and the range of their effects, as well as some ability to monitor the key elements in the progress of each, any forecast must be mere guesswork.

This criticism must apply even to apparently sophisticated mathematical models of climate development, unless and until their results can be demonstrated as realistic when compared with an assortment of epochs in the known past record of climate.

It is important also to gauge the present and probable future limits of predictability.

And if the interested layman, especially anyone involved in decisions affected by future climate, is to be able to judge what may be possible in the way of forecasting, or what else should be done to allow for the future behaviour of the climate, the current state of knowledge of the causes and processes of climatic change — and the prospects of advance — must be properly understood.

We have had a first look at some of these questions in chapters 3 and 4 of this book.

In this chapter it is time to summarize briefly the causes and symptoms of change and how our scientific ability to handle them has developed and is developing.

Here too we must begin to consider not only the natural causes of climatic change but also man’s impacts on the climate.

We have seen how the level of temperatures prevailing and their distribution over the globe can conveniently be treated as the most fundamental things, since they explain so much else — the development of the general wind circulation and, through it, the redistribution of heat and moisture and the development and steering of weather systems.

Even the yield of the Indian summer monsoon seems to go up and down with the global temperature level, particularly as represented by the shrinkage or expansion of the region of Arctic cold to the north.

The things that can change the prevailing temperature level may be summarized as follows:

1 Variations in the energy output of the sun (and possibly in the transparency of interplanetary or interstellar space).

2 Astronomical variations affecting the distance of the Earth from the sun at different seasons and the angle at which the sun’s beam falls on the Earth at different latitudes and seasons.

3 Variations in the transparency of the atmosphere to either the incoming solar radiation or the outgoing Earth radiation.

4 Changes coming about in the internal heat economy of the oceans and atmosphere as a result of their circulation (in three dimensions) and whatever influences bring about changes in the circulation.

5 Changes in the absorption and re-radiation of incoming energy at and near the Earth's surface through (a) variations of cloudiness, and (b) changes in the nature of the surface itself — extent of snow and ice, of different kinds of vegetation and bare soil, desert or marshland and lakes and, on long time-scales, through changes in the distribution of land and sea, of mountains, plateaux and ice-sheets.

Let us consider these items one by one.

The idea that climatic variations might have their origin in variations of the prime source of energy, the output of the sun itself, was obviously one of the first thoughts of people with a scientific concern about the subject.

Riccioli suggested in 1651 that the temperature of the Earth should fall as the number of spots on the sun increased.

The variable occurrence of dark spots on the sun had been a cause of fear and prognostications of doom to the peoples of Europe a generation or two after the Black Death, when some sunspots were so large in the 1360s–80s as to be obvious to the naked eye looking at the sun in foggy weather.

One report tells of ‘dark spots on the sun’s face as big as the nails in the church door’.

Galileo observed sunspots with his telescope in 1611, and an increasingly continuous record of their variations can be pieced together from that date.

As mentioned in chapter 4, there seems to have been a prolonged period of almost no sunspot activity between about 1645 and 1715, and this phase coincided with a time of generally low temperatures prevailing over most of the world.

It is now well known that sunspots are only one of a number of different types of solar disturbance, and when they occur the reduced energy output from the darkened areas of the sun is often more than compensated by intensified radiation from brightened areas round about, known as faculae.

These have been systematically measured only since 1874.

Thus, unfortunately, our fine long record of sunspots is a very inadequate indicator of solar output variations, though the dates of maxima of the roughly eleven-year sunspot cycles have been tentatively established (partly by using reported observations of great displays of the polar lights or ‘aurorae’) back to 649 BC.

A better index might be one which measured the difference between the areas of faculae and sunspots.

In fact, very few series of weather or climate data have shown any appearance of significant associations with the somewhat variable, but approximately eleven-year, sunspot cycles.

The more prominent occurrence of more or less cyclic recurrences of weather patterns at about 20–23 year intervals (e.g. of droughts in the United States Middle West and of some features in the long temperature record in England) may be connected with the double sunspot cycle: but this is complicated by the fact that the sunspot cycle, although averaging about 11.1 years, actually varies in length.

Extreme cases observed in the last two centuries have ranged from just under nine to fourteen years.

The activity of the cycles also varies, the shortest cycles generally producing the greatest numbers of spots.

And it appears that the sun’s output may actually be greatest at middling sunspot numbers, about eighty on the internationally recognized scale of Zurich relative sunspot numbers, as compared with over two hundred at the greatest maxima (when, presumably, the effect of the darkened areas outweighs that of the faculae).

There is some evidence that the longer-term variations of sunspot activity may be more simply associated with variations of the global temperature level.

Thus, fairly short cycles and apparently rather high sunspot activity prevailed not only during the warmest period of the present century (the average cycle length from 1915 to 1964 was 10.2 years), but also during several other warm climatic periods in the past, in late Roman times and in the Middle Ages.

And the so-called Spörer minimum of solar disturbance (with mean cycle length, between the successive weak sunspot maxima, of about twelve years) between AD 1400 and 1510, like the Maunder minimum in the seventeenth century, seems to have coincided with a notably cool period of global climate.

A better measure of the sun’s luminosity, or strength of the solar beam, seems now to be available in the form of the ratio of the darkened area (umbra) to that of the grey area (penumbra) in sunspots, this being assumed to measure the rate of convective flux of energy from deeper in the sun.

The annual values of this solar luminosity index, plotted in fig. 110, seem to parallel rather well (or slightly precede) the global temperature rise and fall within the period from 1880 to the 1970s which we have presented in fig. 91a (p. 258).

(Although this same index cannot be produced for the eighteenth century, the pronounced maximum of warmth that affected most of the known world in or about the 1730s bears a similar relationship to the sunspot record, preceding by some 20–40 years a series of extremely active sunspot cycles, as in the twentieth-century case just two hundred years later.)

The nature of the astronomical variations concerning the Earth's orbit and axial tilt, which affect the strength of the solar beam in the ways mentioned above, has been explained in chapter 4.

The agreement between the time-scales of these variations and the variations of the Earth's thermal regime and land-based ice cover, indicated by isotope examination in recent years of the longest records available in ocean-bed sediment cores, seems to put the thesis that these orbital variations control the timing of ice ages and interglacial periods beyond reasonable doubt.

One may, however, assume that the step-by-step increase of the Earth's reflectivity (albedo) as the area of snow and ice extends, under declining radiation receipt, provides a necessary amplification of the effects of the radiation variations.

There has been some interesting debate in recent years as to just where and at which season the response to these radiation variations should be most sensitive and significant to the climatic development over the whole globe.

As a result, the older notion that the most significant variations of snow and ice accumulation should be responses to the varying strength of the summer sun on the almost completely land-covered (and partly mountainous) zone between latitudes 60 and 70°N seems to need some modification.

The most immediate response in growth of snow cover to the regular seasonal change of radiation from the sun is found in the autumn between latitudes 40 and 70°N and particularly in the heart region of the Eurasian continent between longitudes 50 and 70°E.

In the melting season the response to radiation change is quickest in the southern hemisphere oceans — so also around October–November.

This may be the time of year therefore when any long-term changes in the radiation available have most effect.

It is logical to suppose that in full ice age conditions, with permanent ice covering North America north of 50°N, the responses to radiation changes there would be less sensitive, and that the summer peak would become more important there than the autumn response.

What concerns us here is that these astronomical variations provide the one entirely predictable element of the future.

To this we shall return in the next chapter.

But it is important to note that they do not explain the suddenness of some of the climatic shifts that are indicated by the geological record in the course of these 10,000–100,000 year cycles.

For these we must look to other causes, such as possible variations in the sun's output or in the amount of volcanic dust in the atmosphere, happening to reinforce (or oppose) powerfully the slow trend induced by the orbital variations.
