Category Archives: Bureau of Meteorology

Trends in sea level at Cooktown, Great Barrier Reef

Mean sea level change at Cooktown, Great Barrier Reef, Queensland

Dr Bill Johnston


Main points

  • There is no evidence that melting glaciers, increasing levels of atmospheric CO2 or expansion of the oceans due to rising temperatures have caused sea levels to increase at Cooktown. Consequently, the likelihood that sea level will rise by 26 to 29 cm by 2030 as suggested by the IPCC is far-fetched.
  • As trends measured by multiple tide gauges adjacent to the reef differ from satellite-based estimates, and time-lapse aerial photographs since the 1950s show no shoreward encroachment of tidal wetting fronts, satellite data should not be used in critical studies or to inform government policy.
  • The El Niño Southern Oscillation exerts an overarching impact on fluctuations in sea level and other climate and environmental variables.


The Great Barrier Reef Marine Park Authority (GBRMPA) claims that due to global warming, sea level is increasing and that the fastest rate of sea level rise is in the northern sector of the Reef. Further, the Intergovernmental Panel on Climate Change (IPCC) predicts sea level will rise by around 26 to 29 centimetres over the next nine years (i.e., by 2030) and by 47 to 62 centimetres by 2080.

But is it true or is it just untrustworthy science?

Rapid rates of sea level change should be evident in mean sea level (MSL) measured by tide gauges relative to the land, especially at Cooktown where Maritime Safety Queensland has operated an automatic tide gauge since January 1996 (Figure 1). Also, evidence of shoreline encroachment resulting from sea level rise should be obvious in time-series of aerial photographs available from Queensland Government archives since the 1950s and 1960s. 

Figure 1. The Cooktown storm surge tide gauge (arrowed) located on the wooden-decked wharf prior to its restoration in 2015. (Photo 44740 from the Cultural Atlas of Australia.)  

What we did

High-frequency (10-minute) tide gauge data was downloaded from the Queensland Government Open Data portal, aggregated into monthly averages and analysed using a technique that partitioned variation IN the data caused by influential covariables from underlying impact variables that impacted ON the data-stream.
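
As a rough illustration of the aggregation step, the sketch below (not the author's code; timestamps and water levels are invented) groups 10-minute readings into monthly means using only the Python standard library:

```python
# Sketch: aggregate high-frequency tide-gauge readings into monthly means.
from datetime import datetime
from collections import defaultdict
from statistics import mean

# (timestamp, water level in metres) pairs at 10-minute spacing (invented)
readings = [
    (datetime(1996, 1, 1, 0, 0), 1.52),
    (datetime(1996, 1, 1, 0, 10), 1.55),
    (datetime(1996, 2, 1, 0, 0), 1.48),
    (datetime(1996, 2, 1, 0, 10), 1.46),
]

def monthly_means(obs):
    """Group observations by (year, month) and average each group."""
    groups = defaultdict(list)
    for ts, level in obs:
        groups[(ts.year, ts.month)].append(level)
    return {ym: mean(vals) for ym, vals in sorted(groups.items())}

print(monthly_means(readings))
```

For the invented readings this prints a monthly mean of about 1.535 m for January 1996 and 1.47 m for February 1996.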

Aerial photographs taken in 1969, 1974, 1979, 1983, 1987, 1989, 1991 and 1994, together with high-definition Google Earth Pro satellite imagery, were also examined for signs of tidal encroachment at Cherry Tree Bay, across the peninsula east of Cooktown.

What we found

The Bureau of Meteorology Southern Oscillation Index (SOI) was the most influential of a range of climate and environmental variables that affected MSL. Rainfall and rainfall two months previously (RainLag2) also explained a statistically significant but small portion of MSL variation. Having accounted for those covariables, extraneous factors impacting on the data-stream caused step-changes in 1997, 2009 and 2015.
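
The lagged-covariable idea (e.g. RainLag2, rainfall two months earlier) can be sketched as below. The helper names and the rainfall and MSL numbers are invented for illustration, and a Pearson correlation is just one simple way to gauge such an association:

```python
# Sketch: build a lagged covariable and measure its association with MSL.

def lagged(series, k):
    """Shift a monthly series by k steps; leading values become None."""
    return [None] * k + series[:-k] if k else series[:]

def pearson_r(xs, ys):
    """Pearson correlation, skipping pairs with missing values."""
    pairs = [(x, y) for x, y in zip(xs, ys) if x is not None and y is not None]
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    syy = sum((y - my) ** 2 for _, y in pairs)
    return sxy / (sxx * syy) ** 0.5

rain = [120.0, 15.0, 80.0, 200.0, 5.0, 60.0]   # monthly rainfall (mm), invented
msl = [1.50, 1.48, 1.52, 1.49, 1.53, 1.55]     # monthly MSL (m), invented

rain_lag2 = lagged(rain, 2)
print(round(pearson_r(rain_lag2, msl), 3))
```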

Following Tropical Cyclone Justin in March 1997, a major dredging campaign removed 108,000 m3 of accumulated sediment from the harbour floor, which caused the wharf supporting the tide gauge to settle about 40 mm into the bed of the river by January 1998. Dredging of more sediment in 1999 (26,000 m3) did not affect the gauge. However, in March 2009 it settled a further 37 mm, probably as a result of disturbances caused by TC Ellie (30 January to 4 February 2009) and TC Hamish (4 to 11 March 2009). The harbour was dredged again following TC Ita in 2014 (60,000 m3), then in January 2015 the former wooden wharf that supported the tide gauge was strengthened and re-decked with a new composite material capable of allowing small trucks to load and unload supplies. Dredging and refurbishment caused the tide gauge to settle a further 32 mm. Step-changes underlying the data-stream show the gauge is not well-secured to the harbour floor.

The highly significant step-changes (P <0.001) totalling 109 mm (SEM 9.4 mm) accounted for all the apparent MSL trend. There is no evidence therefore that sea level is rising in the northern sector of the Reef. The IPCC prediction that sea levels will increase globally by 26 to 29 cm by 2030 is an unlikely scenario.

A Queensland Government aerial photograph taken on 11 September 1969 was re-scaled and oriented so features across the peninsula east of Cooktown including the well-defined Cherry Tree Bay and associated rocky headlands can be directly compared as an overlay on a Google Earth Pro satellite image taken on 16 September 2018.

Marked where they intersect the headlands, tidal wetting fronts are the same along the low-gradient beach. Littoral zones around the headlands that define inter-tidal habitats also directly align. The same shoals and individual shore-line rocks, the small watercourse draining to the beach: all the same. There is no evidence of tidal encroachment and therefore no evidence that sea levels have materially changed over the intervening 49 years (Figure 2).

What we conclude      

Satellite data relied upon by the IPCC do not stack up against tide gauge data, or against aerial photographs taken between 1969 and 1994 compared with high-definition Google Earth Pro satellite imagery of the same sandy beach.

It seems that while CSIRO et al. can model sea level relative to some point at the centre of the earth with mm/year precision using satellites traversing the same patch of heaving ocean every 20 days or so, they and other oceanographers and elite climate scientists lack the skills to analyse tide gauge records or interpret aerial photographs they can freely download from the internet.

Satellite data upon which speculation relating to sea level rise depends is pre-loaded with trend and should not be used for critical studies, for spreading alarm or for informing government policy. It is a ridiculous notion that sea levels will increase by almost 300 mm during the next nine years.

Figure 2. Aerial photograph of Cherry Tree Bay, east of Cooktown, taken on 11 September 1969 overlaid on a Google Earth Pro (GEP) satellite image for 16 September 2018; upper-left, GEP opacity 0%, 50%; lower-left 75%, 100%. Tidal wetting fronts, littoral zones, rocks and shoals show no encroachment or change in exposure due to rising sea levels over the intervening 49 years.

To access the full paper in downloadable PDF form, click here

To access the full table of data supporting the paper, click here

Trends in sea level at Townsville, Great Barrier Reef

Sea level at Townsville, Great Barrier Reef, Queensland

Dr. Bill Johnston[1]


Main points

  • If melting of glaciers and ice sheets in Greenland in recent decades significantly influenced mean sea level (MSL) it would be detectable in data for Cape Ferguson from 1991 and Townsville Harbour from 1959. However, there was no evidence that climate change, warming or melting ice sheets have caused sea levels to increase.
  • Tide gauges are affected by the conditions under which they operate. Data are coarse, imprecise, poorly documented and not well understood by climate scientists and oceanographers, who routinely conflate variation caused by covariates such as the El Niño Southern Oscillation, components of local water balances, and step-changes caused by site and instrument changes with change due to the climate.
  • In order to draw valid conclusions, it is imperative that scientists implement a quality assurance process that distinguishes variables that cause variation IN data (covariables) from those that impact ON the data-stream (impact variables), and adjusts for them using independent statistical methods.
  • Scores of peer-reviewed papers, published at great expense in elite scientific journals by multiple authors supported by long reference lists, are biased by lack of attention to detail and poor science. Using Cape Ferguson as a case study, replicated using data for Townsville Harbour, the approach outlined here, which is widely applicable, sets a benchmark for undertaking due diligence on data. Findings of papers that failed to assess the fitness of the data used to determine trend and change should be disregarded.


Australia’s lead management agency for the Great Barrier Reef, the Great Barrier Reef Marine Park Authority (GBRMPA) states on their website that “global average sea level rose by 0.18 centimetres per year from 1961 to 2003. The total rise from 1901 to 2010 was 19 centimetres, which is larger than the average rate during the previous 2000 years.”

Further, they say that “Since 1959, records of sea levels for Townsville, in north Queensland, show an average increase of 1.2mm per year. However, the rate of increase may be accelerating, with records of sea levels at Cape Ferguson near Townsville showing an average increase of 2.9mm every year between 1991 and 2006.” How can it be that, for the same waterbody, sea level is increasing almost 2.5 times faster just 25 km from Townsville Harbour at Cape Ferguson?

GBRMPA goes on to claim that “because much of the land adjacent to the Great Barrier Reef is low-lying, small changes in sea level will mean greater erosion and land inundation. This will cause significant changes in tidal habitats, such as mangroves, and move saltwater into low-lying freshwater habitats. This will have flow-on effects for juvenile fish that use these habitats for protection and food resources.” So how can it be that aerial photographs from the 1950s and 1960s, compared with current satellite imagery, show wetting fronts on beaches and tidal influences on rocky headlands such as Cape Cleveland are unchanged?

Paid for by taxpayers, led by government agencies including CSIRO and the Bureau of Meteorology, ably assisted by the Australian Institute of Marine Science (AIMS) and barracked-on by slick campaigns run by WWF, the Climate Council, the Australian Museum, the Great Barrier Reef Foundation et al., Australians are bombarded by confusing, over-hyped misinformation and scare campaigns related to the Great Barrier Reef.

Disaster-porn has replaced knowledge and understanding to the point that Australia’s climate history has been substantially re-written. Like a billion-dollar cart of hay put before the science-horse, in almost every sphere, policy-driven science has overtaken the scientific method.

Coupled with previous exposés that showed apparent trends in maximum temperatures at Cairns, Townsville and Rockhampton were caused by homogenisation adjustments and not the climate [LINK], this series of investigations examines monthly sea-level data measured at Cape Ferguson since September 1991 and the longer record for Townsville Harbour since January 1959. The aim is to independently test the claim that, due to anthropogenic warming, survival of the Great Barrier Reef is imperilled by compounded multiple threats including sea-level rise. Of overriding concern is that on behalf of their ‘independent’ boards and sponsors, scientists may have been led astray by liberally-scattered golden hay, and thereby lost pride in their scientific work.

What we did

Using the 30-year monthly MSL dataset for Cape Ferguson as a case study, we objectively distinguished variables that cause variation IN tide-gauge data (covariables) from those that impacted ON the data-stream (impact variables). The approach outlined in the paper provides climate scientists and oceanographers with a method for verifying that the data they use are fit for purpose, i.e., that trend reflects the oceanographic waterbody and not covariables and/or effects caused by site and instrument changes. The Cape Ferguson study was replicated using the 62-year monthly dataset for Townsville Harbour.
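
The two-stage partition described above can be sketched in miniature: regress MSL on a single covariable, then inspect the residuals for a step across a suspected site change. This is a toy version under invented numbers, not the paper's actual model:

```python
# Sketch: (1) remove variation IN the data explained by a covariable via
# simple regression, then (2) check residuals for a step ON the data-stream.

def simple_ols(xs, ys):
    """Return intercept and slope of y = a + b*x by least squares."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

soi = [-10, 5, 0, 8, -3, 12, -7, 4]                     # covariable (invented)
msl = [1.49, 1.56, 1.53, 1.58, 1.51, 1.63, 1.52, 1.59]  # observed MSL (m)

a, b = simple_ols(soi, msl)
resid = [y - (a + b * x) for x, y in zip(soi, msl)]

# Stage 2: compare residual means before and after a suspected site change
before, after = resid[:4], resid[4:]
step = sum(after) / len(after) - sum(before) / len(before)
print(round(step * 1000, 1), "mm")   # apparent step not explained by SOI
```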

Principal findings

  • At Cape Ferguson, 31.9% of variation in MSL was accounted for by (in order of importance) SOI3pt; barometric pressure (hPa); Lag1 solar exposure (MJ/m2); Lag2 rainfall (mm); and current rainfall. Accounting for a step-change in 2009, caused by a change in calculating 10-minute values from 1-second samples, and a residual 18.06-year cycle increased R2adj to 0.645 (64.5%). Having removed variation IN the data and the effect of the inhomogeneity ON the data-stream, no trend or change was attributable to any latent factor such as melting glaciers and icecaps in Greenland, coal mining or global warming.
  • The dataset for Townsville Harbour from January 1959 was noisier than Cape Ferguson, partly because data before 1984 were manually digitised from tide gauge charts and also because water levels in the harbour, which lies at the entrance to Ross Creek, are greatly influenced by hydrological processes within the catchment, including urban development, irrigation, leakage etc. Thus, while SOI3pt was less influential, components of the water-balance (rainfall, evaporation and seasonality) were more so. Significant covariables accounted for 25.3% of variation in MSL.

  • Step-changes in residuals aligned with construction of the Ross River Dam in 1971 and its enlargement in 2007. A third inhomogeneity in 1987 may have been associated with harbour developments or an undocumented change related to the gauge. Significant variables and step-changes together accounted for 49.2% of MSL variation.

  • Although MSL data were affected by random noise, no residual trends or changes were due to any other systematic factor, including warming of the climate or the ocean.

The full report can be downloaded [HERE].

[1] Former NSW Department of Natural Resources research scientist.

Trends in sea surface temperature at Townsville, Great Barrier Reef

Trends in sea surface temperature at Townsville, Great Barrier Reef, Queensland

Dr. Bill Johnston[1]


Main points

#1. Heat exchanges with the landscape bias trends in sea surface temperature (SST) measured close to shore, such as at Cape Ferguson near Townsville (Latitude -19.2774°, Longitude 147.0586°), especially during periods of low summer rainfall when maximum temperature (Tmax) is axiomatically higher. Removing seasonal cycles, which show no trend, and accounting for the significant effect of terrestrial Tmax and barometric pressure (hPa), left no trend or change attributable to any other factor. While Tmax is clustered into dry-warm and moist-cool years, there is no evidence that SST has warmed since records commenced in September 1991.

#2. At Cape Ferguson SST cools more slowly from its peak in January to July than it warms from August to December. Great Barrier Reef (GBR) ecosystems must therefore be adapted to the 8.1 °C interannual cycle and average month-to-month SST changes of up to 2 °C.

#3. Australian Institute of Marine Science (AIMS) SST data is short, patchy, poorly dispersed towards the extremities of the Reef, and not useful for estimating trend. Selecting day-of-year averages for 27 sites extending from Thursday Island, Cape York, to North Solitary Island in the south showed Reef ecosystems are adapted to average temperatures between 27 °C and greater than 29 °C, and greater than 30 °C for four to five months, and less than 20 °C in winter (July to September). Highest average SST is predicted to be 29.64 °C (±PI 1.12 °C) at Latitude -13.5° in late January; SST cools slightly towards the equator.

#4. The South Equatorial Current, which splits to form the North Queensland Current and the East Australian Current (which dissipates south into the Tasman Sea), is cooled continuously by convection, long-wave re-radiation to space by towering clouds, cool rainfall and the formation of reflective residual cirrus ice-clouds. These processes maintain SST within close limits that rarely, and only transiently, exceed 30 °C.

#5. No difference was found between temperatures measured between Port Stephens and Cape Sidmouth in November and December 1871 and data for those times derived from AIMS datasets. Further, the data do not support claims by AIMS, the Great Barrier Reef Marine Park Authority, the Australian Museum, the Great Barrier Reef Foundation and groups including WWF and the Climate Council that sea surface temperature has increased by an unremarkable 0.8 °C or that continued warming is likely to threaten survival of the Reef.

#6. Near the Equator the water cycle operates as a self-regulating heat-pump that catapults moisture high into the atmosphere to form cloud that reflects or rejects incoming solar energy during the monsoon, and thereby limits input of warm waters to the North Queensland and East Australian currents. There is no evidence that the process has broken down or is likely to break down in the future.


With operations in Townsville, Darwin and Perth, AIMS is part of arguably the largest, most expensive and elite conglomerate of research institutions in Australia. Spread across multiple universities and state and commonwealth agencies, and with strong support from the Australian Research Council, their research focuses on the effect of climate change on Australia’s Great Barrier Reef. They and partner organisations including CSIRO and the Great Barrier Reef Marine Park Authority (GBRMPA), the Great Barrier Reef Foundation, WWF and the Climate Council have consistently claimed survival of the Reef is imperilled by rising seas and anthropogenic warming. For instance, GBRMPA states unequivocally[2] that “Australia’s climate has warmed on average by 1.44 degrees Celsius since national records began in 1910, with most warming occurring since 1950 and every decade since then being warmer than the ones before”; and that “sea surface temperatures in the Australian region have warmed by around 1 degree Celsius since 1910, with the Great Barrier Reef warming by 0.8 degrees Celsius in the same period”.

Research reported here investigates that claim. The main question is:

  • Is mean SST increasing and, if so, at what rate?

What we did

Using average SST data from September 1991 for the fixed tide gauge at Cape Ferguson, which is part of the Australian Baseline Sea Level Monitoring Project run by Australia’s Bureau of Meteorology (BoM), we aimed to distinguish between variables that caused variation IN SST and latent factors that may have impacted ON the data-stream (impact variables). Multiple linear regression (MLR) was used to investigate variation IN SST, while factors that impacted ON the data-stream were investigated using step-change analysis of MLR residuals (SST with covariable effects removed).

Commencing on 28 November 1871, SST was measured between Port Stephens and Cape Sidmouth near the top of Cape York by astronomers from Melbourne and Sydney who sailed on the Governor Blackall to observe the total eclipse of the sun and also on their return voyage commencing 13 December. They used bucket samples taken near the bow of the steamer each hour between 6 am and 6 pm each day. Data were summarised and coordinates were estimated from accompanying notes using Google Earth Pro. Although published in 1877, the data has never been used before to benchmark data collected more recently by AIMS. (National Library of Australia call number NL 551.56 R963.)

As AIMS data consisted of varying numbers of daily observations, collected using a variety of dataloggers and sensors over variable time periods, averages were calculated for 27 sites spanning the Reef corresponding to the time of the 1877 voyages and mainly at the start and middle of each calendar month. Datasets were analysed as transects using polynomial regression and compared statistically and graphically.

The Cape Ferguson, 1871 and derived AIMS SST datasets used in the study are available here.

Principal findings

Data measured close to shore was contaminated by heat transfers with the landscape. Thus, data for Cape Ferguson (and some AIMS dataloggers, notably several in Torres Strait) was warmer during dry hot summers and did not truly reflect SST.  

The East Australian Current warms rapidly from November to December, and temperature measured on the journey to Cape Sidmouth in 1871 was significantly cooler than values for the return voyage to Port Stephens. However, despite spatial and temporal uncertainties and within- and between-year variation in the behaviour of the currents, confidence bands for AIMS data averaged for 01 and 15 November overlapped those for the voyage north from Port Stephens and were therefore not different. Within the Latitude limits where the datasets overlap, AIMS data for 04 and 18 December, 01 January, and 15 January and 01 February are also not different to data for the return voyage from 13 to 24 December 1871.

Furthermore, toward its northern extremity (Bramble Cay, Latitude -9.08°, for which there is no useful AIMS data), while SST increases steadily from 01 November to mid-December, from then until March SST remains between 29 °C and 30 °C. The curvilinear response evidenced an upper limit to SST, which is rarely, and only briefly, exceeded.

Average monthly SST attains a plateau in late November that persists until the cooling phase commences in March. SST in the range 27 °C to 29 °C from November to late March provides a five-month growing season for corals, which, combined with the minimum of around 20 °C in July (North Keppel Island), defines the ecotone limit of Reef ecosystems.

North Solitary Island is too cool from September to April (<24 °C) for Reef ecosystems to establish and thrive. It was estimated that at Latitude -13.5°, the warmest point along the Reef transect, maximum SST occurred in late January to early February (29.64 °C ±PI 1.12 °C), the minimum occurred in mid-August (24.26 °C ±PI 1.47 °C), and SST increased to mid-November (27.96 °C ±PI 1.1 °C), after which the cycle repeats. The interannual range was therefore about 5.4 °C. Despite trend in sea surface temperature being touted as a threatening process that may ‘catastrophically’ impact on the long-term health and survival of the Reef, of the scores of sampling sites operated by AIMS only a few are dispersed towards the extremities of the Reef, and too few are sufficiently well maintained and serviced to provide reliable long-term data.

Sea surface temperatures reported by AIMS are no warmer than they were in November and December 1871, 150 years ago. As solar radiation increases in summer, SST north of Latitude -13.5° is cooled by the monsoon and remains in the range of 29 °C to 30 °C. AIMS SST data shows no evidence that the process has broken down or is likely to break down in the future.

As SST has not changed, nor is it likely to change in the future, coral bleaching is due to something else.

The full report is available here

[1] Former NSW Department of Natural Resources research scientist.


Ocean surface temperature limit

Guest post

Richard Willoughby[1]


Main points

  • Observations for at least the last 50 years of open ocean surface temperature provide clear evidence that the annual average ocean surface temperature is limited to 30 degrees Celsius, with short-lived excursions up to 32 degrees Celsius.  This observation contradicts the predictions of climate models that show tropical ocean surface temperature perpetually rising.
  • The formation of clouds in response to the surface temperature of tropical oceans limits the surface insolation.  Once the surface temperature reaches 32C the cloud becomes sufficiently persistent that surface heat input and heat output reach a balance.  The consequence is that 32C is the maximum possible temperature of open ocean water.
  • The ocean surface temperature limiting mechanism is tightly linked to the atmospheric water content.  Once the atmospheric water content reaches an equivalent depth of 45mm, a process of cyclic deep convection sets in whereby water vapour is literally catapulted as high as 12,000 metres into the atmosphere by convective instability.  This is a well-defined physical process detailed in the paper.
  • The paper challenges the concept of the “Greenhouse Effect”, in which radiating gases such as water vapour are assumed only to cause surface heating.  The key failure with climate models is the parameterising of clouds, which are, in fact, formed from different phases of atmospheric water.  Once the atmospheric water reaches 45mm, the mode change to cyclic cloudburst results in the atmospheric water becoming a cooling agent rather than a warming agent at lower levels.


The satellite era has provided abundant data with broad coverage of the global energy balance, atmospheric conditions and surface temperature.  Observation of the ocean surface temperature data from satellites provides compelling evidence that the ocean surface temperature rarely exceeds 30 degrees Celsius.  Rare exceptions like the Persian Gulf, which reaches up to 35 degrees Celsius, provide the key to identifying that cyclic cloudburst is a surface temperature limiting process.  Cloudburst is rare in the Persian Gulf and it is the only sub-tropical ocean surface exceeding 28 degrees Celsius that has not experienced a tropical cyclone in recorded history.

What we did

In addition to the satellite data, surface level data from the tropical ocean moored buoy arrays was evaluated to observe the surface temperature limiting process in operation through one hour intervals displaying the exquisite precision of this natural atmospheric-ocean system.  A series of charts within the paper demonstrates the same process across the three tropical oceans separated by thousands of kilometres and at different times of the year all regulating ocean surface temperature to a limit of 30 degrees Celsius.
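
The buoy-array check described above can be sketched as a few summary statistics on an hourly SST series. The readings below are invented; under the paper's argument the peak should stay at or below about 32 degrees Celsius and excursions above 30 should be short-lived:

```python
# Sketch: summarise an hourly buoy SST series against the proposed 30 C limit.

def longest_run_above(series, threshold):
    """Length of the longest consecutive run of values exceeding threshold."""
    best = cur = 0
    for t in series:
        cur = cur + 1 if t > threshold else 0
        best = max(best, cur)
    return best

hourly_sst = [29.4, 29.8, 30.1, 30.3, 29.9, 29.6, 29.2, 28.8,
              29.0, 29.5, 29.9, 30.2, 29.7, 29.3, 28.9, 28.6]  # invented

peak = max(hourly_sst)
share_above_30 = sum(1 for t in hourly_sst if t > 30.0) / len(hourly_sst)
run = longest_run_above(hourly_sst, 30.0)

print(peak, round(share_above_30, 3), run)
```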

A single column atmospheric model was developed through the course of this study to quantify convective instability, which led to the understanding that the increasing persistency of high-level cloud reduces surface insolation by increasing cloud short-wave reflection.  Clear-sky conditions decline rapidly once surface temperature exceeds 28 degrees Celsius.

Detailed month-by-month analysis of the top-of-atmosphere radiation balance and the level of atmospheric water unearthed a key parameter: the role of water vapour pivots about a level of 45mm.  Below that level, water vapour is a warming agent through long-wave radiation absorption, while above it water vapour is a cooling agent, through convective instability dominating cloud formation and increased cloud short-wave radiation reflection.

Principal findings

  • Current climate models assume the ocean surface temperature can continue to increase without constraint.  Clouds in climate models are parameterised and their formation is not tightly linked to surface temperature.  These fundamental flaws mean climate models predict physically impossible outcomes.  They are not representative of Earth’s climate system.  The warming trend all climate models predict for the Nino34 region is clearly not evident in the actual recorded data over the last four decades.
  • Until climate models can replicate the physics of deep convection, tightly linked to surface temperature rather than the naive parameterisation of clouds, they will remain nothing more than extended weather models with useful predictive ability of a few days.
  • The behaviour of water in Earth’s atmosphere follows clearly defined processes that depend on the physical properties of water, in all phases, and atmospheric pressure.  The only way the open ocean surface temperature can exceed 32C is through a substantial increase in atmospheric pressure.  There is strong proxy evidence that higher ocean surface temperatures were recorded in the Cretaceous period, when atmospheric pressure was approximately 10% higher than at present.  This is consistent with the temperature limiting process detailed in the paper.
  • Observations of the attributes of water in the atmosphere contradict the heat trapping assumption of atmospheric water described by the “Greenhouse Effect”.  Water in the atmosphere is not heat trapping but rather a temperature regulating component that increases radiating power (the sum of reflected short wave radiation and emitted long wave radiation) when the surface warms and reduces radiating power when the surface cools through reduced cloud cover enabling more surface insolation. 

The full report can be downloaded here.

[1] Professional Electrical Engineer consulting in engineering risk for major projects, with an enduring interest in natural catastrophes and changing climate.

Duty of Scientists

by David Mason-Jones

At what point does it become the moral and legal duty of scientists to speak out when an issue involving the integrity of science arises?  This challenge has always existed but may be presenting itself in a new light over the issue of whether sea surface temperatures near the Great Barrier Reef are rapidly rising.

It was this challenging question that came strongly to mind when I was made aware of the two graphs shown below. The first graph is of the raw data from a sensor at the Australian Institute of Marine Science (AIMS) wharf at Cape Ferguson, not far from Townsville, Queensland. To the naked eye, and on an expanded scale, it definitely fails to show any rapid rise over the 29.5 years of observations.

Source: National Tidal Centre, Bureau of Meteorology, Australia, 07 May 2021.

Using basic statistical methods, if one digs deeper into the table of data supporting the graph, the answer comes back the same – ‘No.’

After reviewing the Cape Ferguson data, natural resources research scientist Bill Johnston found a connection between sea surface temperature measured by the tide gauge and maximum temperature measured on-land but no evidence of a trend uniquely due to warming of ocean waters in the vicinity of Cape Ferguson.  

‘If the numbers at Cape Ferguson are supposed to constitute part of the evidence for rapidly increasing sea surface temperatures in the vicinity of the GBR, they just don’t stack up,’ says Dr Johnston. ‘If a rapid rise exists, we are going to have to look much further afield for the compelling evidence than just a wharf near the shore of the mainland.

‘And if we go looking for the real evidence, we are going to have to find some obvious and sustained increases elsewhere in the Reef and its lagoon to compensate for the absence of an ocean-related trend at Cape Ferguson,’ he says.

The second graph shows Johnston’s work in ‘de-seasoning’ the annual swings in measured temperature due to the difference in the Sun’s apparent position north and south of the Equator. When the sun appears to be directly overhead, it is hotter, the days are longer and the water near the surface is warmer. When the angle of the Earth’s tilt makes the Sun appear to move north of the Equator, the reverse is the case – less solar heat, shorter days and cooler water. The annual swing in sea surface temperature is constant and not attributable in any way to changes in the climate. 
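
De-seasoning of the kind described here is commonly done by subtracting each calendar month's long-term mean from the observations, leaving anomalies. The sketch below uses invented values, not the Cape Ferguson record:

```python
# Sketch: remove the annual cycle by subtracting monthly climatology.
from collections import defaultdict

months = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12] * 2  # two invented years
sst = [29.0, 29.2, 28.5, 27.0, 25.5, 24.2, 23.8, 24.0, 25.1, 26.4, 27.8, 28.7,
       29.2, 29.4, 28.7, 27.2, 25.7, 24.4, 24.0, 24.2, 25.3, 26.6, 28.0, 28.9]

def deseason(months, values):
    """Return anomalies: each value minus its calendar month's mean."""
    by_month = defaultdict(list)
    for m, v in zip(months, values):
        by_month[m].append(v)
    clim = {m: sum(v) / len(v) for m, v in by_month.items()}
    return [v - clim[m] for m, v in zip(months, values)]

anoms = deseason(months, sst)
print(round(anoms[0], 2), round(anoms[12], 2))
```

With these invented values every month of year two sits 0.2 above year one, so the January anomalies come out close to -0.1 and +0.1 around the two-year monthly mean.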

Source: Scientist Dr. Bill Johnston

Dr Johnston’s analysis of the de-seasoned data shows two clusters of warmer temperature (1998 and 2018) but no indication that sea surface temperatures are increasing rapidly or likely to increase in the future.   

The commonly held view that sea temperatures are rising in the GBR is widespread and the message is deeply entrenched, so much so that the topic is an almost guaranteed dinner-party-wrecker if seriously disputed by any of the guests.  

But is it true?

The problem for our dinner party is that, like it or not, a clear-headed review of the observed sea surface temperatures in the GBR shows no rising trend in the data – at least not at Cape Ferguson. Most importantly, there is no such phenomenon in the data that would justify the use of the word ‘rapidly’.

So what is the duty of an ethical scientist at the dinner party in this situation – to wreck the party or just to stay quiet and avoid rocking the boat? What is the scientist’s obligation to draw people’s attention to the discrepancy between belief and reality? This is possibly one of the oldest challenges in science.

‘At Cape Ferguson, there is a huge disparity between what people – and scientists – seem to believe and what the data says,’ says Johnston. ‘True scientists should not sit back and allow this misconception to take hold in the public mind as the truth.’

While Cape Ferguson is a single instance, other sites around the GBR show the same inconvenient result. The AIMS sites at Thursday Island, off the tip of Cape York (May 1998 to February 2019); Arlington Reef, off Cairns (April 1996 to February 2020); Seaforth Island, off Proserpine (July 2005 to February 2021); and Square Rocks, off Yeppoon, show no discernible warming trend. It seems that the over-hyped talk of a rapid rise in sea surface temperature is belief-based, not fact-based.   

Someone of influence and repute needs to start blowing the whistle on this.   

The data for Cape Ferguson is available from the Australian Bureau of Meteorology (BOM) website ( ). The site is one of those in the National Tide Centre’s Australian Baseline Sea Level Monitoring Project (ABSLMP), a unit within BOM. ABSLMP monitors sea level and sea surface temperatures at 16 sites around Australia, and the data can be viewed as PDF graphs and as tables. Please have a look for yourself. I stress this request: please look for yourself and don’t just take my word for it. You just may be shocked by the disparity between the general belief and the reality revealed by the data. Please also bear in mind that the data you will find is not the work of some hare-brained contrarian sitting at a computer late at night, blogging away madly and making it all up. It is the data of the Australian Baseline Sea Level Monitoring Project.

I will deal with more of this data in greater detail in subsequent articles to be published on   

For the moment however, let me just focus on the Cape Ferguson site at the AIMS wharf which is part of the baseline monitoring program referred to above. While AIMS is not directly responsible for the collection of the data (it is done automatically) the sensor is co-located at its property and, given the marine science role of AIMS, one might expect that the organisation might have more than a passing interest in the integrity of the Cape Ferguson data.

It’s all a bit disheartening. Very few individuals appear ready to acknowledge that they may be under a moral or legal obligation to speak out about the chasm between belief and empirical data when it comes to sea surface temperatures near the Reef.   

There is a further question that opens out from this. At what point do we expand the idea of a personal moral or legal duty of an individual to the wider scope of the legal duty of a corporate entity such as a university or publicly funded research organisation? Not only is it intolerable that individual scientists may avoid their moral and legal duty, it is also intolerable when a corporate entity does the same.

Research organisations we have come to trust cannot be granted the luxury of legal immunity when they make claims that cannot be substantiated. They cannot choose to remain silent when challenged by obvious discrepancies.     

I opened this essay by posing the question of when a scientist’s personal moral or legal duty to speak out kicks in. The disheartening thing is that this point of law is not yet clearly determined, and the law seems more porous on this subject than it does in, say, the commercial world, where an individual faces severe legal sanction for making a false or misleading statement when promoting a prospectus, as does a company – a corporate legal entity – that issues a false or misleading prospectus.

Let’s hope the persistence of scientists like Dr Bill Johnston gives heart to others to speak out when they see instances like Cape Ferguson. Our systems of quality control in science need teeth, not more funding for flawed science based on foundations of wrong, or shaky, data.         

<end notes>

David Mason-Jones is a freelance journalist of many years’ experience. He has researched and written extensively on environmental issues.

Dr Bill Johnston is a former NSW Department of Natural Resources senior research scientist and former weather observer.

To view the graphs and tables of the ABSLMP data in full visit  

For more information about climate of the GBR visit 

Charleville, Queensland.

Note: If you want to link directly to the Full Paper supporting this Brief Summary, click here. The Full Paper is a PDF of 30 pages and includes more detailed discussion, tables of the raw data used, additional photos, and graphs of data from Charleville and the other ‘nearby’ weather stations which the BOM used to homogenise the Charleville data.

If you would prefer to read this Brief Summary first, then a link to the Full Paper is provided at the end of this Summary.

Are Australia’s automatic weather stations any good?

Part 4: Effect on temperature extremes

Dr. Bill Johnston[1]

Main points

  • There is no change in the climate at Charleville or the ten other sites that were used to homogenise maximum temperature at Charleville in 1996.
  • For the people of central Queensland, save for a sudden shift in the subtropical ridge such as happened in November 1924, the climate they have is the climate they will experience for the foreseeable future. Droughts will come and go every 9 to 12 years but they are not increasing in severity or frequency. Their effects are just more visible from space, on the evening news and on social media.
  • It is also no warmer in Central Queensland than it has been in the past. Conflating the use of rapid-sampling automatic weather station (AWS) probes housed in infrequently serviced 60-litre Stevenson screens under adverse conditions with “the climate” is misleading in the extreme.
  • Using faulty (correlated) data to correct faults in Australian Climate Observations Reference Network – Surface Air Temperature (ACORN-SAT) data has no scientific or statistical merit and should be abandoned.
  • There is no end to the damage that the Bureau’s infrequently maintained AWS network and biased homogenisation methods can inflict on Australia’s economy and our national prosperity. With trust in our institutions at an all-time low, an open inquiry is long overdue.


The United States Army Air Force (USAAF) used the aerodrome at Charleville in southwestern Queensland as a heavy bomber transit base during World War II. The operational precinct, including the original QANTAS hangar, the tower, the meteorological (met) office, and refuelling and other facilities, was located south of the Warrego Highway near the town racecourse (Figure 1).

Astonishingly, although Charleville is one of 112 ‘high quality’ Australian Climate Observations Reference Network – Surface Air Temperature (ACORN-SAT) sites used to monitor long-term temperature trends, the Bureau of Meteorology (BoM) did not know where the original office was, or that the instrument enclosure and cloud-base searchlight were located 160 m southwest at Latitude -26.4103°, Longitude 146.2561°.

Assertions that “The Bureau’s methods have been extensively peer-reviewed and found to be among the best in the world”, and that “the community can have confidence the Bureau is providing an accurate estimate of Australia’s true temperature trend”, are demonstrably not true. In 2011, ‘world-leading, expert’ peer reviewers did not evaluate the integrity of the metadata used to make adjustments to ACORN-SAT data or investigate the soundness of individual datasets. BoM has also misrepresented the usefulness of temperatures observed under variable conditions, using a range of instruments at different locations, for accurately measuring trend and changes in the climate.

Figure 1. Located in central Queensland (top image), the Charleville aerodrome was used by the USAAF as a heavy bomber transit base during WWII. Facilities including the control tower, radio-met office and met enclosure were located in the northwestern sector of the aerodrome adjacent to the heavy-duty E-W runway (right; section of the plan from the National Archives of Australia (Item ID, 1972125) shown in Appendix 1 of the accompanying report).

About the data for Charleville airport.

From 1943 to 1956, when Bureau staff moved from the Aeradio office to the Department of Civil Aviation Flight Services Unit (FSU) near the terminal and the site moved about 580 m southeast, ‘raw’ daily maximum temperature data were made up. Metadata seems deliberately vague about what went on, where the data came from and the process used to fill the 13-year gap. As if to re-write the climate history of central Queensland, data-reconstruction diminished the impact of the long-running inter-war drought on temperature extremes, particularly during the most devastating years to 1947.

In September 1990 the site moved 150 m nearer the runway, where temperature was measured by a Micromac automatic weather station that reported only hourly data. There is some confusion about when a 60-litre screen was installed, and whether the 230-litre screen was used by a replacement AWS before it became the primary instrument on 1 November 1996. It appears that, in lieu of manual data which may not have been digitised, and despite its shortcomings, BoM relied on the Micromac AWS for reporting daily maxima and minima (Tmax and Tmin) until 1996. The site moved about 600 m west to its present position in October 2003 (Figure 2), which was when the previous 230-litre screen was decommissioned.

Site and instrument comparisons.

The BoM seems to have gone to extraordinary lengths to obstruct instrument comparisons (thermometers vs. AWS, and 230-litre vs. 60-litre screens) and inter-site comparisons. Aerial photographs show the FSU site was operational at least until 13 January 1992, and that an instrument was located near the Micromac site possibly as early as 1963; however, there is no overlapping data for the 1990 move. There is also no data that allows different screens to be compared between 8 December 1999, when the 60-litre screen was installed, and 13 September 2003, when the site moved and the 230-litre screen was decommissioned. Furthermore, the site diagram for Charleville Aero (44021) dated 28 May 2002 (before the move) shows only one screen, while the plan for Charleville Aero Comparison (the same site, given the new ID 44221) dated 7 May 2005 shows only a single large screen, even though it had been decommissioned two years earlier!

Occurrence of temperature extremes.

Analysis shows there was no change in the real climate at Charleville. Despite this, the distribution of upper-range values (temperatures greater than the 95th day-of-year percentiles) was skewed higher by the AWS and 60-litre screen. Pre- and post-2013 percentile differences showed the AWS most likely exceeded its calibrated range, peaking 2.1°C above pre-2013 percentiles. To avoid unrealistic numbers of observations exceeding the old century benchmark of 37.8°C (100°F), values greater than the 86th percentile (35.7°C) were most likely culled or moderated, either in situ or by post-acquisition processing, before being logged.
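The percentile comparison described above can be sketched as follows. This is an illustrative reconstruction, not the analysis itself: synthetic daily Tmax values stand in for the Charleville record, and the 95th day-of-year percentiles are computed separately for the pre- and post-2013 periods and differenced.

```python
import numpy as np
import pandas as pd

# Synthetic daily maximum temperatures standing in for the Charleville
# record (values are illustrative assumptions, not BoM data)
rng = np.random.default_rng(0)
dates = pd.date_range("1997-01-01", "2020-12-31", freq="D")
seasonal = 28.0 + 7.0 * np.sin(2 * np.pi * (dates.dayofyear - 15) / 365.25)
tmax = pd.Series(seasonal + rng.normal(0, 2.0, len(dates)), index=dates)

# 95th percentile of Tmax for each day of the year, before and after 2013
pre = tmax[tmax.index.year < 2013]
post = tmax[tmax.index.year >= 2013]
p95_pre = pre.groupby(pre.index.dayofyear).quantile(0.95)
p95_post = post.groupby(post.index.dayofyear).quantile(0.95)

# Mean difference between the two percentile curves; a large, consistent
# positive shift would suggest the later instrument skews the upper range
shift = (p95_post - p95_pre).mean()
```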

Homogenisation is fatally flawed.

Headed by a case study of data for Mungindi Post Office, the ten sites used by BoM to homogenise Charleville Tmax for a change in 1996 (probably related to a new AWS replacing the Micromac before becoming the primary instrument on 1 November) were also analysed.

Using highly correlated comparator datasets to make adjustments is fatally flawed. Data for Mungindi and the nine other, mostly manual, datasets owed their correlation with Charleville to embedded parallel faults: aligned screen and site changes, site moves, watering, and the change to the AWS and small screen at Longreach. As at Charleville, unbiased analysis found no trend or change at any site that was attributable to the climate.

Using faulty (correlated) data to correct faults in ACORN-SAT data has no scientific or statistical merit and should be abandoned.
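The statistical point above can be illustrated with a toy simulation (not actual station data): two series whose year-to-year variation is completely independent, but which share a parallel step change, still correlate strongly; remove the shared fault and the apparent relationship largely disappears.

```python
import numpy as np

# Two stations whose year-to-year weather is completely independent,
# but which share a parallel non-climate fault: a +1 degree step in the
# same year (e.g. a screen change). All numbers are illustrative.
rng = np.random.default_rng(1)
years = np.arange(1960, 2020)
step = np.where(years >= 1996, 1.0, 0.0)  # shared step change, not climate

site_a = 25.0 + rng.normal(0, 0.3, years.size) + step
site_b = 27.0 + rng.normal(0, 0.3, years.size) + step

# The shared fault alone produces a strong correlation between the sites
r_with_step = np.corrcoef(site_a, site_b)[0, 1]

# Remove the parallel fault and the 'relationship' largely vanishes
r_weather_only = np.corrcoef(site_a - step, site_b - step)[0, 1]
```

High correlation between neighbours, in other words, is no guarantee that they share a climate signal rather than a common fault.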

Figure 2. The Charleville Airport met-enclosure and 230-litre Stevenson screen photographed in 1954 (top image). The structure to the right protected the cloud-base searchlight. Buildings in the distance included the powerhouse, balloon-filling shed and Aeradio office. Although the original screen was oriented to the north (not to the south, which is the standard) the contrast between the grassy WW-II site and the red bare-soil at the current site (lower image) is stark. The current 60-litre Stevenson screen houses temperature probes for the rapid-sampling automatic weather station. Transient air-pockets rising from dry bare surfaces, which during the heat of the day may reach temperatures of >60°C, result in temperature spikes that are unlikely to represent the ‘true’ temperature of the local climate. (Photographs courtesy of the Airways Museum and BoM.)    


The Bureau’s temperature datasets are used widely to inform, discuss, develop and support policy and to shape public opinion, as well as for a wide range of scientific studies related to the climate and the biosphere more generally. Scores, if not hundreds, of peer-reviewed scientific papers published in top-shelf journals over the past few decades are predicated on the notions that (i) the data relied upon truly reflect the climate; (ii) the data are fit for purpose – that conditions under which measurements were made were not biased by site changes, and that data were carefully curated; and (iii) the methods used by the Bureau for quality control, including homogenisation, are transparent and replicable across multiple sites, datasets and regions.

However, none of the underlying assumptions has been found to be true. While all sites thus far examined by BomWatch have changed and moved in ways that affect data, no datasets show trend or change that could be unambiguously attributed to the climate. Metadata is universally inexact and misleading. That pre-1956 data at Charleville were made up, and that data for Bollon and Injune were probably in-filled or estimated for several years, undermines claims that the Bureau’s data handling methods are amongst the best in the world.

  • That homogenisation uses highly correlated neighbours that embed parallel faults to adjust ACORN-SAT data, which is used by CSIRO in State of the Climate reports to justify and inform policy, and which in turn is reported on unquestioningly by activist organisations and media such as the Climate Council, WWF, The Conversation, the ABC and The Guardian, is a scientific travesty of monumental proportions. Further, it is the Bureau that bears the responsibility.
  • That users of the Bureau’s data, including professors and students at Australian universities, have failed to check the soundness of the data they use seriously undermines their credibility and the standing of the institutions they represent. The multiple failures of downstream peer review are endemic and beyond redemption.
  • Finally, the failure to undertake due diligence by those purporting to be ‘battling’ climate change, such as the Great Barrier Reef Foundation, Farmers for Climate Action, the Murray-Darling Basin Authority, banks like ANZ, corporates such as QANTAS and BHP, lobbyists for the electricity industry, committees advising ministers, the Australian Research Council, CSIRO and the academies of science, fuels widespread disdain for elitist institutions and entities whose trustworthiness is questionable.

There is no end to the damage that the Bureau’s data and data handling methods can inflict on Australia’s economy, our children’s future prospects and our national prosperity.

14 February 2021  

[1] Former NSW Department of Natural Resources senior research scientist and former weather observer.

To read the Full Paper supporting this Brief Summary, click here. The Full Paper contains tables of data used, more detailed discussion, more photos and graphs of weather records.

Rutherglen, Victoria

Are Australia’s automatic weather stations any good?

Part 3: Non-climate biases

Dr. Bill Johnston[1]

(Reading time: 6 minutes)

Historic climate data collected to monitor and describe the weather are notoriously problematic to use in bolt-on experiments that aim to determine trend and change in the climate. The instruments used and the conditions under which observations were made are rarely known, and it is the earliest data that form the baseline for establishing long-term trend. Pre-1940 data for Rutherglen Research (Bureau ID 082039), for example, were observed at a site in a wheat paddock 1 km north of the current site; thermometers were most likely exposed in a small metal-roofed Stevenson screen, not a standard 230-litre one; and according to Simon Torok (1996)[2], observations from 1912 were combined with those from the Post Office (about 12 km away) to form a composite series. Metadata does not mention where the original site was or for what period data were combined. Cross-referenced with Torok’s notation for May 1939 that the screen “opens to the west”, a step-down in average maximum temperature (Tmax) of -0.71°C in 1941 indicated the screen was replaced in 1940 and that the previous screen was biased high. For data up to 1966 the step-change resulted in a spurious negative trend that was unrelated to the climate.


  • Marketing campaigns by taxpayer-funded institutions, universities, the Australian Academy of Science, the ABC, The Guardian, the Nine network (i.e., the former ‘Fairfax’ press) and The Conversation, together with ardent climate catastrophists, have groomed the public relentlessly for nearly three decades to believe that temperature is bound to increase in line with CO2. The expectation of ‘trend’ has distracted us into questioning whether temperature is increasing at 0.5°C/century or 1.5°C/century, rather than investigating whether a real trend exists at all (i.e., whether the trend in the data is spuriously due to something else).
  • Data-fiddling hardly describes the fraught process of creating the homogenised ACORN-SAT (Australian Climate Observations Reference Network – Surface Air Temperature) datasets used by CSIRO in its State of the Climate reports and by the Bureau of Meteorology to create graphs and pictures for recent bushfire Royal Commissions, inquiries and inquests. In addition, the area-weighted daily temperatures that form the gridded AWAP (Australian Water Availability Project) dataset are not corrected for individual site biases. With its hard-wired AWS network, the Bureau and its colleagues in CSIRO, the University of NSW and others like James Cook, Monash, Melbourne and Wollongong universities are able to create and market whatever climate they want the community to believe.
  • Winding back the blinds to let some light in takes an enormous amount of effort. It is impossible, for example, to have anything published in a scientific journal that rocks their boat. It is unarguable that the Bureau and friends have re-written Australia’s climate history – the droughts, floods, bushfires and extremes that have fashioned the country – in favour of a narrative which they fervently protect. For instance, although the community is led to believe climate change threatens the survival of the Great Barrier Reef, no such change or warming is detectable in climate datasets for sites adjacent to the Reef. The story was made up, so handpicked ‘expert’ committees running the agenda and WWF-related campaigners like the Climate Council and Farmers for Climate Action mislead everyone. For a scientist it is disheartening that, while agricultural industries are blamed and pushed to the brink, billions of taxpayer dollars are being spent to “save the Reef” from a problem that doesn’t exist. 

Investigating automatic weather stations

Starting with raw daily maximum temperature (Tmax) data from Climate Data Online, statistical code that derives annual averages and other attributes, and as much metadata as can be found, we at BomWatch recently set out to investigate biases and other problems in Australia’s automatic weather station (AWS) network. Manual stations that report their data every morning are always a day late, so since stations were networked and integrated across the nation, the temperature and rainfall maps used by news networks, and the temperatures reported through the day, are produced using AWS data alone. It is therefore very important to have confidence that the technology is sound and that the Bureau’s data handling methods are beyond reproach.
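The first step mentioned above, deriving annual averages from raw daily Tmax, can be sketched roughly as follows. The data, and the completeness threshold of 340 valid days per year, are illustrative assumptions, not the project’s actual code.

```python
import numpy as np
import pandas as pd

# Synthetic raw daily Tmax with missing observations, as in real records
rng = np.random.default_rng(7)
dates = pd.date_range("1990-01-01", "2020-12-31", freq="D")
tmax = pd.Series(25.0 + rng.normal(0, 3.0, len(dates)), index=dates)
tmax.iloc[rng.choice(len(tmax), 200, replace=False)] = np.nan  # gaps

# Annual averages, keeping only years with at least 340 valid days
by_year = tmax.groupby(tmax.index.year)
annual_mean = by_year.mean()[by_year.count() >= 340]
```

A completeness check of this kind matters because a year missing most of its summer (or winter) observations would otherwise bias the annual average.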

Our AWS project was introduced using Amberley RAAF as the case study and six widely dispersed airport sites as blind-study replicates. Aerial photographs, maps, plans and other metadata showed unequivocally that, as at Cairns, Townsville and Rockhampton, homogenisation attributed the effect of site changes to the climate.

Metadata for Cape Leeuwin was also sparse and incomplete. The original site near the lighthouse was pinpointed using maps and photographs, which also confirmed that a large standard 230-litre screen was in use before the site moved north in October 1978. The extreme environment affected the data: wind-shaking damaged thermometers or caused them to reset, while fog, mist, rain and sleet driven into the screens by strong winds affected observations, which are assumed to be dry-bulb. Moving the AWS beside the drop-off to the ocean in April 1999 reduced rainfall-catch by 26% (Figure 1). Although allegedly one of the most important sites in the AWS network, data for Cape Leeuwin, including AWS data, were not useful for determining long-term trend or change.

Figure 1. The AWS at Cape Leeuwin was relocated beside the drop-off to the ocean in 1999, where it is over-exposed to strong to gale force winds.

The Rutherglen Institute in northeastern Victoria started life in 1895 as a viticulture college (Figure 2). Early research was instrumental in identifying insect (Phylloxera) resistant vines and rootstock.

Figure 2. The Rutherglen viticulture college in 1908. The building housed schoolrooms, amenities and single-bed accommodation for wards of the State and local children (State Library of Victoria photograph.)

Due to Government antipathy the college almost failed. Proposals included turning it into a consumptive sanatorium (for TB suffers), dispensing with staff and selling it off or making it a boarding school for educating neglected children and those of local farmers in the science and practice of agriculture.

Figure 3. The Rutherglen automatic weather station as it stands today, about 1 km from its original position. A move of this distance can be expected to create a significant step change in the data. The problem is that the Australian Bureau of Meteorology is unaware of numerous site changes that have taken place at many of its stations in Australia. The Rutherglen weather station is a case in point.

Analysis of Rutherglen Tmax since 1912 found no trend or change attributable to the climate. Despite repeated assurances that ACORN-SAT sites have been thoroughly researched, metadata was misleading and incomplete. For instance, although the site moved twice during the period, no metadata was available from December 1949 until May 1975 (26 years). A 1941 aerial photograph pinpointed the original site, while objective statistical analysis provided unequivocal evidence, in both the time and rainfall domains, that trend was due to site changes, not the climate. The analysis was supported by close examination of ten AWS-only datasets from sites used to homogenise Rutherglen ACORN-SAT. Those sites also showed no trend or change attributable to the climate.

Picking and choosing changepoints and using up to 40 highly correlated comparator datasets to make adjustments has no scientific or statistical merit. The process is demonstrably biased and should be abandoned. Peer review in 2012 ignored obvious problems, while the Technical Advisory Forum that was supposed to provide oversight of the homogenisation process over the three years to 2017 failed to do its work.

If there is no evidence of increasing trends in individual datasets, then the Bureau’s and CSIRO’s claims that temperature has increased since the 1950s are baseless. Instruments and observers change; data are imprecise; site control is ad hoc: a garden over there, a new shed or concrete path; mowing the grass or not, or using herbicide, introduces artificial changes in datasets. Due to infrequent maintenance, screens also deteriorate: paint flakes off, dust and grime accumulate, and wasps build nests on sensor probes; and ignoring changes in nearby land-use introduces non-climatic bias which on warm days may be as high as 1 to 2°C.

Click here for a full paper showing air photos, graphs, tables of data and relevant information

For historic datasets used as baselines it is even worse, particularly where the Bureau does not know where the original site was or what it was like, and for ACORN-SAT sites such as Cairns, Townsville, Rockhampton, Amberley, Cape Leeuwin and Rutherglen, where it failed to undertake the site research it said it did. By failing in its due diligence, the Bureau and others may have misled the Minister to whom they ultimately report, and misled the public to whom they owe a duty of care.

Research on long-term data and automatic weather stations is continuing.

[1] Former NSW Department of Natural Resources research scientist and weather observer.

[2] Torok SJ (1996). Appendix A1, in: “The development of a high quality historical temperature data base for Australia”. PhD Thesis, School of Earth Sciences, Faculty of Science, The University of Melbourne.

Cape Leeuwin, Western Australia

Part 2: Issues affecting performance

 Dr. Bill Johnston[1]


AWS data for Cape Leeuwin don’t reflect the weather and are unlikely to be useful for detecting trends and changes in the climate.


Holding the longest single-site temperature and rainfall record in Western Australia, the automatic weather station (AWS) at Cape Leeuwin is regarded as one of the most important in the Bureau of Meteorology’s AWS network (Figure 1). However, situated at the southwestern extremity of the continent, it is also one of the most exposed of all the Bureau’s weather stations.  

Figure 1. Signage at Cape Leeuwin explaining the importance of the automatic weather station.

Cape Leeuwin’s rugged rocky headland marks the junction of the Indian and Southern oceans. Cold, sub-polar low-pressure systems sweep in during winter and the sub-tropical high-pressure belt squeezes them south in summer. As there are no obstacles south or west to break-up or divert the low-pressure troughs, associated frontal systems and often gale-force winds; Cape Leeuwin experiences some of the most intense weather systems affecting mainland Australia. (BJ photo.)

While the general circulation moves from west to east at a rate of about 650 km/day, the mid-latitude belt of high pressure shifts from around latitude 40°S (the latitude of Flinders Island in Bass Strait) to around latitude 30°S (about the latitude of the NSW-Queensland border) in April, and back again around early October. This N-S seesaw, which follows the Sun, is the reason Australian climates are strongly seasonal: moist in the south during winter, when low-pressure systems prevail, and moist in the north during summer, due to intrusions of the tropical monsoon.

Weather-watchers know that when the sub-tropical ridge settles a few degrees further south in winter, say south of Dubbo (latitude 32°S), approaching low-pressure systems are also forced south and southern Australia is drier than normal, and vice versa. Similarly, the position of the high-pressure ridge in summer determines the southern reach of the monsoon.

Observing the weather in situations that are open and exposed, subject to wild fluctuations, extremely strong winds and heavy seas, is a challenging occupation, especially when manual observations were made at regular intervals both day and night. Wind was so strong and unrelenting that it shook the Stevenson screen, reset thermometers and caused breaks in their mercury and alcohol columns (Figure 2). It also drove mist, fog, rain and sea-spray into the screen, wetting instruments and interfering with measurements, which are assumed to be dry-bulb.

Figure 2. A Fahrenheit meteorological minimum temperature thermometer showing breaks in the alcohol column (adjacent to the 68°, 73° and 75°F indices). Drawn down by the meniscus (sitting at 106.8°F), the metal bar marks the lowest reading of the day (97.3°F in this case). Not shown is a large vapour break in the lower stem of the thermometer, which after four decades of storage rendered it unserviceable. The interval scale in the photograph is magnified by about 50% compared to the actual scale, which was about 2.5 mm (1/10th of an inch) per interval. (BJ photo.)

Minimum temperature observations were noted as problematic in 1916, but the problem was not specified. The original Stevenson screen was an ‘old observatory pattern’, which by 1926 had been lowered to within 2 feet of the ground, apparently to reduce wind-shaking. A decade later, in June 1936, a new screen was installed; but was it in the same place, or did the site move? Then there was no news about the state of the screen or the condition of the instruments for 28 years, until July 1964, when it was reported that the door, which faced south into the weather, was damaged. No news either for the 14 years before the site moved in October 1978. Perhaps by then the screen was as neglected as the one in Figure 3.

Despite repeated assurances that Australian Climate Observations Reference Network – Surface Air Temperature (ACORN-SAT) sites like Cape Leeuwin were well researched and peer reviewed, metadata is poor to non-existent. Peer-reviewers did not investigate whether documentation was adequate, or whether sites were well maintained and data were fit for purpose. It is not that long ago that the site moved in October 1978, yet site-summary metadata mis-specified its previous location and did not report the move. Apparently the Bureau had no records, photographs or files. However, a 1950s plan in a file at the National Archives of Australia unequivocally showed the ‘wind gauge’ northwest of the lighthouse and the Stevenson screen northeast (Figure 4), where it was also visible in photographs taken in 1975.   

Figure 3. A neglected 230-litre screen no longer in use at an agricultural research centre in NSW. (BJ photo.)

The importance of metadata cannot be overstated. Scientists say that if sites and instruments remained the same, changes in the data are attributable to the climate. Ignoring (or not documenting) changes that happened, or adjusting data for changes that had no impact (or using unspecified, unreplicable statistical tests to make adjustments), is one way to create pre-determined trends. Subjective homogenisation methods based on faulty metadata are unscientific; they result in trends that have nothing to do with the climate and should be abandoned.

Figure 4. Page 104 from file ‘Cape Lewin (Leeuwin) – transport lighthouse’ National Archives of Australia Barcode 1929975; showing location of the wind vane (anemometer) and Stevenson screen, which was about 38 m east of the path.

Using Cape Leeuwin as a case study, the physically based reference frame outlined and demonstrated in Part 1 of the series is applied to the problem of exploratory data analysis (EDA): the rapid assessment of AWS datasets used to construct daily temperature maps and to homogenise the ACORN-SAT data used to calculate Australia’s warming. The approach was developed from previous original research published on /category/bom/. In line with the theme of “Are Australia’s automatic weather stations any good?”, this paper outlines an EDA methodology that climate scientists can apply to investigate the fitness of individual datasets for determining trend and changes in the climate.   

The approach uses Pearson’s linear correlation coefficient (r) to measure the strength, significance and sign (+ or -) of the association between mean maximum temperature (Tmax) and its expected deterministic covariable, rainfall; and non-parametric LOWESS regression (LOcally WEighted Scatterplot Smoothing) to visualise the shape of the association and the spread of values about the line, and to identify outliers relative to 95% bootstrapped confidence intervals for the fit.
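A minimal sketch of the correlation side of this approach is below. It is illustrative only: the data are synthetic, and for brevity the bootstrap interval is placed on Pearson’s r itself rather than on the LOWESS curve.

```python
import numpy as np
from scipy.stats import pearsonr

# Synthetic annual data with the negative Tmax ~ rainfall association the
# First Law argument predicts (cooler in wetter years); values are made up
rng = np.random.default_rng(3)
rain = rng.gamma(shape=4.0, scale=250.0, size=40)            # annual rainfall, mm
tmax = 22.0 - 0.002 * rain + rng.normal(0, 0.4, rain.size)   # mean annual Tmax

r, p = pearsonr(rain, tmax)  # strength, sign and significance

# Percentile bootstrap: resample (rain, Tmax) pairs and recompute r
boot = np.empty(2000)
for i in range(boot.size):
    idx = rng.integers(0, rain.size, rain.size)
    boot[i] = pearsonr(rain[idx], tmax[idx])[0]
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
```

If the bootstrap interval excludes zero and the sign is negative, the expected physical association is present; a positive or non-significant association would flag the dataset for closer inspection.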

Leaving aside problems evident in data from 1907, which were also investigated, the AWS, which commenced operating in February 1993, was moved in April 1999 from a relatively sheltered position between two sheds to an up-draft zone closer to the 13 m drop-off above the ocean. Corresponding with the move, rainfall catch abruptly declined by 26%. Mean Tmax at the new site was biased high relative to pre-1993 manual data and, although the relationship was not significant, Tmax implausibly increased with rainfall. Seven of 27 datapoints were identified as outliers, including 2011 (21.20°C) and 2012 (21.13°C), which were an unlikely 1.61°C and 1.53°C higher than pre-AWS Tmax (19.6°C).

It was concluded that because the AWS data do not reflect the weather, they are unlikely to be useful for detecting trends and changes in the climate.

Click here to download the full paper including photographs and tables of data

(In line with our open-data policy, datasets used in the study are provided in Appendix 2 (pp. 11-16).)

We also welcome comments on the methodology.

[1] Former NSW Department of Natural Resources research scientist and weather observer.

Amberley, QLD. Case study

Part 1: A robust fitness test for maximum temperature data

Dr. Bill Johnston[1]


The first post in this series about the reliability of automatic weather stations (AWS) provides physical justification for using the First Law of Thermodynamics as the reference frame against which weather station data may be assessed. While advection warms the air, which increases maximum temperature (Tmax), latent heat removed by the evaporation of rainfall is locally cooling. The First Law theorem therefore predicts a dynamic balance between average annual Tmax and annual rainfall, which can be evaluated statistically.

Adopting Amberley RAAF as the case study, analysis of Tmax ~ rainfall residuals identified non-rainfall changes that affected trend, while statistical significance and the variation explained objectively measured overall conformance with the First Law theorem. Analysis of Tmax for six other sites showed the methodology is robust, replicable and widely applicable, and therefore useful for benchmarking the operational performance of Australia’s automatic weather station network.
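The residual analysis can be sketched as follows. The numbers are synthetic, not the Amberley record, and the deliberate +0.8 °C step stands in for an undocumented non-rainfall change such as a site move or screen swap:

```python
# Sketch of the Tmax ~ rainfall residual analysis described above,
# on synthetic data. A +0.8 degC step injected at year 20 stands in
# for a non-rainfall change (site move, screen change, new instrument).
import numpy as np

rng = np.random.default_rng(1)
n = 40
rain = rng.normal(800.0, 160.0, size=n)              # mm
tmax = 21.0 - 0.003 * rain + rng.normal(0, 0.25, n)  # degC
tmax[20:] += 0.8  # synthetic inhomogeneity, unrelated to rainfall

# Fit the First Law expectation (Tmax declines as rainfall increases)
slope, intercept = np.polyfit(rain, tmax, 1)
resid = tmax - (intercept + slope * rain)

# A step in the rainfall-adjusted residuals flags a non-climate effect
step = resid[20:].mean() - resid[:20].mean()
print(f"slope = {slope:.4f} degC/mm; residual step = {step:.2f} degC")
```

Because rainfall explains the climatic part of Tmax variation, what is left over in the residuals is where site and instrument effects show up; in practice the changepoint would be located by a statistical test rather than assumed.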

Data are coarse and all sites were poorly documented. Incomplete and misleading metadata, inadequate site control and biased homogenisation methods undermine claims that Australia’s climate has changed or warmed. 


With their current focus on climate warming, climate scientists need robust quality-assurance methods for verifying that the maximum temperature data (Tmax) used for testing hypotheses about the historic and contemporary climate, and its possible future changes, are fit for purpose. Although thousands of datasets are potentially accessible, not all are equally useful. The question examined in this series of posts about automatic weather stations is: are their data fit for the purpose of determining change or long-term trends, or are they only useful for monitoring day-to-day weather?

The problem

Since the 1990s Australia’s Bureau of Meteorology has all but abandoned its monitoring network. Met offices such as those at Ceduna, Oodnadatta, Cobar and Mildura sit empty; some, like Hobart and Sydney Observatory, have been sold or re-purposed, while others, like Coffs Harbour, Albany, Canberra, Tindal and Mount Isa, have been demolished and removed. Rapid-sampling platinum resistance probes have replaced thermometers, automatic weather stations have replaced observers, and the 230-litre Stevenson screens used to house instruments have been replaced by 60-litre ones, with that program accelerating over recent years. Due to their restricted capacity to buffer against transient eddies, the 60-litre screens are likely to be biased high on warm days (Figure 1).

Sensitive instruments housed in 60-litre screens, hard-wired to the Bureau’s computer in Melbourne, are a potent source of on-going bias. Using neighbouring sites that changed to AWS more or less in unison to cross-validate and adjust each other’s data in real time reinforces rather than removes potential errors and embedded anomalies. It is unlikely, for example, that data for any sites are truly independent, and it is likely that the combined behaviour of infrequently maintained, unmanned sites operating with small screens enhances warming. There are other problems too, including the propensity for 1-second observations to report random spikes – flurries of warmer air rising from pavements or created by vehicles or people going past. Of the 86,400 one-second values recorded by an AWS every 24 hours, only two carry forward as data: the highest is the maximum for the day and the lowest is the minimum.
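The sampling point is easy to demonstrate. The figures below are invented for illustration: a smooth daily temperature cycle with small sensor noise, plus one brief 1-second spike:

```python
# Illustration (invented numbers) of the point above: of the 86,400
# one-second readings in a day, only the single highest survives as
# the recorded maximum, so one brief spike becomes the day's Tmax.
import numpy as np

rng = np.random.default_rng(0)
seconds = 86_400

# Smooth daily cycle peaking at 30 degC, plus small sensor noise
temp = 24.0 + 6.0 * np.sin(np.linspace(0.0, np.pi, seconds))
obs = temp + rng.normal(0, 0.05, size=seconds)

spiked = obs.copy()
spiked[43_200] += 1.5  # a single 1-second eddy from a passing vehicle

print("recorded Tmax, clean day :", round(obs.max(), 2))
print("recorded Tmax, with spike:", round(spiked.max(), 2))

# A 1-minute average spanning the spike would largely remove it
minute = spiked[43_170:43_230].mean()
print("1-minute average over the same interval:", round(minute, 2))
```

The 1-second maximum carries the full spike into the climate record, while any form of averaging, like the response time of a mercury thermometer, damps it to a few hundredths of a degree.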

Error control using cross validation also requires that neighbouring site data be of acceptable quality and unaffected by parallel non-climate effects, which is impossible to gauge from an office hundreds or thousands of kilometres from the site.

Figure 1. Internal view of the large Stevenson screen at Sydney Observatory in 1947 (top, black-and-white image) and the small screen at Wagga Wagga airport in June 2016 (bottom, colour image). While thermometers in the 230-litre screen are exposed on the same plane, the electronic probe at Wagga Wagga is placed behind the frame, about 2 cm closer to the rear of the screen, which faces north to the sun. According to metadata, the 60-litre screen at Wagga Wagga was installed on 10 January 2001 and, although thermometers were removed on 28 April 2016, intercomparative data are unavailable.

The burning question is whether data reported by AWS reflect the change in screen size (60-litres vs. 230-litres), behaviour of the electronic instrument (vs. observed thermometers), conversion of electrical resistance to temperature (calibration error), data processing (detection and filtering of erroneous values; averaging; cross-validation); reduced site control and maintenance (grass mowing, cleaning equipment etc.); the climate, or something else.

The First Law Theorem

The First Law of Thermodynamics, which is a universal theorem, is used as the reference frame for assessing the fitness of maximum temperature (Tmax) data. Data are expected to behave rationally, not in some random, chaotic way, and the methodology outlined in the following posts has been devised to test and evaluate conformance with the theorem. Using Amberley RAAF as a case study, the first paper in this series outlines the physical basis underpinning the approach. Six widely dispersed airport sites were also analysed to replicate the methods and verify that they are robust and widely applicable. Subsequent posts will evaluate AWS datasets from across Australia and those for sites in the vicinity of capital cities.

As droughts are always hot and dry, and rainy years mild and moist, a lack of negative correlation between Tmax and rainfall, low explanatory power (variation explained), confounded signals and weird values – high Tmax in rainy years and vice versa – are causes for concern.

The First Law theorem provides a rational basis for determining if Australia’s automatic weather stations are any good, or if the data they produce consists of incoherent random numbers that bear little relationship to the ‘real’ climate.

Click here to download the full paper including photographs and tables of data

[1] Former Senior Research Scientist. Email:

On Peer Review

by David Mason-Jones

A limited tool at best – never a proof  

Many people may have a mental image of peer review as a process where white-coated scientists re-run the experiment in laboratories or repeat the research out in the real world.

After this they have long meetings in board rooms to discuss the paper and, finally, they confirm that the research and results described in the paper are rock solid and beyond doubt. They then approve the paper for publication in a journal.

The belief in peer review as proof of scientific fact becomes conflated with concepts like truth, beyond doubt, trustworthy, reliable, beyond dispute, the gold standard of science and so on. Sadly, for people who hold these beliefs, peer review is nothing of the sort. 

So, what is peer review?

Peer review is a step in the publishing process where an editor/publisher attempts to weed out papers that are spurious or obviously in error. It is a process where papers are vetted to ensure they present a cogent argument based on recognized scientific analysis. 

At a superficial level peer review can be as simple as a spelling or syntax check, that scientific terms are correctly used and that the paper reads okay. This might sound lightweight for a journal at the forefront of scientific knowledge but presentation is important. 

This part of the process can also include checks on the visual aids used in the paper such as photographs, diagrams, graphs and tables. Are these clear and understandable? Do they support the points made in the body of the paper? Are they relevant or do they just look good?

At a deeper level the peer review process addresses issues like: ‘Is the hypothesis sound and relevant and is it supported by the Introduction?’ Is the logic of the paper sound? Are the methods sound? Do they relate to the hypothesis? Is the argument well constructed, brief and to the point? Is there anything missing in the chain of logic? Does the paper present new information or does it support existing knowledge?

Peer reviewers are appointed by the editor, and most scientific journals are quite specialized as to the field of science they cover, so reviewers also need to confirm that the paper falls within the scope of the journal.

Given that there are many different journals and editors, it becomes likely across the spectrum that there can be different standards and requirements as to what the editors and publishers require. It is fair to say the term ‘peer review’ has taken on a life of its own, free from real meaning and certainly not a re-running of an experiment.

Not a proof

Peer review is not intended as proof and yet peer review is trotted out all the time as the gold standard of scientific proof.

The ‘proof’ or disproof of the findings of the paper comes when it is put under the blowtorch of criticism of the wider scientific community and, most importantly, when it is put under the blowtorch of Test by Replication. 

Peer reviewers are not claiming they have done the experiment again or made the same observations or done the same maths and obtained the same results. They are not claiming to have replicated the research. This is a really important point.

Behind closed doors

There can be an area of grey when it comes to the transparency of the peer review process because the peer reviewers can choose to do their work anonymously. This conflicts with a characteristic of the scientific method which requires that science be open.

It is true that this anonymity aspect is not always the case, and reviewers can choose to be anonymous or open. Despite this discretion, it is common for the peer review process to be done behind closed doors. Where this happens, the peer reviewers are simply put in a position where their background, track record, expertise and even their strongly held opinions cannot openly be taken into account. The peer reviewer may simply be prejudiced against the thesis of the proponent and, by exercising this prejudice discreetly and anonymously, can stymie the publication of a paper that is otherwise a valuable contribution.

Usually a paper would have two or three peer reviewers and the writer – the proponent – has the right to contest the comments coming back to the editor from the reviewers. But where the reviewers remain anonymous, it just makes an open scientific discussion harder. 

Conflating peer review with proof – an example

An example of the conflation of peer review with proof came with an ABC Television Media Watch segment some years ago. Not only were the two concepts conflated, but the high profile Reef scientist, Ove Hoegh-Guldberg, Ph.D., Professor of Marine Science, University of Queensland, gave a ludicrous analogy of the credibility of peer review.

      Hoegh-Guldberg was quoted as asserting that the idea that reef science can’t be trusted because it’s only peer reviewed, was, ‘… just ridiculous.’ 

Hoegh-Guldberg was then cited as saying that peer review was, ‘… the same process we use when we’re studying aeronautics, which produces planes that we travel on …’

On first hearing, this sounds like a compelling analogy. But think it through. There is no way an aeronautics engineer would accept something – say, a new alloy for inclusion in an aircraft design – on the basis of peer review alone. The engineers would go for the higher standard of replicability. The alloy would be tested to destruction many times, in a process of replication over and again, to see if it stood up to the claims made about it.

The engineers would depend on three things; replication, replication, replication.

Replication, not peer review, would be the key to the proof.

Weeding out fraud, spoofing, dumb errors

In its quality control aspect, peer review might be able to weed out totally spurious papers, rants by complete cranks, fraudulent works, mathematical incompetents, mischievous papers by tricksters and even April Fools’ Day jokes. It is not, however, even guaranteed to perform that role very well.   

Vulnerable to spoofing

A recent hoax involving peer review was first reported in ‘The Times’ newspaper in the UK and subsequently reprinted in ‘The Australian’ newspaper on 9 January 2019. The journalist, Rhys Blakely, is The Times science correspondent, and the article in The Australian is headlined, ’Academic faces sack over hoax that fooled academic journals.’ The article outlined the plight of Peter Boghossian, an Associate Professor of Philosophy at Portland State University in Oregon, USA, who is facing the university’s censure over his role in the well-intentioned hoax.

Led by Boghossian, several academic wags spoofed the peer review process and wrote 20 spurious papers, all of which had the trappings of serious scientific papers. They submitted them to academic journals for publication. The papers were meaningless rubbish, but that is not the way the peer reviewers saw it. Of the 20 bogus papers put out to peer review, seven were accepted for publication – 35% of the total!

Dr. Boghossian and his colleagues were shocked by the ease with which the papers were accepted. Although it was a hoax, it confirmed that peer review is not the robust gatekeeper of truth that many people believe.

Another example 

This involves papers written by research student, Oona Lonnstedt, who conducted research at James Cook University and gained a Ph.D. at the Australian Research Council Centre of Excellence for Coral Reef Studies. Lonnstedt then went back to Sweden where she did the further research which has tripped her up.

The paper that set suspicions running was produced at Uppsala, Sweden. It was about the effect on small fish of ingesting ocean microplastics and how this affected their ability to grow, hunt and survive. The paper was published in the high-profile journal ‘Science’ in 2016 and was challenged by two concerned scientists within a week of publication. The challenge came after the paper had been peer reviewed.

Lonnstedt’s paper has been examined by Uppsala University and been retracted. The report of the University’s Board for Investigation of Misconduct in Research was published in December, 2017, and found that Lonnstedt had fabricated data. Both Lonnstedt and her supervisor were found to have engaged in research malpractice.

Another body in Sweden, The Central Ethical Review Board, found that Lonnstedt and her supervisor had committed scientific dishonesty.

Again, it is noteworthy that the peer review process did not detect the issue of scientific dishonesty in this case.  

Resplandy et al, 2018

The case of Resplandy et al, 2018, also illustrates the unreliability of peer review. This paper was published in the prestigious journal ‘Nature’ in 2018. (Resplandy et al, Quantification of ocean heat uptake from changes in atmospheric O2 and CO2 composition).

Upon publication it soon became evident to certain readers that there was a fundamental flaw in the analysis. The flaw multiplied the degree of uncertainty in the paper so much that, even after a correction had been made, the publishers eventually decided that the paper could not be allowed to stand, and it was retracted.

It is interesting that the first reader to raise the alarm was not a marine scientist, nor a climate scientist, nor a person with a Ph.D. Rather, he was an analyst in the finance industry, which shows that one should not be intimidated by a host of ‘experts’ who have peer reviewed a paper.

Retraction Watch

Retractions of peer-reviewed papers are not rare, and I invite you to visit the Retraction Watch website, which commenced in 2010. At the start, the founders knew that retractions were happening on a regular basis but wondered if there would be enough to sustain a website. They thought they might be able to identify around 80 cases in the first year but, in the event, they found over two hundred.

The rate for Retraction Watch has continued unabated. As of January, 2020, the site has reported on 21,792 retractions. All of these had been peer reviewed prior to publication.


The belief in peer review as proof of scientific fact is false. The flip side of this belief – that the lack of peer review shows a paper is untrue – is also false.

Peer review is not a proof of anything and is not intended to be. It is vulnerable to fraud, hoaxes, spoofing and simple errors of maths. Peer review is not a replication of the original experiment or research.