Analysis of parallel temperature data using t-tests

Part 2. Brisbane Airport

Dr Bill Johnston

Using paired and un-paired t-tests to compare long timeseries of data observed in parallel by instruments housed in the same or different Stevenson screens at one site, or in screens located at different sites, is problematic. Part of the problem is that both tests assume the air being monitored is the control variable, and that air inside the screen is spatially and temporally homogeneous, which for a changeable, turbulent medium is not the case.

Irrespective of whether data are measured on the same day, paired t-tests require the same parcels of air to be monitored by both instruments 100% of the time. As instruments co-located in the same Stevenson screen are in different positions, their data cannot be considered ‘paired’ in the sense required by the test. Likewise for instruments in separate screens, and especially if temperature at one site is compared with daily values measured some distance away at another.

As paired t-tests ascribe all variation to subjects (the instruments), and none to the response variable (the air), test outcomes are seriously biased compared to un-paired tests, where variation is ascribed more generally to both the subjects and the response.

The paired t-test compares the mean of the differences between subjects with zero, whereas the un-paired test compares subject means with each other. If the tests find a low probability (P) that the mean difference is zero, or that subject means are the same, typically less than 0.05 (5%, or 1 in 20), it can be concluded that subjects differ in their response (i.e., the difference is significant). Should the probability be less than 0.01 (1%, or 1 in 100) the between-subject difference is highly significant. However, significance itself does not ensure that the size of the difference is meaningful in the overall scheme of things.
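
By way of illustration, the following is a minimal Python sketch (not part of the author's workflow, which uses Excel and PAST) contrasting the two tests on synthetic daily data with an assumed instrument offset of 0.1 °C:

```python
# Hedged sketch: synthetic "parallel" daily Tmax series; the 0.1 degC offset,
# seasonal amplitude and noise levels are illustrative assumptions only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 365
control = 25 + 5 * np.sin(2 * np.pi * np.arange(n) / 365) + rng.normal(0, 2, n)
target = control + 0.1 + rng.normal(0, 0.5, n)      # instrument offset of 0.1 degC

# Paired test: is the mean of the daily differences zero?
t_paired, p_paired = stats.ttest_rel(target, control)

# Un-paired test: are the two group means equal?
t_unpaired, p_unpaired = stats.ttest_ind(target, control)

print(f"paired:   t = {t_paired:6.2f}, P = {p_paired:.4f}")
print(f"unpaired: t = {t_unpaired:6.2f}, P = {p_unpaired:.4f}")
# The paired test isolates the small offset and readily reports "significance",
# while the un-paired test pools all the day-to-day variation and does not.
```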

Assumptions

All statistical tests are based on underlying assumptions that ensure results are trustworthy and unbiased. The main assumption is that the differences (in the case of paired tests), or the data sequenced within treatment groups (for unpaired tests), are independent, meaning that data for one time are not serially correlated with data for other times. As timeseries embed seasonal cycles and in some cases trends, steps must be taken to identify and mitigate autocorrelation prior to undertaking either test.

A second, but less important assumption for large datasets, is that data are distributed within a bell-shaped normal distribution envelope with most observations clustered around the mean and the remainder diminishing in number towards the tails.

Finally, a problem unique to large datasets is that the denominator in the t-test equation becomes diminishingly small as the number of daily samples increases. Consequently, the t‑statistic becomes very large, together with the likelihood of finding significant differences that are too small to be meaningful. In statistical parlance this is known as Type I error – the fallacy of declaring significance for differences that do not matter. Such differences could be due to single aberrations or outliers, for instance.
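
A short sketch of this sample-size effect follows; the fixed 0.02 °C offset and noise level are assumptions chosen only to show the behaviour, not values from any real comparison:

```python
# A trivially small, fixed offset becomes "significant" once enough daily pairs
# are used, because the standard error (the denominator of t) shrinks as n grows.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
offset = 0.02                        # a difference too small to matter
for n in (100, 1_000, 10_000, 100_000):
    a = rng.normal(25.0, 2.0, n)
    b = a + offset + rng.normal(0.0, 0.5, n)
    t, p = stats.ttest_rel(b, a)
    print(f"n = {n:>7}:  t = {t:6.2f},  P = {p:.2e}")
# The offset never changes, yet P collapses towards zero as n increases.
```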

A protocol

Using a parallel dataset related to a site move at Townsville airport in December 1994, a protocol has been developed to assist in avoiding pitfalls when applying t-tests to timeseries of parallel data. At the outset, an estimate of effect size, determined as the raw data difference divided by the standard deviation (Cohen's d), assesses whether the difference between instruments/sites is likely to be meaningful. An Excel workbook was provided with step-by-step instructions for calculating day-of-year (1-366) averages that define the annual cycle, constructing a look-up table and deducting respective values from data, thereby producing de-seasoned anomalies. Anomalies are differenced as an additional variable (Site2 minus Site1, which is the control).
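
A minimal pandas sketch of the de-seasoning and effect-size steps is given below; the file name and column names ('date', 'site1', 'site2') are hypothetical, and the pooled standard deviation used for Cohen's d is one common choice:

```python
# Day-of-year (1-366) averages act as the look-up table; deducting them from the
# data produces de-seasoned anomalies, mirroring the Excel workbook steps.
import pandas as pd

df = pd.read_csv("parallel_tmax.csv", parse_dates=["date"])   # hypothetical file
df["doy"] = df["date"].dt.dayofyear

for site in ("site1", "site2"):
    clim = df.groupby("doy")[site].transform("mean")    # day-of-year averages
    df[f"{site}_anom"] = df[site] - clim                # de-seasoned anomalies

# Anomaly difference as an additional variable: Site2 minus Site1 (the control)
df["diff_anom"] = df["site2_anom"] - df["site1_anom"]

# First-cut effect size (Cohen's d) from the raw data
pooled_sd = df[["site1", "site2"]].stack().std()
cohens_d = (df["site2"].mean() - df["site1"].mean()) / pooled_sd
print(f"Cohen's d = {cohens_d:.2f}")
```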

Having prepared the data, graphical analysis of their properties, including autocorrelation function (ACF) plots, daily data distributions, probability density function (PDF) plots, and inspection of anomaly differences, assists in determining which data to compare (raw data or anomaly data). The dataset that most closely matches the underlying assumptions of independence and normality should be chosen, and where autocorrelation is unavoidable, randomised data subsets offer a way forward. (Randomisation may be done in Excel and subsets of increasing size used in the analysis.)
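
Two of these checks are sketched below, continuing the hypothetical DataFrame from the previous sketch; statsmodels' acf function is assumed to be available, and the subset sizes are illustrative (they must not exceed the number of available daily pairs):

```python
# Lag-1 autocorrelation of the anomaly differences, and randomised subsets of
# increasing size for the paired comparison (mean of differences vs zero).
import numpy as np
from scipy import stats
from statsmodels.tsa.stattools import acf

diffs = df["diff_anom"].dropna().to_numpy()     # from the previous sketch

# Values near zero support the independence assumption
print("lag-1 autocorrelation:", acf(diffs, nlags=1)[1].round(3))

rng = np.random.default_rng(0)
for size in (250, 500, 1000, 2000):
    sample = rng.choice(diffs, size=size, replace=False)
    t, p = stats.ttest_1samp(sample, 0.0)
    print(f"n = {size:>5}: mean diff = {sample.mean():+.3f}, P = {p:.3f}")
```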

Most analyses can be undertaken using the freely available statistical application PAST from the University of Oslo (https://www.nhm.uio.no/english/research/resources/past/). Specific stages of the analysis have been referenced to pages in the PAST manual.

The Brisbane Study

The Brisbane study replicates the previous Townsville study, with the aim of showing that the protocols are robust. While the Townsville study compared thermometer and automatic weather station maxima measured in 60-litre screens located 172 m apart, the Brisbane study compared Tmax for two AWS, each with a 60-litre screen, located 3.2 km apart, increasing the likelihood that site-related differences would be significant.

While the effect size for Brisbane was triflingly small (Cohen's d = 0.07), and the difference between data-pairs stabilised at about 940 sub-samples, a significant difference between sites of 0.25°C was found once the number of random sample-pairs exceeded about 1,600. Illustrating the statistical fallacy of excessive sample numbers, differences became significant because the denominator in the test equation (the pooled standard error) declined as sample size increased, not because the difference widened. PDF plots suggested that it was not until the effect size exceeded 0.2 that simulated distributions showed a clear separation, such that a difference between Series1 and Series2 of 0.62°C could be regarded as both significant and meaningful in the overall scheme of things.

Importantly, the trade-off between significance and effect size is central to avoiding the trap of drawing conclusions based on statistical tests alone.

Dr Bill Johnston

4 June 2023

Two important links – find out more

First Link: The page you have just read is the basic cover story for the full paper. If you are stimulated to find out more, please link through to the full paper – a scientific Report in downloadable pdf format. This Report contains far more detail including photographs, diagrams, graphs and data and will make compelling reading for those truly interested in the issue.

Click here to access a full pdf report containing detailed analysis and graphs

Second Link: This link will take you to a downloadable Excel spreadsheet containing a vast number of data used in researching this paper. The data supports the Full Report.

Click here to download a full Excel data pack containing the data used in this research

Why statistical tests matter

Fake-news, flash-bangs

and why statistical tests matter

Dr Bill Johnston

www.bomwatch.com.au

Main Points

Comparing instruments using paired rather than unpaired t-tests on daily data is inappropriate. Failing to verify assumptions, particularly that data are independent (not autocorrelated), and not considering the effect of sample size on significance levels, creates the illusion that differences between instruments are significant or highly significant when they are not. Using the wrong test and naïvely or bullishly disregarding test assumptions plays to tribalism, not trust.

Investigators must justify the tests they use, validate that assumptions are not violated, and show that differences are meaningful, thereby demonstrating that their conclusions are sound.

Discussion

Paired or repeated-measures t-tests are commonly used to determine the effect of an intervention by observing the same subjects before and after (e.g., 10 subjects before and after a treatment). As within-subjects variation is controlled, differences are attributable to the treatment. In contrast, un-paired or independent t‑tests compare the means of two groups of subjects, each having received one of two interventions (10 subjects that received one or no treatment vs. 10 that were treated). As variation between subjects contributes variation to the response, un-paired t-tests are less sensitive than paired tests.

Extended to a timeseries of sequential observations by different instruments (Figure 1), the paired t-test evaluates the probability that the mean of the difference between data-pairs (calculated as the target series minus the control) is zero. If the t‑statistic indicates the mean of the differences is not zero, the alternative hypothesis that the two instruments are different prevails. In this usage, significant means there is a low likelihood, typically less than 0.05, 5% or one in 20, that the mean of the difference equals zero. Should the P-value be less than 0.01, 0.001, or smaller, the difference is regarded as highly significant. Importantly, significant and highly significant are statistical terms that reflect the probability of an effect, not whether the size of an effect is meaningful.

To reiterate, paired tests compare the mean of the difference between instruments with zero, while un-paired t‑tests evaluate whether Tmax measured by each instrument is the same.

While the distinction may sound pedantic, the two tests applied to the same data yield strikingly different outcomes, with the paired test more likely to show significance. Close attention to detail and applying the right test are therefore vitally important.

Figure 1. Inside the current 60-litre Stevenson screen at Townsville airport. At the front are dry and wet-bulb thermometers; behind are maximum (mercury) and minimum (alcohol) thermometers, held horizontally to minimise “wind-shake”, which can cause them to re-set; and at the rear, which faces north, are dry and wet-bulb AWS sensors. The wet bulb is cooled by a small patch of muslin tied with a cotton wick that dips into the water reservoir; the wet-bulb depression is used to estimate relative humidity and dew point temperature. (BoM photograph).

Thermometers vs. PRT probes

Comparisons of thermometers and PRT probes co-located in the same screen, or in different screens, rely on the air being measured each day as the test or control variable, thereby presuming that differences are attributable to the instruments. However, visualise conditions in a laboratory versus those in a screen, where the response medium is constantly circulating and changing throughout the day at different rates. While differences in the lab are strictly attributable, in a screen a portion of the instrument response is due to the air being monitored. As shown in Figure 1, instruments that are not accessed each day are located, for convenience, behind those that are, thereby resulting in spatial bias. The paired t-test, which apportions all variation to instruments, is the wrong test under the circumstances.

Test assumptions are important

The validity of statistical tests depends on assumptions, the most important of which for paired t-tests is that differences at one time are not influenced by differences at previous times. Similarly for unpaired tests, where observations within groups must not be correlated with those preceding them. Although data should ideally be distributed within a bell-shaped normal-distribution envelope, normality is less important if data are random and the number of paired observations exceeds about 60. Serial dependence or autocorrelation reduces the denominator in the t-test equation, which increases the likelihood of significant outcomes (false positives) and fatally compromises the test.

As autocorrelation in daily timeseries is primarily caused by seasonal cycles, the appropriate adjustment is to deduct day-of-year averages from respective day-of-year data and conduct the right test on seasonally adjusted anomalies.

Covariables on which the response variable depends are also problematic. These include heating of the landscape over previous days to weeks, and the effects of rainfall and evaporation that may linger for months and seasons. Removing cycles, understanding the data, and using sampling strategies and P-level adjustments so outcomes are not biased may offer solutions.

Significance of differences vs. meaningful differences

A problem of using t-tests on long time series is that as the number of data-pairs increases, the denominator in the t-test equation, which measures variation in the data, becomes increasingly small. Thus, the ratio of signal (the instrument difference) to noise (the standard error, pooled in the case of un-paired tests) increases. The t‑value consequently becomes very large, the P-level declines to the millionth decimal place, and the test finds trifling differences to be highly significant when they are not meaningful. So, the significance level needs to be considered relative to the size of the effect.

For instance, a highly significant difference that is less than the uncertainty of comparing two observations (±0.6°C) could be an aberration caused by averaging beyond the precision of the experiment (i.e., averaging imprecise data to two, three or more decimal places).

The ratio of the difference to the average variation in the data [i.e., (PRT average minus thermometer average) divided by the average standard deviation], which is known as Cohen's d, or the effect size, also provides a first-cut empirical measure that can be calculated from data summaries to guide subsequent analysis.

Cohen's d indicates whether a difference is likely to be negligible (less than 0.2 SD units), small (>0.2), medium (>0.5) or large (>0.8), which identifies traps to avoid, particularly the trap of unduly weighting significance levels that are unimportant in the overall scheme of things.
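
A small helper reflecting the effect-size calculation and bands quoted above is sketched below; it is not part of the author's workbook, and the summary means and standard deviations passed in are illustrative values only:

```python
# Cohen's d from two data summaries, then the negligible/small/medium/large bands.
def cohens_d(mean_target, mean_control, sd_target, sd_control):
    avg_sd = (sd_target + sd_control) / 2.0      # "average standard deviation"
    return (mean_target - mean_control) / avg_sd

def effect_label(d):
    d = abs(d)
    if d < 0.2:
        return "negligible"
    if d < 0.5:
        return "small"
    if d < 0.8:
        return "medium"
    return "large"

d = cohens_d(30.20, 29.83, 3.1, 3.0)             # illustrative summary values
print(f"d = {d:.2f} ({effect_label(d)})")        # about 0.12 SD units: negligible
```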

The Townsville case study

T-tests of raw data were invalidated by autocorrelation, while those involving seasonally adjusted anomalies showed no difference. Randomly sampled raw data showed that significance levels depended on sample size, not the difference itself, thus exposing the fallacy of using t‑tests on excessively large numbers of data-pairs. Irrespective of the test used, the effect size of 0.12 SD units calculated from the data summary is trivial and not important.

Conclusions

Using paired rather than unpaired t-tests on timeseries of daily data, not verifying assumptions, and not assessing the effect size of the outcome creates division and undermines trust. As illustrated by Townsville, it also distracts from real issues. Using the wrong test and naïvely or bullishly disregarding test assumptions plays to tribalism, not trust.

A protocol is advanced whereby autocorrelation and effect size are examined at the outset. It is imperative that this be carried out before undertaking t-tests of daily temperatures measured in-parallel by different instruments.

The overarching fatal error is using invalid tests to create headlines and ruckus about thin-things that make no difference, while ignoring thick-things that would impact markedly on the global warming debate.

Two important links – find out more

First Link: The page you have just read is the basic cover story for the full paper. If you are stimulated to find out more, please link through to the full paper – a scientific Report in downloadable pdf format. This Report contains far more detail including photographs, diagrams, graphs and data and will make compelling reading for those truly interested in the issue.

Click here to download the full paper Statistical_Tests_TownsvilleCaseStudy_03June23

Second Link: This link will take you to a downloadable Excel spreadsheet containing a vast number of data points related to the Townsville Case Study and which were used in the analysis of the Full Report.

Click here to access the full data used in this post Statistical tests Townsville_DataPackage

Part 6. Halls Creek, Western Australia

Is homogenisation of Australian temperature data any good?

Dr Bill Johnston[1]

scientist@bomwatch.com.au

Background

Homogenisation of Australian temperature data commenced in the late 1980s and, by 1996, under the watchful eye of Bureau of Meteorology (BoM) scientist Neville Nicholls, who at that time was heavily involved with the World Meteorological Organisation (WMO) and the fledgling Intergovernmental Panel on Climate Change (IPCC), the first Australian high-quality homogenised temperature dataset (HQ1) was produced by Simon Torok. This was followed by an updated version in 2004 (HQ2) that finished in 2011, then the Australian Climate Observations Reference Network – Surface Air Temperature (ACORN-SAT) dataset, with version 1 (AcV1) released in 2012 and updated until 2017. AcV2 replaced AcV1 from 2018, with the most recent iteration, AcV2.3, updated to December 2021.

Why is homogenisation important?

Data homogenisation represents the pinnacle of policy-driven science, meaning that following release of the First Assessment Report by the Intergovernmental Panel on Climate Change (IPCC) in 1990, to which Neville Nicholls was a substantial contributor, the Australian government set in place a ‘climate’ agenda. Although initially rejected by Cabinet, in 1989 Labor Senator Graham Richardson proposed a 20% reduction in 1988 Australian greenhouse gas emission levels by 2005. The target was adopted in October 1990 as a bipartisan policy (i.e., by both major Australian political parties) and endorsed by a special premiers' conference in Brisbane as the Intergovernmental Agreement on the Environment in February 1992 (https://faolex.fao.org/docs/pdf/aus13006.pdf). Following that meeting in February, the Council of Australian Governments (COAG) was set up by Labor Prime Minister Paul Keating in December 1992.

As outlined by the Parliamentary Library Service (https://www.aph.gov.au/About_Parliament/Parliamentary_Departments/Parliamentary_Library/pubs/rp/rp1516/Climate2015), this was the mechanism whereby the most important and far-reaching policy agenda since Federation in 1901 was ushered into place without a vote being cast by the unsuspecting electorate. However, in order to support the policy:

  • Land-surface temperatures had to be shown to be warming year-on-year, particularly since 1950.
  • Models were needed that predicted climate calamities into the future.
  • Natural resources-related science, which was previously the prerogative of the States, required reorganisation under a funding model that guided outcomes in the direction of the policy agenda. 
  • Particular attention was also paid to messaging climate alarm regularly and insistently by all levels of government.  

As it provides the most tangible evidence of climate warming, the trend in maximum temperature (Tmax) is of overarching importance. It is also the weakest link in the chain that binds Australians of every creed and occupation to the tyranny of climate action. If homogenisation of Tmax data is unequivocally shown to be a sham, other elements of the policy, including evidence relied on by the IPCC, are on shaky ground. This is the subject of the most recent series of reports published by www.bomwatch.com.au.

The question in this paper is whether trend and changes in the combined Tmax dataset for Halls Creek reflect site and instrument changes, or changes in weather and climate.

Halls Creek maximum temperature data

Detailed analyses of Halls Creek Tmax using objective, replicable physically-based BomWatch protocols found data were affected by a change at the old post office in 1917, another in 1952 after the site moved to the Aeradio office at the airport in 1950, and another in 2013 due to houses being built within 30m of the Stevenson screen two years before it relocated about 500m southeast to its present position in September 2015. Three step-changes resulted in four data segments; however, mean Tmax for the first and third segments were not different.

While the quality of data observed at the old post office was inferior to that of sites at the airport, taking site changes and rainfall into account simultaneously left no trend or change in Tmax data that could be attributed to climate change, CO2, coal mining, electricity generation or anything else. Furthermore, step-changes in the ratio of counts of data less than the 5th and greater than the 95th day-of-year dataset percentiles (low and high extremes respectively) were attributable to site changes and not the climate. Nothing in the data therefore suggests the climate of the region typified by Halls Creek Tmax has warmed or changed.        
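
The extreme-frequency diagnostic mentioned above can be sketched as follows; the file name and column names ('date', 'tmax') are hypothetical, and the 5th/95th day-of-year thresholds are computed over the full record as assumed here:

```python
# Yearly counts of days below the 5th and above the 95th day-of-year percentiles.
import numpy as np
import pandas as pd

df = pd.read_csv("halls_creek_tmax.csv", parse_dates=["date"])   # hypothetical file
df["doy"] = df["date"].dt.dayofyear
df["year"] = df["date"].dt.year

p05 = df.groupby("doy")["tmax"].transform(lambda s: s.quantile(0.05))
p95 = df.groupby("doy")["tmax"].transform(lambda s: s.quantile(0.95))

counts = pd.DataFrame({
    "low_extremes": (df["tmax"] < p05).groupby(df["year"]).sum(),
    "high_extremes": (df["tmax"] > p95).groupby(df["year"]).sum(),
})
counts["ratio"] = counts["high_extremes"] / counts["low_extremes"].replace(0, np.nan)
print(counts.head())
```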

Homogenisation of Halls Creek data

As it was originally conceived, homogenisation aimed to remove the effects of non-climate impacts on data, chief amongst them weather station relocations and instrument changes, so that homogenised data reflected trends and changes in the climate alone. In addition, ACORN-SAT sought to align extremes of data distributions so, in the words of Blair Trewin, data “would be more homogeneous for extremes as well as for means”. To achieve this, Trewin used first-differenced correlated reference series and complex methods based on transfer functions to skew data distributions at identified changepoints. This was found to result in an unrealistic exponential increase in upper-range extremes since 1985.

Reanalysis using BomWatch protocols, post hoc tests and scatterplots showed that, in order to achieve statistically significant trends, homogenisation cooled past temperatures unrealistically and too aggressively. For instance, cool-bias increased as observed Tmax increased. It was also found that the First Law of Thermodynamics, on which BomWatch protocols are based, did not apply to homogenised Tmax data. This shows that the Bureau’s homogenisation methods produce trends in homogenised Tmax data that are unrelated to the weather, and therefore cannot reflect the climate.

As it was consensus-driven and designed to serve the political ends of WWF and Australia’s climate industrial elites, and it has no statistical, scientific or climatological merit, the ACORN-SAT project is a disgrace and should be abandoned in its entirety.   


[1] Former NSW Department of Natural Resources research scientist and weather observer.

Two important links – find out more

First Link: The page you have just read is the basic cover story for the full paper. If you are stimulated to find out more, please link through to the full paper – a scientific Report in downloadable pdf format. This Report contains far more detail including photographs, diagrams, graphs and data and will make compelling reading for those truly interested in the issue.

Click here to download the full paper with photos graphs and data.

Second Link: This link will take you to a downloadable Excel spreadsheet containing a vast number of supporting data points for the Potshot (Learmonth) paper.

Click here to download the Excel spreadsheet.

Part 5. Potshot and ACORN-SAT

Is homogenisation of Australian temperature data any good?

Part 5. Potshot and ACORN-SAT

Dr Bill Johnston

Former NSW Department of Natural Resources research scientist and weather observer.

scientist@bomwatch.com.au

Careful analysis using BomWatch protocols showed that ACORN-SAT failed in its aim to “produce a dataset which is more homogeneous for extremes as well as for means”. The failure to adjust the data for a step-change in 2002 shows unequivocally that the methodology developed by Blair Trewin lacks rigour, is unscientific and should be abandoned.

Read on …

Potshot was a top-secret long-shot – a WWII collaboration between the United States Navy, the Royal Australian Air Force and Australian Army, that aimed to deter invasion along the lightly defended north-west coast of Western Australia and take the fight to the home islands of Japan. It was also the staging point for the 27- to 33-hour Double-Sunrise Catalina flying-boat service to Ceylon (now Sri Lanka) that was vital for maintaining contact with London during the dark years of WWII. It was also the base for Operation Jaywick, the daring commando raid on Singapore Harbour by Z-Force commandos in September 1943.

The key element of Potshot was the stationing of USS Pelias in the Exmouth Gulf to provide sustainment to US submarines operating in waters to the north and west. To provide protection, the RAAF established No. 76 OBU (Operational Base Unit) at Potshot in 1943/44 and, at the conclusion of hostilities, OBU Potshot was developed as RAAF Base Learmonth, a ‘bare-base’ that can be activated as needed at short notice. Meteorological observations commenced at the met-office in 1975. Learmonth is one of the 112 ACORN-SAT sites (Australian Climate Observations Reference Network – Surface Air Temperature) used to monitor Australia’s warming. Importantly, it is one of only three sites in the ACORN-SAT network where data have not been homogenised.

Potshot was the top-secret WWII base that transitioned to RAAF Base Learmonth at the conclusion of the war.

  • By not adjusting for the highly significant maximum temperature (Tmax) step-change in 2002 detected by BomWatch, ACORN-SAT failed its primary objective which is to “produce a dataset which is more homogeneous for extremes as well as for means”.
  • Either Blair Trewin assumed the 12-year overlap would be sufficient to hide the effect of transitioning from the former 230-litre Stevenson screen to the current 60-litre one; or his statistical methods that relied on reference series were incapable of objectively detecting and adjusting changes in the data.
  • In either case it is another body-blow to Trewin’s homogenisation approach. Conflating the up-step in Tmax caused by the automatic weather station and 60-litre screen with “the climate”, together with the lack of validation within the ACORN-SAT project generally, unethically undermines the science on which global warming depends.
    • Due to their much-reduced size, and lack of internal buffering, 60-litre Stevenson screens are especially sensitive to warm eddies that arise from surfaces, buildings etc. that are not representative of the airmass being monitored.
    • The increased number of daily observations per year at or above the 95th day-of-year dataset percentile, relative to those at or below the 5th percentile, which explains the up-step, is a measurement issue, not a climatological one.

As ACORN-SAT, by omission in the case of Learmonth, produces trends and changes in homogenised data that do not reflect the climate, the project and its peers, including others run under the guise of the World Meteorological Organisation, of which Trewin is a major player, should be abandoned.

Dr. Bill Johnston

12 January 2023     

Two important links – find out more

First Link: The page you have just read is the basic cover story for the full paper. If you are stimulated to find out more, please link through to the full paper – a scientific Report in downloadable pdf format. This Report contains far more detail including photographs, diagrams, graphs and data and will make compelling reading for those truly interested in the issue.

Click here to download the full paper with photos graphs and data.

Second Link: This link will take you to a downloadable Excel spreadsheet containing a vast number of supporting data points for the Carnarvon paper.

Click here to download the Excel spreadsheet

Postscript

Dr Trewin and his peers, including those who verify models using ACORN-SAT data, scientists at the University of NSW including Sarah Perkins-Kirkpatrick, and those who subscribe to The Conversation or The Climate Council, are welcome to fact-check or debate the outcome of our research in the open by commenting at www.bomwatch.com.au. The Datapack relating to the Potshot Report is available here Learmonth_DataPack

Part 4. Carnarvon, Western Australia

Is homogenisation of Australian temperature data any good?

Part 4. Carnarvon, Western Australia

Dr Bill Johnston[1]

scientist@bomwatch.com.au

The ACORN-SAT project is deeply flawed, unscientific and should be abandoned.

Read on …

Maximum temperature data for Carnarvon, Western Australia is, surprisingly, of no use at all for tracking climatic trend and change. With Learmonth a close second at Longitude 114.0967°, Carnarvon (113.6700°) is the western-most of the 112 stations that comprise the ACORN-SAT network (Australian Climate Observations Reference Network – Surface Air Temperature) used to monitor Australia’s warming (Figure 1). Carnarvon is also just 1.4° of latitude south of the Tropic of Capricorn but, unlike Learmonth, which receives moderate rain in February, Carnarvon receives practically none from the end of August through to March. Due to its vast underground aquifer, the lower Gascoyne is the most productive irrigation area in WA and most of the vegetables, melons and fruits sold in Perth are grown around Carnarvon.

Figure 1. The distribution of ACORN-SAT sites in WA, as well as all the neighbouring sites used by ACORN-SAT v.3 to homogenise Carnarvon data. 

Located about 900 km north of Perth, the Carnarvon post office, built in 1882, was an important link in the expanding WA telegraph network, and the aerodrome was an important refuelling stop. Runways were lengthened by the Royal Australian Air Force and during WWII the aerodrome was used as a forward operating base.

Servicing the main 1930s west-coast air corridor from Perth to Darwin, then Java and on to Europe, Aeradio was established by the Air Board in 1939/40 at Geraldton, Carnarvon, Port Hedland, at aerodromes every 300 km or so further north, and around Australia. While weather observers trained by the Bureau of Meteorology (BoM) in Melbourne undertook regular weather observations, prepared forecasts and provided pilot briefings, radio operators maintained contact with aircraft and advised of inclement conditions. As the units were well distributed across the continent, and combined with post offices that reported weather observations by telegraph, the aeradio network formed the backbone of ACORN-SAT. Homogenisation of their data with that of post offices, lighthouses and pilot stations ultimately determines the apparent rate of warming in Australia’s climate.

Figure 2. A snip of the Carnarvon aerodrome operations precinct in 1949, showing the location of facilities and that the area in the vicinity of the met-enclosure (met) was watered. (Three additional radio masts were located within the watered area, and watering was probably necessary to ensure grounding of the earth-mat buried in the dry soil.)

Despite several rounds of peer review, the question of whether trends and changes in raw and homogenised Tmax data reflect site changes or the true climate has not been independently assessed.

The research is important. Alteration of data under the guise of data homogenisation flows through to BoM’s annual climate statements, CSIRO’s State of the Environment and State of the Climate reports, reports put out by the IPCC, COP and the World Economic Forum, and ultimately government-supported scare campaigns run by WWF, the Climate Council and other green groups, which underpin the unattainable goal of net-zero.

Commencing around 1990, Australia’s homogenisation campaign has been overseen by Dr Neville Nicholls, Emeritus Professor at Monash University and a loud and vocal supporter of the warming hypothesis. Changing the data on which it is based, then claiming in collegially peer-reviewed publications that the climate is warming, risks considerable economic, strategic and political harm. Australia is being weakened from within by Nicholls and his protégés, including those within CSIRO, and every aspect of the economy, from de-industrialisation, to attacks on agriculture, to looming conflict with China, is predicated on temperature data homogenisation.

Summary findings

  • Good site control is fundamental to any experiment investigating long-term trend and change. However, the site at the post office was shaded, watered and generally poor, the Aeradio site was also watered, while the site at the meteorological office was subject to multiple changes after 1990.
  • Homogenisation failed to undertake basic quality assurance and frequency analysis, so could not objectively adjust for the effect of extraneous factors such as watering, site changes etc. Consequently, as ACORN-SAT data for Carnarvon were not homogeneous either for extremes or annual means, the project failed its primary objective.

While changing data to agree with hypotheses is unscientific and dishonest, the most obvious homogenisation subterfuge is the adjustment of changes that made no difference to the data, while ignoring those that did. A second is the use of reference series composed of neighbouring datasets without ensuring they are homogeneous.

The use of correlated data that likely embed parallel faults to disproportionally correct faults in ACORN-SAT data, and thereby embed trends in homogenised data, has no statistical or scientific merit. As the ACORN-SAT project is misleading, irredeemably flawed and a clear danger to Australia’s prosperity and security, it should be abandoned.

A reoriented and rescaled overlay of an aerial photograph showing the watered airport precinct relative to the location of the post office in 1947, and the November 2021 Google Earth Pro satellite image locating the same sites. Runways were unsealed in 1947 and several splinter-proof aircraft shelters were still visible (marked ‘s’). By 2021 the non-directional beacon (NDB) had been moved from behind the power house (ph), and the site to the right of the meteorological office (MO) had been moved out of the way of the access road. The MO closed in 2016.

3 January 2023

Two important links – find out more

First Link: The page you have just read is the basic cover story for the full paper. If you are stimulated to find out more, please link through to the full paper – a scientific Report in downloadable pdf format. This Report contains far more detail including photographs, diagrams, graphs and data and will make compelling reading for those truly interested in the issue.

Click here to download the full paper with photos graphs and data

Second Link: This link will take you to a downloadable Excel spreadsheet containing a vast number of supporting data points for the Carnarvon paper.

Click here to download the Excel spreadsheet for Carnarvon

Note: Line numbers are provided in the linked Report for the convenience of fact checkers and others wishing to provide comment. If these comments are of a highly technical nature, relating to precise Bomwatch protocols and statistical procedures, it is requested that you email Dr Bill Johnston directly at scientist@bomwatch.com.au referring to the line number relevant to your comment.   


[1] Former NSW Department of Natural Resources research scientist and weather observer.

Part 3. Meekatharra, Western Australia

Is homogenisation of Australian temperature data any good?

Dr Bill Johnston[1]

scientist@bomwatch.com.au

The ACORN-SAT project is deeply flawed, unscientific and should be abandoned.

Read on …   

Situated 500 km east of Shark Bay, south of the Gibson Desert and adjacent to the Great Victoria Desert, Meekatharra is a hot, dry, isolated outback town in mid-west Western Australia. Famously referred to as the end of the earth by Australia’s former Prime Minister Malcolm Fraser, when his aircraft was diverted from Perth in 1977 due to inclement weather, Meekatharra is now the epicentre of a mining boom and the airport serves as a hub for fly-in fly-out workers and a base for the Royal Flying Doctor Service (RFDS).

Constructed as an all-weather ‘bare-base’ aerodrome with long, sealed runways in 1943, linking Perth, the secret bomber base at Corunna Downs near Marble Bar, and Darwin, Meekatharra was one of only a few aerodromes in outback WA capable of handling heavy bombers. It was relinquished by the RAAF to the Department of Civil Aviation as a Commonwealth airport after 1946, and ownership transferred to the Shire of Meekatharra in 1993.

Weather observations commenced at the post office on the corner of Main and High streets Meekatharra in January 1926, having previously been reported from Peak Hill, about 110 km to the NW from 1898. Observations transferred to the former RAAF Aeradio office in 1950, and according to ACORN-SAT metadata, the site moved to a new meteorological office (MO) in about 1975 (Figure 1). However, files held by the National Archives of Australia (NAA) show that before the office was built in 1974, an instrument enclosure, instruments, a theodolite post and wind shield used in launching weather balloons were installed near the proposed new office in 1972 (Figure 2). The overlap with data from the previous Aeradio site, which continued to be used at least until staff relocated, probably in 1975 (Figure 3), was apparently used to smooth the transition to the new site.

ACORN-SAT

Meekatharra is one of 112 Australian Climate Observations Reference Network – Surface Air Temperature (ACORN-SAT) sites used by the Bureau of Meteorology, CSIRO, state governments, WWF and the Climate Council, to convince themselves, kiddies for climate action, and everyone else that the climate is warming irrevocably due to CO2.

Combined with dodgy measurement practices, data homogenisation is used at Meekatharra to create warming in maximum temperature (Tmax) data that is unrelated to the climate. Adjusting for a change in 1934 that was not significant, ignoring that the Aeradio site was watered, and ignoring that a period of overlap from 1972 was used to smooth the move to the MO site (allegedly in about 1975), for which no adjustment was made, created trends in homogenised data that were unrelated to the climate. Furthermore, data for the 18 sites used to homogenise Meekatharra Tmax were not homogeneous.

The assertion that ACORN-SAT sites have been carefully and thoroughly researched, and that comparator reference sites selected on the basis of inter-site correlations would be broadly homogeneous around the time site changes occurred is demonstrably untrue. From multiple perspectives, the underlying proposition that series derived from up to 10 reference stations could provide a “high level of robustness against undetected inhomogeneities” is not supported.

As no change in the climate is detectable across the nineteen datasets examined, including Meekatharra, and the methodology is unscientific and deeply flawed, the ACORN-SAT project should be abandoned.

Figure 1. The Meekatharra meteorological office in August 2010 (from the ACORN-SAT Catalogue).

Figure 2. A screenshot of files held by the National Archives of Australia relating to the new 1972 instrument enclosure at Meekatharra (Search term Meekatharra meteorological).

Figure 3. Building plan in 1971 showing the RFDS hangar (108), Aeradio and met-office (101), the fenced enclosure southwest of the office including met (H2) and seismograph huts, towers suspending the aerial array and earth-mat, workshop (102), fuel bowser (107), power plant (106), and workshop and equipment buildings (120 and 124).


An important link – find out more

The page you have just read is the basic cover story for the full paper. If you are stimulated to find out more, please link through to the full paper – a scientific Report in downloadable pdf format. This Report contains far more detail including photographs, diagrams, graphs and data and will make compelling reading for those truly interested in the issue.

Click here to download the full paper with photos graphs and data.

Note: Line numbers are provided in the linked Report for the convenience of fact checkers and others wishing to provide comment. If these comments are of a highly technical nature, relating to precise Bomwatch protocols and statistical procedures, it is requested that you email Dr Bill Johnston directly at scientist@bomwatch.com.au referring to the line number relevant to your comment.   

[1] Former NSW Department of Natural Resources research scientist and weather observer.

Part 2. Marble Bar, the warmest place in Australia

Is homogenisation of Australian temperature data any good?

Part 2. Marble Bar, the warmest place in Australia

Dr Bill Johnston[1]

scientist@bomwatch.com.au

Maximum temperature data for Marble Bar exemplifies all that is wrong with the Australian Climate Observations Reference Network – Surface Air Temperature (ACORN-SAT) dataset used to depict Australia’s warming. Subjective adjustments based on faulty metadata and highly correlated comparator sites that likely embed parallel faults result in trends in homogenised data that are unrelated to the climate.

Read on … 

Background

Located 150 km southeast of Port Hedland on the edge of the Great Sandy Desert, Marble Bar is reputed to be the warmest place in Australia. The dry season from April to November is extremely dry, while during the ‘wet’, potential evaporation exceeds rainfall even in the wettest month of February. Nevertheless, irregular cyclonic intrusions from the Timor Sea and eastern Indian Ocean can wreak havoc across the Pilbara including at Marble Bar.

Temperature and rainfall have been measured at several locations under a variety of conditions by post office staff and volunteers since 1901. However, due to its isolation the weather station was rarely inspected by the Bureau of Meteorology (BoM); consequently, metadata (data about the data) is unreliable. What was it like ‘doing the met’ under blistering hot conditions, and how did equipment including thermometers fare with furnace-like winds, desert dust and, at the other extreme, cold winter nights? Typical of an arid swath of northwestern Australia, Marble Bar is one of the Australian Climate Observations Reference Network – Surface Air Temperature (ACORN-SAT) sites used to calculate Australia’s warming (Figure 1).

Figure 1. Marble Bar in relation to other ACORN-SAT sites in northwestern Australia (left), and below, staff outside the original post office in 1900, which was built of local stone (State Library WA).

ACORN-SAT metadata failed to mention that a Stevenson screen was originally supplied in 1911, that the site moved to the new post office on the corner of Francis and Contest streets in 1948 (Figure 2), or that the post office was extended in 1961 to accommodate the telephone line exchange, which probably required the Stevenson screen to move up-hill towards the shady, watered yard behind the residence. It seems the post office closed in 1984 and its functions, including weather observations, transferred to the general store opposite the Ironclad Hotel (now the licensed post office) 230m east. Later, in 1988 the site allegedly moved east, probably to the Roadhouse, from where it relocated to its current position 100m southwest in 1997.

Figure 2. The side-view of the 1948 post office facing Francis Street (left) photographed in 2016 with the 1961 telephone line exchange annex distant from the camera, and the former residence facing Contest Street (realestate.com.au online sales brochure). Original plans show a path between the residence and the rear of the post office and several out-buildings behind including fuel store, generator/battery shed, and garage.

Objective detection of changes in data using robust statistical methods and post hoc attribution is more reliable than relying on ACORN-SAT and site-summary metadata, which are often inaccurate and incomplete. Although now restored, by 1945 the original government offices incorporating the post office had been damaged by cyclones and were riddled with termites. Planned for in 1944, the new post office shown in Figure 2 was not opened until August 1948.

A down-step in Tmax data in 1945, which ACORN-SAT interpreted as a site move, appeared to be due to observers consistently rounding observations down to the nearest whole °F, which after conversion to Celsius effectively reduced average rainfall-adjusted Tmax by about 0.5°C. While rounding ceased in 1960, data remained cooler than previously until 1988, when it was said the site had become overgrown and was moved “1 km to a residence”. Within that time the Stevenson screen moved clear of the 1961 telephone line exchange, which affected the frequency of high vs low extremes; observations were metricated from 1 September 1972, which affected precision; and in 1984 the site moved to the general store (the current licensed post office), where whole and half °C values were over-reported from 1995 to 1997. It was only after the site relocated in 1998 that data were reasonably consistently observed to the nearest decimal place.

Lack of precision (low R2adj) in the relationship between Tmax, rainfall and site changes indicates the quality of Marble Bar data is relatively poor. While site changes caused Tmax data to warm by 0.9°C since 1901, no warming is attributable to the climate.

Homogenisation

The Bureau’s homogenisation methods are plagued by faulty metadata and the biased selection of neighbouring sites used to make adjustments.       

Read on …   

The most obvious problems with the Bureau’s temperature homogenisation methods are:

  • Metadata for weather stations including Marble Bar is either misleading, scant or non-existent.
  • As no weather stations have stayed the same, it is highly likely within regions that many had undergone concurrent changes/upgrades since they were first established. Examples include post-WWII changes at aerodromes in the late 1940s, installing telephone line exchanges at post offices in the 1950s, metrication in 1972, installation of the major trunk network microwave towers in post office yards by Telecom by 1975, the staged introduction of 60-litre Stevenson screens, automatic weather stations etc.     
  • As many weather stations are affected by synchronous changes and all exhibit similar seasonal cycles, selection of comparator datasets from the pool of possible contenders on the basis of linear correlation of first differences is bound to rank those having parallel faults as candidates for making ACORN-SAT adjustments.

Using faulty metadata to subjectively determine times when something changed allows changepoints to be manipulated to achieve pre-determined trends. It also discourages further investigation of their cause. For instance, the 1944/1945 down-step at Marble Bar was due to a precision problem, not a site move. Other changes included that from 1966 to 1985 the frequency of daily observations greater than the 95th day-of-year percentiles was significantly depressed, probably due to watering or shade. Reconstructing what actually happened based on effects observed in data requires changepoints to be identified using robust statistical methods, with post hoc investigation of causes.

Using Pearson's linear correlation to select up to 40 neighbouring sites for constructing comparative reference series is biased. As monthly averages cool from summer to winter and warm from winter to summer, residual cycles in first-differenced data inflate the significance of correlations. Thus, from the pool of potential neighbours whose data are not homogeneous, linear correlation is likely to select those having parallel faults. Furthermore, increasing the number of comparators cannot overcome the high likelihood that station changes within regions are somewhat synchronised.
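
The effect can be illustrated with a simple simulation; the seasonal amplitude and noise levels below are assumptions chosen only to show the behaviour, not values from any particular station pair:

```python
# Two sites that share only a seasonal cycle (their anomalies are unrelated) still
# show strong Pearson correlation once monthly data are first-differenced.
import numpy as np

rng = np.random.default_rng(7)
months = np.arange(600)                                  # 50 years of monthly data
cycle = 8.0 * np.sin(2 * np.pi * months / 12)            # shared seasonal cycle

site_a = 25 + cycle + rng.normal(0, 1.0, months.size)    # independent anomalies
site_b = 30 + cycle + rng.normal(0, 1.0, months.size)

def first_diff_corr(x, y):
    dx, dy = np.diff(x), np.diff(y)
    return np.corrcoef(dx, dy)[0, 1]

print("anomaly correlation:         ",
      round(np.corrcoef(site_a - cycle, site_b - cycle)[0, 1], 2))   # near zero
print("first-difference correlation:",
      round(first_diff_corr(site_a, site_b), 2))                     # high
```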

Objective, replicable homogenisation would investigate time-related changes in the properties of datasets using objective statistical tests (of which there are a number) and relate changes in the data to what is known about the site post hoc. For example, by comparing detected changepoints with BoM metadata, documents, maps, plans and aerial photographs held by the National Archives and the National Library of Australia, state authorities, museums, historical societies, newspaper reports etc. Even if supporting information is not available, statistical detection based on the probability of an abrupt sustained change against the null hypothesis of no change should be sufficient evidence that a change occurred.

Historic data were not collected to be used decades in the future to detect trend and change. Due to inaccurate metadata, poor site control (screen and instrument deterioration, watering, shade) and, prior to metrication on 1 September 1972, lack of precision by observers, Marble Bar data could not be regarded as high-quality. As the Bureau’s homogenisation methods are deeply flawed, the ACORN-SAT project should be abandoned.

Two important links – find out more

First Link: The page you have just read is the basic cover story for the full paper. If you are stimulated to find out more, please link through to the full paper – a scientific Report in downloadable pdf format. This Report contains far more detail including photographs, diagrams, graphs and data and will make compelling reading for those truly interested in the issue.

Click here to download the full paper with photos graphs and data.

Second Link: This link will take you to a downloadable Excel spreadsheet containing a vast number of Marble Bar data points for the years from 1901 to 2020 and which was used in the analysis of the Marble Bar weather records to support the Full Report.

Click here to download the Excel spreadsheet with data points 1901 to 2020

Note: Line numbers are provided in the linked Report for the convenience of fact checkers and others wishing to provide comment. If these comments are of a highly technical nature, relating to precise Bomwatch protocols and statistical procedures, it is requested that you email Dr Bill Johnston directly at scientist@bomwatch.com.au referring to the line number relevant to your comment.   


[1] Former NSW Department of Natural Resources research scientist and weather observer.

Part 1. Methods case study: Parafield, South Australia

Is homogenisation of Australian temperature data any good?

Part 1. Methods case study: Parafield, South Australia

Dr Bill Johnston

scientist@bomwatch.com.au

Imagine arriving at a crime scene with a toolkit of technical knowhow, where the only evidence of what happened is some maximum temperature data and incomplete notes about where they were observed and conditions that may have affected them.

Time starts NOW …..

Background

Methods used by Bureau of Meteorology (BoM) senior research scientist Dr Blair Trewin, lead author of Working Group I of the IPCC’s 2021 Sixth Assessment report, and a member of the World Meteorological Organisation Expert Team on Climate Monitoring and Assessment, and before him Neville Nicholls, Simon Torok, Paul Della-Marta (and others) and Karl Braganza, are in urgent need of review. 

All have published papers in peer-reviewed scientific journals relating to data homogenisation and based on those papers, which are in the public domain, claims have been made repeatedly by The Conversation, the ABC, The Guardian and Fairfax media and even The Australian, and for Braganza, to the 2019/20 Bushfires Royal Commission, that temperature measured at BoM weather stations has increased in recent decades due to anthropogenic emissions of CO2.

Maximum temperature (Tmax) varies up and down depending on how much net energy is convected away as latent heat via evaporation from plants, soil and surfaces (collectively referred to as evapotranspiration). As evapotranspiration at dryland sites cannot exceed the rainfall, latent heat loss equals 2.26 MJ/kg of water evaporated, equivalent to 2.26 MJ per mm of rainfall received (per square metre).

Evapotranspiration of average rainfall at Parafield (443 mm), for example, would remove 1001.2 MJ of energy (443 mm times 2.26 MJ/mm), which is about 15.7% of the average total solar exposure of 6345 MJ/yr. The balance, not accounted for by evapotranspiration, heats the air in contact with the ground, and it is that portion of the local heat balance that is measured during the heat of the day by Tmax thermometers held 1.2m above the ground in Stevenson screens (Figure 1). It follows that for any weather station, dry years are warm and the drier it is, the warmer it gets.
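
The arithmetic in the paragraph above can be written out as a short sketch, using the rainfall and solar exposure figures quoted for Parafield:

```python
# Latent heat removed by evapotranspiration of average rainfall, as a share of
# average annual solar exposure (values per square metre).
LATENT_HEAT = 2.26          # MJ per kg of water evaporated (= MJ per mm over 1 m^2)
rainfall_mm = 443           # average annual rainfall at Parafield
solar_exposure = 6345       # average total solar exposure, MJ/yr

latent_loss = rainfall_mm * LATENT_HEAT                  # about 1001.2 MJ
fraction = latent_loss / solar_exposure
print(f"latent heat loss  = {latent_loss:.1f} MJ")
print(f"share of exposure = {fraction:.1%}")             # roughly 15.7-15.8 %
# The remainder heats the near-surface air, which is what Tmax thermometers sample.
```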

Figure 1. Maximum and minimum thermometers held horizontally; and dry and wet-bulb thermometers held vertically in the well-maintained 60-litre Stevenson screen at Cranbourne Botanic Gardens, Melbourne.

The First Law theorem, that available energy is expended via latent and sensible heat pathways, provides a physical reference frame for evaluating the quality of weather station data based on the relationship between Tmax and rainfall, which should be linear and negative. The proportion of variation explained by rainfall, known as the coefficient of determination (R2, or more exactly R2 adjusted for the number of terms in the model, R2adj), provides a measure of the quality of Tmax data (Figure 2). Watering, deterioration of equipment and lackadaisical practices reduce the goodness of fit (R2adj < 0.5, or less than 50% of variation explained), while for an excellent site it may be as high as 0.7 (>70%) or higher.

The first tool in the toolbox, naïve linear regression of the form average Tmax ~ annual rainfall, investigates the fitness of Tmax data using rainfall as the comparator or control variable. The coefficient (change in Tmax per 100 mm of rainfall), its sign (+ or ‑), significance (P level) and variation explained (R2adj) assist diagnosis of the dataset.
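
A minimal sketch of this first-tool regression is given below; the file name and column names ('year', 'tmax_avg', 'rain_annual') are hypothetical, and statsmodels is assumed to be available:

```python
# Naive regression of average annual Tmax on annual rainfall: slope, P level, R2adj.
import pandas as pd
import statsmodels.formula.api as smf

annual = pd.read_csv("parafield_annual.csv")             # hypothetical file
fit = smf.ols("tmax_avg ~ rain_annual", data=annual).fit()

slope_per_100mm = fit.params["rain_annual"] * 100        # degC per 100 mm of rainfall
print(f"slope   = {slope_per_100mm:+.3f} degC/100 mm")
print(f"P level = {fit.pvalues['rain_annual']:.4f}")
print(f"R2adj   = {fit.rsquared_adj:.2f}")
```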

Read on …   

Figure 2. The naïve relationship between average Tmax and annual rainfall at Parafield. Although the slope is negative and highly significant (P <0.001), only 16% (R2adj = 0.16) of variation is explained.

The line in Figure 2 explains only the rainfall portion of the Tmax signal: while rainfall reduces Tmax by 0.04°C per 100 mm and the coefficient is highly significant (the P level is less than 0.001), only 16% of variation is explained; thus, the relationship is very imprecise. So, what of the unexplained variation – the residual non-rainfall portion of the signal?

Residuals are calculated as the difference between each Tmax data-point and the line (the equation). Most statistical packages provide a table of residuals, which may be positive or negative; however, programs like Excel require them to be calculated manually (there are examples on the internet).

Independent statistical methods are then used to examine the Tmax ~ rainfall residuals – that part of the signal not related to rainfall – which, if all variation is explained, should be random and free of systematic effects.

Read on …   

Residuals from naïve regression are usually small, zero-centred numbers. Adding the dataset grand-mean to each value reconstitutes the original scale without changing their relativity or other properties. Apart from showing that the points appear random, Figure 3 provides little additional information. However, while the data seem sound, low precision suggests a variable may be missing.
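
Continuing the regression sketch above (same hypothetical column names), the rescaling step looks like this:

```python
# Residuals from the Tmax ~ rainfall fit, with the grand mean added back so they
# sit on the original temperature scale.
annual["resid"] = fit.resid
annual["resid_rescaled"] = annual["resid"] + annual["tmax_avg"].mean()
print(annual[["year", "tmax_avg", "resid_rescaled"]].head())
```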

Figure 3. Rescaled residuals from the Tmax ~ rainfall relationship in Figure 2.

Linear regression partitions the original dataset into the portion of the Tmax signal attributable to rainfall, which is the physically deterministic covariable described by the linear fit in Figure 2, and variation that is independent of rainfall, which are the re-scaled residuals in Figure 3.

Next, the rescaled residuals in Figure 3 are analysed in the order they were observed and checked for changes that may indicate background, non-rainfall effects (Figure 4).

Figure 4. Step-change analysis of re-scaled residuals identified a step-change in 1973, which was caused by a site move before 1968, and another in 1999, probably caused by replacing a former 230-litre Stevenson screen with a more sensitive 60-litre one. (Data from 1955 to 1973 exist, but have not been digitised.)

Step-changes in the Tmax residuals in Figure 4 were highly significant, meaning there was a very low probability they were due to chance. Analysis was undertaken using the Excel 2007 version of Sergei Rodionov’s Sequential t-test Analysis of Regime Shifts (STARS) v6.3 (ICES Journal of Marine Science, 62: 328–332 (2005), doi:10.1016/j.icesjms.2005.01.013), which is in the public domain (https://sites.google.com/view/regime-shift-test/downloads). While other step-change tools are available (see https://www.marinedatascience.co/blog/2019/09/28/comparison-of-change-point-detection-methods/), STARS is convenient, reliable and comparable with other methods.
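
STARS itself runs in Excel; purely as an illustrative stand-in (not the author's method), a single step-change in the rescaled residuals from the sketch above can be located by scanning every candidate split and keeping the strongest mean shift:

```python
# Simple single-step scan: maximise the two-sample t-statistic over all splits.
import numpy as np
from scipy import stats

def best_single_step(values, min_segment=10):
    """Return (index, t, P) for the strongest single mean-shift in a series."""
    best_k, best_t, best_p = None, 0.0, 1.0
    for k in range(min_segment, len(values) - min_segment):
        t, p = stats.ttest_ind(values[:k], values[k:], equal_var=False)
        if abs(t) > abs(best_t):
            best_k, best_t, best_p = k, t, p
    return best_k, best_t, best_p

resid = annual["resid_rescaled"].to_numpy()     # from the residuals sketch above
k, t, p = best_single_step(resid)
# Note: this P ignores the multiple comparisons implicit in scanning every split,
# so it overstates significance; STARS and dedicated tools handle this properly.
print(f"strongest step after {annual['year'].iloc[k - 1]}: t = {t:.2f}, P = {p:.3g}")
```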

Closing the case. Metadata, archived aerial photographs, maps, plans, satellite images etc. are used where possible to confirm step-change scenarios and to find the combination that best fits the data. There is only one optimal outcome, which is that segmented responses to rainfall are the same (segmented regressions are parallel) and that rainfall-adjusted group means are different (segmented regressions are not coincident). While the data stepped up, no evidence was found that the climate had changed or warmed.

The BomWatch approach is objective, robust and replicable and does not involve data manipulation. It is also amenable to batch-processing of paired Tmax–rainfall datasets in the Bureau of Meteorology’s database, which would put to rest the hype about climate change and warming once and for all.


Bias in the BoM’s homogenisation methods

Comparative homogenisation methods used by the Bureau are highly questionable. Problems include:

  1. Use of faulty and unreliable metadata for specifying/verifying changepoints in target site data (data to be homogenised);
  2. First-differenced linear correlation with target-site data selects comparators that likely embed parallel faults (illustrated in the sketch after this list);
  3. Overly complicated methods lack transparent checks and balances that would provide confidence that results are trustworthy.
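As a concrete illustration of the statistic referred to in point 2, the sketch below correlates the year-to-year (first) differences of a target series with a neighbour; the series names are hypothetical placeholders. Because a step-change collapses to a single spiked difference, a neighbour embedding the same faults can still rank as highly correlated.

```python
# Illustration of first-differenced correlation used for neighbour selection.
import numpy as np
import pandas as pd

def first_diff_correlation(target: pd.Series, neighbour: pd.Series) -> float:
    """Correlate year-to-year differences of two annual Tmax series."""
    d1, d2 = target.diff().dropna(), neighbour.diff().dropna()
    d1, d2 = d1.align(d2, join="inner")      # keep only years present in both
    return float(np.corrcoef(d1, d2)[0, 1])
```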

Data for Parafield airport (Bureau ID 23013) commence on 1 April 1939, but site-summary metadata does not provide coordinates for the site. Google Earth Pro shows the current Almos automatic weather station (AWS) is near the centre of the airport, about 1 km south of the control tower at Latitude ‑34.7977, Longitude 138.6281. While data show a naïve trend of 0.31 °C/decade (3.1 °C/century), a gap in available data from 1955 to 1973 is unexplained. Due to step-changes before 1973 and in 1999, data are not homogeneous. Nevertheless, together with other sites, Parafield was used to homogenise ACORN-SAT sites at Oodnadatta, Tarcoola, Woomera, Ceduna, Kyancutta, Port Lincoln, Nuriootpa, Adelaide (Kent Town) and Robe in 2011 (Figure 5).

Figure 5. The first (2011) iteration of ACORN-SAT used data for Parafield Airport, about 15 km north of Adelaide, with other cross-correlated sites to homogenise ACORN-SAT sites at Oodnadatta (859 km distant), Tarcoola, Woomera, Ceduna (546 km), Kyancutta, Port Lincoln, Nuriootpa, Adelaide (Kent Town) and Robe. ACORN-SAT sites are indicated by red squares. Weather stations in South Australia having >20 years of data are indicated by open circles, some of which are named.

In contrast to ACORN-SAT v.1, the 2021 iteration (v2.2) used Parafield with other cross-correlated sites to adjust only Adelaide (Kent Town), Cape Borda, Port Lincoln, Robe, Kyancutta, Nhill and Ceduna (Figure 6). So, something in the method caused Parafield to be used for some sites in 2011, but not the same sites in 2021. Illustrating the complexity of the Bureau’s methods, radar graphs show the locations of the 24 sites used to homogenise Ceduna in 2021 and the 30 sites used for Oodnadatta. (The great-circle distance from Oodnadatta to Deniliquin airports is 1263 km and to Cobar 1096 km.)

Figure 6. ACORN-SAT v.2 used Parafield airport data with other cross-correlated sites to homogenise only the ACORN-SAT sites at Adelaide (Kent Town), Cape Borda, Port Lincoln, Robe, Kyancutta, Nhill and Ceduna, a geographically less-dispersed suite than in v.1. The radar plots locate the 24 sites used to homogenise ACORN-SAT data for Ceduna and the 30 sites used for Oodnadatta.

Homogenisation of Australia’s temperature record underpins the Australian government’s decarbonisation agenda including upheavals in energy supply networks, rapid increases in energy costs that will force energy-intensive industries offshore, and the destruction of habitats and despoiling of landscapes by wind turbines, solar panels and woody-weeds.

While true costs have not been revealed, the impact on future generations will be at a scale never envisaged by those advocating for change. While elites will accumulate wealth and power, those agitating on their behalf for climate justice and other paradoxical causes are laying the foundations for their own highly indebted, personally limiting Orwellian future. Without abundant, reliable, affordable energy and exports of minerals, coal and agricultural commodities to support orderly market-based transitions, including to nuclear power, Australia risks becoming a voiceless, crippled, heavily manipulated society devoid of a productive base.

Conclusions

  • The Bureau’s homogenisation methods lack rigor, replicability and quality control and are likely to result in trends that have nothing to do with the climate. Selection of comparator neighbouring datasets on the basis of first-differenced correlations axiomatically identifies those with parallel faults.
  • Use of reference series composed of data that are not homogeneous to detect and adjust changes in ACORN-SAT data has no scientific or statistical merit and should be abandoned.     

An important link – find out more

The page you have just read is the basic cover story for the full paper. If you are stimulated to find out more, please link through to the full paper – a scientific Report in downloadable pdf format. This Report contains far more detail including photographs, diagrams, graphs and data and will make compelling reading for those truly interested in the issue.

Click here to download the full paper including photos and tables of data

Note: Line numbers are provided in the linked Report for the convenience of fact checkers and others wishing to provide comment. If these comments are of a highly technical nature, relating to precise Bomwatch protocols and statistical procedures, it is requested that you email Dr Bill Johnston directly at scientist@bomwatch.com.au referring to the line number relevant to your comment.   

Dr Bill Johnston

25 November 2022


Day/Night temperature spread fails to confirm IPCC prediction

By David Mason-Jones

Research by Dr. Lindsay Moore

The work of citizen scientist, Dr. Lindsay Moore, has failed to confirm an important IPCC prediction about what will happen to the spread between maximum and minimum temperatures due to the Enhanced Greenhouse Effect. The IPCC’s position is that this spread will narrow as a result of global warming.

Moore’s work focuses on the remote weather station at Giles in Western Australia, which is run by Australia’s peak weather monitoring body, the Bureau of Meteorology (BoM).

Why Giles? 

Giles is the most remote weather station in mainland Australia and its isolation in a desert makes it an ideal place to study the issue of temperature spread. It is virtually in the middle of the continent, far from influencing factors such as the Urban Heat Island effect, land-use changes, encroachment by shading vegetation, shading by buildings and so on, that can potentially corrupt the data. Humidity is usually low and stable and the site is far from the sea. In addition, as a sign of its importance in the BoM network, Giles is permanently staffed.

As stated, the IPCC hypothesis is that the ‘gap’ will become steadily smaller as the Enhanced Greenhouse Effect takes hold. As temperatures rise the gap will narrow, and this will result in an increase in average temperature, so says the IPCC.

Moore’s research indicates that this is just not happening at this showcase BoM site. It may be happening elsewhere, and this needs to be tested in each case against the range of all data-corrupting effects, but it is not happening at Giles.

Notes about the graphs. The top plot line shows the average Tmax for each year – that is, the average maximum daytime temperature. The middle plot shows the average Tmin for each year – that is, the average minimum night time temperature.

The lower plot shows the result of the calculation Tmax−Tmin. In laypersons’ terms, it is the result you get when you subtract the average yearly minimum temperature from the average yearly maximum temperature. If the IPCC hypothesis is valid, then the lower plot line should be falling steadily through the years because, according to the IPCC, more carbon dioxide in the atmosphere should make nights warmer. Hence, according to the IPCC’s hypothesis, the gap between Tmax and Tmin will become smaller – i.e., the gap will narrow. But the plot line does not show this.
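The three plot lines can be reproduced from the Bureau’s freely available daily data with a few lines of Python; the file name and column names below are hypothetical placeholders.

```python
# Sketch of the three plot lines: yearly average Tmax, yearly average Tmin,
# and the spread Tmax - Tmin. File and column names are hypothetical placeholders.
import pandas as pd

daily = pd.read_csv("giles_daily.csv", parse_dates=["Date"], index_col="Date")
yearly = daily.groupby(daily.index.year)[["Tmax", "Tmin"]].mean()
yearly["Spread"] = yearly["Tmax"] - yearly["Tmin"]     # the lower plot line
```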

The IPCC’s reasoning for its narrowing prediction is that global warming will be driven more by a general rise in minimum temperatures than by a general rise in maximums. This is not my assertion, nor is it Dr. Moore’s; it is the assertion of the IPCC and can be found in the IPCC’s AR4 Report.

Dr. Moore states, “In the AR4 report the IPCC claims that elevated CO2 levels trap heat, specifically the long wave radiation escaping to space.

“As a result of this the IPCC states at page 750 that, ‘almost everywhere night time temperatures increase more than day time temperatures, that decrease in number of frost days are projected over time, and that temperatures over land will be approximately twice average Global temp rise’,” he says, citing page 749 of the AR4 report.

So where can we go to find evidence that the IPCC assertion of a narrowing spread of Tmax−Tmin is either happening or not happening? Giles is a great starting point. Can we use the BoM’s own publicly available data to either confirm, or disprove, the narrowing prediction? The short answer is – Yes we can.

But, before we all get too excited about the result Dr. Moore has found, we need to recognise the limitation that this is just one site and, to the cautious scientific mind, may still be subject to some bizarre influence that somehow skews the result away from the IPCC prediction. If anyone can suggest what viable contenders for ‘bizarre influences’ might be at Giles we would welcome them in the comments section of this post. 

The caution exercised by the rigorous scientific mind can be validly balanced by the fact that Giles is a premier, permanently staffed and credible site. The station was also set up with great care, and for very specific scientific purposes, in the days of the Cold War as part of the British nuclear test program in Australia in the 1950s. It was also important in supplying timely and accurate meteorological data for rocket launches from the Woomera Rocket Range in South Australia during the development of the Blue Streak rocket as part of the British/Australian space program. This range extended almost all the way across Australia from the launching site at Woomera to the arid north-west of Western Australia.

In the early years there were several other weather monitoring stations along the track of the range. Such has been the care and precision of the operation of the station that Giles has the characteristics of a controlled experiment. 

Dr. Moore states, “Giles is arguably the best site in the World because of its position and the accuracy and reliability of its records which is a constant recognised problem in many sites. Data is freely available on the BoM website for this site.”

With regard to the site having the nature of a controlled experiment, something about the method of analysis is also notable. The novel approach of deriving the spread Tmax−Tmin on a daily basis neatly avoids metadata issues that have plagued the reliability of data from other stations and sometimes skewed results from other supposedly reliable observation sites.
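A minimal sketch of that daily-basis derivation, continuing the Giles example above, is shown here: the spread is formed for each day before averaging by year, and a simple linear trend on the yearly spread indicates whether any narrowing is present. The trend test is an illustrative choice, not necessarily Dr. Moore’s exact method.

```python
# Daily-basis derivation of the spread, then a simple trend check on the yearly means.
from scipy import stats

daily["Spread"] = daily["Tmax"] - daily["Tmin"]               # per-day spread
yearly_spread = daily["Spread"].groupby(daily.index.year).mean()

trend = stats.linregress(yearly_spread.index, yearly_spread.values)
print(trend.slope, trend.pvalue)   # a slope near zero implies no narrowing of the spread
```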

“I would argue that the only change in environmental conditions over the life of this station is the increase in CO2 from 280 to 410 ppm,” he says.

“In effect this is, I suggest, a controlled experiment with the only identifiable variable input being CO2 concentration,” he says.  

The conclusion reached by Dr. Moore is that examination of the historical records for this site, accessed through the BoM website, unequivocally shows NO significant reduction in Tmax−Tmin. It also shows no rise in Tmin. Anyone can research this data on the Bureau of Meteorology website as it is not paywalled. It is truly sound data from a government authority for the unrestricted attention of citizens and other researchers.

Dr. Moore concludes, “The logical interpretation of this observation is that, notwithstanding any other unidentified temperature influencing factor, the Enhanced Greenhouse Effect due to elevated CO2 had no discernible effect on temperature spread at this site. And, by inference, any other site.”

He further states, “On the basis of the observations I have made, there can be no climate emergency due to rising CO2 levels, whatever the cause of the rise. To claim so is just scaremongering.

“Any serious climate scientist must surely be aware of such basic facts yet, despite following the science for many years, I have never seen any discussion on this specific approach,” he says.

Finally, Dr. Moore poses a few questions and makes some pertinent points:

He asks, “Can anyone explain, given the current state of the science why there is no rise in minimum temperatures (raw) or, more importantly, no reduction in Tmax-Tmin spread, over the last 65 years of records despite a significant rise in CO2 levels at Giles (280-410ppm) as projected by the IPCC in their AR4 report?” He notes that other published research indicates similar temperature profiles in the whole of the central Australian region as well as similarly qualified North American and World sites.

Seeking further input, he asks, “Can anyone provide specific data that demonstrates that elevated CO2 levels actually do increase Tmin as predicted by the IPCC?” And further, “Has there been a reduction in frost days in pristine sites as predicted by the IPCC?”

On a search for more information, he queries, “Can anyone explain why the CSIRO ‘State of the Climate’ statement (2020) says that Australian average temperatures have risen by more than 1 deg C since 1950 when, clearly, there has been no such rise at this pristine site?” With regard to this question, he notes that Giles should surely be the ‘go to’ reference site in the Australian Continent.

Again he tries to untangle the web of conflicting assertions by reputedly credible scientific organisations. He notes that, according to the IPCC, rising average temperatures are attributable to a rise in minimum temperatures. For the CSIRO State of the Climate statement to be consistent with this, it would necessitate a rise of around 2 deg C in Tmin. But, at Giles, there was zero rise. He also notes that, according to the IPCC, temperature rises over land should be double world-average temperature rises. But he can see no data to support this.

Dr. Moore’s final conclusion: “Through examination of over 65 years of data at Giles it can be demonstrated that, in the absence of any other identifiable temperature forcing, the influence of the Enhanced Greenhouse Effect at this site appears to be zero,” he says. “Not even a little bit!” 

David Mason-Jones is a freelance journalist of many years’ experience. He publishes the website www.bomwatch.com.au

Dr. Lindsay Moore, BVSc. For approaching 50 years Lindsay Moore has operated a successful veterinary business in a rural setting in the Australian State of Victoria. His veterinary expertise is in the field of large animals and he is involved with sophisticated techniques such as embryo transfer. Over the years he has seen several major instances in veterinary science where something that was once accepted on apparently reasonable grounds, and adopted in the industry, has later been proven to be incorrect. He is aware that this phenomenon is not confined to veterinary science but happens in other scientific fields as well. The lesson he has taken from this is that science needs to advance with caution and that knee-jerk assumptions that ‘the science is settled’ can lead to significant mistakes. Having become aware of this problem in science he has become concerned about how science is conducted and how it is used. He has been interested in the global warming issue for around 20 years.

General link to Bureau of Meteorology website is www.bom.gov.au

Trends in sea level at Cooktown, Great Barrier Reef

Mean sea level change at Cooktown, Great Barrier Reef, Queensland

Dr Bill Johnston

(scientist@bomwatch.com.au)

Main points

  • There is no evidence that melting glaciers, increasing levels of atmospheric CO2 or expansion of the oceans due to rising temperatures have caused sea levels to increase at Cooktown. Consequently, the suggestion by the IPCC that sea level will rise by 26 to 29 cm by 2030 is far-fetched.
  • As trends measured by multiple tide gauges adjacent to the reef differ from satellite-based estimates, and time-lapse aerial photographs since the 1950s show no shoreward encroachment of tidal wetting fronts, satellite data should not be used in critical studies or to inform government policy.
  • The El Niño Southern Oscillation exerts an overarching impact on fluctuations in sea level and other climate and environmental variables.

Background

The Great Barrier Reef Marine Park Authority (GBRMPA) claims that due to global warming, sea level is increasing and that the fastest rate of sea level rise is in the northern sector of the Reef. Further, the Intergovernmental Panel on Climate Change (IPCC) predicts sea level will rise by around 26 to 29 centimetres over the next 9 years (i.e., by 2030) and by 47 to 62 centimetres by 2080.

But is it true or is it just untrustworthy science?

Rapid rates of sea level change should be evident in mean sea level (MSL) measured by tide gauges relative to the land, especially at Cooktown where Maritime Safety Queensland has operated an automatic tide gauge since January 1996 (Figure 1). Also, evidence of shoreline encroachment resulting from sea level rise should be obvious in time-series of aerial photographs available from Queensland Government archives since the 1950s and 1960s. 

Figure 1. The Cooktown storm surge tide gauge (arrowed) located on the wooden-decked wharf prior to its restoration in 2015. (Photo 44740 from the Cultural Atlas of Australia.)  

What we did

High-frequency (10-minute) tide gauge data were downloaded from the Queensland Government Open Data portal, aggregated into monthly averages and analysed using a technique that partitioned variation IN the data caused by influential covariables, from underlying impact variables that impacted ON the data-stream.
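A minimal sketch of the aggregation step is shown below; the file name and column names for the Queensland open-data extract are hypothetical placeholders.

```python
# Aggregating 10-minute tide-gauge readings into monthly mean sea level.
import pandas as pd

tg = pd.read_csv("cooktown_10min.csv", parse_dates=["DateTime"], index_col="DateTime")
monthly_msl = tg["WaterLevel"].resample("MS").mean()    # calendar-month averages
```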

Aerial photographs taken in 1969, 1974, 1979, 1983, 1987, 1989, 1991, 1994 and high-definition Google Earth Pro Satellite imagery were also examined for signs of tidal encroachment at Cherry Tree Bay east of Cooktown across the peninsula.

What we found

The Bureau of Meteorology Southern Oscillation Index (SOI) was the most influential of a range of climate and environmental variables that affected MSL. Rainfall and rainfall two months previously (RainLag2) also explained a statistically significant but small portion of MSL variation. Having accounted for those covariables, extraneous factors impacting on the data-stream caused step-changes in 1997, 2009 and 2015.
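A minimal sketch of fitting those covariables to monthly MSL follows; it assumes a monthly file already containing MSL, SOI and rainfall columns (hypothetical names) and builds the two-month rainfall lag with shift().

```python
# Sketch of the covariable fit: MSL ~ SOI + Rain + RainLag2.
# "cooktown_monthly.csv" and its column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

m = pd.read_csv("cooktown_monthly.csv", parse_dates=["Month"], index_col="Month")
m["RainLag2"] = m["Rain"].shift(2)              # rainfall two months previously

covar_fit = smf.ols("MSL ~ SOI + Rain + RainLag2", data=m).fit()
print(covar_fit.params, covar_fit.pvalues)
residual_msl = covar_fit.resid                  # MSL with covariable effects removed
```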

Following Tropical Cyclone Justin in March 1997, a major dredging campaign removed 108,000 m3 of accumulated sediment from the harbour floor, which caused the wharf supporting the tide gauge to settle about 40 mm into the bed of the river by January 1998. Dredging of more sediment in 1999 (26,000 m3) did not affect the gauge. However, in March 2009 it settled a further 37 mm probably as a result of disturbances caused by TC Ellie (30 January to 4 February 2009) and TC Hamish (4 to 11 March 2009). The harbour was dredged again following TC Ita in 2014 (60,000 m3), then in January 2015 the former wooden wharf that supported the tide gauge was strengthened and re-decked with a new composite material capable of allowing small trucks to load and unload supplies (https://www.wagner.com.au/main/our-projects/cooktown-wharf/). Dredging and refurbishment caused the tide-gauge to settle a further 32 mm. Step-changes underlying the data-stream show the gauge is not well-secured to the harbour floor.

The highly significant step-changes (P <0.001), totalling 109 mm (SEM 9.4 mm), accounted for all the apparent MSL trend. There is therefore no evidence that sea level is rising in the northern sector of the Reef. The IPCC prediction that sea levels will increase globally by 26 to 29 cm by 2030 is an unlikely scenario.
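Continuing the sketch above, the step-changes can be represented as level-shift dummy variables; if they absorb the apparent trend, a linear time term added alongside them should not be significant. The break dates below are assumptions read off the narrative (settlement complete by January 1998, March 2009 and January 2015).

```python
# Continues the sketch above: level-shift dummies for the three settlement events,
# plus a linear time term to test for any residual trend once the steps are fitted.
m["Step1997"] = (m.index >= "1998-01-01").astype(int)
m["Step2009"] = (m.index >= "2009-03-01").astype(int)
m["Step2015"] = (m.index >= "2015-01-01").astype(int)
m["t"] = range(len(m))

step_fit = smf.ols("MSL ~ SOI + Rain + RainLag2 + Step1997 + Step2009 + Step2015 + t",
                   data=m).fit()
print(step_fit.params[["Step1997", "Step2009", "Step2015"]])   # step sizes (MSL units)
print(step_fit.pvalues["t"])   # P for a residual trend after the steps are accounted for
```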

A Queensland Government aerial photograph taken on 11 September 1969 was re-scaled and oriented so features across the peninsula east of Cooktown including the well-defined Cherry Tree Bay and associated rocky headlands can be directly compared as an overlay on a Google Earth Pro satellite image taken on 16 September 2018.

Marked where they intersect the headlands, tidal wetting fronts are the same along the low-gradient beach. Littoral zones around the headlands that define inter-tidal habitats also directly align. The same shoals and individual shore-line rocks, the small watercourse draining to the beach: all the same. There is no evidence of tidal encroachment and therefore no evidence that sea levels have materially changed over the intervening 49-years (Figure 2).

What we conclude      

Satellite data relied upon by the IPCC do not stack up with tide gauge data, or with aerial photographs taken between 1969 and 1994 compared with high-definition Google Earth Pro satellite imagery of the same sandy beach.

It seems that while CSIRO et al. can model sea level relative to some point at the centre of the earth with mm/year precision using satellites traversing the same patch of heaving ocean every 20 days or so, they and other oceanographers and elite climate scientists lack the skills to analyse tide gauge records or interpret aerial photographs they can freely download from the internet.

Satellite data upon which speculation relating to sea level rise depends are pre-loaded with trend and should not be used for critical studies, for spreading alarm or for informing government policy. It is a ridiculous notion that sea levels will increase by almost 300 mm during the next 9 years.

Figure 2. Aerial photograph of Cherry Tree Bay, east of Cooktown taken on 11 September 1969 overlaid on Google Earth Pro (GEP) Satellite image for 16 September 2018; upper-left, GEP opacity 0%, 50%; lower-left 75%, 100%. Tidal wetting fronts, littoral zones, rocks and shoals show no encroachment or change in exposure due to rising sea levels over the intervening 49-years.

Two important links – find out more

First Link: The page you have just read is the basic cover story for the full paper. If you are stimulated to find out more, please link through to the full paper – a scientific Report in downloadable pdf format. This Report contains far more detail including photographs, diagrams, graphs and data and will make compelling reading for those truly interested in the issue.

 Click here to download the full paper with photos graphs and data

Second Link: This link will take you to a downloadable Excel spreadsheet containing a vast number of data points for the Cooktown tide gauge, which was used in the analysis of the sea level situation at Cooktown to support the Full Report.

Click here to access full table of data supporting the Full Report