Gender Pay Gap Reporting and “Meaningless” Statistics

The gender pay gap reporting measures, mandated by the UK government, require large companies to calculate and publish gender pay statistics for their organisation. This data can then be viewed by the public. The reporting measures have now entered their second year. The Royal Statistical Society has published a series of recommendations on how to improve this reporting, including better guidance, calculators, and enhanced measures.

This article will consider multiple issues: the supposed ‘myth’ of the gender pay gap, statistical misunderstandings and misrepresentation, differences between the ONS gender pay gap and these reporting measures, decomposing pay differences, and whether the reporting measures are “meaningless”.

Mythos and Demos

The gender pay gap is defined as: the difference between the pay of the average woman and the average man, expressed as a percentage of the average male pay.

To fix that definition properly, we need to further define what we mean by ‘pay’ and ‘average’. ‘Pay’ typically means hourly pay (which could be just for full-time workers). ‘Average’ usually stands for the median, but you do see the mean used.

In the UK, the median hourly gender pay gap for full-time workers, excluding overtime, was estimated to be 8.6% in 2018.
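As a sketch, that definition can be turned into code. The hourly pay figures below are made up for illustration, not real data:

```python
import statistics

def gender_pay_gap(men_pay, women_pay):
    """Median gender pay gap: the difference between median male pay
    and median female pay, as a percentage of median male pay."""
    median_men = statistics.median(men_pay)
    median_women = statistics.median(women_pay)
    return 100 * (median_men - median_women) / median_men

# Illustrative hourly pay values (invented for this example)
men = [12.50, 14.00, 15.20, 18.75, 30.00]
women = [11.80, 13.10, 14.30, 16.00, 25.00]
print(f"{gender_pay_gap(men, women):.1f}%")  # gap as a percentage
```

A positive result means the median man is paid more than the median woman; the same function with the mean substituted for the median gives the mean-based gap.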

The gender pay gap therefore measures how different the pay of the average man and the average woman is. This is distinct from unequal pay: men and women being paid differently for equivalent work.

To reiterate: the gender pay gap is not a measure of unequal pay.

Zoe Williams wrote in The Guardian about the various ‘debunkings’ of the gender pay gap ‘myth’, from YouTube philosophers to actual academic philosophers. The statement is simple: ‘the gender pay gap is a myth because it only has one variable: are you a woman or a man, and how much do you get paid?’

Political campaigners can conflate the gender pay gap with unequal pay. The Fawcett Society, for instance, calls its campaigning day about the gender pay gap ‘Equal Pay Day’. In turn, the mighty myth-busters strike, labelling the whole gap a ‘myth’ because factors other than direct pay discrimination are at work. This reflects the original mistake back, like a dull mirror.

Confounding and Colliding

A common suggestion is that we should ‘control’ for factors known to influence pay. One version of this argument is:

Multivariate analysis of the pay gap indicates that it doesn’t exist.

Here is another iteration, from a City AM article written by Kate Andrews (of the Institute of Economic Affairs):

The measures don’t bother to break down any like-for-like comparisons between employees. As a result job, age, background, education, experience, and all other individualised circumstances are not accounted for in the reported numbers.

The argument that we should ‘control’ for factors (such as occupation) misunderstands what role those factors play, and what the goal of the analysis is. We want to estimate the effect of gender on pay.

Suggesting we should ‘control’ for occupation implies that it is somehow a confounding variable: one which causes differences in both gender and pay.

Let’s look at a simplified graph of how the different variables affect one another. In this graph, gender affects pay both directly and through occupation.

By ‘controlling’ for occupation, we annihilate one way that gender affects pay. Various factors influence a person’s occupation in the workforce. In my simplified graph, these are unobserved.

Many factors, including gender, collide on occupation. Controlling for occupation then causes problems: it opens a spurious path through the unobserved variables, so it is no longer possible to estimate the direct effect of gender on pay.

A statistical analysis should avoid controlling for a collider. Confounders and colliders require opposite treatment, and the two should not be confused.
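Collider bias can be demonstrated with a small simulation. This is an invented toy model, not the author’s graph or real data: gender has no direct effect on pay at all, yet conditioning on occupation (a collider of gender and an unobserved factor) manufactures a within-occupation pay difference.

```python
import random
import statistics

random.seed(42)

def mean_pay(records, gender, occ=None):
    """Mean pay for one gender, optionally restricted to one occupation."""
    return statistics.mean(p for g, o, p in records
                           if g == gender and (occ is None or o == occ))

records = []
for _ in range(100_000):
    g = random.randint(0, 1)       # 1 = man, 0 = woman (illustrative coding)
    u = random.gauss(0, 1)         # unobserved factor affecting occupation and pay
    o = 1 if g + u > 0.5 else 0    # occupation: a collider of gender and u
    pay = 10 + 5 * o + 2 * u       # pay depends on occupation and u only:
                                   # NO direct effect of gender on pay
    records.append((g, o, pay))

overall_gap = mean_pay(records, 1) - mean_pay(records, 0)
within_high = mean_pay(records, 1, occ=1) - mean_pay(records, 0, occ=1)
print(f"Overall gap (men - women):     {overall_gap:+.2f}")
print(f"Within the higher occupation:  {within_high:+.2f}")
```

The overall gap is positive, operating entirely through occupation. Within the higher-paid occupation the gap flips sign: women who reached that occupation needed a higher unobserved factor, which also raises their pay. Conditioning on the collider opened that spurious path.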

Alternatively, suppose we wanted to understand the gender pay gap within age groups. Here, we should control (or adjust) for age, studying differences within each age group. The ONS report shows its preferred gender pay gap measure by age group.

Different Gaps

The Office for National Statistics produces an annual report on the gender pay gap, based on the Annual Survey of Hours and Earnings, which uses a 1% sample of Pay As You Earn data from HMRC.

The ONS’s preferred measure of the gender pay gap uses the median gross hourly pay of full-time workers, excluding overtime. Contrary to the IEA’s insinuations, mean-based measures are not ‘unofficial’ statistics. The mean average of pay is not preferred, because of the right-skewed distribution of pay.
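A quick illustration of that skew, with made-up figures: in a right-skewed pay distribution, a few very high earners pull the mean well above the median.

```python
import statistics

# Illustrative right-skewed hourly pay (invented): most workers cluster
# at the low end, with a couple of very high earners in the tail.
pay = [9.0, 10.0, 10.5, 11.0, 12.0, 13.0, 15.0, 18.0, 45.0, 120.0]

print(f"Median: {statistics.median(pay):.2f}")  # robust to the tail
print(f"Mean:   {statistics.mean(pay):.2f}")    # dragged up by top earners
```

The median describes the ‘typical’ worker; the mean is sensitive to a handful of extreme salaries, which is why the ONS prefers the median.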

The gender pay gap reporting measures use a different definition, asking for both mean and median pay differences for all workers (rather than full-time workers only). The reporting measures also apply to individual companies, rather than to all employees across the economy.

As the Royal Statistical Society report (assisted by Nigel Marriott, a Chartered Statistician) highlights, there is currently some misreporting by organisations. The official guidance appears unhelpful to some companies, and could be improved.

Decomposition

Instead of controlling on characteristics, such as full-time working, we could instead try to decompose the gender pay gap. In 1973, two economists (Alan Blinder and Ronald Oaxaca) independently suggested a method for breaking down differences in pay.

To use one example, full-time workers earn more than part-time workers, and men are more likely to work full-time. However, if women worked full-time to the same extent as men, what would the unexplained pay difference be? This is what the Blinder-Oaxaca decomposition does: breaking down pay gaps into what can be explained by different average characteristics of women and men (such as full-time working) and what is unexplained by those different characteristics.
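A minimal sketch of a two-fold Blinder-Oaxaca decomposition, with one explanatory characteristic (full-time working) and invented pay figures. This uses the men’s pay equation as the reference structure; real applications use many characteristics and full regression models:

```python
import statistics

def ols_simple(x, y):
    """One-variable least-squares fit: returns (intercept, slope)."""
    mx, my = statistics.mean(x), statistics.mean(y)
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx, slope

def blinder_oaxaca(x_m, pay_m, x_w, pay_w):
    """Two-fold decomposition of the mean pay gap into the part explained
    by different average characteristics and the unexplained remainder."""
    a_m, b_m = ols_simple(x_m, pay_m)   # men's pay equation (reference)
    a_w, b_w = ols_simple(x_w, pay_w)   # women's pay equation
    xbar_m, xbar_w = statistics.mean(x_m), statistics.mean(x_w)
    explained = b_m * (xbar_m - xbar_w)             # characteristics effect
    unexplained = (a_m - a_w) + xbar_w * (b_m - b_w)  # coefficients effect
    return explained, unexplained

# Illustrative data (invented): 1 = full-time, 0 = part-time
men_ft, men_pay = [1, 1, 1, 1, 0], [16.0, 17.0, 15.0, 16.0, 10.0]
women_ft, women_pay = [1, 1, 0, 0, 0], [15.0, 16.0, 10.0, 9.0, 10.0]

explained, unexplained = blinder_oaxaca(men_ft, men_pay, women_ft, women_pay)
gap = statistics.mean(men_pay) - statistics.mean(women_pay)
print(f"Gap {gap:.2f} = explained {explained:.2f} + unexplained {unexplained:.2f}")
```

Here most of the gap is ‘explained’ by men working full-time more often; as the article notes, that label says nothing about why the characteristics differ.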

In the ONS study, 36% of the overall gender pay gap was explained, and 64% was unexplained. The ‘unexplained’ share should not be interpreted as the pay gap due to discriminatory behaviour, although discrimination may play a part. There are two reasons:

  • What is ‘explained’ in the decomposition depends on the characteristics that are included;
  • If discrimination affected a characteristic, like education, then that discrimination is laundered through the ‘explained’ share.

Meaningless?

To return to the Institute of Economic Affairs, Kate Andrews claims in its latest briefing note:

The requirement to measure pay gaps across entire organisations (rather than between comparable roles within organisations), as well as the omission of necessary data, renders the majority of the findings meaningless.

It is entirely meaningful to find the median pay of men and median pay of women in one organisation, and ask what the difference is.

That paragraph encapsulates the issue: even if companies did provide gender pay gaps for comparable roles, the fundamental problem would remain: why are women not in the higher-paid roles to the same extent as men?

Ms Andrews says it is “obviously faulty” to compare the pay of a junior researcher to a CEO. (It should be highlighted that the median pay gap calculation generally does not, in fact, do this.) Furthermore, the note does not elaborate on why CEOs are predominantly men.

Data is meant to help build evidence, and allow people to make better decisions. In-role pay discrimination is neither the totality of the gender pay gap, nor are gender pay gap statistics “meaningless” if they fail to answer that narrow research question.

This blog looks at the use of statistics in Britain and beyond. It is written by RSS Statistical Ambassador and Chartered Statistician @anthonybmasters.
