In the Lateral Flow
A flawed comparison of testing statistics flows on social media.
The Brexit Party chairman's post, shared over 3,100 times, asserted:
Lateral Flow Tests, agreed by PHE as 99.7% accurate, show 80% fewer cases in Liverpool v. Govt standard PCR tests.
The flawed comparison ignores that the two tests were given to different groups of people. The post also mistakes the true-negative rate for ‘accuracy’.
Richard Tice, the businessman and Brexit Party chairman, wrote:
BREAKING: Lateral Flow Tests, agreed by PHE as 99.7% accurate, show 80% fewer cases in Liverpool v. Govt standard PCR tests. First major comparison, shows whole Govt strategy possibly based on flawed data (as many been saying for months) Urgent statement please [Matt Hancock].
Testing is not perfect. For simplicity, this article treats testing as binary: giving positive or negative results. In practice, diagnostic testing is not binary: retesting and other checks help inform clinical judgements.
There are statistical measures of testing:
- True-negative rate: For uninfected people, the proportion who get a correct negative result. In the jargon, this is the test’s specificity, or 100% minus the false-positive rate.
- True-positive rate: For infected people, the proportion who get a correct positive result. This is the test’s sensitivity, or 100% minus the false-negative rate.
- Accuracy: This is the number of correct results, as a proportion of those tested.
These statistical terms are not intuitive, and can lead to confusion.
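The three measures above can be illustrated with a small sketch. The counts below are invented purely for illustration; they are not figures from any study.

```python
# Hypothetical confusion matrix: 100 infected and 1,000 uninfected people.
true_positives = 77    # infected people with a correct positive result
false_negatives = 23   # infected people with a wrong negative result
true_negatives = 997   # uninfected people with a correct negative result
false_positives = 3    # uninfected people with a wrong positive result

# True-positive rate (sensitivity): correct results among the infected.
sensitivity = true_positives / (true_positives + false_negatives)

# True-negative rate (specificity): correct results among the uninfected.
specificity = true_negatives / (true_negatives + false_positives)

# Accuracy: correct results among everyone tested.
total = true_positives + false_negatives + true_negatives + false_positives
accuracy = (true_positives + true_negatives) / total

print(f"True-positive rate: {sensitivity:.1%}")  # 77.0%
print(f"True-negative rate: {specificity:.1%}")  # 99.7%
print(f"Accuracy: {accuracy:.1%}")               # 97.6%
```

Note that all three numbers differ: a test can have a 99.7% true-negative rate without being "99.7% accurate".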
Lateral flow tests work by placing a liquid sample on an absorbent pad. The liquid flows along the pad, which carries strips that react with the material of interest. For SARS-CoV-2 testing, the strip holds antibodies which bind to proteins on the virus. If those proteins are present, a coloured line appears. These rapid tests work fast, and operate much like pregnancy tests.
University of Oxford and Public Health England studied the Innova lateral flow tests. The estimated true-negative rate was 99.7% (between 99.5% and 99.8%).
Tice uses this figure to say how ‘accurate’ the test is. This is only one type of statistical accuracy. The estimated true-positive rate was 77%. This is an estimate: it could be somewhat higher or lower.
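Overall accuracy depends not only on the test's true-negative and true-positive rates, but on how common infection is among those tested. A short sketch, using the study's two estimated rates and some assumed prevalence levels, shows how accuracy shifts:

```python
# Rates estimated in the Oxford/PHE study of the Innova test.
sensitivity = 0.77   # true-positive rate
specificity = 0.997  # true-negative rate

# Assumed prevalence levels, chosen only for illustration.
for prevalence in (0.01, 0.10, 0.50):
    # Accuracy = correct results among infected + correct results
    # among uninfected, weighted by how common each group is.
    accuracy = prevalence * sensitivity + (1 - prevalence) * specificity
    print(f"prevalence {prevalence:.0%}: accuracy {accuracy:.1%}")
```

The rarer the infection among those tested, the closer accuracy sits to the true-negative rate; the single figure of 99.7% is not a fixed property called "accuracy".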
Detection of the viral antigen also differed depending on who used the device. The viral load influenced whether the lateral flow test gave a positive result, too.
The Liverpool conundrum
Liverpool is undertaking a mass testing pilot. That pilot used the lateral flow tests alongside the standard PCR tests.
Polymerase chain reaction tests look for the virus’s genetic material in the sample. Scientists add enzymes to the nose or throat swab sample. Those enzymes copy strands of the genetic material, amplifying their presence. That way, we can then detect the presence of the virus. The cycle threshold is the number of amplification cycles needed before the virus becomes detectable.
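The amplification step can be sketched in a few lines. The starting copy count and detection threshold below are assumptions for illustration, not real laboratory values:

```python
# Sketch: each PCR cycle roughly doubles the genetic material present.
initial_copies = 10            # assumed viral copies in the sample
detection_threshold = 1e7      # assumed copies needed for detection

cycles = 0
copies = initial_copies
while copies < detection_threshold:
    copies *= 2
    cycles += 1

print(cycles)  # 20 — the cycle threshold (Ct) for this example sample
```

A sample with more virus crosses the threshold in fewer cycles, so a lower cycle threshold indicates a higher viral load.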
This kind of testing has a very high true-negative rate. The true-positive rate is not as high: a swab sample may fail to pick up any viral material, for example. If the person has the virus, that gives a false-negative result.
The provisional figures for Liverpool residents up to 16th November were:
- 71,684 residents had lateral flow tests;
- 51,855 residents had PCR tests;
- A total of 119,054 residents had at least one of the two tests.
This means that 4,485 residents had both tests: 71,684 plus 51,855, minus the 119,054 who had at least one.
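The overlap follows from inclusion–exclusion on the provisional figures:

```python
lateral_flow = 71_684   # residents with lateral flow tests
pcr = 51_855            # residents with PCR tests
at_least_one = 119_054  # residents with at least one of the two tests

# Inclusion–exclusion: |A| + |B| - |A or B| = |A and B|
both = lateral_flow + pcr - at_least_one
print(both)  # 4485
```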
Tice implies the difference in positive ratios is wholly due to the tests themselves. The lateral flow tests are likely to give negative results for people with low viral loads.
The two groups of people receiving the tests were different, with an overlap of fewer than 4,500 residents. The two positive proportions are therefore not comparable, and the gap does not suggest PCR testing represents “flawed data”. It is plausible that people with symptoms, or in contact chains, were likelier to go for PCR testing. That is the flaw in a naive comparison of positive results between the two groups.
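A small simulation makes the selection effect concrete. Here the infection rates for each group are assumptions invented for illustration, and the test is assumed to be perfectly reliable; even so, the two groups show very different positive proportions:

```python
import random

random.seed(0)

def positive_share(n, infection_rate):
    """Share of positive results in a group of n people, assuming a
    test that always gives the correct answer."""
    infected = sum(random.random() < infection_rate for _ in range(n))
    return infected / n

# Assumed infection rates: higher among people with symptoms or
# in contact chains, who are likelier to seek a PCR test.
general_population = positive_share(70_000, 0.007)
symptomatic_group = positive_share(50_000, 0.035)

print(f"general population: {general_population:.1%} positive")
print(f"symptomatic group:  {symptomatic_group:.1%} positive")
```

The gap here comes entirely from who was tested, not from the tests themselves, which is why the Liverpool proportions cannot be compared directly.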
Diagnostic testing has different statistical measures. The true-negative rate is not the same as accuracy.
Different people receive different kinds of tests. We should avoid rapid conclusions about rapid testing, and wait for the flow of analysis.