Total False Recall
No, a recent Opinium poll was not based on a sample with a 10-point Conservative lead in 2017.
It has been claimed that a recent Opinium poll suggested there was a 10-point Conservative lead in 2017, meaning its sample was not representative.
It did not. This article also discusses the problem of false recall.
What the poll said
Opinium conducted a survey of 2,003 UK adults on 20th to 22nd November 2019. This poll was part of Opinium’s regular survey series for The Observer. The central vote intention estimates put the Conservatives on 47%, Labour on 28%, and the Liberal Democrats on 12%.

In a post shared on tax campaigner Richard Murphy’s blog (written by a contributor), it was claimed that this Opinium poll was “wrong”. In that poll, one question (Q:V005a) asks:
And thinking back to the UK general election in June 2017, which, if any, of the following parties did you vote for?
46% of respondents selected Conservative, and 36% ticked Labour. Since this is a 10-point lead, when the actual result was a 2.4-point Conservative lead, the author argues the Conservatives are “over represented by 7.6%”. Consequently, the author writes that the poll is a “magic trick”, “balls” and “wrong”. The co-founder of Novara Media also posted similar criticisms.
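For what it is worth, the critic’s 7.6 point figure is simple subtraction. The one-line check below reproduces the critic’s sums only, not Opinium’s weighting:

```r
# Reproducing the critic's arithmetic (not Opinium's methodology)
recalled_lead <- 46 - 36  # Conservative minus Labour, recalled 2017 vote in the sample
actual_lead <- 2.4        # actual Conservative lead at the June 2017 election
recalled_lead - actual_lead
#> [1] 7.6
```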
This criticism is ill-founded. Like YouGov, Opinium asks how people voted in a general election soon after the event, and uses the first available recall in its weightings rather than answers to later recall questions.
For clarity, the recall used is the earliest available: collected shortly after June 2017, when the panellist joined the panel, or, failing that, in the latest survey.
What the poll’s data tables demonstrate is that some people do not correctly remember how they voted.

The false recall conundrum
To demonstrate the problem of using past vote recall in weightings, we can look at an illustrative and fictional past election: 50% backed Red, 40% voted Blue and 10% Green.

Now, imagine that 1 in 5 Red voters wrongly believe they voted Blue in the past election. This is called false recall or ‘differential recall error’.
In the new election, every voter backs the same party they believe they voted for last time. Imagine we draw a perfect sample: 50% back Blue (including the 10% of voters who falsely recall voting Blue last time), 40% back Red, and 10% Green.

Our naive survey researcher thinks there are too many Blue voters in this sample and too few Reds, so they weight by past vote recall: Blue respondents are weighted down and Red respondents are weighted up until the sample matches the actual past result.
Since no-one believes they have switched, the weighted vote intention now looks exactly like the past election: Red on 50% and Blue on 40%, when the true intention is Blue on 50% and Red on 40%. False recall has induced a polling error in our illustrative example.

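To make the arithmetic of this illustrative example concrete, here is a minimal sketch in R. The party shares are the fictional ones above, and the “weighting” step simply scales each recalled-past-vote group so the sample matches the actual past result:

```r
# Illustrative, fictional shares from the example above
past_actual   <- c(Red = 0.50, Blue = 0.40, Green = 0.10)  # true past result
past_recalled <- c(Red = 0.40, Blue = 0.50, Green = 0.10)  # what voters remember
                                                           # (1 in 5 Red voters recall Blue)

# Everyone intends to back the party they *believe* they voted for last time,
# and the sample is perfect, so true intention matches the recalled shares.
true_intention <- past_recalled

# Naive weighting: scale each recalled-past-vote group so the weighted
# sample matches the actual past result.
weights <- past_actual / past_recalled
weighted_estimate <- true_intention * weights

true_intention     # Red 0.4, Blue 0.5, Green 0.1 (the truth)
weighted_estimate  # Red 0.5, Blue 0.4, Green 0.1 (the naive, weighted estimate)
```

The weighted estimate reproduces the past election exactly, understating Blue by 10 points, even though the raw, unweighted sample was perfect.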
Surely, no-one forgets how they voted? Unfortunately, some do. Some people may also misreport how they voted.
This can cause difficulty in social research if you are asking what people did and felt years ago. There are signs this is happening in the UK, and it may lead to errors in survey estimates.
False recall has been studied by academics too. The imperfection of memory is an established part of psychological research.
In trying to combat false recall, internet panel companies might treat their respondents like a longitudinal panel, taking the earliest available response about how they voted in elections and referendums. Alternatively, some companies might elect not to weight their responses by past vote recall at all.
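As a sketch of the “earliest available response” approach, the snippet below keeps the first recorded recall for each panellist. The data frame, column names and dplyr pipeline are hypothetical illustrations, not Opinium’s or YouGov’s actual processing:

```r
# Hypothetical example: one row per (panellist, survey) pair
library(dplyr)

recalls <- tibble::tribble(
  ~panellist_id, ~survey_date,          ~recalled_2017_vote,
  1L,            as.Date("2017-06-12"), "Conservative",
  1L,            as.Date("2019-11-21"), "Labour",        # later, contradictory recall
  2L,            as.Date("2019-11-21"), "Labour"         # only one recall available
)

# Keep the earliest recorded recall for each panellist, rather than
# their most recent answer, and feed that into the past-vote weighting.
first_recall <- recalls %>%
  group_by(panellist_id) %>%
  slice_min(survey_date, n = 1, with_ties = FALSE) %>%
  ungroup()

first_recall  # one row per panellist: the first available recall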
Trying to account for false recall is not a ‘trick’. Opinion polling is a difficult exercise, and even seemingly ‘basic’ strategies may be fraught with trouble.
The R code for the graphs is available on R Pubs.