In around 300 BC, the oldest known proof of Pythagoras' theorem was published in Euclid's "Elements". There are possibly more proofs of this theorem, which states that in a right-angled triangle "the square of the hypotenuse is equal to the sum of the squares of the other two sides", than of any other. Early in high school, c^2 = a^2 + b^2 is usually given without proof and students (sadly) "drill" special cases: if c = 5 and b = 4, what is a? The theorem is assumed to be true. But what if an inquisitive student asks, "How do we know it's true for all cases?" In shorthand we can say: it's been proven. And if the student asks how, we can provide any one of the approximately 367 proofs there are.
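The drill itself is trivial to mechanize. Here is a quick sketch (the function name and error check are mine, not from any textbook):

```python
import math

def missing_leg(c, b):
    """Given hypotenuse c and one leg b of a right-angled triangle,
    return the other leg a, using a^2 + b^2 = c^2."""
    if b >= c:
        raise ValueError("the hypotenuse must be the longest side")
    return math.sqrt(c**2 - b**2)

# The classroom special case: c = 5 and b = 4 gives a = 3
# (the familiar 3-4-5 right triangle).
print(missing_leg(5, 4))  # 3.0
```

Running the drill this way answers the special case instantly, but of course it says nothing about why the theorem holds for all cases. That is what the proof is for.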
Now E = mc^2, also called the "Mass-Energy Equivalence", can be considered a law of physics. (I just note here, as an aside, that E = mc^2 is the special case for an object at rest. It's not generally applicable because it only "works" in the rest frame of the object with mass. But this is irrelevant to my point, so we can skip past this complication.) Einstein was the first to discover it. But how? Via a proof. He derived it. How do we know E = mc^2? Because it's provably the case. How? Well, what Einstein did was begin with his two "postulates" (first: the laws of physics are the same for all observers; second: the speed of light is constant). From these postulates he proved the so-called "Lorentz transformations". And from there the kinetic energy of particles can be derived and, long story short, he concluded, he proved, E = mc^2. So he didn't begin with the assumption that E = mc^2. He began with the two postulates. Everything else just followed as a matter of mathematical necessity.
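For the curious, the standard textbook statement of the rest-frame caveat (this is mainstream special relativity, not something from Einstein's derivation as described above) is:

```latex
% Rest energy (the famous special case, valid in the object's rest frame):
E_0 = mc^2
% The general energy-momentum relation for a particle of rest mass m
% and momentum p:
E^2 = (pc)^2 + (mc^2)^2
% Setting p = 0 recovers E = mc^2, which is why the famous formula
% applies only to an object at rest.
```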
A proof just means, in "critical rationalist terms", that we have found a very good (exceedingly hard to vary) mathematical explanation of something (we may call it a theorem in mathematics, a conclusion in logic, or a law in physics; these are not strict terms, they are rough). Of course, if we find that the premisses upon which Euclid's "Elements" is based are false, or that the premisses upon which even deeper theories of geometry are based are false, then we would refute Pythagoras' theorem. And if we found Einstein's postulates were false, this too could refute E = mc^2. But this whole, quite reasonable, notion that we should expect the things we know not to be perfect, and thus to be improvable, is not a prescription for not taking seriously what we know (fallibly) today. We can use Pythagoras' theorem and E = mc^2 to solve actual problems today. So too with David Deutsch's proof of Turing's Principle in the context of quantum theory.
So when we say "David Deutsch proved the Turing Principle(1) in 1985", namely that "every finitely realizable physical system can be simulated to arbitrary precision by a universal model computing machine operating by finite means"(2), we mean that, beginning with some uncontroversial assumptions from mainstream quantum theory, along with what was already known from classical computing as discovered by Turing, he was able to reach that conclusion. So he didn't begin there. He reached there from the assumption of known physics.
So to say something is "proved mathematically" is not to give it a special "less-than-fallible" status. But it is to give it its due as a good explanation. And the full explanation as to WHY the "principle" is to be regarded as a "law of physics" isn't just any old "good explanation", because it's an explanation not only in terms of natural language but also largely in terms of a mathematical deductive system. So it gets a special name: a proof.
Now, incidentally, the proof has a kind of transitive nature to it: on the assumption that quantum theory is true, any physical system can be simulated (to arbitrary precision) by a universal computer. But also: given the principle, all physical processes can be regarded as computations; that is to say, quantum theory satisfies the Turing Principle.
The significance of the proof (and so the principle) for the nature of personhood (and thus "cognition") is that any physical system (and that includes us, in the form of our brains and minds) can be simulated by a computer, or to "arbitrary precision" by a quantum computer. This does not mean the human brain is a quantum computer (many of us guess it is not, because the human brain is warm and hence noisy, an environment quantum computers appear not to like); we guess the human brain is just a classical computer. But it's a classical computer running a special kind of software. Whatever the case, if a quantum computer (or indeed just a classical computer, but this is beside the point) can simulate a working human brain, then it will be a working human brain, just made out of other stuff. It will be computing: performing the physical processes a brain does. Nothing spooky, nothing requiring new physics. And so it will be running a mind. It will have a mind, and thus it will be a person.
Perhaps take a moment to consider again that sentence above. It is rather a profound claim. Not only does it take the "spookiness" out of the "what is a mind?" question, it regards so-called "Artificial General Intelligence", or AGI, as a person, and thus as having the full legal rights and moral status of a person. Anything less is genuinely racism. Removing the spookiness also removes caveats about this being merely an "analogy". That "the brain is a computer" is demonstrably the case, for the reasons stated above; it is no analogy. It is mainstream physics. The mind is a kind of software: it's what brains do. It's the abstract software running on the brain. A mind in a human brain already is a simulation: it is simulating the reality delivered to it by the sense data that it interprets. So a computer that is able to simulate a mind really is a mind.

Minds are abstract things. This is important because a quantum computer, or any computer, that simulates, let's say, a bullet has not created an actual bullet. Simulated bullets are not real physical bullets. There's a difference there. If a computer gamer is playing "Call of Duty" and shooting bullets from a gun, neither the gun nor the bullets are real. This should go without saying. But minds are not physical. They're abstract. So simulating them "to arbitrary precision" is to create them in reality. It's rather more akin to the person at the warehouse doing stocktake and adding up all the products by hand. They have many sums to calculate. If they do the calculation by hand, with pen and paper or by using an abacus, that's one thing: it's a real calculation. But if they then take the calculation and use a computer to do it, they are in a real sense "simulating" the action of the abacus or the ink-and-paper calculation. In either case it's a real calculation, and one is not more or less "real" than the other for being done with the hands or with a computer.
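As a toy illustration of the stocktake analogy (this sketch is entirely mine): a program that simulates column-by-column, abacus-style addition, carries and all, is still performing a real addition. The simulated calculation and the "direct" one yield the same real result.

```python
def abacus_add(x, y):
    """Simulate column-by-column addition with carries, the way an
    abacus (or a pen-and-paper sum) works, rather than invoking the
    machine's adder on the whole numbers at once."""
    result, carry, place = 0, 0, 1
    while x or y or carry:
        column = (x % 10) + (y % 10) + carry   # add one column of beads
        carry, digit = divmod(column, 10)      # carry the tens
        result += digit * place
        place *= 10
        x //= 10
        y //= 10
    return result

# Simulated or not, the sum is equally real:
print(abacus_add(1234, 5678))  # 6912
print(1234 + 5678)             # 6912
```

The point is not the code but the equivalence: the "simulation" of the abacus is not a lesser kind of calculation, it is a calculation.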
So we (humans, in the form of David Deutsch and anyone else who wants to try) can prove the "Turing Principle", and we can notice that, as humans are made of atoms, the principle applies to us just as it applies to any other physical system in reality. We can be simulated. But more than that, we are computers too. And more than that: we are much more than computers. For more on that, see here: http://www.bretthall.org/physics-and-learning-styles.html or here: http://www.bretthall.org/alien-intelligence.html
I've sometimes been told the principle is "just an assumption". It isn't. It's been proved. We are told time and time again that it's an analogy. It isn't. It's been proved. That doesn't mean it's "infallibly the case"; it just means it's a conclusion, mathematically derived from what is already known about physics. If I were asked "How is it proved?" I'd be unable to do better than the paper itself. And that runs to around 17 pages and has, to date, won the author a number of prestigious prizes in physics, so it's not something that can be easily summarized in a blog, nor on Twitter. So I refer the reader to the original paper.
One of those times above is from the 2014 Edge Question "What scientific idea is ready for retirement?", where the author argues for retiring "The brain is a computer". But actually this thesis seems very poorly subscribed to: if one does a cursory Google search for "brain is a computer" AND "neuroscience", we get stuff like this: faculty.washington.edu/chudler/bvc.html, and that's for kids. Now, I am unfamiliar with the present state of actual neuroscience and the professional literature there, but if popular accounts are anything to go by, there is still a strain of mystical thinking lurking there. The truth is, the brain is mysterious, but not mystical. We know it must be a computer of some sort, given the Turing Principle, but we don't have even the first clue as to what the software is that is running on that computer: that, so far as we know, entirely unique creative software that generates consciousness, the experience of free will and, most importantly, new explanations. We know there must be some code that can be captured in an algorithm. We just don't know what it is. That, of course, is another story.
(*) Note that "It's been proved" on the assumption that quantum theory is true. So of course if quantum theory is refuted, then the proof is worthless. Like any proof, the soundness (the truth of the conclusion) is only as good as the premisses one begins with. Now, I can imagine scenarios where quantum theory is, technically, refuted, but the newer, improved, deeper theory has quantum theory as a limiting case, in which case the proof may still indeed be valid.
(1) Note that what the principle is called is a matter of some confusion. David Deutsch refers to the principle as the "Turing Thesis" in his original paper and the "Turing Principle" elsewhere. Some, like the mathematicians Roger Penrose and Robin Gandy, have insisted that Alonzo Church conjectured (guessed!) the same thing, and so have called the same principle the "Church-Turing Principle"; still others have suggested it be called the "Church-Turing-Deutsch" principle, as David actually proved the conjecture beginning with quantum theory as a premise. The upshot was that computer science then became a branch of physics, because computers were no longer the ideal mathematical objects supposed by Turing (or Church, for that matter) but rather real physical objects that obeyed the physical laws known as quantum physics. And of course they must, because all computers are made of matter and not Platonic Ideals.
(2) Note that this is not the way the principle is put in the original paper, found here: http://www.daviddeutsch.org.uk/wp-content/deutsch85.pdf (I have changed the phrasing). The original formulation is "every finitely realizable physical system can be perfectly simulated by a universal model computing machine operating by finite means". David knows better than most that "perfectly" isn't correct and, though I cannot find it, I recall a tweet exchange between himself and the quantum physicist Michael Nielsen on exactly this point.