Quantum Computing and Statistical Significance


In my article Nobel Prize and Statistical Significance, I showed examples of p-values and statistical significance being used at the highest levels of science, despite criticisms that these approaches are bad for science. In this article, I will show another example of p-values, statistical significance, and hypothesis testing being used for some great science: in the field of quantum computing.

In basic terms, a quantum computer can solve certain problems much faster than today's classical computers can. How much faster? In Google's article Quantum Supremacy Using a Programmable Superconducting Processor, they link to the Nature paper, which in turn links to the supplementary information used in the paper. In the article, they write:

"We developed a new 54-qubit processor, named 'Sycamore', that is comprised of fast, high-fidelity quantum logic gates, in order to perform the benchmark testing. Our machine performed the target computation in 200 seconds, and from measurements in our experiment we determined that it would take the world's fastest supercomputer 10,000 years to produce a similar output."

On to the p-values and statistical significance. In the supplementary information, we read:

A critic may say, "Yes, but they did this science in spite of frequentism," or "They could have used another method to get similar results." Both of these objections are, oddly enough, appeals to counterfactual logic (what could have happened but didn't), the very same type of reasoning frequentism uses with p-values and that critics claim to dislike.
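To make the counterfactual point concrete: a p-value is itself a counterfactual quantity, the probability, under a null hypothesis, of seeing data at least as extreme as what was actually observed. The following is a minimal sketch of that idea using a toy coin-flip example (this is an illustration of frequentist logic in general, not Google's actual analysis, and all numbers here are invented for the example):

```python
import random

random.seed(0)

# Toy setup (hypothetical numbers): we observe 60 heads in 100 flips
# and test H0: the coin is fair (p = 0.5).
observed_heads = 60
n_flips = 100
n_sims = 100_000

# Simulate the null hypothesis: what *could* have happened under a
# fair coin, over many hypothetical repetitions of the experiment.
at_least_as_extreme = 0
for _ in range(n_sims):
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    if heads >= observed_heads:
        at_least_as_extreme += 1

# The one-sided p-value is the fraction of those counterfactual
# experiments that produced a result at least as extreme as ours.
p_value = at_least_as_extreme / n_sims
print(f"one-sided p-value ~ {p_value:.4f}")
```

The simulation loop is exactly the "what could have happened but didn't" reasoning the critics object to: the observed data are compared against a reference distribution of outcomes that were never actually seen.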

In my opinion, this paper, along with thousands of other results, shows that scientists find p-values, the language and concepts of statistical significance, sampling, and so on very useful for doing science.

Thanks for reading.

If you enjoyed any of my content, please consider supporting it in a variety of ways: