Measurement Theory

Published 2020-05-04


Currently, great emphasis is placed on controlling quantum decoherence through quantum error correction, for both detected and undetected errors. But an interesting question is: can we limit errors from happening in the first place? Or can we engineer the evolution itself to overcome decoherence?

One idea, from Viola and Lloyd: dynamical decoupling. In a similar vein to classical bang-bang control, the quantum system is subjected to a sequence of impulsive unitary transformations so that the evolution is described by a modified Hamiltonian in which unwanted interactions are suppressed.
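The simplest instance of this is the spin echo: interleave the free evolution with fast X pulses so that an unwanted dephasing term refocuses. A minimal numpy sketch (the dephasing strength `delta` and interval `t` are arbitrary illustrative values, not from any particular experiment):

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)

delta = 0.7   # assumed (unknown-to-us) dephasing strength
t = 1.3       # free-evolution interval between pulses

# Unwanted evolution U = exp(-i * delta * Z * t), written out explicitly
U = np.diag([np.exp(-1j * delta * t), np.exp(1j * delta * t)])

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)  # |+> is sensitive to dephasing

# Free evolution for time 2t: |+> dephases
free = U @ U @ plus

# Bang-bang sequence U, X, U, X: since X Z X = -Z, we get X U X = U^dagger,
# so the full sequence X U X U = I refocuses the unwanted evolution exactly.
echo = X @ U @ X @ U @ plus

fidelity_free = abs(np.vdot(plus, free)) ** 2   # cos^2(2*delta*t), generally < 1
fidelity_echo = abs(np.vdot(plus, echo)) ** 2   # exactly 1, independent of delta
print(fidelity_free, fidelity_echo)
```

The key point is that the echo fidelity is 1 no matter what `delta` is — the pulses suppress the interaction without our needing to know its strength.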

What about randomization? Santos and Viola showed that a randomly generated sequence of unitary operations can overcome some of the limitations of regular dynamical decoupling: rapidly fluctuating interactions, and cases where the control pulses would be too long to implement.

What about mitigating the effect of a noisy channel? Correcting the dynamics can be done with quantum control, which operates through measurement and feedback. Is projective measurement optimal here? It turns out it is not: there is a non-projective measurement that achieves the best trade-off between gaining information about the noise and disturbing the system through measurement back-action. See Quantum Control of a Single Qubit.

So far these ideas live at the physics/quantum-control level. An idea closer to quantum computing: applying random sequences of Pauli operators. This idea is from Noise tailoring for scalable quantum computation via randomized compiling.
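The core trick behind randomized compiling is the Pauli twirl: averaging an error channel over conjugation by random Paulis converts a coherent error into a stochastic Pauli channel. A small sketch for a single qubit with a coherent over-rotation error (the angle `theta` is an arbitrary illustrative value):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, X, Y, Z]

theta = 0.3
# Coherent over-rotation error: U = exp(-i*theta*X) = cos(theta) I - i sin(theta) X
U = np.cos(theta) * I2 - 1j * np.sin(theta) * X

rho = np.array([[1, 0], [0, 0]], dtype=complex)  # |0><0|

# Pauli twirl of the channel rho -> U rho U^dagger:
# average P^dag U P rho P^dag U^dag P over all four Paulis
twirled = sum(P.conj().T @ U @ P @ rho @ P.conj().T @ U.conj().T @ P
              for P in paulis) / 4

# The coherent error has become a stochastic bit-flip channel:
# cos^2(theta) * rho + sin^2(theta) * X rho X
target = np.cos(theta) ** 2 * rho + np.sin(theta) ** 2 * (X @ rho @ X)
print(np.allclose(twirled, target))  # True
```

Stochastic Pauli noise is much easier to characterize and bound than coherent noise, which is why this tailoring helps scalability.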

Quantum Measurement Theory🔗

One of the many things that make quantum systems interesting is the effects and consequences of measurement.

Traditionally the description of measurement in quantum mechanics is in terms of projective measurements: if you have some observable $\hat{\Lambda}$, it can be diagonalized in terms of its eigenvalues $\lambda$ and projection operators $\hat{\Pi}_\lambda$: $\hat{\Lambda} = \sum_\lambda \lambda \hat{\Pi}_\lambda$ $$\newcommand{\ket}[1]{\left|{#1}\right\rangle} \newcommand{\bra}[1]{\left\langle{#1}\right|}$$

The consequences are that for a pure state $\ket{\psi}$, the probability of obtaining some $\lambda$ is $p_\lambda = \bra{\psi} \hat{\Pi}_\lambda \ket{\psi}$, and the conditional state is $\ket{\psi_\lambda} = \hat{\Pi}_\lambda \ket{\psi} / \sqrt{p_\lambda}$. For state matrices, the probability is given by $p_\lambda = \mathrm{Tr}[\rho \hat{\Pi}_\lambda]$ and the conditional state is $\rho_\lambda = \frac{\hat{\Pi}_\lambda \rho \hat{\Pi}_\lambda}{p_\lambda}$.

But what about the unconditional state? I.e. what if one makes the measurement but ignores the result? This results in the state matrix: $$\rho(T) = \sum_\lambda p_\lambda \rho_\lambda = \sum_\lambda \hat{\Pi}_\lambda \rho \hat{\Pi}_\lambda$$
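These formulas are easy to check numerically. A minimal sketch for a qubit measured in the Z basis (the amplitudes 0.8/0.2 are arbitrary illustrative values); note how the unconditional state has lower purity than the input, anticipating the entropy-increase point below:

```python
import numpy as np

# Projectors for a Z-basis measurement of a qubit
P0 = np.array([[1, 0], [0, 0]], dtype=complex)   # Pi onto |0>
P1 = np.array([[0, 0], [0, 1]], dtype=complex)   # Pi onto |1>

psi = np.array([np.sqrt(0.8), np.sqrt(0.2)], dtype=complex)  # pure state
rho = np.outer(psi, psi.conj())

# p_lambda = <psi| Pi_lambda |psi>
p0 = np.real(psi.conj() @ P0 @ psi)
p1 = np.real(psi.conj() @ P1 @ psi)

# Conditional state Pi|psi>/sqrt(p) for the lambda = 0 outcome
psi0 = P0 @ psi / np.sqrt(p0)

# Unconditional state: measure, then ignore the result
rho_T = P0 @ rho @ P0 + P1 @ rho @ P1

purity_before = np.real(np.trace(rho @ rho))     # 1.0 (pure state)
purity_after = np.real(np.trace(rho_T @ rho_T))  # < 1: entropy has increased
print(p0, p1, purity_before, purity_after)
```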

I've previously discussed unconditional states with an example of quantum teleportation.

Projective measurement has an interesting property: in general, it is an entropy-increasing process. This is in sharp contrast to what happens classically, where the unconditional classical state after a non-disturbing measurement is identical to the state before the measurement.

Systems and meters🔗

Unfortunately, projective measurement is inadequate. For one, there will be classical noise due to the measuring apparatus. Another, more interesting, reason is that there are measurements in which the conditional state is not left in an eigenstate of the measured quantity. An example is photon counting, where the measurement leaves behind a vacuum state. The root of these problems is that measurement is usually not done on the system itself; rather, one observes the effect the system has on its environment.

So now we look at the combined system of our apparatus $\ket{\theta}$ and our system $\ket{\psi}$: $$\ket{\Psi} = \ket{\theta}\ket{\psi}$$

Some entangling procedure is done to couple the states: $$\ket{\Psi(T)} = \hat{U} \ket{\theta}\ket{\psi}$$

Now, with a projective measurement $\hat{\Pi}_r = \ket{r}\bra{r} \otimes \hat{I}$ on the apparatus, we obtain the conditioned state: $$ \ket{\Psi_r(T)} = \frac{\ket{r}\bra{r} \hat{U} \ket{\theta}\ket{\psi}}{\sqrt{p_r}}$$

The measurement disentangles the system and the apparatus, so we can write it as: $$\ket{\Psi_r(T)} = \ket{r} \frac{\hat{M}_r \ket{\psi}}{\sqrt{p_r}}$$

where $\hat{M}_r = \bra{r}\hat{U}\ket{\theta}$ is called the measurement operator. The probability distribution for the results is then $p_r = \bra{\psi}\hat{M}_r^\dagger \hat{M}_r \ket{\psi}$.

Now, because the system and apparatus are no longer entangled, everything can be viewed purely in terms of the measurement operators, e.g. the conditioned system state $\ket{\psi_r(T)} = \frac{\hat{M}_r \ket{\psi}}{\sqrt{p_r}}$.
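Here is a minimal sketch of such non-projective measurement operators for a qubit: a "weak" Z measurement whose two operators only partially distinguish $\ket{0}$ from $\ket{1}$ (the strength parameter `eta` is an assumed illustrative value). Note the completeness relation $\sum_r \hat{M}_r^\dagger \hat{M}_r = \hat{I}$, and that the conditioned state is nudged toward an eigenstate rather than fully collapsed:

```python
import numpy as np

# Weak Z measurement of strength eta: outcome 0 favors |0>, outcome 1 favors |1>
eta = 0.4
M0 = np.diag([np.sqrt(1 - eta), np.sqrt(eta)]).astype(complex)
M1 = np.diag([np.sqrt(eta), np.sqrt(1 - eta)]).astype(complex)

# Completeness: sum_r M_r^dagger M_r = I (so the p_r sum to 1)
assert np.allclose(M0.conj().T @ M0 + M1.conj().T @ M1, np.eye(2))

psi = np.array([1, 1], dtype=complex) / np.sqrt(2)   # |+>

# p_r = <psi| M_r^dagger M_r |psi>
p0 = np.real(psi.conj() @ M0.conj().T @ M0 @ psi)

# Conditioned state M_r|psi>/sqrt(p_r): biased toward |0>, but NOT an eigenstate
psi0 = M0 @ psi / np.sqrt(p0)
print(p0)                # 0.5
print(np.abs(psi0) ** 2) # [0.6, 0.4] — partial, not projective, collapse
```

Taking `eta -> 0` recovers the projective Z measurement; `eta = 0.5` gives an uninformative measurement that does not disturb the state at all.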

If we make just one measurement, the conditioned state $\ket{\psi_r}$ is not of great interest. But for a sequence of measurements (or an initially mixed state), the conditioned state matrix is: $$ \rho_r = \frac{\hat{M}_r \rho \hat{M}_r^\dagger}{p_r} = \frac{\mathcal{J}[\hat{M}_r] \rho}{p_r}$$

What if we ignored the results? Then the state would be: $$\rho(T) = \sum_r p_r \rho_r = \sum_r \mathcal{J}[\hat{M}_r] \rho$$

What this all tells us is that when performing a non-projective measurement, we have no guarantee that repeating the measurement will yield the same result. In fact, the final state may be completely unrelated to both the initial state and the results obtained.
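Photon counting, mentioned earlier, makes this concrete: detecting the photon destroys it, so a repeated measurement can never reproduce the first result. A sketch for a mode holding at most one photon (the initial amplitudes are arbitrary illustrative values):

```python
import numpy as np

# Photon counting on a mode with states |0> (vacuum) and |1> (one photon).
# A click destroys the photon: M_1 = |0><1| leaves the vacuum behind.
M0 = np.array([[1, 0], [0, 0]], dtype=complex)   # no click: |0><0|
M1 = np.array([[0, 1], [0, 0]], dtype=complex)   # click:    |0><1|
assert np.allclose(M0.conj().T @ M0 + M1.conj().T @ M1, np.eye(2))

psi = np.array([np.sqrt(0.3), np.sqrt(0.7)], dtype=complex)

# First measurement: a click occurs with probability 0.7 ...
p_click = np.real(psi.conj() @ M1.conj().T @ M1 @ psi)
post = M1 @ psi / np.sqrt(p_click)   # ... leaving the vacuum |0>

# Repeat the measurement: a second click never occurs —
# the conditioned state is not an eigenstate of the measured quantity.
p_click_again = np.real(post.conj() @ M1.conj().T @ M1 @ post)
print(p_click, p_click_again)  # 0.7 0.0
```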

Why this matters?🔗

Essentially this framework brings up the quantum non-demolition operator: we are able to derive the statistics of our observable $\hat{R}$ without it ever taking on one of the possible values of $r$. Basically, we have done 'measurement without collapse', through the viewpoint of 'correlations without correlata'.

Open Quantum Systems🔗

Decoherence can be described as a dynamical process, the simplest description being the master equation.

By studying open quantum systems, in some cases it is possible to monitor and control the system.

Errors and error correction🔗

The dynamics of quantum computing is reduced to a set of unitary operations applied at discrete times, with no dynamics in between. Historically, errors were modelled as occurring at discrete times with some probability. The detection and subsequent correction (stabilizer encoding) was then also applied at discrete times. This is not a very realistic scenario: errors often arise from continuous interactions with other systems; detection of an error takes time and can itself add extra noise; and it is hard to quantify how much error QEC can correct (there are lots of different research thresholds).

TODO: research and understand prior work in QEC using continuous feedback. TODO: understand continuous QEC without measurement.