Control is a key component in turning science into technology. We often interface with parameters to steer a dynamical system to our liking. For example, an HVAC system attempts to maintain a certain room temperature by applying cooling or heating.

Quantum optimal control theory attempts to steer a quantum system in some desired way by finding optimal parameters or control fields (pulses) inside the system Hamiltonian. Typically, the control task is to prepare a specific quantum state or to realize logic gates in a quantum computer. At a fundamental level, quantum control theory is critical in realizing quantum technologies.

`Pulses.jl` is a tiny yet powerful library that implements numerical methods for *open-loop* quantum control (there is no feedback or measurement of a physical quantum device). The control problem is addressed by simulating the dynamics of a quantum system and then iteratively improving the value of a functional that encodes the desired outcome.

## How?

Suppose we are given a quantum system with drift Hamiltonian $H_d$ and some tunable parameters $\alpha_i(t)$ that interface with the system via control Hamiltonians $H_{ci}$:

$$ H(t) = H_d + \sum_i \alpha_i(t) H_{ci}. $$

Analogy: Imagine a rocket, where $H_d$ is analogous to the uncontrollable forces acting on the rocket, the $H_{ci}$ are thrusters that can apply force, and $\alpha_i(t)$ tells us how much fuel to inject into each thruster.
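As a language-agnostic sketch (in Python/NumPy rather than Julia, with a hypothetical single-qubit choice of Pauli-Z drift and Pauli-X control), a piecewise-constant version of $H(t)$ might look like:

```python
import numpy as np

# Hypothetical single-qubit operators: Pauli-Z drift, Pauli-X control.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

H_d = 0.5 * sz        # drift: the always-on part of the system
H_c = [sx]            # control Hamiltonians, one per tunable parameter

def H(alphas, k):
    """Total Hamiltonian on time slice k, where alphas[i][k] is the i-th
    piecewise-constant control amplitude on that slice."""
    return H_d + sum(a[k] * Hc for a, Hc in zip(alphas, H_c))

alphas = [np.array([0.3, -0.1, 0.5])]   # one example pulse over three slices
```

Discretizing each $\alpha_i(t)$ into constant values per time slice is the standard parameterization in GRAPE-style methods.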

The quantum evolution is then given by solving the Schrödinger equation

$$ U(t_f, t_i) = \mathcal{T} \exp\left(-i\int_{t_i}^{t_f} H(t) dt\right). $$

Analogy: This is somewhat like solving the equations of motion ($F = ma$), or the Euler–Lagrange equations, to simulate how the rocket will travel.
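Numerically, the time-ordered exponential is usually approximated by slicing time into short intervals on which $H$ is constant and multiplying the short-time propagators in order. A minimal sketch (NumPy/SciPy, not `Pulses.jl`'s actual implementation):

```python
import numpy as np
from scipy.linalg import expm

def propagator(H_slices, dt):
    """Approximate U(t_f, t_i) = T exp(-i ∫ H dt) by the ordered product
    exp(-i H_N dt) ... exp(-i H_1 dt) over piecewise-constant slices."""
    U = np.eye(H_slices[0].shape[0], dtype=complex)
    for Hk in H_slices:
        U = expm(-1j * Hk * dt) @ U   # later time slices act from the left
    return U
```

For a time-independent Hamiltonian the product collapses back to the familiar $e^{-iHt}$, which makes for an easy sanity check.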

We would like the evolution to generate a specific target unitary $U_\oplus(t_f, t_i)$, which we may be able to achieve by tuning the parameters $\alpha_i$.

Analogy: Our rocket's destination is Mars, i.e. our target state is: position = Mars, speed = 0.

The goal is to minimize the "error" between our desired unitary and the solved unitary, i.e. $\mathcal{F} = ||U - U_\oplus||_2^2$. Viewed as a function of our controllable parameters $\alpha_i$, this error defines a landscape which we may traverse, following the gradient downhill, to hopefully find a solution. Essentially, we have some update process which tunes the $j$-th iteration of $\alpha_i$ with respect to the gradient:

$$ \alpha_{i,j+1} = \alpha_{i,j} - \epsilon \frac{\partial \mathcal{F}}{\partial \alpha_{i,j}} $$

Et voilà, repeatedly performing the above update will eventually yield $\alpha_i$ that produce the desired target unitary!
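Putting the pieces together, the whole loop can be sketched as follows (NumPy/SciPy, for illustration only: the gradient is taken by finite differences rather than autodiff, the system is a hypothetical single qubit with a Pauli-X control, and the target is defined as the propagator of a reference pulse so that a perfect solution is reachable by construction):

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
dt, H_d = 0.5, 0.5 * sz

def propagate(alpha):
    """Piecewise-constant propagator for one Pauli-X control channel."""
    U = np.eye(2, dtype=complex)
    for a in alpha:
        U = expm(-1j * (H_d + a * sx) * dt) @ U
    return U

# Target defined via a reference pulse, so it is reachable by construction.
U_target = propagate(np.array([0.8, -0.3, 0.5, 0.1]))

def infidelity(alpha):
    """F = ||U(alpha) - U_target||_2^2 (squared Frobenius norm)."""
    return np.linalg.norm(propagate(alpha) - U_target) ** 2

def fd_gradient(alpha, h=1e-6):
    """Central finite differences; an autodiff engine replaces this in practice."""
    g = np.zeros_like(alpha)
    for k in range(len(alpha)):
        d = np.zeros(len(alpha)); d[k] = h
        g[k] = (infidelity(alpha + d) - infidelity(alpha - d)) / (2 * h)
    return g

alpha = np.array([1.2, -0.7, 0.9, -0.3])   # perturbed initial guess
f0 = infidelity(alpha)
for _ in range(800):
    alpha -= 0.1 * fd_gradient(alpha)       # alpha_{j+1} = alpha_j - eps * dF/dalpha
```

After the loop, the error should be far below its starting value, which is exactly the "traverse the landscape" picture above.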

This procedure is commonly referred to as Gradient Ascent Pulse Engineering (GRAPE), proposed by N. Khaneja et al. (we actually perform gradient descent because we want to minimize the error).

Of course, `Pulses.jl` does things a little better... First, an auto-differentiation engine is used to calculate the gradient, essentially for free. Second, the BFGS optimization scheme is used to approximate the Hessian, which helps prevent getting trapped in local minima.
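Both ideas can be imitated with off-the-shelf tools. In this sketch, SciPy's BFGS with numerically estimated gradients stands in for the autodiff-plus-BFGS combination (this is an illustration on a hypothetical single-qubit toy problem, not `Pulses.jl`'s actual code):

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
dt, H_d = 0.5, 0.5 * sz

def propagate(alpha):
    U = np.eye(2, dtype=complex)
    for a in alpha:
        U = expm(-1j * (H_d + a * sx) * dt) @ U
    return U

U_target = propagate(np.array([0.8, -0.3, 0.5, 0.1]))  # reachable by construction

def infidelity(alpha):
    return np.linalg.norm(propagate(alpha) - U_target) ** 2

# BFGS builds an approximate (inverse) Hessian from successive gradients,
# giving better-conditioned steps than plain gradient descent.
result = minimize(infidelity, x0=np.zeros(4), method="BFGS")
```

The quasi-Newton steps typically converge in far fewer iterations than the fixed-step update rule above.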

Feel free to check out the code -- it's only 100 lines ;)

## Example

Let's consider everyone's favorite superconducting qubit. For brevity, the overall system Hamiltonian is given by the Cooper-pair box, which provides both the drift Hamiltonian $H_d$ and the control Hamiltonians $H_{ci}$.

As an example, let the target unitary be the Hadamard gate: $$ U_\oplus = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}. $$
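A rough end-to-end sketch of this task in NumPy/SciPy (not `Pulses.jl` itself): the toy drift and Pauli-X control below merely stand in for the Cooper-pair-box Hamiltonians, the 10 slices of 0.5 time units are arbitrary choices, and the functional is a phase-insensitive variant of the error, since a global phase is unobservable:

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
U_target = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard

N, dt = 10, 0.5              # hypothetical pulse discretization
H_d = 0.5 * sz               # toy drift, standing in for the Cooper-pair box

def propagate(alpha):
    U = np.eye(2, dtype=complex)
    for a in alpha:
        U = expm(-1j * (H_d + a * sx) * dt) @ U
    return U

def infidelity(alpha):
    # 0 when U matches the target up to an (unobservable) global phase.
    U = propagate(alpha)
    return 1.0 - abs(np.trace(U_target.conj().T @ U)) / 2.0

rng = np.random.default_rng(0)
result = minimize(infidelity, x0=0.2 * rng.standard_normal(N), method="BFGS")
alpha_opt = result.x

# Applying the optimized pulse to |0> should give a 50/50 superposition.
psi = propagate(alpha_opt) @ np.array([1.0, 0.0], dtype=complex)
p0, p1 = abs(psi[0]) ** 2, abs(psi[1]) ** 2
```

The optimized amplitudes `alpha_opt` play the role of the pulses the library returns, and `p0`, `p1` are the final-state probabilities.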

We give this information to `Pulses.jl`, and after some Julia compilation it returns (a) the pulses and (b) how the probabilities of the states evolved:

The action of a Hadamard on $\ket{0}$ yields $\frac{1}{\sqrt{2}}\left(\ket{0} + \ket{1}\right)$, which measures 0 half of the time and 1 half of the time. This is exactly what is observed in the simulation! Within 10 ns of applying our pulse, the system realized the desired unitary operator!