$\newcommand{\dede}[2]{\frac{\partial #1}{\partial #2} }
\newcommand{\dd}[2]{\frac{d #1}{d #2}}
\newcommand{\divby}[1]{\frac{1}{#1} }
\newcommand{\typing}[3][\Gamma]{#1 \vdash #2 : #3}
\newcommand{\xyz}[0]{(x,y,z)}
\newcommand{\xyzt}[0]{(x,y,z,t)}
\newcommand{\hams}[0]{-\frac{\hbar^2}{2m}(\dede{^2}{x^2} + \dede{^2}{y^2} + \dede{^2}{z^2}) + V\xyz}
\newcommand{\hamt}[0]{-\frac{\hbar^2}{2m}(\dede{^2}{x^2} + \dede{^2}{y^2} + \dede{^2}{z^2}) + V\xyzt}
\newcommand{\ham}[0]{-\frac{\hbar^2}{2m}(\dede{^2}{x^2}) + V(x)}
\newcommand{\konko}[2]{^{#1}\space_{#2}}
\newcommand{\kokon}[2]{_{#1}\space^{#2}} $
# Content
$\newcommand{\L}{\mathcal L}$
$\newcommand{\lrangle}[1]{\langle #1 \rangle}$
## Repetition: Approximation overview
## Repetition: Light-Matter Interaction
### Dipole approximation
We will see more details next week, but for now I can show you that for many cases:
$j(-k) = \frac{p}{m}$
Noting that (with $[p,x] = -i\hbar$):
$$[p^{2},x] = p^{2}x - xp^{2} = p^{2}x - (px - [p,x])p = p^{2}x - pxp - i\hbar p$$
$$= p^{2}x - p(px - [p,x]) - i\hbar p = -2i\hbar p$$
Noting further that $[H,x] = \frac{1}{2m}[p^{2},x]$ (the potential commutes with $x$), we get $[H,x] = -\frac{i\hbar}{m}p$, i.e. $\frac{p}{m} = \frac{i}{\hbar}[H,x]$.
We can then rewrite the transition operator as $\vec x \cdot \vec e$
Because $x$ is odd, we will see that only odd $\to$ even and even $\to$ odd transitions will be allowed.
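The commutator identity $[p^{2},x] = -2i\hbar p$ can be checked numerically. A minimal sketch (my own illustration, not from the lecture), using truncated harmonic-oscillator ladder matrices with $\hbar = m = \omega = 1$; the identity is exact away from the truncation boundary, so we compare only the upper-left block:

```python
import numpy as np

N = 12
# Truncated annihilation operator: a|n> = sqrt(n)|n-1>
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
x = (a + a.T) / np.sqrt(2)          # position (hbar = m = omega = 1)
p = 1j * (a.T - a) / np.sqrt(2)     # momentum

comm = p @ p @ x - x @ p @ p        # [p^2, x]
# Truncation only corrupts the last few rows/cols; compare the safe block:
print(np.allclose(comm[:8, :8], (-2j * p)[:8, :8]))  # True
```

The block restriction is needed because the cut-off at $N$ states spoils matrix products that pass through the highest levels; entries involving only low-lying states are exact.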
## Symmetry
### Commutator tricks:
#### General
$$[A + B, C] = [A,C] + [B,C]$$
#### Restricted to $[A,[A,B]] = 0$
$$[A^{n},B] = n A^{n-1}[A,B]$$
This looks a bit like a derivative of $A$...
Consider a function $f(x)$
##### Concept question
What can we say about $[f(A),B]$?
We can Taylor expand this function: $f(x)=\sum\limits_{k}^{\infty}\dede{^{k}f}{x^{k}}(0) \frac{x^{k}}{k!}$
Consider $[f(A), B] = [\sum\limits_{k}^{\infty} \dede{^{k}f}{x^{k}}(0) \frac{A^{k}}{k!}, B]= \sum\limits_{k}^{\infty}\dede{^{k}f}{x^{k}}(0) \frac{1}{k!} [ A^{k}, B] =\sum\limits_{k}^{\infty}\dede{^{k}f}{x^{k}}(0) \frac{k A^{k-1}}{k!} [A, B]$
Compare with:
$f'(x) = \sum\limits_{k}^{\infty}\dede{^{k}f}{x^{k}}(0) \frac{k x^{k-1}}{k!}$
Thus we get:
$[f(A), B] = f'(A)[A,B]$
This relation is especially useful for $p,x$ (since $[x,p] = i\hbar$ is a constant and always commutes).
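As a quick sanity check (my own example, not from the lecture): the nilpotent shift matrix $S$ and the number-like diagonal $D$ satisfy $[S,[S,D]] = 0$, so the derivative rule applies, e.g. with $f(A) = e^{A}$:

```python
import numpy as np
from scipy.linalg import expm

N = 6
S = np.diag(np.ones(N - 1), k=-1)        # shift: S|n> = |n+1>
D = np.diag(np.arange(N, dtype=float))   # D|n> = n|n>

comm = lambda A, B: A @ B - B @ A
print(np.allclose(comm(S, comm(S, D)), 0))  # True: the rule applies
# [f(S), D] = f'(S) [S, D]  with f = exp, so f'(S) = expm(S)
print(np.allclose(comm(expm(S), D), expm(S) @ comm(S, D)))  # True
```

Here $[S,D] = -S$, so both sides come out to $-S\,e^{S}$.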
But take care with rotations!
Because (fixed $i \neq j$, no sum over $i$):
$$[\sigma_{i},[\sigma_{i}, \sigma_{j}]] = [\sigma_{i}, 2i\varepsilon_{ijk}\sigma_{k}] = 2i\varepsilon_{ijk} \cdot 2i \varepsilon_{ikm}\sigma_{m} = 4 \varepsilon_{ikj} \varepsilon_{ikm}\sigma_{m} = 4\delta_{jm} \sigma_{m} = 4\sigma_{j} \neq 0$$
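The failure for rotations can be checked directly (my own check): the nested Pauli commutator does not vanish, so the derivative trick $[f(A),B] = f'(A)[A,B]$ does not apply:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])

comm = lambda A, B: A @ B - B @ A
nested = comm(sx, comm(sx, sy))     # [sigma_x, [sigma_x, sigma_y]]
print(np.allclose(nested, 0))       # False: derivative trick fails here
print(np.allclose(nested, 4 * sy))  # True
```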
### Lie Theory repetition
You might remember that we originally introduced Lie Theory to simplify the treatment of continuous symmetries.
We had the typical representation theory problem:
- I have a system with some symmetries. The operations on the object interact in complicated (group-structured) ways.
- I want to work with an easier, more tangible object that retains that structure (because I want the matrices to act not only as matrices, but also to retain the symmetry)
In finite group theory this is just the representation. The representation maps group elements to matrices.
In Lie Theory we found that you can represent a Lie Group (a group with a manifold structure) via its Lie Algebra.
Specifically we had:
$\mathfrak g = \operatorname{Lie}(G) = \{ X \mid e^{Xt} \in G \ \forall t \in \mathbb R \}$
The Lie Algebra consists of the exponents (generators) that lead to the group elements.
From this definition we also see how we can obtain the element of the Lie Algebra.
If we take the derivative and evaluate at $t=0$ we get:
$$\dd{}{t} e^{Xt}|_{t=0} = X e^{Xt}|_{t=0} = X$$
In some sense $X$ is "the velocity" of $e^{Xt}$. This will be important for interpreting similar structures in QM.
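To illustrate (my own example): finite-differencing the curve $e^{Xt}$ at $t=0$ for the $z$-rotation generator of $\mathfrak{so}(3)$ recovers $X$ itself:

```python
import numpy as np
from scipy.linalg import expm

# Generator of rotations about the z axis (an element of so(3))
X = np.array([[0., -1., 0.],
              [1.,  0., 0.],
              [0.,  0., 0.]])

eps = 1e-6
velocity = (expm(X * eps) - np.eye(3)) / eps  # d/dt e^{Xt} at t = 0
print(np.allclose(velocity, X, atol=1e-5))    # True
```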
A visual way to interpret this is the following:
> Image of tangent plane
We can pick any velocity vector on the tangent plane of the Lie Group (the Lie Algebra).
This illustrates the nice linear structure of the Lie Algebra (vectors can be added!)
#### Summary
The Lie Algebra allows us to produce elements of the Lie Group via exponentiation. The Lie Algebra acts as a type of "velocity" (meaning that if $t$ in our formula is actual time the elements of the Group will change proportional to $X$)
Because $X$ can be more than a number, that "velocity" also has a "direction"
### Rotations and Vector operators
One typical source of complications in physics is rotations. They have a very complicated group structure (they don't even commute!)
But because rotations form a manifold we can use Lie theory to understand them.
A rotation $R \in SO(3)$ acts on a state as follows:
$\ket\phi \to U(R)\ket\phi$
Where $U(R) \in End(\mathcal H)$ i.e. a linear map of states to states
We see that $U$ is a representation of $SO(3)$ in our Hilbert space. (Remember the definition of a representation: $\varphi: G \to GL(V)$)
We know from MMP II that representations of $SO(3)$ can be obtained from the Lie algebra $\mathfrak{su}(2) \cong \mathfrak{so}(3)$ (with $SU(2)$ the double cover of $SO(3)$), so we can write:
$U(R) = \exp(-i \theta (n_{x}\hat{J}_{x} + n_{y}\hat{J}_{y} + n_{z}\hat{J}_{z})) = \exp(-i\theta \vec n \cdot \hat{\vec{J}})$
Note that $\hat J_{x,y,z}$ are Hermitian operators (of the correct dimensions for the vector $\ket \psi$)
This means that any 3d rotation can be represented by combining the three "rotation basis elements" (generators) from the Lie Algebra.
Example:
For a two-level system the correct $J$ are built from the Pauli matrices, $\hat J_{k} = \frac{\sigma_{k}}{2}$:
$U(R) = \exp(-i \frac{\theta}{2} (n_{x} \sigma_{x} + n_{y}\sigma_{y} + n_{z}\sigma_{z}))$
(note that the $i$ factor comes from the fact that $su(2) = \operatorname{span}\{ i\sigma_{1,2,3}\}$)
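A numerical check (my own, with the spin-1/2 convention $\hat J_{k} = \sigma_{k}/2$): the resulting matrix is unitary with determinant 1 (so it lies in $SU(2)$) and matches the closed half-angle form:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

theta, n = 0.7, np.array([0.0, 0.6, 0.8])      # axis n with |n| = 1
nsigma = n[0] * sx + n[1] * sy + n[2] * sz
U = expm(-0.5j * theta * nsigma)

print(np.allclose(U.conj().T @ U, np.eye(2)))  # True: unitary
print(np.isclose(np.linalg.det(U), 1))         # True: in SU(2)
# closed form: since (n.sigma)^2 = 1, U = cos(theta/2) I - i sin(theta/2) n.sigma
print(np.allclose(U, np.cos(theta / 2) * np.eye(2)
                     - 1j * np.sin(theta / 2) * nsigma))  # True
```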
#### Velocity and Noether's theorem
We saw previously that we can interpret the Lie Algebra as containing the "velocity vectors" for the Group.
In our case we can interpret $J$ as the velocity of Rotation. This is why we call them the _Angular Momentum Operators_!
We also remember that in classical mechanics we used Noether's theorem to relate conserved quantities to continuous symmetries. Lie Theory hints at the same statement: we again relate a symmetry to a quantity. But what does it mean for that quantity to be conserved?
To make that last connection we can remember the BCH (Baker-Campbell-Hausdorff) formula:
$$e^{tX}e^{tY} = \exp ( tX + tY + \frac{t^{2}}{2} [X,Y] + \mathcal O(t^{3}))$$
Which for two Lie Algebra elements with $[X,Y] = 0$ gives:
$e^{tX}e^{tY}= \exp(tX + tY + 0) = \exp(tY + tX) = e^{tY}e^{tX}$
This means that $[X,Y] = 0$ implies that the corresponding group elements commute. This legitimizes the name of **the commutator**
From this we can directly see that if $[H,X] = 0$, then $e^{itH}e^{isX}e^{-itH} = e^{isX}$ for all $t$: time evolution leaves $X$ untouched, and thus $X$ is a conserved quantity
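A minimal two-level illustration (my own, $\hbar = 1$): conjugating by the time-evolution operator leaves a commuting observable invariant, but not a non-commuting one:

```python
import numpy as np
from scipy.linalg import expm

sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

t = 1.3
H = sz                                   # toy Hamiltonian
evolve = lambda X: expm(1j * t * H) @ X @ expm(-1j * t * H)

print(np.allclose(evolve(sz), sz))  # True:  [H, sz] = 0  -> conserved
print(np.allclose(evolve(sx), sx))  # False: [H, sx] != 0 -> not conserved
```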
#### Action of angular momentum
Consider $\hat{\vec V} = \begin{pmatrix} \hat V_{x} \\ \hat V_{y} \\ \hat V_{z} \end{pmatrix}$
We want this thing to also rotate as a vector. So we consider:
$\bra \phi \vec{\hat V} \ket \phi \to \bra \phi U^{\dagger}(R) \vec{\hat V} U(R) \ket \phi$
This should in the end look like rotating the individual components:
$\bra \phi \hat V_{k} \ket \phi \to \bra \phi U^{\dagger}(R) \hat V_{k} U(R) \ket \phi =^{!} \sum\limits_{l} R_{kl} \bra\phi \hat V_{l} \ket\phi$
Note how $U \in End(\mathcal H)$ while $R \in End(\mathbb R^{3})$
The above holds for every state and componentwise, so we get the operator identity:
$$U(R)^{\dagger} \hat V_{k} U(R) =^{!} \sum\limits_{l} R_{kl} \hat V_{l} $$
If an operator satisfies this we call it a vector operator.
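We can verify this defining relation in the smallest example (my own check, assuming the spin-1/2 convention $\hat J_{k} = \sigma_{k}/2$, with $\hat{\vec V} = \vec\sigma$ and a rotation about $z$):

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [sx, sy, sz]

theta = 0.9
U = expm(-0.5j * theta * sz)                 # U(R) for rotation about z
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])                    # the same rotation on R^3

# U^dagger V_k U  =  sum_l R_kl V_l  for every component k
ok = all(
    np.allclose(U.conj().T @ sigma[k] @ U,
                sum(R[k, l] * sigma[l] for l in range(3)))
    for k in range(3)
)
print(ok)  # True: the Pauli vector is a vector operator
```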
#### When does this fail
- The thing is that simply rearranging the components of a vector doesn't automatically rotate the contents. You could imagine, for example, an object like $\begin{pmatrix} \hat x \\ \hat y \\ \hat p_{z} \end{pmatrix}$.

The reason why this is not actually a vector operator is that the component operators themselves also need to transform correctly
#### Lie theory:
We first want to figure out how $R_{kl}$ can be produced via Lie Algebra.
For that we remember that $R$ is a $3 \times 3$ matrix as a function of $\vec n$ and $\theta$
We can now remember that the Lie Algebra is found by taking the tangent "velocity" vector of a Group. ie.
$\dd{}{\theta} R(\vec n, \theta) |_{\theta =0}$
We see that $R(\vec n, \theta)$ is a curve that rotates around $\vec n$; the derivative is its velocity at the identity.
Thinking about how this tangent vector acts on an arbitrary vector $\vec w$ we find
$\dd{}{\theta} R(\vec n, \theta) |_{\theta =0} \vec w = \vec n\times \vec w$
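This cross-product formula can be checked numerically (my own example): the finite-difference derivative of the rotation about $\vec n$ at $\theta = 0$ acts as $\vec n \times {}$:

```python
import numpy as np
from scipy.linalg import expm

n = np.array([0.0, 0.6, 0.8])               # rotation axis, |n| = 1
K = np.array([[0, -n[2], n[1]],             # generator: K w = n x w
              [n[2], 0, -n[0]],
              [-n[1], n[0], 0]])

w = np.array([1.0, 2.0, 3.0])
eps = 1e-6
R = expm(K * eps)                           # R(n, eps): rotation by eps about n
deriv = (R @ w - w) / eps                   # d/dtheta R(n, theta) w at theta = 0
print(np.allclose(deriv, np.cross(n, w), atol=1e-5))  # True
```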
We now consider:
$$\dd{}{\theta} U(R)^{\dagger} \hat{\vec V} U(R) = \dd{}{\theta} e^{i\theta \vec n \cdot \hat J}\hat{\vec V} e^{-i\theta \vec n \cdot \hat J}$$
$$= (i\vec n \cdot \hat{\vec J}) e^{i\theta \vec n \cdot \hat J}\hat{\vec V} e^{-i\theta \vec n \cdot \hat J} + e^{i\theta \vec n \cdot \hat J}\hat{\vec V}(-i\vec n \cdot \hat{\vec J}) e^{-i\theta \vec n \cdot \hat J} $$
at $\theta = 0$
$$= (i\vec n \cdot \hat{\vec J}) \hat{\vec V} + \hat{\vec V}(-i\vec n \cdot \hat{\vec J}) = i [\vec n \cdot \vec{\hat J}, \vec{\hat V}] $$
We have now found two ways to obtain the Lie Algebra element producing our rotation. By setting them equal we find:
$\vec n \times \vec{\hat{V}} = i[\vec n \cdot \vec{\hat J}, \vec{\hat{V}}]$
Stripping off the arbitrary axis vector $\vec n$ and rewriting the cross product with the Levi-Civita symbol we get:
$$[\hat J_{l}, \hat V_{k}] = \sum\limits_{m}i\varepsilon_{lkm}\hat V_{m}$$
We now decompose the elements of $V$ in a new way:
$V_{0} = V_{z}$
$V_{+1} = -\frac{1}{\sqrt{2}}(V_{x}+ i V_{y})$
$V_{-1} = \frac{1}{\sqrt{2}}(V_{x}- i V_{y})$
This is useful, because we now have:
$[J_{z}, V_{0}] = 0$
$[J_{z}, V_{\pm 1}] = \pm V_{\pm 1}$
i.e.
$[J_{z}, V_{q}] = q V_{q}$
Note: What we did here is choose $z$ as the quantization axis. The reassignment we did for $V_{\pm 1}$ was not an accident, but is a direct result of diagonalizing the action of $J_{z}$ (via the commutator) on the components $V_{k}$
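These ladder-like commutators can be verified in the smallest example (my own illustration: spin-1/2 with $J_{z} = \sigma_{z}/2$, $\hat{\vec V} = \vec\sigma$, $\hbar = 1$):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

Jz = sz / 2                                 # spin-1/2 angular momentum
V0 = sz                                     # spherical components of V = sigma
Vp = -(sx + 1j * sy) / np.sqrt(2)           # V_{+1}
Vm = (sx - 1j * sy) / np.sqrt(2)            # V_{-1}

comm = lambda A, B: A @ B - B @ A
print(np.allclose(comm(Jz, V0), 0))         # True: [Jz, V0] = 0
print(np.allclose(comm(Jz, Vp), Vp))        # True: [Jz, V+1] = +V+1
print(np.allclose(comm(Jz, Vm), -Vm))       # True: [Jz, V-1] = -V-1
```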
### Selection rules
We can now put everything together:
#### z
From the Dipole Approximation we found that the relevant matrix elements are:
$\bra {n',j',m'} \hat{\vec x} \ket{n,j,m}$
Which is a vector operator, thus we can do the above decomposition into $V_{-1,0,+1}$
Consider:
$\bra {n',j',m'} [\hat J_{z}, \hat{x_{0}}] \ket{n,j,m} = \bra {n',j',m'} 0 \ket{n,j,m} = 0$
But also
$\bra {n',j',m'} [\hat J_{z}, \hat{x_{0}}] \ket{n,j,m} = \bra {n',j',m'} J_{z}\hat x_{0} - \hat x_{0}J_{z} \ket{n,j,m} = (m'-m)\bra {n',j',m'}\hat{x_{0}} \ket{n,j,m}$
Now the second expression is automatically zero if $m' = m$, but for $m' \neq m$ the matrix element $\bra {n',j',m'}\hat{x_{0}} \ket{n,j,m}$ needs to be zero (to fulfill the first equation)
We thus find for $m \neq m'$:
$\bra {n',j',m'} \hat{x_{0}} \ket{n,j,m} = 0$
So for a perturbation oscillating in the $z$ direction, we cannot have any transitions that change $m$!
Or in other words: $z$-polarized light conserves the quantum number $m$
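This can also be checked directly on spherical harmonics (my own check, assuming only that the angular part of $\hat x_{0}$ is $\propto \cos\theta$): the integral $\int Y_{l'm'}^{*}\cos\theta\, Y_{lm}\, d\Omega$ vanishes unless $m' = m$:

```python
from sympy import conjugate, cos, sin, integrate, pi, simplify, symbols
from sympy.functions.special.spherical_harmonics import Ynm

theta, phi = symbols('theta phi', real=True)

def matrix_element(l2, m2, l1, m1):
    """Angular part of <l2, m2| cos(theta) |l1, m1>."""
    integrand = (conjugate(Ynm(l2, m2, theta, phi).expand(func=True))
                 * cos(theta)
                 * Ynm(l1, m1, theta, phi).expand(func=True)
                 * sin(theta))                # sin(theta) from d(Omega)
    return simplify(integrate(integrand, (theta, 0, pi), (phi, 0, 2 * pi)))

print(matrix_element(1, 1, 0, 0))   # 0       (m' != m: forbidden)
print(matrix_element(1, 0, 0, 0))   # nonzero (m' = m: allowed)
```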
#### $\pm$
We can do a similar argument for $V_{+}$ and $V_{-}$ and find that:
$\bra {n',j',m'} \hat{x_{\pm}} \ket{n,j,m} = 0$ for $m' \neq m \pm 1$
Thus, when interacting with a circularly polarized beam, we can only have transitions that change $m$ by $\pm 1$
#### Parity:
$PxP = -x$
Plugging in we get:
$\bra {n',j',m'} \hat{\vec x} \ket{n,j,m} = -\bra {n',j',m'} P\hat{\vec x}P \ket{n,j,m}$
Now we consider that $P\ket{n,j,m} = (-1)^{j}\ket{n,j,m}$, i.e. the state flips sign if $j$ is odd
From this it follows:
$j' + j$ even $\implies$ $\bra {n',j',m'} \hat{\vec x} \ket{n,j,m} = 0$
Thus we can only have transitions between odd and even, but not between even and even or odd and odd. Doing some additional representation theory, we can additionally find that the maximal jump is $\Delta j = \pm 1$
## Second quantization
We saw until now that the quantization of light was only inherited from the quantized matter.
It turns out that you also need to quantize light itself (but for different reasons)
We find that the energy inside of an EM field is given by:
$$H = \frac{1}{8\pi}\int d^{3}x \left( |E|^{2} + |B|^{2} \right)$$
This looks suspiciously like the integral over a harmonic oscillator at every point. (We could for example say that $E^2 = \frac{1}{2}m\omega^{2}x^{2}$ and $B^{2}=\frac{1}{2m}p^{2}$)
We will thus try to find a way to describe the electromagnetic field as a collection of harmonic oscillators, which we can then quantize.
For this we'll introduce the field operators, which are like creation & annihilation operators, but they also carry a position (so we can say _where_ we want to create and destroy a photon)
Note: we will find that it's actually easier to work in momentum space; the two pictures are related via Fourier transform
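As a preview sketch (my own illustration, not from the lecture): quantizing a single field mode of frequency $\omega$ as a harmonic oscillator (truncated ladder operators, $\hbar = 1$) gives the expected ladder spectrum $\omega(n + \frac{1}{2})$:

```python
import numpy as np

N, omega = 10, 2.0
# Truncated annihilation operator for one field mode: a|n> = sqrt(n)|n-1>
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
H = omega * (a.conj().T @ a + 0.5 * np.eye(N))   # single-mode Hamiltonian

# Photon-number states are eigenstates with energy omega * (n + 1/2)
print(np.allclose(np.sort(np.linalg.eigvalsh(H)),
                  omega * (np.arange(N) + 0.5)))  # True
```

The full field is then a sum of such Hamiltonians, one per mode $\vec k$ and polarization.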