Shankar QM - Chapter 1
1.1 - Linear Vector Spaces: Basics
n-dimensional Real Vector Space: $\mathbb{V}^n(R)$, with real scalars.
n-dimensional Complex Vector Space: $\mathbb{V}^n(C)$, with complex scalars.
Vectors in a given vector space are written as kets: $|V\rangle$.
1.2 - Inner Product Spaces
Definitions
Inner product is a map that assigns a scalar to a pair of vectors, written $\langle V|W\rangle$. It obeys:
- For complex vector spaces, swapping the two vectors conjugates the result (skew-symmetry): $\langle V|W\rangle = \langle W|V\rangle^*$
- The inner product of a vector with itself (magnitude) is non-negative: $\langle V|V\rangle \geq 0$
- The only vector with magnitude zero is the zero vector: $\langle V|V\rangle = 0 \iff |V\rangle = 0$
- Linearity on the ket term; the bra distributes over the ket: $\langle V|(a|W\rangle + b|Z\rangle) = a\langle V|W\rangle + b\langle V|Z\rangle$
An inner product space is a vector space equipped with an inner product.
If one tries to distribute the ket term on the bra term, the scalars come out conjugated (antilinearity): $\langle aV|W\rangle = a^*\langle V|W\rangle$.
Orthogonal vectors: $\langle V|W\rangle = 0$.
Norm of a vector: $|V| = \sqrt{\langle V|V\rangle}$.
Orthonormal Basis: all basis vectors have norm 1 and are mutually orthogonal, $\langle i|j\rangle = \delta_{ij}$.
Formula for inner product
Given $|V\rangle = \sum_i v_i|i\rangle$ and $|W\rangle = \sum_i w_i|i\rangle$ in an orthonormal basis, $\langle V|W\rangle = \sum_i v_i^* w_i$.
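As a quick numerical sketch of the component formula (the component values below are made up), numpy's `vdot` computes exactly $\sum_i v_i^* w_i$, conjugating its first argument:

```python
import numpy as np

# Components of |V> and |W> in some orthonormal basis (hypothetical values).
v = np.array([1 + 2j, 3 - 1j])
w = np.array([2 - 1j, 1 + 1j])

# <V|W> = sum_i v_i* w_i  -- np.vdot conjugates its first argument.
inner = np.vdot(v, w)

# Skew-symmetry check: <W|V> = <V|W>*
assert np.isclose(np.vdot(w, v), np.conj(inner))

# Norm: |V| = sqrt(<V|V>), always real and non-negative.
norm_v = np.sqrt(np.vdot(v, v).real)
print(inner, norm_v)
```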
1.3 - Dual Spaces and the Dirac Notation
The dual space of a vector space is the space of linear maps from the vectors to the scalars (co-vectors).
If the elements of the vector space are written as column vectors, the elements of the dual space are the corresponding row vectors.
There exists a canonical map between a space and its dual, given by the inner product: to each $|V\rangle$ corresponds the co-vector $\langle V|$.
In the context of QM, vectors are called kets and co-vectors are called bras.
Given a bra $\langle W|$ and a ket $|V\rangle$, their pairing $\langle W|V\rangle$ is the inner product.
1.3.1 - Expansion of Vectors in an Orthonormal Basis
Any vector can be expanded as $|V\rangle = \sum_i |i\rangle\langle i|V\rangle$; the components are $v_i = \langle i|V\rangle$.
1.3.2 - Adjoint Operation
Adjoint equations
If the ket $|V\rangle$ corresponds to the bra $\langle V|$, then the ket $a|V\rangle$ corresponds to the bra $\langle V|a^*$.
In braket notation it is allowed to place scalars inside the braket, signifying for kets $|aV\rangle = a|V\rangle$ and for bras $\langle aV| = a^*\langle V|$.
Adjoint expansion of kets and bras
The expansion $|V\rangle = \sum_i v_i|i\rangle$ has the adjoint $\langle V| = \sum_i v_i^*\langle i|$.
Gram-Schmidt Process
This process converts any basis to an orthonormal one.
Non-orthonormal basis: $|I\rangle, |II\rangle, \ldots$
Transforms to orthonormal basis: $|1\rangle, |2\rangle, \ldots$
- Rescale the first vector by its own length so it becomes a unit vector. This is the first unit vector: $|1\rangle = |I\rangle / |I|$.
- Subtract from the second vector its projection along the first unit vector, then rescale to obtain a unit vector: $|2\rangle \propto |II\rangle - |1\rangle\langle 1|II\rangle$.
- Subtract from the third vector its projections along the first and second unit vectors, then rescale to obtain a unit vector.
- Repeat, subtracting from each vector its projections onto all the unit vectors constructed before it.
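The steps above can be sketched in numpy (the input basis is a made-up example):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors (given as rows)."""
    basis = []
    for v in vectors:
        w = v.astype(complex)
        # Subtract the projection onto every unit vector found so far.
        for e in basis:
            w = w - np.vdot(e, w) * e
        # Rescale to unit length.
        basis.append(w / np.sqrt(np.vdot(w, w).real))
    return np.array(basis)

# Hypothetical non-orthonormal basis of R^3.
raw = np.array([[3.0, 0.0, 0.0], [1.0, 2.0, 0.0], [1.0, 1.0, 1.0]])
onb = gram_schmidt(raw)

# Check: the Gram matrix of the result is the identity.
print(np.round(onb @ onb.conj().T, 10))
```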
Schwarz and Triangle Inequalities
Schwarz Inequality: $|\langle V|W\rangle| \leq |V|\,|W|$.
Triangle Inequality: $|V + W| \leq |V| + |W|$.
1.4 - Subspaces
Subspace: a subset of a vector space that is itself a vector space (closed under addition and scalar multiplication).
Sum of subspaces: given two subspaces $\mathbb{V}_i$ and $\mathbb{V}_j$, their sum $\mathbb{V}_i \oplus \mathbb{V}_j$ contains all elements of both together with all their linear combinations.
1.5 - Linear Operators
Definitions
An operator $\Omega$ maps each vector of the space into another vector: $\Omega|V\rangle = |V'\rangle$.
The action of a linear operator is fixed by linearity: $\Omega(a|V\rangle + b|W\rangle) = a\,\Omega|V\rangle + b\,\Omega|W\rangle$.
And similarly, for bras: $(\langle V|a + \langle W|b)\Omega = a\,\langle V|\Omega + b\,\langle W|\Omega$.
Examples
Identity operator ($I$): leaves every vector unchanged, $I|V\rangle = |V\rangle$.
Rotation operator ($R$): rotates every vector by a fixed angle about a fixed axis.
Effects on basis
The action of an operator on a basis determines its action on every vector: $\Omega|V\rangle = \sum_i v_i\,\Omega|i\rangle$.
Commutation
In general, $\Omega\Lambda \neq \Lambda\Omega$; the commutator is defined as $[\Omega, \Lambda] = \Omega\Lambda - \Lambda\Omega$.
Identities: $[\Omega, \Lambda\Theta] = \Lambda[\Omega, \Theta] + [\Omega, \Lambda]\Theta$ and $[\Omega\Lambda, \Theta] = \Omega[\Lambda, \Theta] + [\Omega, \Theta]\Lambda$.
Inverse
The inverse of $\Omega$, written $\Omega^{-1}$, satisfies $\Omega\Omega^{-1} = \Omega^{-1}\Omega = I$ (not every operator has one).
Distributing the inverse over a series of operations reverses the order: $(\Omega\Lambda)^{-1} = \Lambda^{-1}\Omega^{-1}$.
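Both commutator identities are easy to verify numerically, with random matrices standing in for the operators (a generic sketch, nothing specific to QM):

```python
import numpy as np

rng = np.random.default_rng(0)

def comm(a, b):
    """Commutator [a, b] = ab - ba."""
    return a @ b - b @ a

# Random 3x3 matrices standing in for Omega, Lambda, Theta.
O, L, T = (rng.standard_normal((3, 3)) for _ in range(3))

# [Omega, Lambda Theta] = Lambda [Omega, Theta] + [Omega, Lambda] Theta
lhs1 = comm(O, L @ T)
rhs1 = L @ comm(O, T) + comm(O, L) @ T

# [Omega Lambda, Theta] = Omega [Lambda, Theta] + [Omega, Theta] Lambda
lhs2 = comm(O @ L, T)
rhs2 = O @ comm(L, T) + comm(O, T) @ L

print(np.allclose(lhs1, rhs1), np.allclose(lhs2, rhs2))
```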
1.6 - Matrix Elements of Linear Operators
When a basis is chosen, an operator is represented by a matrix with elements $\Omega_{ij} = \langle i|\Omega|j\rangle$.
Linear operators acting on kets
If $|V'\rangle = \Omega|V\rangle$, the components transform as $v'_i = \sum_j \Omega_{ij} v_j$: the matrix acts on the column of components.
The columns correspond to the components of the transformed basis vectors.
Linear operators acting on bras
Similarly, if $\langle V'| = \langle V|\Omega$, the row of components is multiplied by the matrix from the left: $v'_j{}^* = \sum_i v_i^*\,\Omega_{ij}$.
Projection Operators
The projection operator for the ket $|i\rangle$ is $\mathbb{P}_i = |i\rangle\langle i|$.
Completeness Relation
If one projects $|V\rangle$ onto each basis vector and sums, one recovers $|V\rangle$: $\sum_i |i\rangle\langle i|V\rangle = |V\rangle$, hence $\sum_i |i\rangle\langle i| = I$.
Matrix Elements of Projection Operators
Using the definition of the matrix elements of an operator, we obtain $(\mathbb{P}_k)_{ij} = \langle i|k\rangle\langle k|j\rangle = \delta_{ik}\delta_{kj}$: a single 1 on the diagonal and zeros elsewhere.
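The completeness relation can be checked directly by building the projectors as outer products (a sketch in $\mathbb{C}^3$ with the standard basis):

```python
import numpy as np

# Orthonormal basis of C^3: the standard basis kets |0>, |1>, |2>.
basis = [np.eye(3)[:, i] for i in range(3)]

# Projection operator P_i = |i><i| as an outer product.
projectors = [np.outer(e, e.conj()) for e in basis]

# Each projector is idempotent: P_i P_i = P_i.
for P in projectors:
    assert np.allclose(P @ P, P)

# Completeness: sum_i |i><i| = I
total = sum(projectors)
print(np.allclose(total, np.eye(3)))
```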
Matrices Corresponding to Product of Operators
The matrix representation of a product of operators is the product of the matrices of each factor: $(\Omega\Lambda)_{ij} = \sum_k \Omega_{ik}\Lambda_{kj}$.
Adjoint of an Operator
Similarly to how a scalar is conjugated when the adjoint is taken, an operator is replaced by its adjoint: if $\Omega|V\rangle = |\Omega V\rangle$, then $\langle \Omega V| = \langle V|\Omega^\dagger$. In matrix terms, $(\Omega^\dagger)_{ij} = \Omega_{ji}^*$ (conjugate transpose).
Distribution of the adjoint over operators reverses the order: $(\Omega\Lambda)^\dagger = \Lambda^\dagger\Omega^\dagger$.
General Rule for Taking Adjoints of a Product
In a product involving operators, bras, kets and scalars, one must reverse the order of all factors and make the substitutions $\Omega \leftrightarrow \Omega^\dagger$, $|V\rangle \leftrightarrow \langle V|$, $a \leftrightarrow a^*$.
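The general rule can be spot-checked numerically: the adjoint of $a\,\Omega|V\rangle$ equals $a^*\langle V|\Omega^\dagger$ (random made-up values; rows and columns are both plain 1-D arrays here):

```python
import numpy as np

rng = np.random.default_rng(2)
O = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
v = rng.standard_normal(3) + 1j * rng.standard_normal(3)
a = 2.0 - 1.5j

# Adjoint of a * Omega |V>  is  a* <V| Omega^dagger:
lhs = (a * O @ v).conj()                 # componentwise adjoint of the ket
rhs = np.conj(a) * v.conj() @ O.conj().T # a* <V| Omega^dagger
print(np.allclose(lhs, rhs))
```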
Hermitian, Anti-Hermitian and Unitary Operators
An operator is Hermitian if $\Omega^\dagger = \Omega$.
An operator is anti-Hermitian if $\Omega^\dagger = -\Omega$.
The same way numbers can be decomposed into real and imaginary parts, every operator can be decomposed into Hermitian and anti-Hermitian parts: $\Omega = \frac{\Omega + \Omega^\dagger}{2} + \frac{\Omega - \Omega^\dagger}{2}$.
Unitary Operators
An operator $U$ is unitary if $UU^\dagger = U^\dagger U = I$.
The product of two unitary operators is also unitary. Additionally, the product is associative and every unitary operator has an inverse ($U^{-1} = U^\dagger$). Therefore, they form a group, named the unitary group U(n). Unitary operators are the generalization of rotations to complex vector spaces.
They preserve the inner product: $\langle UV|UW\rangle = \langle V|W\rangle$.
The columns of an $n \times n$ unitary matrix form an orthonormal set of vectors, and so do the rows.
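These properties can be demonstrated on a random unitary built via a QR decomposition (a standard trick, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)

# QR decomposition of a random complex matrix yields a unitary Q.
a = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
U, _ = np.linalg.qr(a)

# U U^dagger = I (columns, and hence rows, are orthonormal).
assert np.allclose(U @ U.conj().T, np.eye(3))

# Inner products are preserved: <Uv|Uw> = <v|w>
v = rng.standard_normal(3) + 1j * rng.standard_normal(3)
w = rng.standard_normal(3) + 1j * rng.standard_normal(3)
print(np.isclose(np.vdot(U @ v, U @ w), np.vdot(v, w)))
```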
1.7 - Active and Passive Transformations
Active Transformation
An active transformation changes the vectors themselves, mapping the vector space to itself: $|V\rangle \to U|V\rangle$.
Passive Transformation
A passive transformation leaves the vectors alone and is instead applied to the operators (equivalently, to the basis): $\Omega \to U^\dagger\Omega U$. Both pictures give the same matrix elements, since $\langle UV|\Omega|UW\rangle = \langle V|U^\dagger\Omega U|W\rangle$.
1.8 - The Eigenvalue Problem
Eigenvalue equation: $\Omega|V\rangle = \omega|V\rangle$; $|V\rangle$ is an eigenket of $\Omega$ with eigenvalue $\omega$.
Because of the linearity of the operator, if $|V\rangle$ is an eigenket, so is any multiple $a|V\rangle$, with the same eigenvalue.
We will not treat such multiples as different vectors for the eigenvalue problem.
Examples
Identity Operator
For the identity operator, $I|V\rangle = |V\rangle$:
- All vectors are eigenkets
- All eigenkets have eigenvalue 1
Projection Operator
For the projection operator on a vector, $\mathbb{P}_V = |V\rangle\langle V|$:
- Kets parallel to $|V\rangle$ are eigenkets with eigenvalue 1
- Kets perpendicular to $|V\rangle$ are eigenkets with eigenvalue 0
Quarter-turn Rotation Operator
For the operator $R$ that rotates by $\pi/2$ about the third axis ($R|1\rangle = |2\rangle$, $R|2\rangle = -|1\rangle$, $R|3\rangle = |3\rangle$):
- Vectors parallel to $|3\rangle$ are eigenkets with eigenvalue 1.
- Vectors parallel to $|1\rangle + i|2\rangle$ are eigenkets with eigenvalue $-i$.
- Vectors parallel to $|1\rangle - i|2\rangle$ are eigenkets with eigenvalue $+i$.
The Characteristic Equation and the Solution to the Eigenvalue Problem
From $(\Omega - \omega I)|V\rangle = 0$, a nonzero solution exists only if $\det(\Omega - \omega I) = 0$. This is the characteristic equation; its roots are the eigenvalues, and substituting each root back gives the corresponding eigenvectors.
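As a numerical check, solving the eigenvalue problem for the quarter-turn rotation matrix recovers the eigenvalues $1$, $\pm i$ (a numpy sketch):

```python
import numpy as np

# Rotation by pi/2 about the third axis: R|1> = |2>, R|2> = -|1>, R|3> = |3>.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])

# Roots of det(R - wI) = 0, found numerically.
eigvals, eigvecs = np.linalg.eig(R)

# Sort by imaginary part so the order is -i, 1, +i.
order = np.argsort(eigvals.imag)
print(np.round(eigvals[order], 6))
```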
Eigenvector Notation
Eigenvectors may be labeled by their eigenvalue. For example, the eigenvector belonging to eigenvalue $\omega$ is written $|\omega\rangle$.
Properties
The eigenvalues of a Hermitian operator are real: $\omega = \omega^*$.
If $\Omega$ is Hermitian, eigenvectors belonging to different eigenvalues are orthogonal.
For every Hermitian operator there exists (at least) one orthonormal basis of its eigenvectors; in this basis the operator is diagonal, with its eigenvalues on the diagonal.
If the characteristic equation of $\Omega$ has $n$ distinct roots, this eigenbasis is unique (up to scale factors).
If a root is repeated $m$ times, the corresponding eigenvectors span an $m$-dimensional eigenspace, and any orthonormal basis of that eigenspace may be chosen.
Degeneracy
For degenerate eigenvalue problems, a ket in a degenerate eigenspace is denoted $|\omega, i\rangle$, where $i$ indexes the orthonormal vectors chosen within that eigenspace.
The eigenvalues of a unitary operator are complex numbers of unit modulus: $u = e^{i\theta}$.
The eigenvectors of a unitary operator are mutually orthogonal. (Assuming no degeneracy)
Diagonalization of Hermitian Matrices
If, starting from an orthonormal basis, one changes to the basis generated by the eigenvectors of a Hermitian operator $\Omega$, the matrix of $\Omega$ becomes diagonal, with its eigenvalues on the diagonal.
Every Hermitian matrix may be diagonalized by a unitary change of basis.
Or in terms of a passive transformation: $U^\dagger\Omega U$ is diagonal, where the columns of $U$ are the orthonormal eigenvectors of $\Omega$.
If one is asked to find the unitary matrix that diagonalizes a given Hermitian matrix, this is equivalent to solving its eigenvalue problem. The general case of matrix diagonalization (non-Hermitian matrices) cannot always be achieved by a unitary change of basis.
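A minimal numerical sketch (the Hermitian matrix is a made-up example): `np.linalg.eigh` returns real eigenvalues and a unitary $U$ whose columns are orthonormal eigenvectors, and $U^\dagger\Omega U$ comes out diagonal:

```python
import numpy as np

# A Hermitian matrix (illustrative values).
H = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])
assert np.allclose(H, H.conj().T)

# eigh: real eigenvalues w, unitary U with orthonormal eigenvector columns.
w, U = np.linalg.eigh(H)
assert np.allclose(U.conj().T @ U, np.eye(2))

# Passive transformation: U^dagger H U is diagonal with the eigenvalues.
D = U.conj().T @ H @ U
print(np.round(D, 10))
```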
Simultaneous Diagonalization of Two Hermitian Operators
Theorem: If $\Omega$ and $\Lambda$ are two commuting Hermitian operators, there exists a basis of common eigenvectors in which both are diagonal.
If at least one of the operators is non-degenerate: every eigenvector of the non-degenerate operator is automatically an eigenvector of the other.
Classical Mechanics Example
Initial conditions: ($x_1(0)$, $x_2(0)$), with both masses initially at rest.
Physically Intuitive Basis (1, 2)
This problem can be expressed as the following matrix equation: $|\ddot{x}(t)\rangle = \Omega|x(t)\rangle$.
One must think of the state of the system as a vector that is a function of time: $|x(t)\rangle$, with components $x_1(t), x_2(t)$.
The most intuitive basis is the one where the unit vectors correspond to unit displacements of both masses. This basis is denoted by $|1\rangle, |2\rangle$.
From now on, $\Omega$ denotes the matrix of coefficients in this basis.
From Newtonian mechanics, one can derive: $\ddot{x}_1 = -\frac{2k}{m}x_1 + \frac{k}{m}x_2$, $\ddot{x}_2 = \frac{k}{m}x_1 - \frac{2k}{m}x_2$.
And in general, the following is true: $\ddot{x}_i = \sum_j \Omega_{ij}\,x_j$, with $\Omega$ constant in time.
The difficulty in solving this is the coupling between the two differential equations.
Performing a Change of Basis
The trick is to change basis to one where $\Omega$ is diagonal, so the equations decouple.
Because $\Omega$ is Hermitian (real and symmetric), such a basis of orthonormal eigenvectors exists.
In this basis, the equation becomes: $\ddot{x}_I = -\omega_I^2\,x_I$, $\ddot{x}_{II} = -\omega_{II}^2\,x_{II}$.
Which is much simpler to solve: each equation is an independent harmonic oscillator.
Finding the new basis
Finding the eigenvalues of $\Omega$ from the characteristic equation $\det(\Omega - \lambda I) = 0$.
Solving this equation, we obtain: $\lambda = -k/m$ and $-3k/m$, i.e. normal frequencies $\omega_I = \sqrt{k/m}$ and $\omega_{II} = \sqrt{3k/m}$, with normalized eigenvectors $|I\rangle = \frac{1}{\sqrt{2}}(1, 1)$ and $|II\rangle = \frac{1}{\sqrt{2}}(1, -1)$.
Initial condition in new basis
The initial conditions ($x_1(0)$, $x_2(0)$) transform to the new basis as $x_I(0) = \langle I|x(0)\rangle$ and $x_{II}(0) = \langle II|x(0)\rangle$.
Solving the system of equations in the new basis
The above matrix equation reduces to: $x_I(t) = x_I(0)\cos(\omega_I t)$ and $x_{II}(t) = x_{II}(0)\cos(\omega_{II} t)$ (for zero initial velocities).
Transforming back to the $|1\rangle, |2\rangle$ basis gives the result we wanted.
The Propagator
The result of the above problem can be rewritten as a matrix equation:
The middle matrix is called the propagator. By multiplying the initial state vector with the propagator, we obtain the final state vector. The propagator is independent of the initial state.
This relation can be expressed in vectors as: $|x(t)\rangle = U(t)|x(0)\rangle$.
The propagator only depends on the eigenvalues and eigenvectors, therefore the problem can be solved in this manner:
- Solve the eigenvalue problem for $\Omega$
- Construct the propagator $U(t) = \sum_i |\omega_i\rangle\langle\omega_i|\cos(\omega_i t)$
- The solution is $|x(t)\rangle = U(t)|x(0)\rangle$
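The three-step recipe can be sketched numerically for the coupled-mass example with $k/m = 1$ (this value and the initial state are illustrative choices):

```python
import numpy as np

# Coupled-mass matrix with k/m = 1:
#   x1'' = -2 x1 + x2,  x2'' = x1 - 2 x2
Omega = np.array([[-2.0, 1.0], [1.0, -2.0]])

# Step 1: eigenvalue problem. Eigenvalues are -w^2 (here -3 and -1).
lam, V = np.linalg.eigh(Omega)
freqs = np.sqrt(-lam)

# Step 2: the propagator U(t) = sum_i |w_i><w_i| cos(w_i t).
def propagator(t):
    return V @ np.diag(np.cos(freqs * t)) @ V.T

# Step 3: apply it to the initial state (unit displacement of mass 1, at rest).
x0 = np.array([1.0, 0.0])
x_t = propagator(2.0) @ x0
print(x_t)
```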
The Normal Modes
This equation has a simpler form in the basis of eigenvectors $|I\rangle, |II\rangle$.
In this basis, if the system has initial state $|I\rangle$ or $|II\rangle$, it remains in that state for all time, simply oscillating at the corresponding frequency.
These are called the normal modes, and they correspond to the columns of the propagator.
For this example, if the system starts in either $|I\rangle$ or $|II\rangle$, the motions are:
- Both masses oscillating in tandem
- Both masses oscillating in opposite directions to one another
If the system starts in a linear combination of these two modes, it evolves as the same linear combination of the evolved normal modes.
1.9 - Functions of Operators and Related Concepts
c-number: (classical) refers to scalars, which always commute.
q-number: (quantum) refers to operators, which in general do not commute.
Functions of q-numbers are defined by a power series: $f(\Omega) = \sum_n a_n\Omega^n$.
Derivatives of Operators with Respect to Parameters
Definition: $\frac{d\Omega(\lambda)}{d\lambda} = \lim_{\Delta\lambda \to 0}\frac{\Omega(\lambda + \Delta\lambda) - \Omega(\lambda)}{\Delta\lambda}$.
Examples
Geometric Series of Operator
$(I - \Omega)^{-1} = \sum_{n=0}^{\infty}\Omega^n$, valid when the series converges.
Exponential of operator
$e^\Omega = \sum_{n=0}^{\infty}\frac{\Omega^n}{n!}$, which converges for any $\Omega$.
Exponential of imaginary hermitian operator
If $\Omega$ is Hermitian, the eigenvalues of $e^{i\Omega}$ are $e^{i\omega}$, all of unit modulus.
And it's also a unitary operator: $(e^{i\Omega})^\dagger e^{i\Omega} = e^{-i\Omega}e^{i\Omega} = I$.
Derivative of Exponential
$\frac{d}{d\lambda}e^{\lambda\Omega} = \Omega\,e^{\lambda\Omega}$.
Differential Equation
Therefore the equation $\frac{d}{d\lambda}|V(\lambda)\rangle = \Omega|V(\lambda)\rangle$ is solved by $|V(\lambda)\rangle = e^{\lambda\Omega}|V(0)\rangle$.
Difference between c-numbers and q-numbers
For q-numbers the following is generally true: $e^\Omega e^\Lambda \neq e^{\Omega + \Lambda}$.
Because $\Omega$ and $\Lambda$ need not commute, the cross terms of the two power series cannot be rearranged as they can for ordinary numbers.
1.10 - Generalization to Infinite Dimensions
To generalize to vector spaces of infinite dimension, we need to redefine the inner product as an integral: $\langle f|g\rangle = \int f^*(x)\,g(x)\,dx$.
Basis Vectors
To define basis vectors for this space, they must obey the following properties. For two given basis vectors $|x\rangle$ and $|x'\rangle$: $\langle x|x'\rangle = \delta(x - x')$ (normalization to the Dirac delta rather than to unity).
This is in order to satisfy the general completeness relation: $\int |x\rangle\langle x|\,dx = I$.
Derivative of Dirac Delta Function
If one replaces the delta function by its derivative, one obtains the object $\delta'(x - x') = \frac{d}{dx}\delta(x - x')$, defined by how it acts inside integrals.
Operating on a function, it picks out the derivative: $\int \delta'(x - x')\,f(x')\,dx' = \frac{df}{dx}$.
Theta function
Definition: $\theta(x - x') = 0$ for $x < x'$ and $1$ for $x > x'$.
Related to the delta function by: $\frac{d}{dx}\theta(x - x') = \delta(x - x')$.
Operators in Infinite Dimensions
Operators act on functions to give another function: $\Omega f(x) = \tilde{f}(x)$.
They can be represented by an infinite-dimensional "matrix" with continuous indices, $\Omega_{xx'} = \langle x|\Omega|x'\rangle$.
Differential Operator
Defined as: $D|f\rangle$ has $x$-basis components $\langle x|D|f\rangle = \frac{df(x)}{dx}$.
Hermitian operator in infinite dimensions
Contrary to the finite-dimensional case, symmetry of the matrix elements alone is not enough: the surface terms from integration by parts must also vanish. $D$ itself is anti-Hermitian; $K = -iD$ is Hermitian provided the boundary terms vanish.
The condition is: $f^*(x)g(x)$ evaluated at the endpoints of the interval must vanish (e.g. functions that vanish at infinity, or periodic functions).
Hilbert Spaces
The space of functions that can be normalized to unity or to the Dirac delta is the physical Hilbert space.
K Operator
Eigenvectors of $K = -iD$: $K|k\rangle = k|k\rangle$.
By solving $-i\frac{d\psi_k}{dx} = k\,\psi_k(x)$, we obtain $\psi_k(x) = A e^{ikx}$.
Normalizing to the Dirac delta ($\langle k|k'\rangle = \delta(k - k')$) fixes $A$: $\langle x|k\rangle = \frac{e^{ikx}}{\sqrt{2\pi}}$.
X Operator
The operator that has as eigenbasis the position kets: $X|x\rangle = x|x\rangle$.
The action on a function in the x basis is multiplication by $x$: $\langle x|X|f\rangle = x f(x)$.
There exists a duality between the K operator and X operator: each acts by multiplication in its own eigenbasis and as a derivative in the other.
| O = X or K | X Operator | K Operator |
|---|---|---|
| Action in X basis | $x f(x)$ | $-i\,\frac{df}{dx}$ |
| Action in K basis | $i\,\frac{dg}{dk}$ | $k\,g(k)$ |
| Matrix elements in X basis | $\langle x|X|x'\rangle = x\,\delta(x - x')$ | $\langle x|K|x'\rangle = -i\,\delta'(x - x')$ |
| Matrix elements in K basis | $\langle k|X|k'\rangle = i\,\delta'(k - k')$ | $\langle k|K|k'\rangle = k\,\delta(k - k')$ |
| Eigenvectors in X basis | $\delta(x - x_0)$ | $e^{ikx}/\sqrt{2\pi}$ |
| Eigenvectors in K basis | $e^{-ikx_0}/\sqrt{2\pi}$ | $\delta(k - k_0)$ |
Passing from X to K basis and from K to X basis is equivalent to taking the Fourier transform and inverse Fourier transform of the components of the vector.
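This duality can be sketched with a discrete Fourier transform: differentiating a function in the x basis is the same as multiplying its Fourier components by $k$ (grid size and the Gaussian test function are illustrative choices):

```python
import numpy as np

# Periodic grid wide enough that the Gaussian effectively vanishes at the edges.
n = 256
x = np.linspace(-10, 10, n, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)

f = np.exp(-x**2 / 2)

# K = -i d/dx acts as multiplication by k in the k basis, so
# df/dx = i (K f) = IFFT( i k FFT(f) ).
df = np.fft.ifft(1j * k * np.fft.fft(f)).real
exact = -x * f                      # analytic derivative of the Gaussian
print(np.max(np.abs(df - exact)))   # spectral accuracy: essentially zero
```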
Commutator of X and K
X and K do not commute: $[X, K] = iI$.
Classical Mechanics Example: String
Consider a string from $x = 0$ to $x = L$, clamped at both ends, obeying the wave equation $\frac{\partial^2\psi}{\partial t^2} = \frac{\partial^2\psi}{\partial x^2}$ (wave velocity set to 1).
Reformulating the problem
By representing the state of the string using a vector, we write: $\langle x|\psi(t)\rangle = \psi(x, t)$, so the wave equation becomes $|\ddot\psi(t)\rangle = -K^2|\psi(t)\rangle$ with $K^2 = -\frac{\partial^2}{\partial x^2}$.
Solving eigenvalue problem
The eigenvalue equation is: $K^2|\psi\rangle = k^2|\psi\rangle$, i.e. $-\frac{d^2\psi}{dx^2} = k^2\psi$ with boundary conditions $\psi(0) = \psi(L) = 0$.
We label the eigenvectors as $|m\rangle$, with $\langle x|m\rangle = \sqrt{\frac{2}{L}}\sin\left(\frac{m\pi x}{L}\right)$ and $k_m = \frac{m\pi}{L}$, $m = 1, 2, 3, \ldots$
Obtaining the Propagator
Projecting the wave equation onto the $|m\rangle$ basis decouples it into independent oscillators, $\ddot\psi_m = -k_m^2\,\psi_m$, so the propagator has matrix elements $\langle m|U(t)|m'\rangle = \delta_{mm'}\cos(k_m t)$.
Finding solutions to the problem
When one wants to find the evolution of the string given $\psi(x, 0)$, expand the initial state in the $|m\rangle$ basis and apply the propagator: $\psi(x, t) = \sum_m \langle m|\psi(0)\rangle\,\sqrt{\frac{2}{L}}\sin\left(\frac{m\pi x}{L}\right)\cos\left(\frac{m\pi t}{L}\right)$.
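A minimal numerical sketch of this mode expansion, assuming $L = 1$, unit wave speed, and an illustrative grid and mode count:

```python
import numpy as np

L = 1.0
n_modes = 50
x = np.linspace(0, L, 201)
dx = x[1] - x[0]

def mode(m):
    """Normal mode <x|m> = sqrt(2/L) sin(m pi x / L)."""
    return np.sqrt(2 / L) * np.sin(m * np.pi * x / L)

def evolve(psi0, t):
    """psi(x,t) = sum_m <m|psi(0)> mode_m(x) cos(m pi t / L)."""
    psi_t = np.zeros_like(x)
    for m in range(1, n_modes + 1):
        c_m = np.sum(mode(m) * psi0) * dx   # <m|psi(0)> by quadrature
        psi_t += c_m * mode(m) * np.cos(m * np.pi * t / L)
    return psi_t

# A string shaped as the pure first mode just oscillates at frequency pi/L:
psi0 = mode(1)
psi_t = evolve(psi0, 0.25)
print(np.allclose(psi_t, psi0 * np.cos(np.pi * 0.25)))
```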