Shankar QM - Chapter 1

1.1 - Linear Vector Spaces: Basics

n-dimensional real vector space: $\mathbb{V}^n(R)$
n-dimensional complex vector space: $\mathbb{V}^n(C)$

Vectors in a given vector space are represented using Dirac bra-ket notation:
$$|V\rangle = \sum_{i=1}^{n} v_i|i\rangle$$
Where $|i\rangle$ are vectors that form a basis for $\mathbb{V}^n$, and $v_i$ are scalars.

1.2 - Inner Product Spaces

Definitions

The inner product is a map taking two vectors $|V\rangle$ and $|W\rangle$ onto the scalars, denoted by:
$$\langle V|W\rangle : \mathbb{V} \times \mathbb{V} \to F$$
Where $F$ is the field of the vector space (either $R$ or $C$), which satisfies the following axioms:

  • $\langle V|W\rangle = \langle W|V\rangle^*$
    For complex vector spaces, swapping $|V\rangle$ and $|W\rangle$ conjugates the inner product (flips the sign of its imaginary part)


  • $\langle V|V\rangle \geq 0$
    The inner product of a vector with itself (the magnitude squared) is non-negative


  • $\langle V|V\rangle = 0 \iff |V\rangle = |0\rangle$
    The only vector with magnitude zero is the zero vector.


  • $\langle V|\left(a|W\rangle + b|Z\rangle\right) = a\langle V|W\rangle + b\langle V|Z\rangle$
    Linearity in the ket term. The bra distributes over the ket.

An inner product space is a vector space with an inner product.

If one instead distributes the ket term over the bra term:
$$\langle aW + bZ|V\rangle = a^*\langle W|V\rangle + b^*\langle Z|V\rangle$$
The scalars are conjugated (antilinearity in the first argument).

Orthogonal vectors: $\langle V|W\rangle = 0$
Norm of a vector: $|V| \equiv \sqrt{\langle V|V\rangle}$
Orthonormal basis: all basis vectors have norm 1 and are mutually orthogonal, $\langle i|j\rangle = \delta_{ij}$.

Formula for inner product

Given $|V\rangle = \sum_i v_i|i\rangle$ and $|W\rangle = \sum_j w_j|j\rangle$ the inner product must obey:
$$\langle V|W\rangle = \sum_i \sum_j v_i^*\,w_j\,\langle i|j\rangle$$

If the basis chosen was orthonormal, then $\langle i|j\rangle = \delta_{ij}$ and so:
$$\langle V|W\rangle = \sum_i v_i^*\,w_i$$

1.3 - Dual Spaces and the Dirac Notation

The dual space of a vector space $\mathbb{V}$, denoted $\mathbb{V}^*$, is the set of all linear functions from $\mathbb{V}$ to the scalars.

If the elements of $\mathbb{V}$ are column vectors, then the elements of $\mathbb{V}^*$ should map column vectors to scalars; therefore the elements of $\mathbb{V}^*$ are row vectors.

There exists a canonical map between $\mathbb{V}$ and $\mathbb{V}^*$ such that if a vector $|V\rangle$ is mapped to a co-vector denoted $\langle V|$, the application of $\langle V|$ to a vector $|W\rangle$ equals the inner product of $|V\rangle$ and $|W\rangle$, denoted $\langle V|W\rangle$.

In the context of QM, vectors are called kets and co-vectors are called bras.

Given $|V\rangle$ with components $v_i$ and $\langle V|$ with components $\bar{v}_i$, then for the above to be true:
$$\bar{v}_i = v_i^*$$
i.e. the row vector representing $\langle V|$ contains the complex conjugates of the components of the column representing $|V\rangle$.
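
As a small numerical illustration of the ket/bra correspondence (a sketch using NumPy; the vectors are arbitrary, not from the text):

```python
import numpy as np

# A ket represented by its components in an orthonormal basis (illustrative values)
ket_V = np.array([1.0 + 2.0j, 3.0 - 1.0j])

# The corresponding bra: conjugated components, used as a row vector
bra_V = ket_V.conj()

# Applying the bra to a ket gives the inner product <V|W>
ket_W = np.array([0.5j, 2.0])
print(bra_V @ ket_W)   # identical to np.vdot(ket_V, ket_W)
```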

1.3.1 - Expansion of Vectors in an Orthonormal Basis

Here, the vector is expanded in an orthonormal basis:
$$|V\rangle = \sum_i |i\rangle\langle i|V\rangle$$
where the components are $v_i = \langle i|V\rangle$, i.e. the inner product of each basis vector with $|V\rangle$.

1.3.2 - Adjoint Operation

Adjoint equations

If the ket $a|V\rangle$ corresponds to the column with components $a v_i$, then, because of the conjugation applied to the components of a bra, the corresponding bra is $a^*\langle V|$.

In bra-ket notation it is allowed to place scalars inside the bra-ket, signifying for kets:
$$|aV\rangle = a|V\rangle$$
and for bras:
$$\langle aV| = a^*\langle V|$$
More generally, an equation among kets:
$$a|V\rangle = b|W\rangle + c|Z\rangle + \cdots$$
This implies an adjoint equation:
$$a^*\langle V| = b^*\langle W| + c^*\langle Z| + \cdots$$

Adjoint expansion of kets and bras

The expansion
$$|V\rangle = \sum_i |i\rangle\langle i|V\rangle$$
can also be "adjointed" into:
$$\langle V| = \sum_i \langle V|i\rangle\langle i|$$
Notice how the inner product $\langle i|V\rangle$ is commuted to $\langle V|i\rangle$: because that term is a scalar in the sum, it has to be conjugated, which is equivalent to reversing the order of the factors in the inner product.

Gram-Schmidt Process

This process converts any basis to an orthonormal one.
Non-orthonormal basis: $|I\rangle$, $|II\rangle$, etc.
Transforms to orthonormal basis: $|1\rangle$, $|2\rangle$, etc.

  • Rescale the first vector by its own length so it becomes a unit vector. This will be the first orthonormal vector.
  • Subtract from the second vector its projection along the first orthonormal vector, and rescale the result to obtain a unit vector.
  • Subtract from the third vector its projections along the first and second orthonormal vectors, and rescale to obtain a unit vector.
  • Repeat, subtracting from each vector its projections along all the orthonormal vectors constructed before it (see the sketch below).
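
A minimal numerical sketch of the process (NumPy; the function name and sample vectors are illustrative, not from the text):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent kets (complex arrays)."""
    ortho = []
    for v in vectors:
        w = v.astype(complex)
        for e in ortho:
            w = w - np.vdot(e, w) * e          # subtract the projection <e|v>|e>
        ortho.append(w / np.linalg.norm(w))    # rescale to unit norm
    return ortho

basis = [np.array([3.0, 0.0, 0.0]),
         np.array([0.0, 1.0, 2.0]),
         np.array([0.0, 2.0, 5.0])]
for e in gram_schmidt(basis):
    print(e)
```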

Schwarz and Triangle Inequalities

Schwarz Inequality:
$$|\langle V|W\rangle| \leq |V|\,|W|$$

Triangle Inequality:
$$|V + W| \leq |V| + |W|$$

1.4 - Subspaces

Subspace: A subset of a vector space that also forms a vector space. A subspace of $\mathbb{V}$ labeled $i$, with dimensionality $n_i$, will be denoted $\mathbb{V}_i^{n_i}$.

Sum of subspaces: Given two subspaces $\mathbb{V}_i^{n_i}$ and $\mathbb{V}_j^{m_j}$, their sum $\mathbb{V}_i^{n_i} \oplus \mathbb{V}_j^{m_j}$ can be defined as the vector space containing all elements of both subspaces and all their linear combinations.

1.5 - Linear Operators

Definitions

An operator $\Omega$ is a linear function $\Omega: \mathbb{V} \to \mathbb{V}$.

The action of $\Omega$ on $|V\rangle$ can be represented as:
$$\Omega|V\rangle = |V'\rangle$$
Operators can also act on bras:
$$\langle V|\Omega = \langle V''|$$
Linear operators obey the following rules:
$$\Omega\left(a|V\rangle + b|W\rangle\right) = a\,\Omega|V\rangle + b\,\Omega|W\rangle$$

And similarly, for bras:
$$\left(\langle V|a + \langle W|b\right)\Omega = a\,\langle V|\Omega + b\,\langle W|\Omega$$

Examples

Identity operator ($I$): $I|V\rangle = |V\rangle$ and $\langle V|I = \langle V|$

Rotation operator ($R(\theta\hat{n})$): Rotates a vector by the angle $\theta$ about the axis parallel to the unit vector $\hat{n}$.

Effects on basis

The action of an operator on a basis:
$$\Omega|i\rangle = |i'\rangle$$
can be used to determine the effect on any vector:
$$\Omega|V\rangle = \Omega\sum_i v_i|i\rangle = \sum_i v_i\,\Omega|i\rangle = \sum_i v_i|i'\rangle$$

Commutation

In general, $\Omega\Lambda \neq \Lambda\Omega$. The commutator of $\Omega$ and $\Lambda$ is defined as:
$$[\Omega, \Lambda] \equiv \Omega\Lambda - \Lambda\Omega$$

Identities:
$$[\Omega, \Lambda\theta] = \Lambda[\Omega, \theta] + [\Omega, \Lambda]\theta$$
$$[\Omega\Lambda, \theta] = \Omega[\Lambda, \theta] + [\Omega, \theta]\Lambda$$

Inverse

The inverse of $\Omega$, denoted $\Omega^{-1}$, satisfies
$$\Omega\Omega^{-1} = \Omega^{-1}\Omega = I$$

Distributing the inverse over a product of operators reverses the order:
$$(\Omega\Lambda)^{-1} = \Lambda^{-1}\Omega^{-1}$$

1.6 - Matrix Elements of Linear Operators

When $\Omega$ acts on the basis we obtain the new vectors $\Omega|j\rangle$ as done in Effects on basis. The i-th component of each new vector in the original basis is $\langle i|\Omega|j\rangle$, which is equal to:
$$\Omega_{ij} = \langle i|\Omega|j\rangle$$
$\Omega_{ij}$ are the matrix elements of $\Omega$ in that basis.

Linear operators acting on kets

If $\Omega$ acts on $|V\rangle$ to give $|V'\rangle$ then the components of $|V'\rangle$ may be calculated:
$$v'_i = \langle i|V'\rangle = \langle i|\Omega|V\rangle = \sum_j \Omega_{ij}\,v_j$$
Which corresponds to the following matrix operation:
$$\begin{pmatrix} v'_1 \\ \vdots \\ v'_n \end{pmatrix} = \begin{pmatrix} \Omega_{11} & \cdots & \Omega_{1n} \\ \vdots & & \vdots \\ \Omega_{n1} & \cdots & \Omega_{nn} \end{pmatrix}\begin{pmatrix} v_1 \\ \vdots \\ v_n \end{pmatrix}$$

The columns of the matrix correspond to the components of the transformed basis vectors.
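
A quick numerical check of these relations (a sketch; the operator is an arbitrary quarter-turn rotation in the plane, not from the text):

```python
import numpy as np

basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
Omega = np.array([[0.0, -1.0],
                  [1.0,  0.0]])    # rotation by pi/2 in two dimensions

# Matrix elements recovered as Omega_ij = <i|Omega|j>
elements = np.array([[np.vdot(bi, Omega @ bj) for bj in basis] for bi in basis])
assert np.allclose(elements, Omega)

# v'_i = sum_j Omega_ij v_j is just the matrix-vector product
v = np.array([2.0, 3.0])
print(Omega @ v)
```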

Linear operators acting on bras

Similarly, if $\Omega$ acts on $\langle V|$ to give $\langle V''|$ the components may be calculated as:
$$v''^*_j = \sum_i v_i^*\,\Omega_{ij}$$
i.e. the row vector of conjugated components multiplies the matrix from the left.

Projection Operators

The projection operator for the normalized ket $|V\rangle$ is defined as:
$$P_V \equiv |V\rangle\langle V|$$
It operates on a ket $|W\rangle$ and "projects" it onto the ket $|V\rangle$:
$$P_V|W\rangle = |V\rangle\langle V|W\rangle$$

Completeness Relation

If one projects $|V\rangle$ onto each basis vector and then adds up all the projections, one ends up with $|V\rangle$ again:
$$\sum_i P_i|V\rangle = \sum_i |i\rangle\langle i|V\rangle = |V\rangle \quad\Longrightarrow\quad \sum_i |i\rangle\langle i| = I$$

Similarly, with bras:
$$\langle V|\sum_i |i\rangle\langle i| = \sum_i \langle V|i\rangle\langle i| = \langle V|$$

Matrix Elements of Projection Operators

Using the definition of the matrix elements of an operator, we obtain:
$$(P_i)_{kl} = \langle k|i\rangle\langle i|l\rangle = \delta_{ki}\,\delta_{il}$$
This corresponds to the matrix with zeros in all elements except the i-th element of its diagonal, which is 1.
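
The completeness relation can be checked numerically by summing the projectors onto the standard basis (an illustrative sketch):

```python
import numpy as np

n = 3
total = np.zeros((n, n), dtype=complex)
for i in range(n):
    e = np.zeros(n, dtype=complex)
    e[i] = 1.0
    P = np.outer(e, e.conj())   # |i><i|: zero everywhere except the (i, i) entry
    total += P

assert np.allclose(total, np.eye(n))   # sum_i |i><i| = I
```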

Matrices Corresponding to Product of Operators

The matrix representation of a product of operators is the product of the matrices of each factor:
$$(\Omega\Lambda)_{ij} = \langle i|\Omega\Lambda|j\rangle = \sum_k \langle i|\Omega|k\rangle\langle k|\Lambda|j\rangle = \sum_k \Omega_{ik}\Lambda_{kj}$$

Adjoint of an Operator

Similarly to how a scalar multiplying a ket is conjugated when the adjoint is taken:
$$a|V\rangle \longleftrightarrow \langle V|a^*$$
a ket acted on by $\Omega$ can be "adjointed" by taking the Hermitian conjugate, or adjoint, of the operator, denoted $\Omega^\dagger$:
$$\Omega|V\rangle \longleftrightarrow \langle V|\Omega^\dagger$$
Given a basis, the matrix elements of the adjoint may be calculated as:
$$(\Omega^\dagger)_{ij} = \Omega_{ji}^*$$

The matrix representing $\Omega^\dagger$ is the transpose conjugate of the matrix representing $\Omega$.

Distributing the adjoint over a product of operators reverses the order:
$$(\Omega\Lambda)^\dagger = \Lambda^\dagger\Omega^\dagger$$

General Rule for Taking Adjoints of a Product

When taking the adjoint of a product involving operators, bras, kets and scalars, one must reverse the order of all factors and make the substitutions $\Omega \leftrightarrow \Omega^\dagger$, $|V\rangle \leftrightarrow \langle V|$ and $a \leftrightarrow a^*$.

Hermitian, Anti-Hermitian and Unitary Operators

An operator is Hermitian if $\Omega^\dagger = \Omega$
An operator is anti-Hermitian if $\Omega^\dagger = -\Omega$

The same way numbers can be decomposed into real and imaginary parts by
$$a = \frac{a + a^*}{2} + \frac{a - a^*}{2}$$
operators can be decomposed into Hermitian and anti-Hermitian parts:
$$\Omega = \frac{\Omega + \Omega^\dagger}{2} + \frac{\Omega - \Omega^\dagger}{2}$$
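
A numerical sketch of the decomposition (an arbitrary random operator, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
Omega = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

H = (Omega + Omega.conj().T) / 2    # Hermitian part
A = (Omega - Omega.conj().T) / 2    # anti-Hermitian part

assert np.allclose(H, H.conj().T)      # H is Hermitian
assert np.allclose(A, -A.conj().T)     # A is anti-Hermitian
assert np.allclose(Omega, H + A)       # the decomposition recovers Omega
```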

Unitary Operators

An operator $U$ is unitary if $UU^\dagger = I$; this automatically implies $U^\dagger U = I$.

The product of two unitary operators is also unitary. Additionally, the product is associative, and every unitary operator has an inverse ($U^\dagger$). Therefore, they form a group, named the unitary group $U(n)$. They are the generalization of rotations in $\mathbb{V}^n(R)$ to $\mathbb{V}^n(C)$.

They preserve the inner product:
$$\langle UV|UW\rangle = \langle V|U^\dagger U|W\rangle = \langle V|W\rangle$$

The columns of a unitary matrix are column vectors that are orthonormal to each other.
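
These properties can be verified numerically (a sketch using a random unitary from SciPy, chosen only for illustration):

```python
import numpy as np
from scipy.stats import unitary_group

U = unitary_group.rvs(3, random_state=0)

assert np.allclose(U.conj().T @ U, np.eye(3))        # U†U = I
for i in range(3):                                    # columns are orthonormal
    for j in range(3):
        ip = np.vdot(U[:, i], U[:, j])
        assert np.isclose(ip, 1.0 if i == j else 0.0)

v, w = np.array([1.0, 2j, 0.0]), np.array([0.0, 1.0, 1j])
assert np.isclose(np.vdot(U @ v, U @ w), np.vdot(v, w))   # inner product preserved
```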

1.7 - Active and Passive Transformations

Active Transformation

An active transformation maps the vectors of the space to new vectors in the same space, $|V\rangle \to U|V\rangle$, while the operators are left unchanged.

Passive Transformation

A passive transformation, instead, leaves the vectors unchanged and is applied to the operators: $\Omega \to U^\dagger\Omega U$.

1.8 - The Eigenvalue Problem

Eigenvalue equation:
$$\Omega|V\rangle = \omega|V\rangle$$
Where $|V\rangle$ is the eigenket of $\Omega$ and $\omega$ its eigenvalue.

Because of the linearity of the operator, if $|V\rangle$ is an eigenket, then so is $c|V\rangle$.
We will not treat them as different vectors for the eigenvalue problem.

Examples

Identity Operator

For the identity operator $I$:
$$I|V\rangle = |V\rangle$$

  • All vectors are eigenkets
  • All eigenkets have eigenvalue 1

Projection Operator

For the projection operator $P_V = |V\rangle\langle V|$ of a normalized vector $|V\rangle$:

  • Kets parallel to $|V\rangle$ are eigenkets with eigenvalue 1
  • Kets perpendicular to $|V\rangle$ are eigenkets with eigenvalue 0

Quarter-turn Rotation Operator

For the quarter-turn rotation operator $R(\frac{\pi}{2}\hat{k})$:

  • Vectors parallel to $|3\rangle$ are eigenkets with eigenvalue 1
  • Vectors parallel to $|1\rangle - i|2\rangle$ are eigenkets with eigenvalue $i$.
  • Vectors parallel to $|1\rangle + i|2\rangle$ are eigenkets with eigenvalue $-i$.

The Characteristic Equation and the Solution to the Eigenvalue Problem

From $\Omega|V\rangle = \omega|V\rangle$ we obtain:
$$(\Omega - \omega I)|V\rangle = |0\rangle$$
Assuming the inverse of $(\Omega - \omega I)$ exists:
$$|V\rangle = (\Omega - \omega I)^{-1}|0\rangle$$
But there is no linear operator that transforms the null vector $|0\rangle$ into a nonzero vector $|V\rangle$. Therefore, for nontrivial eigenkets the inverse must not exist, and so:
$$\det(\Omega - \omega I) = 0$$
Using the Leibniz formula for determinants we can conclude that this is a polynomial equation in the variable $\omega$ with coefficients determined by $\Omega$:
$$P(\omega) = \sum_{m=0}^{n} c_m\,\omega^m = 0$$
$P(\omega)$ is the characteristic polynomial and the above equation is the characteristic equation. Its roots are the eigenvalues, independent of the basis.

Eigenvector Notation

Eigenvectors may be labeled by their eigenvalue. For example, for $\Omega$:
$$\Omega|\omega\rangle = \omega|\omega\rangle$$

This notation assumes that the eigenvalues are different for every vector, which only occurs if the characteristic polynomial doesn't have repeated roots (i.e. is non-degenerate).

Properties

The eigenvalues of a Hermitian operator are real:
If $\Omega$ is Hermitian, then $\omega = \omega^*$.

For every Hermitian operator there exists a basis consisting of its eigenvectors, which are orthonormal. In this basis the matrix representation of the operator is diagonal, with its eigenvalues on the diagonal.

If the characteristic equation of $\Omega$ is non-degenerate then there is only one orthonormal basis formed by its eigenvectors. If it's degenerate, then there are many such bases, since the eigenvectors of a repeated eigenvalue $\omega$ span an eigenspace.

If a root $\omega_i$ is repeated $m_i$ times, then there will be an $m_i$-dimensional eigenspace $\mathbb{V}^{m_i}_{\omega_i}$ whose vectors are all eigenvectors with eigenvalue $\omega_i$.

Degeneracy

For degenerate eigenvalue problems, a ket in a degenerate eigenspace is denoted $|\omega, \alpha\rangle$, where $\alpha$ labels the specific ket within the eigenspace.

The eigenvalues of a unitary operator are complex numbers with unit modulus: $u_i = e^{i\theta_i}$

The eigenvectors of a unitary operator are mutually orthogonal (assuming no degeneracy).

Diagonalization of Hermitian Matrices

If, starting from an orthonormal basis, one changes to the basis generated by the eigenvectors of a Hermitian operator $\Omega$, then, because the eigenvectors are also orthonormal, the change of basis corresponds to a rotation of one orthonormal basis into another. This action is performed by a unitary operator $U$ whose columns are the eigenvectors of $\Omega$.

We can conclude, then, that this unitary matrix (whose columns contain an orthonormal eigenbasis) may be used to "rotate" a Hermitian matrix back to the original orthonormal basis, therefore diagonalizing it. In other words:
Every Hermitian matrix may be diagonalized by a unitary change of basis.

Or in terms of a passive transformation:

If $\Omega$ is a Hermitian matrix, there exists a unitary matrix $U$ (built out of the eigenvectors of $\Omega$) such that $U^\dagger\Omega U$ is diagonal.

This is equivalent to the eigenvalue problem. The general case of matrix diagonalization is $S^{-1}\Omega S$, but if $\Omega$ is Hermitian, then $S$ is unitary, and $S^{-1} = S^\dagger$.
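
In code, this is exactly what an eigensolver for Hermitian matrices returns (a sketch with an arbitrary Hermitian matrix):

```python
import numpy as np

Omega = np.array([[2.0,      1.0 - 1j],
                  [1.0 + 1j, 3.0     ]])   # Hermitian by construction

eigenvalues, U = np.linalg.eigh(Omega)     # columns of U: orthonormal eigenvectors

D = U.conj().T @ Omega @ U                 # passive transformation U†ΩU
assert np.allclose(D, np.diag(eigenvalues))   # diagonal, with real eigenvalues
print(eigenvalues)
```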

Simultaneous Diagonalization of Two Hermitian Operators

Theorem: If $\Omega$ and $\Lambda$ are two commuting ($[\Omega, \Lambda] = 0$) Hermitian operators, there exists (at least) one basis of common eigenvectors that diagonalizes them both.

If at least one of the operators is non-degenerate: every eigenvector of $\Omega$ is an eigenvector of $\Lambda$ and vice versa.

Classical Mechanics Example

(Figure: two equal masses $m$ coupled to each other and to two walls by three springs of constant $k$.)

Initial conditions: ($x_1(0)$, $x_2(0)$, $\dot{x}_1(0)$, $\dot{x}_2(0)$)

Physically Intuitive Basis ($|1\rangle$, $|2\rangle$)

This problem can be expressed as the following matrix equation:
$$\begin{pmatrix} \ddot{x}_1 \\ \ddot{x}_2 \end{pmatrix} = \begin{pmatrix} -\frac{2k}{m} & \frac{k}{m} \\ \frac{k}{m} & -\frac{2k}{m} \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}$$

One must think of the state of the system as a vector that is a function of time, $|x(t)\rangle$: this is an abstract vector that represents the positions of both masses.

The most intuitive basis is the one where the unit vectors correspond to unit displacements of each of the two masses. This basis is denoted by $|1\rangle$ and $|2\rangle$:
$$|x(t)\rangle = x_1(t)|1\rangle + x_2(t)|2\rangle$$
One may also differentiate this expansion twice with respect to time to obtain the second derivative:
$$|\ddot{x}(t)\rangle = \ddot{x}_1(t)|1\rangle + \ddot{x}_2(t)|2\rangle$$
The vectors $|\ddot{x}(t)\rangle$ and $|x(t)\rangle$ are related by an operator $\Omega$; in this basis the operator can be found to be represented by the matrix:
$$\Omega = \begin{pmatrix} -\frac{2k}{m} & \frac{k}{m} \\ \frac{k}{m} & -\frac{2k}{m} \end{pmatrix}$$

From now on $\Omega$ will refer to the operator AND the matrix in this original basis.

From Newtonian mechanics, one can derive:
$$\ddot{x}_1 = -\frac{2k}{m}x_1 + \frac{k}{m}x_2, \qquad \ddot{x}_2 = \frac{k}{m}x_1 - \frac{2k}{m}x_2$$

And in general, the following is true:
$$|\ddot{x}(t)\rangle = \Omega|x(t)\rangle$$

The difficulty in solving this is the coupling between the two differential equations.

Performing a Change of Basis

The trick is to change basis to one where $\Omega$ is diagonal. This new basis will have basis vectors $|I\rangle$ and $|II\rangle$. The change of basis matrix will be named $U$.

Because $\Omega$ is Hermitian, $U$ will be unitary, and so we can write:
$$U^\dagger\Omega U = \Omega_D$$
We will label the values of the diagonal of $\Omega_D$ as:
$$\Omega_D = \begin{pmatrix} -\omega_I^2 & 0 \\ 0 & -\omega_{II}^2 \end{pmatrix}$$
We can easily find what happens if we feed $\Omega_D$ the basis vectors $|I\rangle$ and $|II\rangle$ (in this new basis):
$$\Omega_D|I\rangle = -\omega_I^2|I\rangle, \qquad \Omega_D|II\rangle = -\omega_{II}^2|II\rangle$$
We can then write the vector form of the above equation, as it's true in all bases:
$$\Omega|I\rangle = -\omega_I^2|I\rangle, \qquad \Omega|II\rangle = -\omega_{II}^2|II\rangle$$
From this we conclude that $-\omega_I^2$ and $-\omega_{II}^2$ are the eigenvalues of $\Omega$ and $|I\rangle$, $|II\rangle$ are the eigenvectors. And because of the definition of $U$, the i-th column of $U$ corresponds to the components of the i-th eigenvector in the $|1\rangle$, $|2\rangle$ basis.

In this new basis, the equation of motion becomes:
$$|\ddot{x}(t)\rangle = \Omega_D|x(t)\rangle$$

Which is much simpler to solve, since the equations decouple.

Finding the new basis

Finding the eigenvalues of $\Omega$ (written as $-\omega^2$) can be done by the determinant method:
$$\begin{vmatrix} -\frac{2k}{m} + \omega^2 & \frac{k}{m} \\ \frac{k}{m} & -\frac{2k}{m} + \omega^2 \end{vmatrix} = 0$$

Solving this equation, we obtain:
$$\omega_I^2 = \frac{k}{m}, \qquad \omega_{II}^2 = \frac{3k}{m}$$
And finding the eigenvectors in the original basis $|1\rangle$, $|2\rangle$, we can express $|I\rangle$ and $|II\rangle$:
$$|I\rangle = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad |II\rangle = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ -1 \end{pmatrix}$$

Initial condition in new basis

The initial conditions in the original basis are given by $x_1(0)$ and $x_2(0)$. To find the initial conditions in the new basis, we multiply the components by $U^\dagger$ to obtain:
$$\begin{pmatrix} x_I(0) \\ x_{II}(0) \end{pmatrix} = U^\dagger\begin{pmatrix} x_1(0) \\ x_2(0) \end{pmatrix} = \frac{1}{\sqrt{2}}\begin{pmatrix} x_1(0) + x_2(0) \\ x_1(0) - x_2(0) \end{pmatrix}$$

Solving the system of equations in the new basis

The above matrix equation reduces to:
$$\ddot{x}_I(t) = -\omega_I^2\,x_I(t), \qquad \ddot{x}_{II}(t) = -\omega_{II}^2\,x_{II}(t)$$
These are second-order homogeneous differential equations, with the following solutions for the given initial conditions (zero initial velocities):
$$x_I(t) = x_I(0)\cos(\omega_I t), \qquad x_{II}(t) = x_{II}(0)\cos(\omega_{II} t)$$
Now that we know the vector in the new basis, we can transform back to the old basis by multiplying by $U$:
$$\begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix} = U\begin{pmatrix} x_I(t) \\ x_{II}(t) \end{pmatrix}$$
Substituting, we obtain:
$$x_1(t) = \frac{x_I(0)\cos(\omega_I t) + x_{II}(0)\cos(\omega_{II} t)}{\sqrt{2}}, \qquad x_2(t) = \frac{x_I(0)\cos(\omega_I t) - x_{II}(0)\cos(\omega_{II} t)}{\sqrt{2}}$$

Which is the result we wanted.

The Propagator

The result of the above problem can be rewritten as a matrix equation:
$$\begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix} = \begin{pmatrix} \frac{\cos\omega_I t + \cos\omega_{II} t}{2} & \frac{\cos\omega_I t - \cos\omega_{II} t}{2} \\ \frac{\cos\omega_I t - \cos\omega_{II} t}{2} & \frac{\cos\omega_I t + \cos\omega_{II} t}{2} \end{pmatrix}\begin{pmatrix} x_1(0) \\ x_2(0) \end{pmatrix}$$

The middle matrix is called the propagator. By multiplying the initial state vector by the propagator, we obtain the final state vector. The propagator is independent of the initial state.

This relation can be expressed in vectors as:
$$|x(t)\rangle = U(t)|x(0)\rangle$$
Note: $U(t)$ is not the change of basis matrix from before. It's the propagator.

The propagator only depends on the eigenvalues and eigenvectors; therefore, the problem can be solved in this manner (see the sketch after this list):

  1. Solve the eigenvalue problem for $\Omega$
  2. Construct the propagator $U(t)$ from the eigenvalues and eigenvectors
  3. The solution is $|x(t)\rangle = U(t)|x(0)\rangle$
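
A sketch of this recipe for the coupled-mass problem (setting $k/m = 1$ purely for illustration):

```python
import numpy as np

Omega = np.array([[-2.0,  1.0],
                  [ 1.0, -2.0]])       # Omega in the |1>, |2> basis, with k/m = 1

evals, U = np.linalg.eigh(Omega)       # eigenvalues are -omega^2; columns are the eigenvectors
omegas = np.sqrt(-evals)

def propagator(t):
    # U(t) = sum_i |i> cos(omega_i t) <i|, built from the eigenbasis
    return U @ np.diag(np.cos(omegas * t)) @ U.T

x0 = np.array([1.0, 0.0])              # mass 1 displaced, mass 2 at rest
print(propagator(2.0) @ x0)            # state at t = 2
```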

The Normal Modes

This equation has a simpler form in the basis $|I\rangle$, $|II\rangle$:
$$x_I(t) = x_I(0)\cos(\omega_I t), \qquad x_{II}(t) = x_{II}(0)\cos(\omega_{II} t)$$

In this basis, if the system has initial state $|I\rangle$ (or $|II\rangle$), by applying the propagator we see that the state evolves only by an overall factor of $\cos(\omega_I t)$ (or $\cos(\omega_{II} t)$).
These states are called the normal modes, and they correspond to the eigenvectors of $\Omega$ (the columns of the change of basis matrix $U$).

For this example, if the system starts in either $|I\rangle$ or $|II\rangle$, the oscillating modes correspond respectively to:

  1. Both masses oscillating in tandem (at frequency $\omega_I$)
  2. Both masses oscillating in opposite directions to one another (at frequency $\omega_{II}$)

If the system starts in a linear combination of these two modes, it evolves into the same linear combination of the evolved normal modes.

1.9 - Functions of Operators and Related Concepts

c-number: (classical) refers to scalars, which commute
q-number: (quantum) refers to operators, which don't generally commute

Functions of q-numbers are defined as a power series:
$$f(\Omega) = \sum_{n=0}^{\infty} a_n\,\Omega^n$$

Calculating this power series to determine its value, or even whether it converges, is done in the basis in which $\Omega$ is diagonal.

Derivatives of Operators with Respect to Parameters

Definition:
$$\frac{d\Omega(\lambda)}{d\lambda} = \lim_{\Delta\lambda \to 0}\frac{\Omega(\lambda + \Delta\lambda) - \Omega(\lambda)}{\Delta\lambda}$$
If $\Omega(\lambda)$ is written in some basis, one can just differentiate each element of the matrix.

Examples

Geometric Series of Operator

$$(I - \Omega)^{-1} = \sum_{n=0}^{\infty}\Omega^n$$
Only if $|\Omega| < 1$ (all its eigenvalues have modulus less than 1)

Exponential of operator

$$e^{\Omega} = \sum_{n=0}^{\infty}\frac{\Omega^n}{n!}$$

Exponential of imaginary hermitian operator

For a Hermitian operator $\Omega$:
$$U = e^{i\Omega}$$
And it's also a unitary operator ($U^\dagger U = e^{-i\Omega}e^{i\Omega} = I$).

Derivative of Exponential

$$\frac{d}{d\lambda}e^{\lambda\Omega} = \Omega\,e^{\lambda\Omega} = e^{\lambda\Omega}\,\Omega$$
Assuming $e^{\lambda\Omega}$ exists.

Differential Equation

$$\frac{d\theta(\lambda)}{d\lambda} = \Omega\,\theta(\lambda)$$

Solution:
$$\theta(\lambda) = e^{\lambda\Omega}\,\theta(0)$$

Difference between c-numbers and q-numbers

For q-numbers the following is true:
$$e^{\alpha\Omega}e^{\beta\Omega} = e^{(\alpha + \beta)\Omega}$$
The product rule for differentiating exponentials of operators is as follows:
$$\frac{d}{d\lambda}\left(e^{\lambda\Omega}e^{\lambda\Lambda}\right) = \Omega\,e^{\lambda\Omega}e^{\lambda\Lambda} + e^{\lambda\Omega}\,\Lambda\,e^{\lambda\Lambda}$$
We can commute the $\Lambda$ term with $e^{\lambda\Lambda}$ but NOT with $e^{\lambda\Omega}$.

Because $\Omega$ does not commute with $\Lambda$, the following IS NOT TRUE:
$$e^{\Omega}e^{\Lambda} = e^{\Omega + \Lambda}$$
As that would imply the factors commute.
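
This failure is easy to demonstrate numerically (a sketch with two non-commuting Hermitian matrices, the Pauli matrices, chosen only for illustration):

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

print(np.allclose(expm(X) @ expm(Z), expm(X + Z)))          # False: [X, Z] != 0
print(np.allclose(expm(2 * X) @ expm(3 * X), expm(5 * X)))  # True: an operator commutes with itself
```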

1.10 - Generalization to Infinite Dimensions

To generalize to vector spaces of infinite dimension, we need to redefine the inner product:
$$\langle f|g\rangle = \int_a^b f^*(x)\,g(x)\,dx$$

Where $f(x)$ and $g(x)$ are functions defined from $[a, b]$ to $C$, which correspond to the coordinates of the vectors $|f\rangle$ and $|g\rangle$ in the vector space.

Basis Vectors

To define basis vectors for this space, they must obey the following properties. For two given basis vectors $|x\rangle$ and $|x'\rangle$:
$$\langle x|x'\rangle = \delta(x - x')$$
Where $\delta(x - x')$ is the Dirac delta function.

This is in order to satisfy the general completeness relation:
$$\int_a^b |x\rangle\langle x|\,dx = I$$
When the above expression is operated on the left by $\langle f|$ and on the right by $|g\rangle$ one obtains:
$$\langle f|g\rangle = \int_a^b \langle f|x\rangle\langle x|g\rangle\,dx = \int_a^b f^*(x)\,g(x)\,dx$$

Derivative of Dirac Delta Function

If one replaces $\delta(x - x')$ with its derivative:
$$\delta'(x - x') = \frac{d}{dx}\delta(x - x')$$

Or more generally:
$$\frac{d^n}{dx^n}\delta(x - x')$$

Operating $\delta'(x - x')$ on $f(x')$ gives $\frac{df}{dx}$. Or equivalently,
$$\int \delta'(x - x')\,f(x')\,dx' = \frac{df}{dx}$$

Theta function

Definition:
$$\theta(x - x') = \begin{cases} 1 & x - x' > 0 \\ 0 & x - x' < 0 \end{cases}$$

Related to the delta function by:
$$\frac{d}{dx}\theta(x - x') = \delta(x - x')$$

Operators in Infinite Dimensions

An operator $\Omega$ acts on a function $f(x)$ to give another function $\tilde{f}(x)$: $\Omega|f\rangle = |\tilde{f}\rangle$

They can be represented by an infinite-dimensional "matrix" with elements $\Omega_{xx'} = \langle x|\Omega|x'\rangle$.

Differential Operator

Defined as:
$$D|f\rangle = \left|\frac{df}{dx}\right\rangle$$
Matrix elements:
$$D_{xx'} = \langle x|D|x'\rangle = \delta'(x - x')$$

$D$ is not Hermitian.

Hermitian operator in infinite dimensions

Contrary to the finite-dimensional case, $\Omega_{xx'} = \Omega^*_{x'x}$ is not a sufficient condition for the operator to be Hermitian. It also depends on the space of functions the operator acts on.

The condition is that, in addition, the surface term arising from integration by parts must vanish for all functions in the space:
$$\left[g^*(x)\,f(x)\right]_a^b = 0$$

Hilbert Spaces

The space of functions that can be normalized to unity or to the Dirac delta is the physical Hilbert space.

K Operator

$$K = -iD = -i\frac{d}{dx}$$
$K$ is Hermitian under the previous condition.
Eigenvectors:
$$K|k\rangle = k|k\rangle$$

By solving $-i\frac{d\psi_k}{dx} = k\,\psi_k$ we can express the solutions, which form a basis, as:
$$\psi_k(x) = A\,e^{ikx}$$
Where we define $\psi_k(x) = \langle x|k\rangle$.

Normalizing so that $\langle k|k'\rangle = \delta(k - k')$:
$$\psi_k(x) = \frac{1}{\sqrt{2\pi}}\,e^{ikx}$$

X Operator

The operator that has the $|x\rangle$ basis as its eigenbasis is the $X$ operator:
$$X|x\rangle = x|x\rangle$$

The action on a function in the X basis is:
$$\langle x|X|f\rangle = x\,f(x)$$

There exists a duality between the $K$ operator and the $X$ operator:

| $O$ = X or K               | X Operator                                            | K Operator                                             |
| -------------------------- | ----------------------------------------------------- | ------------------------------------------------------ |
| Action in X basis          | $x\,f(x)$                                             | $-i\,\frac{df}{dx}$                                    |
| Action in K basis          | $i\,\frac{dg}{dk}$                                    | $k\,g(k)$                                              |
| Matrix elements in X basis | $\langle x\vert X\vert x'\rangle = x\,\delta(x - x')$ | $\langle x\vert K\vert x'\rangle = -i\,\delta'(x - x')$|
| Matrix elements in K basis | $\langle k\vert X\vert k'\rangle = i\,\delta'(k - k')$| $\langle k\vert K\vert k'\rangle = k\,\delta(k - k')$  |
| Eigenvectors in X basis    | $\delta(x - x_0)$                                     | $\frac{1}{\sqrt{2\pi}}e^{ikx}$                         |
| Eigenvectors in K basis    | $\frac{1}{\sqrt{2\pi}}e^{-ikx_0}$                     | $\delta(k - k_0)$                                      |

Passing from X to K basis and from K to X basis is equivalent to taking the Fourier transform and inverse Fourier transform of the components of the vector.

Commutator of X and K

X and K do not commute:
$$[X, K] = iI$$
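
A quick way to verify this is to act with the commutator on an arbitrary test function $f(x)$ in the X basis:
$$[X, K]f = -ix\frac{df}{dx} - \left(-i\frac{d}{dx}(xf)\right) = -ixf' + if + ixf' = if$$
so $[X, K] = iI$.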

Classical Mechanics Example: String

Consider a string clamped at $x = 0$ and $x = L$. The displacement $\psi(x, t)$ obeys:
$$\frac{\partial^2\psi}{\partial t^2} = \frac{\partial^2\psi}{\partial x^2}$$

Given the initial displacement $\psi(x, 0)$, and letting the initial velocity $\dot{\psi}(x, 0)$ be zero, determine the evolution of the string.

Reformulating the problem

By representing the state of the string using a vector $|\psi(t)\rangle$, we write:
$$|\ddot{\psi}(t)\rangle = \Omega|\psi(t)\rangle, \qquad \Omega = \frac{\partial^2}{\partial x^2}$$
Which is equivalent to:
$$\frac{\partial^2\psi(x, t)}{\partial t^2} = \frac{\partial^2\psi(x, t)}{\partial x^2}$$
We may now attempt to find the eigenvectors and eigenvalues of $\Omega$ in order to construct the propagator $U(t)$ and then apply it to $|\psi(0)\rangle$.

Solving eigenvalue problem

The eigenvalue equation is:
$$\Omega|\psi\rangle = -k^2|\psi\rangle$$
Equivalent to:
$$\frac{d^2\psi(x)}{dx^2} = -k^2\,\psi(x)$$
This can be solved as a differential equation. The solutions that obey the boundary conditions $\psi(0) = \psi(L) = 0$ are:
$$\psi_m(x) = \sqrt{\frac{2}{L}}\,\sin\!\left(\frac{m\pi x}{L}\right), \qquad m = 1, 2, 3, \ldots$$

These are the eigenvectors of $\Omega$.
We label the eigenvectors as $|m\rangle$. By applying $\Omega$ to $|m\rangle$ we find the eigenvalues to be:
$$-k_m^2 = -\left(\frac{m\pi}{L}\right)^2$$

Obtaining the Propagator

Projecting the wave equation onto $\langle m|$ we obtain:
$$\ddot{\psi}_m(t) = -k_m^2\,\psi_m(t)$$

And by solving this differential equation (with zero initial velocity):
$$\psi_m(t) = \psi_m(0)\cos(k_m t)$$
$\psi_m(t) = \langle m|\psi(t)\rangle$ are the components of $|\psi(t)\rangle$. Expressing $|\psi(t)\rangle$ in terms of these components:
$$|\psi(t)\rangle = \sum_m |m\rangle\langle m|\psi(t)\rangle = \sum_m |m\rangle\,\psi_m(t)$$
And using the found expression of $\psi_m(t)$:
$$|\psi(t)\rangle = \sum_m |m\rangle\,\psi_m(0)\cos(k_m t) = \sum_m |m\rangle\langle m|\psi(0)\rangle\cos(k_m t)$$
Factoring out $|\psi(0)\rangle$:
$$|\psi(t)\rangle = \left[\sum_m |m\rangle\langle m|\cos(k_m t)\right]|\psi(0)\rangle$$
And by the definition of the propagator, $|\psi(t)\rangle = U(t)|\psi(0)\rangle$. The propagator equals:
$$U(t) = \sum_m |m\rangle\langle m|\cos(k_m t)$$

Finding solutions to the problem

When one wants to find the evolution of the string given $|\psi(0)\rangle$, one can apply the propagator in the X basis:
$$\psi(x, t) = \langle x|U(t)|\psi(0)\rangle = \sum_m \langle x|m\rangle\langle m|\psi(0)\rangle\cos(k_m t)$$

One can then calculate the coefficients:
$$\langle m|\psi(0)\rangle = \int_0^L \sqrt{\frac{2}{L}}\,\sin\!\left(\frac{m\pi x'}{L}\right)\psi(x', 0)\,dx'$$
One can then calculate