This is only an excerpt. The whole publication can be ordered from the category «order for free» in this site.

The Lie Algebras su(N)

An Introduction

Walter Pfeifer

112 pp, 46 line figures
Dr. Walter Pfeifer, Stapfenackerweg 9, CH-5034 Suhr, Switzerland
2003, revised 2008

Contents

Preface
1 Lie algebras
1.1 Definition
1.2 Isomorphic Lie algebras
1.3 Operators and Functions
1.4 Representation of a Lie algebra
1.5 Reducible and irreducible representations, multiplets
2 The Lie algebras su(N)
2.1 Hermitian matrices
2.2 Definition
2.3 Structure constants of su(N)
3 The Lie algebra su(2)
3.1 The generators of the su(2)-algebra
3.2 Operators constituting the algebra su(2)
3.3 Multiplets of su(2)
3.4 Irreducible representations of the su(2)-algebra
3.5 Direct products of irreducible representations and their function sets
3.6 Reduction of direct products of su(2)-representations and multiplets
3.7 Graphical reduction of direct products of su(2)-multiplets
4 The Lie algebra su(3)
4.1 The generators of the su(3)-algebra
4.2 Subalgebras of the su(3)-algebra
4.3 Step operators and states in su(3)
4.4 Multiplets of su(3)
4.5 Individual states of the su(3)-multiplet and their multiplicities
4.6 Dimension of the su(3)-multiplet
4.7 The smallest su(3)-multiplets
4.8 The fundamental multiplet of su(3)
4.9 The hypercharge Y
4.10 Irreducible representations of the su(3) algebra
4.11 Casimir operators
4.12 The eigenvalue of the Casimir operator C1 in su(3)
4.13 Direct products of su(3)-multiplets
4.14 Decomposition of direct products of multiplets by means of Young diagrams
5 The Lie algebra su(4)
5.1 The generators of the su(4)-algebra, subalgebras
5.2 Step operators and states in su(4)
5.3 Multiplets of su(4)
5.4 The charm C
5.5 Direct products of su(4)-multiplets
5.6 The Cartan–Weyl basis of su(4)
6 General properties of the su(N)-algebras
6.1 Elements of the su(N)-algebra
6.2 Multiplets of su(N)
References
Index

 

Preface

Lie algebras are not only an interesting mathematical field but also efficient tools to analyze the properties of physical systems. Concrete applications comprise the formulation of symmetries of Hamiltonian systems, the description of atomic, molecular and nuclear spectra, the physics of elementary particles and many others.

In particular, Lie algebras very frequently appear and "there is hardly any student of physics or mathematics who will never come across symbols like su(2) and su(3)" (Fuchs, Schweigert, 1997, p. XV). For instance, the algebra su(2) describes angular momenta, su(3) is related to harmonic oscillator properties (Talmi, 1993, p. 621) or to rotation properties of systems (Talmi, 1993, p. 797; Pfeifer, 1998, p. 113) and su(3) represents states of elementary particles in the quark model (Greiner, Müller, 1994, p. 367).

This book is mainly directed at undergraduate students of physics and at interested physicists. It is conceived to give a direct, concrete idea of the su(N) algebras and of their laws. The detailed developments, the numerous cross-references, the figures and the many explicit matrix calculations should enable the beginner to follow. Laws which are given without proof are marked clearly and are mostly checked with numerical tests. Knowledge of linear algebra is a prerequisite. Many results are obtained which hold generally for (simple) Lie algebras. Therefore the text at hand can ease the lead-in to this field.

The structure of the contents is simple. First, Lie algebras are defined and the su(N) algebras are introduced starting from anti-Hermitian matrices. In chapter 3 the su(2) algebra, its multiplets and the direct product of the multiplets are investigated. The treatment of the su(3) multiplets in chapter 4 is more labour-intensive. Casimir operators and methods to determine the dimensions of multiplets are introduced. In chapter 5 the su(4) multiplets are represented three-dimensionally. Making use of the su(4) algebra, the properties of the Cartan–Weyl basis are demonstrated. Chapter 6 points to general relations of the su(N) algebras.

Any reader who detects errors is urged to contact the author via the email address mailbox@walterpfeifer.ch. Of course, the author is willing to answer questions.

1 Lie algebras

In this chapter the Lie algebra is defined and the properties of its underlying vector space are described. Discussing the role of the basis elements of the algebra one is led to the structure constants, to some of their symmetry properties and to their relationship to the adjoint matrices. With these matrices the Killing form is constructed. As a natural example for a Lie algebra, general square matrices are looked at. The notion of "simplicity" is introduced.

Operators which constitute a Lie algebra act on vector spaces of functions. By means of the corresponding expansion coefficients, properties of these operators are shown. The matrices of the expansion coefficients make up a representation of the algebra. If this representation is reducible it can be transformed to the equivalent block diagonal form. The functions which are assigned to an irreducible representation form a multiplet.

1.1 Definition

What is a Lie algebra?

A Lie algebra L comprises elements a, b, c, …, which may be matrices with certain properties (real/complex matrix elements, internal symmetries), linear operators, etc. The elements can be combined in two different ways. First, the elements of a Lie algebra must be able to form Lie products, which are also named Lie brackets or commutators. For square matrices A and B the well-known relation

[A, B] = AB − BA  (1.1.1)

defines a commutator, which, of course, is again a matrix. On the other hand, if the elements of a Lie algebra are operators or more general quantities, the commutator [a, b] still has to be defined, but the right-hand side of (1.1.1) need not apply. Second, in addition to the formation of commutators, it must be possible to combine the elements of the Lie algebra linearly, i.e., they constitute a vector space.

Thus, the definition of a Lie algebra L demands the following properties of its elements:

a) the commutator of two elements is again an element of the algebra,

[a, b] ∈ L for all a, b ∈ L.  (1.1.2)

b) a linear combination αa + βb of the elements a and b with the real or complex numbers α and β is again an element of the algebra, i.e.

αa + βb ∈ L.  (1.1.3)

Therefore the element 0 (zero) belongs to the algebra.

c) The following linearity is postulated:

[αa + βb, c] = α[a, c] + β[b, c].  (1.1.4)

d) Interchanging both elements of a commutator results in the relation

[a, b] = −[b, a].  (1.1.5)

With (1.1.5) and (1.1.4) one proves that [a, βb + γc] = β[a, b] + γ[a, c] also holds. Of course, we have [a, a] = 0.

e) Finally the Jacobi identity has to be satisfied as follows:

[a, [b, c]] + [b, [c, a]] + [c, [a, b]] = 0.  (1.1.6)

Using (1.1.1) one shows that this identity holds for square matrices.
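This check is easy to carry out numerically. The following minimal sketch (plain Python; all helper names are ours, not the book's) forms the commutator (1.1.1) for three arbitrary complex matrices and confirms that the Jacobi sum (1.1.6) vanishes:

```python
# Numerical check: matrix commutators (1.1.1) satisfy the Jacobi
# identity (1.1.6). Helper names are ad hoc for this sketch.

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matadd(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def comm(a, b):
    """[a, b] = ab - ba"""
    return matadd(matmul(a, b),
                  [[-x for x in row] for row in matmul(b, a)])

# three arbitrary complex 2x2 matrices
a = [[1, 2j], [0, -1]]
b = [[0, 1], [1, 0]]
c = [[1j, 0], [3, 2]]

jacobi = matadd(matadd(comm(a, comm(b, c)),
                       comm(b, comm(c, a))),
                comm(c, comm(a, b)))
print(jacobi)  # every entry equals 0
```

Since the entries are Gaussian integers, the cancellation is exact, not merely within rounding error.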

Note that we don't demand that the commutator is associative, i.e. the relation [[a, b], c] = [a, [b, c]] is not true in general. Some authors choose other logical sequences of postulates for the Lie algebra. For instance one can define [a, a] = 0 and deduce (1.1.5) (Carter, 1995, p. 5).

f) In addition to a) through e) we demand that a Lie algebra has a finite dimension n, i.e. it comprises a set of n linearly independent elements e_1, …, e_n, which act as a basis, by which every element x of the algebra can be represented like this:

x = Σ_i ξ_i e_i,  i = 1, …, n.  (1.1.7)

In other words, the algebra constitutes an n-dimensional vector space. Sometimes the dimension is named order. If the coefficients ξ_i in (1.1.7) and α, β in (1.1.3) are real, the algebra is named real. In a complex or complexified algebra the coefficients are complex.

We summarize points a) through f): a Lie algebra is a vector space endowed with an antisymmetric bilinear product (the commutator) which satisfies the Jacobi identity.

In accordance with the definition point e) the basis elements e_i meet the Jacobi identity (1.1.6). If this is the case, the arbitrary elements

x = Σ_i ξ_i e_i,  y = Σ_k η_k e_k  and  z = Σ_l ζ_l e_l  (1.1.8)

satisfy the identity as well. We introduce a symbol for the Jacobi form,

J(a, b, c) = [a, [b, c]] + [b, [c, a]] + [c, [a, b]],  (1.1.9)

which is linear in a, b and c. Replacing these elements by x, y and z and inserting the expressions (1.1.8) for x, y and z we obtain

J(x, y, z) = Σ_{i,k,l} ξ_i η_k ζ_l J(e_i, e_k, e_l) = 0.  (1.1.10)

Since every Jacobi form on the right-hand side vanishes, it is proved that, in order to have the identity (1.1.6), it suffices to ask that the condition is satisfied by the basis elements.

Clearly the choice of the basis e_1, …, e_n is arbitrary. With a nonsingular n×n matrix A, which contains the real or complex numbers A_ik, a new basis e'_1, …, e'_n can be built this way:

e'_i = Σ_k A_ik e_k.  (1.1.11)

The new elements meet the Jacobi identity, which is shown by (1.1.10). It is a well-known fact that the elements e'_i are linearly independent if the e_i are and if A is nonsingular. Of course a change of the basis of a complex (or real) Lie algebra by means of complex (or real) coefficients A_ik in (1.1.11) restores the algebra and it keeps its name.

The structure constants

Due to (1.1.2) the commutator of two basis elements also belongs to the algebra and, following (1.1.7), it can be written like this:

[e_i, e_k] = Σ_l C^l_ik e_l.  (1.1.12)

The coefficients C^l_ik are called structure constants, relative to the e-basis. They are not invariant under a transformation as per (1.1.11) (see (1.1.17)). Given the set of basis elements, the structure constants specify the Lie algebra completely. Of course, a Lie algebra with complex structure constants is complex itself. Clearly, if the structure constants are real, a real Lie algebra can be constructed on this basis.
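As a concrete illustration — anticipating the su(2) basis of chapter 3, and using a trace shortcut that is an assumption of this sketch, not a method of the book — the structure constants of (1.1.12) can be extracted numerically from the anti-Hermitian basis e_k = -(i/2)·σ_k built from the Pauli matrices:

```python
# Extract the structure constants C^l_ik of (1.1.12) for su(2) from
# the basis e_k = -(i/2)*sigma_k (Pauli matrices). For this basis
# tr(e_i e_k) = -delta_ik/2, hence C^l_ik = -2 tr([e_i, e_k] e_l);
# this trace trick is our shortcut, not the book's procedure.

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def comm(a, b):
    ab, ba = matmul(a, b), matmul(b, a)
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(ab, ba)]

def trace(a):
    return sum(a[i][i] for i in range(len(a)))

sigma = ([[0, 1], [1, 0]],
         [[0, -1j], [1j, 0]],
         [[1, 0], [0, -1]])
e = [[[-0.5j * x for x in row] for row in s] for s in sigma]

# C[i][k][l] stands for C^l_ik
C = [[[(-2 * trace(matmul(comm(e[i], e[k]), e[l]))).real
       for l in range(3)] for k in range(3)] for i in range(3)]

print(C[0][1][2], C[1][0][2])  # 1.0 -1.0  ->  [e_1, e_2] = e_3
```

The antisymmetry (1.1.20) in the lower index pair shows up directly in the computed array.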

The commutator of the elements x and y, (1.1.8), can be expressed by the basis elements as follows:

[x, y] = Σ_{i,k} ξ_i η_k [e_i, e_k] = Σ_{i,k,l} ξ_i η_k C^l_ik e_l.  (1.1.13)

Analogously the structure constants of the new basis e'_i, (1.1.11), are determined this way:

[e'_i, e'_k] = Σ_{j,m} A_ij A_km [e_j, e_m] = Σ_{j,m,l} A_ij A_km C^l_jm e_l.  (1.1.14)

Since A is nonsingular, the inverse matrix A^(-1) exists and

e_l = Σ_p (A^(-1))_lp e'_p  holds,  (1.1.15)

which we insert in (1.1.14) like this:

[e'_i, e'_k] = Σ_p ( Σ_{j,m,l} A_ij A_km C^l_jm (A^(-1))_lp ) e'_p.  (1.1.16)

Thus, the structure constants of the new basis read

C'^p_ik = Σ_{j,m,l} A_ij A_km C^l_jm (A^(-1))_lp.  (1.1.17)

We insert (1.1.12) into the Jacobi identity (1.1.6), applied to the basis elements e_i, e_j and e_k, and obtain

Σ_{l,m} ( C^l_jk C^m_il + C^l_ki C^m_jl + C^l_ij C^m_kl ) e_m = 0.  (1.1.18)

Because the basis elements e_m are linearly independent, we get n equations for given values i, j, k like this:

Σ_l ( C^l_jk C^m_il + C^l_ki C^m_jl + C^l_ij C^m_kl ) = 0,  m = 1, …, n.  (1.1.19)

We write the antisymmetry relation (1.1.5) of commutators as [e_i, e_k] + [e_k, e_i] = 0 and insert equation (1.1.12), which yields Σ_m (C^m_ik + C^m_ki) e_m = 0. Due to the linear independence of the basis elements we obtain n equations for given values i, k of the following form:

C^m_ik = −C^m_ki.  (1.1.20)

In Section 2.3 we will see that in su(N)-algebras the structure constants relative to the basis used there are antisymmetric in all three indices and not only in the first two as in (1.1.20).

The adjoint matrices

Here we introduce the adjoint or ad-matrices. As per (1.1.2) the commutator of an arbitrary element a of a Lie algebra with a basis element e_k must be a linear combination of the basis elements, similarly to (1.1.12). Therefore we write

[a, e_k] = Σ_l (ad(a))_lk e_l.  (1.1.21)

The coefficients (ad(a))_lk are the elements of the n×n matrix ad(a). It is easy to show that it is linear in its argument,

ad(αa + βb) = α ad(a) + β ad(b),  (1.1.22)

and the following commutator relation holds:

ad([a, b]) = [ad(a), ad(b)] = ad(a) ad(b) − ad(b) ad(a).  (1.1.23)

Replacing a by e_i in (1.1.21) yields

[e_i, e_k] = Σ_l (ad(e_i))_lk e_l.  (1.1.24)

Comparing with (1.1.12) we obtain

(ad(e_i))_lk = C^l_ik.  (1.1.25)

We will come back to the adjoint matrices in Sections 1.4 and 3.1.

These -matrices appear also in the so-called "Killing form", which plays an important part in the analysis of the structure of algebras.

The Killing form

The "Killing form" K(a, b) corresponding to any two elements a and b of a Lie algebra is defined by

K(a, b) = tr( ad(a) ad(b) ).  (1.1.26)

The symbol tr denotes the trace of the matrix product. It can be shown that the Killing form K(a, b) is symmetric and bilinear in the elements a and b. We write the Killing form of the basis elements e_i and e_k of a Lie algebra:

K(e_i, e_k) = tr( ad(e_i) ad(e_k) ) = Σ_{l,m} (ad(e_i))_lm (ad(e_k))_ml.  (1.1.27)

With (1.1.25) we have

K(e_i, e_k) = Σ_{l,m} C^l_im C^m_kl.  (1.1.28)

In Sections 2.3 and 3.1 we will deal with the Killing form of su(N) and su(2), respectively.
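A small numerical cross-check (our sketch, taking su(2) with structure constants C^l_ik = ε_ikl as the concrete case): building the adjoint matrices via (1.1.25) and evaluating the trace (1.1.26) reproduces the double sum (1.1.28).

```python
# Killing form of su(2): structure constants C^l_ik = eps_ikl
# (Levi-Civita), adjoint matrices per (1.1.25), trace per (1.1.26).

def eps(i, k, l):
    # Levi-Civita symbol for 0-based indices in {0, 1, 2}
    return (i - k) * (k - l) * (l - i) // 2

# (ad e_i)_lk = C^l_ik, eq. (1.1.25): row index l, column index k
ad = [[[eps(i, k, l) for k in range(3)] for l in range(3)]
      for i in range(3)]

def killing(i, k):
    # K(e_i, e_k) = tr(ad(e_i) ad(e_k)) = sum_{l,m} C^l_im C^m_kl
    return sum(ad[i][l][m] * ad[k][m][l]
               for l in range(3) for m in range(3))

K = [[killing(i, k) for k in range(3)] for i in range(3)]
print(K)  # [[-2, 0, 0], [0, -2, 0], [0, 0, -2]]
```

The result K(e_i, e_k) = −2 δ_ik is negative definite, as expected for a compact algebra such as su(2).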

Simplicity

From the theoretical point of view it is important to know whether a Lie algebra is simple. We give the appropriate definitions concisely.

A Lie algebra is said to be simple if it is not Abelian and does not possess a proper invariant Lie subalgebra.

The terms used here are defined as follows:

- A Lie algebra L is said to be Abelian if [a, b] = 0 for all a, b ∈ L. Thus, in an Abelian Lie algebra all the structure constants are zero.

- A subalgebra L' of a Lie algebra L is a subset of elements of L that themselves form a Lie algebra with the same commutator and field as that of L. This implies that L' is real if L is real and L' is complex if L is complex.

- A subalgebra L' is said to be proper if at least one element of L is not contained in L'.

- A subalgebra L' of a Lie algebra L is said to be invariant if [a, b] ∈ L' for all a ∈ L' and b ∈ L. An invariant subalgebra is also named an ideal.

We will meet simple Lie algebras in Sections 2.2 and 4.2.

Example

Obviously, the square N-dimensional matrices (N×N matrices, sometimes called matrices of rank N) constitute a Lie algebra. The commutators and linear combinations are again N×N matrices and the conditions a) through e) are satisfied. The basis for matrices with complex matrix elements can be chosen like this, where j, k = 1, …, N:

e_jk and i·e_jk,  j, k = 1, …, N.  (1.1.29)

Here e_jk denotes the square matrix which shows the value 1 at the position (j, k) and zeros elsewhere; these matrices e_jk constitute the canonical basis, from which the set (1.1.29) is formed.

Using these 2N² basis elements the definition point f) is met with real coefficients ξ_i in (1.1.7). In a word, the complex matrices of order N form a real vector space of dimension 2N². On the other hand, the same vector space of complex N×N matrices can be constructed using complex coefficients and the canonical basis e_jk alone. The resulting complex algebra is named gl(N, C) and it has the dimension N². In Section 2.2 we will interrelate it with su(N).
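The canonical matrices e_jk obey a simple commutator rule, [e_jk, e_lm] = δ_kl·e_jm − δ_mj·e_lk, which in particular keeps every commutator inside the span of the basis, as condition a) requires. A quick check of this textbook rule (the code and all names are ours), here for N = 3:

```python
# Check [e_jk, e_lm] = delta_kl * e_jm - delta_mj * e_lk for the
# canonical basis matrices of the NxN matrix algebra (N = 3).

N = 3

def unit(j, k):
    # e_jk: entry 1 at position (j, k), zeros elsewhere
    return [[1 if (r, c) == (j, k) else 0 for c in range(N)]
            for r in range(N)]

def matmul(a, b):
    return [[sum(a[i][p] * b[p][q] for p in range(N)) for q in range(N)]
            for i in range(N)]

def comm(a, b):
    ab, ba = matmul(a, b), matmul(b, a)
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(ab, ba)]

def rhs(j, k, l, m):
    # delta_kl * e_jm - delta_mj * e_lk, written entrywise
    return [[(1 if (r, c) == (j, m) and k == l else 0)
             - (1 if (r, c) == (l, k) and m == j else 0)
             for c in range(N)] for r in range(N)]

ok = all(comm(unit(j, k), unit(l, m)) == rhs(j, k, l, m)
         for j in range(N) for k in range(N)
         for l in range(N) for m in range(N))
print(ok)  # True
```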

 

1.2 Isomorphic Lie algebras

A Lie algebra with elements a, b, … is isomorphic to another Lie algebra with elements a', b', … if a one-to-one mapping a → a' = f(a) exists which is reversible, i.e. a = f^(-1)(a'). It has to meet the relations

f(αa + βb) = α f(a) + β f(b)  and  f([a, b]) = [f(a), f(b)]  (1.2.1)

for all scalars α and β.

The structure of both Lie algebras is identical and we expect that both have the same dimension n. Supposing e_1, …, e_n are the basis elements of the first Lie algebra, then f(e_1), …, f(e_n) is a basis of the isomorphic Lie algebra. Of course, two isomorphic Lie algebras have the same structure constants relative to these bases.

Here the Ado theorem is given without proof:

Every abstract Lie algebra is isomorphic to a Lie algebra of matrices with the commutator defined as in equation (1.1.1).

Consequently, the properties of Lie algebras can be studied by investigating the relatively concrete matrix algebras.

 

1.3 Operators and Functions

The general set-up

In addition to the definition points a) through f) of Section 1.1 we demand that operators which form a Lie algebra act linearly on functions. These functions, or states, or "kets", make up a d-dimensional vector space with linearly independent basis functions φ_1, …, φ_d. That is, an arbitrary function φ of this space can be written as

φ = Σ_k c_k φ_k,  k = 1, …, d,  (1.3.1)

with real or complex coefficients c_k.

We demand that the operators of the Lie algebra acting on a function produce a transformation of the function so that the result still lies in the original vector space. If we let the element x of the Lie algebra act on the basis function φ_k, it yields the following linear combination of basis functions:

x φ_k = Σ_l φ_l D(x)_lk.  (1.3.2)

Notice the sequence of the indices in D(x)_lk. Making use of (1.3.1) we can write down the action of x on an arbitrary function φ like this:

x φ = Σ_k c_k x φ_k = Σ_{k,l} c_k φ_l D(x)_lk,  (1.3.3)

which is a consequence of the linearity of x. In Section 1.4 we will see that D(x) is linear in x, i.e. D(Σ_i ξ_i e_i) = Σ_i ξ_i D(e_i). Therefore equation (1.3.3) results in

x φ = Σ_{i,k,l} ξ_i c_k φ_l D(e_i)_lk.  (1.3.4)

We see that the action of the operators constituting a Lie algebra on the functions is mainly described by the coefficients D(e_i)_lk.

 

Furthermore we demand that the vector space of the functions is an inner product space. That is to say, for every pair of functions φ and ψ an inner product ⟨φ|ψ⟩ is defined with the following well-known properties:

i. ⟨φ|ψ⟩ = ⟨ψ|φ⟩*  (complex conjugate)

ii. ⟨φ|ψ + χ⟩ = ⟨φ|ψ⟩ + ⟨φ|χ⟩

iii. ⟨φ|cψ⟩ = c ⟨φ|ψ⟩  (c: complex or real number)

iv. ⟨φ|φ⟩ ≥ 0

v. ⟨φ|φ⟩ = 0 if and only if φ = 0.

Note that we obtain ⟨cφ|ψ⟩ = c* ⟨φ|ψ⟩ from i and iii. The state ⟨φ| is named the adjoint of |φ⟩.

In an inner product vector space with linearly independent basis functions φ_1, …, φ_d it is always possible to construct an orthonormal basis φ'_1, …, φ'_d, which satisfies

⟨φ'_i|φ'_k⟩ = δ_ik,  i, k = 1, 2, …, d.  (1.3.5)

In the following, we will presuppose that the basis functions φ_k are orthonormalized, which is not a restriction. Assuming this property we form the inner product of both sides of eq. (1.3.2) with the basis function φ_l:

⟨φ_l|x φ_k⟩ = Σ_m ⟨φ_l|φ_m⟩ D(x)_mk = D(x)_lk.  (1.3.6)

In Section 1.4 we will refer to the square matrix D(x), which contains the matrix elements D(x)_lk.

 

Further properties

We look into further properties of the operators constituting a Lie algebra and of the affiliated basis functions. In analogy to (1.3.6) we form the inner product of both sides of eq. (1.3.1) with the basis function φ_k as follows:

⟨φ_k|φ⟩ = Σ_l c_l ⟨φ_k|φ_l⟩ = c_k,  (1.3.7)

which we insert again in eq. (1.3.1) like this:

φ = Σ_k φ_k ⟨φ_k|φ⟩.  (1.3.8)

The functions φ and φ_k are ket states and can be marked by the ket symbol | ⟩ this way:

|φ⟩ = Σ_k |φ_k⟩ ⟨φ_k|φ⟩.  (1.3.9)

Formally we can regard the expression Σ_k |φ_k⟩⟨φ_k| as an operator which restores the state |φ⟩ like the identity operator 1. That is the completeness relation

Σ_k |φ_k⟩⟨φ_k| = 1.  (1.3.10)

We apply it in order to formulate a matrix element of a sequence of two operators, say x and y, like this:

⟨φ_l|x y φ_k⟩ = Σ_m ⟨φ_l|x φ_m⟩ ⟨φ_m|y φ_k⟩.  (1.3.11)

Making use of (1.3.6) we have the result

D(xy)_lk = Σ_m D(x)_lm D(y)_mk,  i.e.  D(xy) = D(x) D(y).  (1.3.12)

That is, the matrices D multiply in analogy with the corresponding operators.

Now, we investigate the matrix D(x) containing the matrix elements D(x)_lk = ⟨φ_l|x φ_k⟩, the adjoint of this matrix and adjoint operators. By definition, the adjoint matrix D(x)† of D(x) is transposed and complex conjugate, i.e.,

(D(x)†)_lk = (D(x)_kl)*.  (1.3.13)

Condition i of the inner product definition yields

(⟨φ_l|x φ_k⟩)* = ⟨x φ_k|φ_l⟩.  (1.3.14)

We define the adjoint operator x† by

⟨x† φ_l|φ_k⟩ = ⟨φ_l|x φ_k⟩.  (1.3.15)

Using this relation together with condition i we write the matrix element of x† this way: D(x†)_lk = ⟨φ_l|x† φ_k⟩ = (⟨φ_k|x φ_l⟩)* = (D(x)_kl)*. Thus we have found

D(x†) = D(x)†  (1.3.16)

in accordance with the matrix relation (2.2.3).

 

1.4 Representation of a Lie algebra

Definition

Suppose that a and b are elements of a Lie algebra L and that to every a there exists a d×d matrix D(a) such that

D(αa + βb) = α D(a) + β D(b)  and  (1.4.1)

D([a, b]) = D(a) D(b) − D(b) D(a).  (1.4.2)

Then these matrices are said to form a d-dimensional representation of L.

Clearly, the set of matrices D(a) forms a Lie algebra over the same field (with real or complex coefficients α, β) as L.

In (1.1.21) the adjoint or ad-matrices were introduced. Starting from (1.1.22) and (1.1.23) it can be proved that the matrices ad(a) constitute a representation of the algebra. This is the regular or adjoint representation.

We come back to the elements D(x)_lk, (1.3.2) and (1.3.8), which make up the matrix D(x). We now maintain that the matrices D(x), D(y), … constitute a representation of the Lie algebra of the operators x, y, …. As in Section 1.3 we presuppose that the set of basis functions φ_1, …, φ_d is associated to the Lie algebra and that the relation (1.3.2) holds as x φ_k = Σ_l φ_l D(x)_lk with D(x)_lk = ⟨φ_l|x φ_k⟩, see (1.3.6). According to (1.3.5) the states φ_k can be regarded as orthonormalized, i.e. ⟨φ_i|φ_k⟩ = δ_ik.

We prove our assertion. First we deal with the linearity property of the matrices D. For the moment we handle the matrix element (lk):

D(αx + βy)_lk = ⟨φ_l|(αx + βy) φ_k⟩ = α ⟨φ_l|x φ_k⟩ + β ⟨φ_l|y φ_k⟩ = α D(x)_lk + β D(y)_lk.  (1.4.3)

Of course, the same relation holds for the whole matrices:

D(αx + βy) = α D(x) + β D(y).  (1.4.4)

Next we treat the commutator relation, which will be similar to (1.4.2). For the commutator [x, y] we don't take the abstract form (1.1.2) but we choose

[x, y] = xy − yx.  (1.4.5)

Following (1.4.4) we write

D([x, y]) = D(xy − yx) = D(xy) − D(yx).  (1.4.6)

With eq. (1.3.12) we obtain

D([x, y]) = D(x) D(y) − D(y) D(x).  (1.4.7)

Therefore, the equations (1.4.4) and (1.4.7) show that the matrices D(x) represent the Lie algebra of the operators x.
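The adjoint representation mentioned above can serve as a concrete test case. Taking su(2) with structure constants C^l_ik = ε_ikl (our choice for this sketch), one can verify relation (1.4.2) for the ad-matrices of (1.1.25) directly:

```python
# The adjoint matrices of su(2) satisfy the representation law
# (1.4.2): ad([e_i,e_j]) = ad(e_i) ad(e_j) - ad(e_j) ad(e_i).
# Since [e_i, e_j] = sum_l eps_ijl e_l, the left-hand side is the
# corresponding linear combination of ad-matrices.

def eps(i, k, l):
    return (i - k) * (k - l) * (l - i) // 2  # Levi-Civita, 0-based

ad = [[[eps(i, k, l) for k in range(3)] for l in range(3)]
      for i in range(3)]  # (ad e_i)_lk = C^l_ik, eq. (1.1.25)

def matmul(a, b):
    return [[sum(a[r][p] * b[p][c] for p in range(3)) for c in range(3)]
            for r in range(3)]

def comm(a, b):
    ab, ba = matmul(a, b), matmul(b, a)
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(ab, ba)]

def ad_of_bracket(i, j):
    # ad([e_i, e_j]) = sum_l eps_ijl ad(e_l)
    return [[sum(eps(i, j, l) * ad[l][r][c] for l in range(3))
             for c in range(3)] for r in range(3)]

ok = all(comm(ad[i], ad[j]) == ad_of_bracket(i, j)
         for i in range(3) for j in range(3))
print(ok)  # True
```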

 

1.5 Reducible and irreducible representations, multiplets.

Let's suppose that the vector space of functions of a Lie algebra is the direct sum of two subspaces of dimensions d1 and d2 = d − d1, i.e. that the basis functions are split into the two sets φ_1, …, φ_d1 and φ_(d1+1), …, φ_d. Furthermore, we assume that the subspaces are invariant, i.e., for every operator x of the Lie algebra the relation (1.3.2) is modified like this:

x φ_k = Σ_(l≤d1) φ_l D(x)_lk for k ≤ d1,  and  x φ_k = Σ_(l>d1) φ_l D(x)_lk for k > d1.  (1.5.1)

That is, the transformation of the functions takes place only in the vector subspace of the original basis function. This means that

D(x)_lk = 0  if l ≤ d1 < k or k ≤ d1 < l.  (1.5.2)

Consequently only those coefficients D(x)_lk are non-zero which are not affected by the conditions (1.5.2). Therefore the matrix D(x) has the following block-diagonal form:

D(x) = ( D1(x)  0
         0      D2(x) ),  (1.5.3)

with a d1×d1 block D1(x) and a d2×d2 block D2(x). If in a Lie algebra all matrices D(x) are divided in this way into two or more squares along the diagonal, the representation is named reducible (some authors call it "completely reducible").

On the other hand, if the operators constituting the algebra transform every function among functions of the entire vector space, there is no nontrivial invariant subspace (as defined at the beginning of this section). In this case, the matrices  cannot be decomposed in two or more square matrices along the diagonal, and the representation is named irreducible. The vector space of the functions of such a representation is called a multiplet. For particle physics, multiplets are very important.

We go back to the representation (1.5.3). The squares in the matrix are irreducible representations, i.e. the reducible representation is decomposed into the direct sum of irreducible representations.

If we interchange the basis functions, or if we transform the basis more generally, it can happen that we lose the structure of (1.5.3), i.e. the non-zero matrix elements are spread over the entire matrix and the representation seems not to be reducible. However, one can reduce such a representation to the form (1.5.3) by a similarity transformation, where every matrix of the representation is transformed in the same way, as set out below. H. Weyl proved that every finite-dimensional representation of a semisimple Lie algebra decomposes into the direct sum of irreducible representations. Since the algebras su(N) are simple, this theorem holds for them as well.

We investigate the transformations of representations. If S is a nonsingular d×d matrix, then, starting from a representation of the Lie algebra with the d-dimensional matrices D(x), we are able to construct a new representation constituted by the matrices

D'(x) = S D(x) S^(-1).  (1.5.4)

The matrix S is independent of x.

First, we have to show that the matrices D'(x) meet the linearity relation (1.4.4), namely

D'(αx + βy) = S D(αx + βy) S^(-1) = α S D(x) S^(-1) + β S D(y) S^(-1) = α D'(x) + β D'(y).  (1.5.5)

Then, the commutator relation is treated like this:

D'([x, y]) = S D([x, y]) S^(-1) = S ( D(x) D(y) − D(y) D(x) ) S^(-1) = D'(x) D'(y) − D'(y) D'(x).  (1.5.6)

Consequently the set of matrices D'(x) also forms a d-dimensional representation. D and D' are said to be equivalent representations.

A given representation D is reducible if one can find a matrix S in (1.5.4) so that all representation matrices D'(x) of the equivalent representation have the same block-diagonal form (1.5.3) with two or more squares. If the matrices D'(e_i) of the basis elements have obtained this form, obviously every matrix of the equivalent representation shows the same form. (Note: the commutator of two matrices with the same block-diagonal structure generates a matrix which has the same structure.)
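The parenthetical note is easy to confirm numerically; a minimal sketch with an arbitrary 2+1 block split (the matrices are chosen ad hoc for this check):

```python
# The commutator of two matrices with the same block-diagonal
# structure is again block-diagonal: the off-block entries stay zero.

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def comm(a, b):
    ab, ba = matmul(a, b), matmul(b, a)
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(ab, ba)]

# blocks of sizes 2 and 1; zeros outside the blocks
x = [[1, 2, 0],
     [3, 4, 0],
     [0, 0, 5]]
y = [[0, 1, 0],
     [1, 0, 0],
     [0, 0, 7]]

z = comm(x, y)
off_block = [z[0][2], z[1][2], z[2][0], z[2][1]]
print(off_block)  # [0, 0, 0, 0]
```

The 1×1 blocks even commute outright, so the lower-right entry of the commutator vanishes as well.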

Assume that the basis functions φ_1, …, φ_d are assigned to the representation D. What can be said about the basis functions which belong to the equivalent representation D'? We claim that the basis functions

φ'_k = Σ_l φ_l (S^(-1))_lk  (with S from (1.5.4))  (1.5.7)

meet the relation x φ'_k = Σ_l φ'_l D'(x)_lk. We let the linear operator x act on equation (1.5.7). Making use of (1.3.2) we get

x φ'_k = Σ_l (x φ_l) (S^(-1))_lk = Σ_{l,m} φ_m D(x)_ml (S^(-1))_lk.

Because S is nonsingular, the equation (1.5.7) can be inverted like this: φ_m = Σ_p φ'_p S_pm, which we insert using (1.5.4) like this:

x φ'_k = Σ_p φ'_p ( S D(x) S^(-1) )_pk = Σ_p φ'_p D'(x)_pk.  (1.5.8)

If the matrices D'(x) of the equivalent representation have the form (1.5.3) with two or more squares on the diagonal, the vector space of the functions φ'_k is divided into invariant subspaces. Due to (1.5.8) the operator x (and all the operators of the Lie algebra) transform the states of a subspace among themselves, as described at the beginning of this section. The structure of the basis functions is given in (1.5.7).


 

 
