Lie's theorem

In mathematics, specifically the theory of Lie algebras, Lie's theorem states that,[1] over an algebraically closed field of characteristic zero, if $\pi \colon \mathfrak{g} \to \mathfrak{gl}(V)$ is a finite-dimensional representation of a solvable Lie algebra, then $\pi(\mathfrak{g})$ stabilizes a flag $V = V_0 \supset V_1 \supset \cdots \supset V_n = 0$ with $\operatorname{codim} V_i = i$; "stabilizes" means $\pi(X) V_i \subseteq V_i$ for each $X \in \mathfrak{g}$ and each $i$.

Put another way, the theorem says there is a basis for $V$ such that all linear transformations in $\pi(\mathfrak{g})$ are represented by upper triangular matrices.[2] This is a generalization of the result of Frobenius that commuting matrices are simultaneously upper triangularizable, as commuting matrices span an abelian Lie algebra, which is a fortiori solvable.
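As a simple illustration, consider the two-dimensional solvable Lie algebra $\mathfrak{g}$ with basis $h, e$ and bracket $[h, e] = e$, acting on $V = k^2$ by

  $h \mapsto \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad e \mapsto \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}.$

Both matrices are upper triangular, the first standard basis vector $e_1$ is a common eigenvector ($h \cdot e_1 = e_1$, $e \cdot e_1 = 0$), and the stabilized flag is $k^2 \supset k e_1 \supset 0$. This example is revisited in the proof below.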

A consequence of Lie's theorem is that any finite-dimensional solvable Lie algebra over a field of characteristic 0 has a nilpotent derived algebra (see § Consequences below). Also, to each flag in a finite-dimensional vector space $V$, there corresponds a Borel subalgebra (consisting of the linear transformations stabilizing the flag); thus, the theorem says that $\pi(\mathfrak{g})$ is contained in some Borel subalgebra of $\mathfrak{gl}(V)$.[1]

Counter-example

For algebraically closed fields of characteristic $p > 0$, Lie's theorem holds provided the dimension of the representation is less than $p$ (see the proof below), but it can fail for representations of dimension $p$. An example is given by the 3-dimensional nilpotent Lie algebra spanned by $1$, $x$, and $d/dx$ acting on the $p$-dimensional vector space $k[x]/(x^p)$, which has no common eigenvector. Taking the semidirect product of this 3-dimensional Lie algebra with the $p$-dimensional representation (considered as an abelian Lie algebra) gives a solvable Lie algebra whose derived algebra is not nilpotent.
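To spell out why there is no common eigenvector: on the basis $1, x, \dots, x^{p-1}$ of $k[x]/(x^p)$ the three operators act by

  $1 \cdot x^i = x^i, \qquad x \cdot x^i = x^{i+1} \ (\text{with } x^p = 0), \qquad \tfrac{d}{dx} \cdot x^i = i\, x^{i-1},$

with $[\tfrac{d}{dx}, x] = 1$. An eigenvector of the nilpotent operator "multiplication by $x$" must be a scalar multiple of $x^{p-1}$, while $\tfrac{d}{dx}(x^{p-1}) = (p-1)\, x^{p-2}$ is not proportional to $x^{p-1}$; hence no common eigenvector exists. Note also that the relation $[\tfrac{d}{dx}, x] = 1$ can only be realized on this finite-dimensional space because the characteristic is $p$: a commutator has trace zero, while the identity on the $p$-dimensional space has trace $p$, which vanishes precisely in characteristic $p$.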

Proof

The proof is by induction on the dimension of $\mathfrak{g}$ and consists of several steps. (Note: the structure of the proof is very similar to that for Engel's theorem.) The base case is trivial and we assume the dimension of $\mathfrak{g}$ is positive. We also assume $V$ is not zero. For simplicity, we write $X \cdot v = \pi(X)(v)$.

Step 1: Observe that the theorem is equivalent to the statement:[3]

  • There exists a nonzero vector $v$ in $V$ that is an eigenvector for each linear transformation in $\pi(\mathfrak{g})$.
Indeed, the theorem says in particular that a nonzero vector spanning $V_{n-1}$ is a common eigenvector for all the linear transformations in $\pi(\mathfrak{g})$. Conversely, if $v$ is a common eigenvector, take $V_{n-1}$ to be its span; then $\pi(\mathfrak{g})$ admits a common eigenvector in the quotient $V/V_{n-1}$; repeat the argument.
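In more detail, the flag is assembled from common eigenvectors step by step: set $V_{n-1} = k v$; the quotient $V/V_{n-1}$ is again a representation of the solvable Lie algebra $\mathfrak{g}$, so it contains a common eigenvector $\bar{v}$, and one takes $V_{n-2}$ to be the preimage of $k \bar{v}$ in $V$. Iterating yields subspaces

  $V = V_0 \supset V_1 \supset \cdots \supset V_{n-1} \supset V_n = 0, \qquad \dim V_i / V_{i+1} = 1,$

each stabilized by $\pi(\mathfrak{g})$, which is exactly the flag in the theorem.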

Step 2: Find an ideal $\mathfrak{h}$ of codimension one in $\mathfrak{g}$.

Let $D\mathfrak{g} = [\mathfrak{g}, \mathfrak{g}]$ be the derived algebra. Since $\mathfrak{g}$ is solvable and has positive dimension, $D\mathfrak{g} \neq \mathfrak{g}$, and so the quotient $\mathfrak{g}/D\mathfrak{g}$ is a nonzero abelian Lie algebra, which certainly contains an ideal of codimension one; by the ideal correspondence, it corresponds to an ideal $\mathfrak{h}$ of codimension one in $\mathfrak{g}$.
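In the two-dimensional example from the introduction, $D\mathfrak{g} = k e$ already has codimension one, so one may simply take $\mathfrak{h} = k e$.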

Step 3: There exists some linear functional $\lambda \in \mathfrak{h}^*$ such that the weight space

  $V_\lambda = \{\, v \in V \mid X \cdot v = \lambda(X)\, v \ \text{for all}\ X \in \mathfrak{h} \,\}$

is nonzero.

This follows from the inductive hypothesis applied to $\mathfrak{h}$ (it is easy to check that the eigenvalues of a common eigenvector determine a linear functional $\lambda$ on $\mathfrak{h}$).
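In the running example, $\mathfrak{h} = k e$ acts on $k^2$ by the nilpotent matrix $e$, so the only possible weight is $\lambda = 0$, and $V_\lambda = \ker e = k e_1 \neq 0$.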

Step 4: $V_\lambda$ is a $\mathfrak{g}$-submodule of $V$.

(Note this step proves a general fact and does not involve solvability.)
Let $v$ be in $V_\lambda$, let $X$ be in $\mathfrak{g}$, and set recursively $v_0 = v$, $v_{i+1} = X \cdot v_i$; let $U = \operatorname{span}\{\, v_i \mid i \ge 0 \,\}$. For any $Y \in \mathfrak{h}$, since $\mathfrak{h}$ is an ideal,

  $Y \cdot v_i \equiv \lambda(Y)\, v_i \pmod{\operatorname{span}\{ v_0, \dots, v_{i-1} \}}.$

This says that $Y$ (that is, $\pi(Y)$) restricted to $U$ is represented by an upper triangular matrix whose diagonal is $\lambda(Y)$ repeated. Hence, $\dim(U)\,\lambda([X, Y]) = \operatorname{tr}\big(\pi([X, Y])|_U\big) = \operatorname{tr}\big([\pi(X)|_U, \pi(Y)|_U]\big) = 0$. Since $\dim(U)$ is invertible in the base field (this is where the assumption on the characteristic is used), $\lambda([X, Y]) = 0$, and then for every $Y \in \mathfrak{h}$,
$Y \cdot (X \cdot v) = X \cdot (Y \cdot v) + [Y, X] \cdot v = \lambda(Y)(X \cdot v) + \lambda([Y, X])\, v = \lambda(Y)(X \cdot v)$;
that is, $X \cdot v$ lies in $V_\lambda$, so $X \cdot v$ is again a common eigenvector for $\mathfrak{h}$.
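To illustrate with the two-dimensional example: take $v = e_1 \in V_\lambda$ and $X = h$; then $v_0 = e_1$ and $h \cdot e_1 = e_1$, so $U = k e_1$, the conclusion $\lambda([h, e]) = \lambda(e) = 0$ is immediate, and indeed $h \cdot e_1 = e_1$ remains in $V_\lambda$.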

Step 5: Finish up the proof by finding a common eigenvector.

Write $\mathfrak{g} = \mathfrak{h} \oplus L$ where $L$ is a one-dimensional vector subspace. Since $V_\lambda$ is stable under $\mathfrak{g}$ by Step 4 and the base field $k$ is algebraically closed, there exists an eigenvector in $V_\lambda$ for some (thus every) nonzero element of $L$. Since that vector is also an eigenvector for each element of $\mathfrak{h}$, the proof is complete.
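In the two-dimensional example, one may take $L = k h$; the vector $e_1 \in V_\lambda$ is an eigenvector of $h$, hence a common eigenvector for all of $\mathfrak{g} = \mathfrak{h} \oplus L$, recovering the flag $k^2 \supset k e_1 \supset 0$ exhibited in the introduction.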

Consequences

The theorem applies in particular to the adjoint representation $\operatorname{ad} \colon \mathfrak{g} \to \mathfrak{gl}(\mathfrak{g})$ of a (finite-dimensional) solvable Lie algebra $\mathfrak{g}$ over an algebraically closed field of characteristic zero; thus, one can choose a basis of $\mathfrak{g}$ with respect to which $\operatorname{ad}(\mathfrak{g})$ consists of upper triangular matrices. It follows easily that for each $x \in [\mathfrak{g}, \mathfrak{g}]$, the matrix of $\operatorname{ad}(x)$ has diagonal consisting of zeros; i.e., $\operatorname{ad}(x)$ is a nilpotent matrix. By Engel's theorem, this implies that $[\mathfrak{g}, \mathfrak{g}]$ is a nilpotent Lie algebra; the converse is obviously true as well. Moreover, whether a linear transformation is nilpotent or not can be determined after extending the base field to its algebraic closure. Hence, one concludes the statement:[4]

A finite-dimensional Lie algebra over a field of characteristic zero is solvable if and only if the derived algebra is nilpotent.
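For instance, the Lie algebra $\mathfrak{b}_n \subset \mathfrak{gl}_n$ of upper triangular $n \times n$ matrices over a field of characteristic zero is solvable, and its derived algebra

  $[\mathfrak{b}_n, \mathfrak{b}_n] = \mathfrak{n}_n,$

the algebra of strictly upper triangular matrices, is nilpotent. By contrast, $[\mathfrak{sl}_2, \mathfrak{sl}_2] = \mathfrak{sl}_2$ is not nilpotent, and $\mathfrak{sl}_2$ is indeed not solvable.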

Lie's theorem also establishes one direction in Cartan's criterion for solvability: if $V$ is a finite-dimensional vector space over a field of characteristic zero and $\mathfrak{g} \subseteq \mathfrak{gl}(V)$ is a Lie subalgebra, then $\mathfrak{g}$ is solvable if and only if $\operatorname{tr}(XY) = 0$ for every $X \in \mathfrak{g}$ and $Y \in [\mathfrak{g}, \mathfrak{g}]$.[5]

Indeed, as above, after extending the base field the forward implication (solvability implies the trace condition) is seen easily. (The converse is more difficult to prove.)
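In more detail: after extending scalars to the algebraic closure, Lie's theorem yields a basis of $V$ in which every element of $\mathfrak{g}$ is upper triangular; every element of $[\mathfrak{g}, \mathfrak{g}]$ is then strictly upper triangular, so for $X \in \mathfrak{g}$ and $Y \in [\mathfrak{g}, \mathfrak{g}]$ the product $XY$ is strictly upper triangular as well and

  $\operatorname{tr}(XY) = 0.$

Since the trace is unchanged by extension of the base field, the identity holds over the original field too.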

Lie's theorem (for various V) is equivalent to the statement:[6]

For a solvable Lie algebra $\mathfrak{g}$ over an algebraically closed field of characteristic zero, each finite-dimensional simple $\mathfrak{g}$-module (i.e., irreducible as a representation) has dimension one.

Indeed, Lie's theorem clearly implies this statement. Conversely, assume the statement is true. Given a finite-dimensional $\mathfrak{g}$-module $V$, let $V'$ be a maximal proper $\mathfrak{g}$-submodule (which exists by finiteness of the dimension). Then, by maximality, $V/V'$ is simple; thus, it is one-dimensional. Induction on $\dim V$, applied to $V'$, now finishes the proof.

The statement says in particular that a finite-dimensional simple module over an abelian Lie algebra is one-dimensional; this fact remains true without the assumption that the base field has characteristic zero.[7]

Here is another quite useful application:[8]

Let $\mathfrak{g}$ be a finite-dimensional Lie algebra over an algebraically closed field of characteristic zero with radical $\operatorname{rad}(\mathfrak{g})$. Then each finite-dimensional simple representation $\pi \colon \mathfrak{g} \to \mathfrak{gl}(V)$ is the tensor product of a simple representation of $\mathfrak{g}/\operatorname{rad}(\mathfrak{g})$ with a one-dimensional representation of $\mathfrak{g}$ (i.e., a linear functional vanishing on Lie brackets).

By Lie's theorem, we can find a linear functional $\lambda$ of $\operatorname{rad}(\mathfrak{g})$ so that the weight space $V_\lambda$ of $\operatorname{rad}(\mathfrak{g})$ is nonzero. By Step 4 of the proof of Lie's theorem, $V_\lambda$ is also a $\mathfrak{g}$-module; so $V = V_\lambda$ by the simplicity of $V$. In particular, for each $X \in \operatorname{rad}(\mathfrak{g})$, $\pi(X) = \lambda(X) \operatorname{id}_V$. Extend $\lambda$ to a linear functional $\lambda'$ on $\mathfrak{g}$ that vanishes on $[\mathfrak{g}, \mathfrak{g}]$ (this is possible since, by Step 4, $\lambda$ vanishes on $[\mathfrak{g}, \operatorname{rad}(\mathfrak{g})]$, which equals $[\mathfrak{g}, \mathfrak{g}] \cap \operatorname{rad}(\mathfrak{g})$ by the Levi decomposition); $\lambda'$ is then a one-dimensional representation of $\mathfrak{g}$. Now, $\pi \simeq (\pi \otimes (-\lambda')) \otimes \lambda'$. Since $\lambda'$ coincides with $\lambda$ on $\operatorname{rad}(\mathfrak{g})$, we have that $\pi \otimes (-\lambda')$ is trivial on $\operatorname{rad}(\mathfrak{g})$ and thus it is the restriction of a (simple) representation of $\mathfrak{g}/\operatorname{rad}(\mathfrak{g})$.
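For example, for $\mathfrak{g} = \mathfrak{gl}_n$ ($n \ge 2$) over an algebraically closed field $k$ of characteristic zero, the radical is the center $k I_n$ and $\mathfrak{g}/\operatorname{rad}(\mathfrak{g}) \cong \mathfrak{sl}_n$; the one-dimensional representations of $\mathfrak{gl}_n$ are the functionals $X \mapsto c \operatorname{tr}(X)$ with $c \in k$, so every finite-dimensional simple representation of $\mathfrak{gl}_n$ has the form

  $W \otimes (c \cdot \operatorname{tr}),$

where $W$ is a finite-dimensional simple representation of $\mathfrak{sl}_n$.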

References

  1. Serre, Theorem 3
  2. Humphreys, Ch. II, § 4.1., Corollary A.
  3. Serre, Theorem 3
  4. Humphreys, Ch. II, § 4.1., Corollary C.
  5. Serre, Theorem 4
  6. Serre, Theorem 3'
  7. Jacobson, Ch. II, § 6, Lemma 5.
  8. Fulton & Harris, Proposition 9.17.

Sources

  • Fulton, William; Harris, Joe (1991). Representation theory. A first course. Graduate Texts in Mathematics, Readings in Mathematics. 129. New York: Springer-Verlag. doi:10.1007/978-1-4612-0979-9. ISBN 978-0-387-97495-8. MR 1153249. OCLC 246650103.
  • Humphreys, James E. (1972), Introduction to Lie Algebras and Representation Theory, Berlin, New York: Springer-Verlag, ISBN 978-0-387-90053-7.
  • Jacobson, Nathan (1979), Lie Algebras, New York: Dover Publications (republication of the 1962 original), ISBN 0-486-63832-4.
  • Serre, Jean-Pierre (2001), Complex Semisimple Lie Algebras, Berlin: Springer, ISBN 3-540-67827-1.