# Fredholm Endomorphisms of Index 0

A while ago I posted a note on the arXiv about my attempt at applying the theory of Fredholm operators from functional analysis to a more general algebraic context. I wanted to work through an argument here because it seemed to have a suspiciously nice proof; I’ve learned to be skeptical of such proofs.

Let $f$ be a linear transformation of an infinite-dimensional vector space $V$ with basis $\{e_i\}_{i \in \mathbb{N}}$. We can break up the vector space into four subspaces (two in the domain and two in the codomain). First are the kernel, which is a subspace of the domain, and the image of $f$, which is a subspace of the codomain; we’ll denote these by the typical $\ker(f)$ and $\operatorname{im}(f)$. For the purposes of intuition about the topic, you can think of the kernel of a linear transformation as a measurement of how far $f$ is from being injective. In the codomain, there is a similar measurement for surjectivity called the cokernel, which is defined by $\operatorname{coker}(f) = V/\operatorname{im}(f)$. The final subspace is $V/\ker(f)$, the quotient space which the first isomorphism theorem assures us is isomorphic to the image.
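
These dimensions are easy to experiment with in finite dimensions. Here is a minimal sketch (my own illustration using NumPy, not from the note) computing the dimensions of the kernel and cokernel of a matrix from its rank:

```python
import numpy as np

def dim_ker(A):
    """dim ker(A) = (number of columns) - rank(A), by rank-nullity."""
    return A.shape[1] - np.linalg.matrix_rank(A)

def dim_coker(A):
    """dim coker(A) = dim(codomain / im(A)) = (number of rows) - rank(A)."""
    return A.shape[0] - np.linalg.matrix_rank(A)

# A 3x3 projection onto the first two coordinates: it fails to be
# injective (kills e_2) and fails to be surjective (misses e_2).
A = np.diag([1.0, 1.0, 0.0])
print(dim_ker(A), dim_coker(A))  # -> 1 1
```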

The goal of the above article is to provide a classification of endomorphisms which are close to invertible, but not necessarily invertible. The measurement for this that the paper proposes is inspired by the Fredholm index of a bounded operator. Before getting to that, let’s remind ourselves of the setting: $B$ will denote the family of matrices, indexed by $\mathbb{N} \times \mathbb{N}$, which have only finitely many nonzero entries in any row or column. This algebra has a unique minimal ideal $M$, the matrices which have only finitely many nonzero entries in total. The quotient algebra $B/M$ will be denoted by $Q$. While these three algebras are useful for building intuition, they will be less useful for the following arguments. For those we’ll be using their endomorphism analogues.

Given a linear transformation $f$ of a countable-dimensional vector space $V$ with basis $\{e_i\}_{i \in \mathbb{N}}$, there is a natural representation of $f$ as a column-finite matrix $[f]$, that is, an infinite matrix in which every column has only finitely many nonzero entries.

The endomorphism-ring analogue of $B$ is the ring of bounded endomorphisms, $\mathcal{B}(V)$. For the given vector space $V$, define a descending sequence of subspaces by

$$V_n = \operatorname{span}\{e_i : i \geq n\}, \qquad V = V_0 \supseteq V_1 \supseteq V_2 \supseteq \cdots.$$

Then an endomorphism $f$ is *bounded* if it has the property that for any $n$, there is some $m$ with the property that $f(V_m) \subseteq V_n$. To quote the paper where I first heard about this endomorphism ring, “A moment’s reflection on the standard correspondence between representations of endomorphisms as matrices confirms that

$$\mathcal{B}(V) \cong B.$$”
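
To see the boundedness condition in action, here is a sketch under an assumed encoding (mine, not the post’s): an endomorphism is given column-by-column, with `column(i)` returning a dict `{k: c}` meaning $f(e_i) = \sum_k c_k e_k$, and we spot-check the condition $f(V_m) \subseteq V_n$ on a finite sample of basis vectors.

```python
def satisfies(column, n, m, i_max=200):
    """Finite spot-check of f(V_m) <= V_n: every sampled basis vector e_i
    with i >= m must land in span{e_k : k >= n}."""
    return all(all(k >= n for k in column(i)) for i in range(m, i_max))

def bounded_witness(column, n, m_max=200):
    """Search for an m witnessing f(V_m) <= V_n; return it, or None."""
    for m in range(m_max):
        if satisfies(column, n, m):
            return m
    return None

shift = lambda i: {i + 1: 1}    # forward shift: e_i -> e_{i+1}
collapse = lambda i: {0: 1}     # e_i -> e_0: column-finite but NOT bounded

print(bounded_witness(shift, n=5))     # -> 4, since the shift sends V_4 into V_5
print(bounded_witness(collapse, n=1))  # -> None: the image never leaves span{e_0}
```

The `collapse` example is exactly a column-finite matrix that fails to be row-finite (its first row is all ones), matching the quoted correspondence with $B$.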

In a similar way, we’ll call $\mathcal{F}(V)$ the (non-unital) subalgebra of $\mathcal{B}(V)$ which consists only of the bounded endomorphisms of finite rank, and assert that $\mathcal{F}(V) \cong M$. Just as above, $\mathcal{F}(V)$ is a minimal ideal of $\mathcal{B}(V)$.

Returning to infinite matrices, but feeling comfortable that we can transfer between matrices and endomorphisms, a matrix $A \in B$ is called *Fredholm* if its image is invertible in $Q = B/M$.

**Lemma:**

If an endomorphism $f \in \mathcal{B}(V)$ is Fredholm (that is, invertible modulo $\mathcal{F}(V)$), then $\ker(f)$ and $\operatorname{coker}(f)$ are both finite dimensional.

Since both of these spaces are finite dimensional, and intuition tells us that the kernel and cokernel measure the extent to which an endomorphism fails to be injective and surjective respectively, we define the *index* of a Fredholm endomorphism $f$ to be

$$\operatorname{ind}(f) = \dim \ker(f) - \dim \operatorname{coker}(f).$$

Is this a blunt instrument for measuring endomorphisms? Sure, but its simplicity has some facility. For $j \geq 0$ define $S_j$ to be the operator which shifts the entries of a vector forward by $j$ entries, replacing the now-vacant entries with zeroes; on basis vectors, $S_j(e_i) = e_{i+j}$. A moment’s thought shows us that $\operatorname{ind}(S_j) = -j$: the shift is injective, and its cokernel is spanned by the $j$ vacated basis vectors. One can similarly define a “backward shift” by $S_j(e_i) = e_{i+j}$ for $j < 0$ (with $S_j(e_i) = 0$ when $i + j < 0$) and show that $\operatorname{ind}(S_j) = -j$ here as well. Note that $S_0$ is the same as the identity endomorphism on $V$.
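
The index of the forward shift can be checked on finite truncations, provided we truncate rectangularly so the shift loses no information (again my own illustration, not from the note):

```python
import numpy as np

def shift_truncation(j, n):
    """The forward shift S_j cut down to a map span{e_0..e_{n-1}} ->
    span{e_0..e_{n+j-1}}: an (n+j) x n matrix with A[i+j, i] = 1."""
    A = np.zeros((n + j, n))
    for i in range(n):
        A[i + j, i] = 1.0
    return A

def index(A):
    """dim ker - dim coker via rank-nullity; note the ranks cancel, so
    for a rectangular matrix this is just (columns - rows)."""
    r = np.linalg.matrix_rank(A)
    return (A.shape[1] - r) - (A.shape[0] - r)

for j in range(4):
    print(j, index(shift_truncation(j, 10)))  # index -j, whatever n is
```

That the ranks cancel out of `index` is a finite shadow of why the index is such a stable invariant: it doesn’t care how the failure of invertibility is split between kernel and cokernel.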

This post is already getting a bit long, so let’s end it with the following proposition which has the aforementioned easy proof.

**Proposition:**

A Fredholm endomorphism has index zero if and only if it can be written as the sum of an invertible endomorphism from $\mathcal{B}(V)$ and an endomorphism from $\mathcal{F}(V)$.

**Proof:** The backwards direction is the proof of Proposition 2.9 in my aforementioned note. So let $f$ be a Fredholm endomorphism of index zero. Then, necessarily, the kernel and cokernel of $f$ are isomorphic. Let $\varphi$ be an isomorphism between $\ker(f)$ and $\operatorname{coker}(f)$, both finite dimensional, where we identify $\operatorname{coker}(f)$ with a subspace $W \subseteq V$ complementary to $\operatorname{im}(f)$. The first isomorphism theorem from linear algebra also provides that $\bar{f} \colon V/\ker(f) \to \operatorname{im}(f)$ is an isomorphism, where $\bar{f}$ is the restriction of $f$ to a subspace complementary to $\ker(f)$ (you can also think about it as $f$ applied to cosets). Then, writing $V = \ker(f) \oplus V/\ker(f)$, define an endomorphism $g = \varphi \oplus \bar{f}$. As it is a coproduct of injective endomorphisms which takes $V$ onto $W \oplus \operatorname{im}(f) = V$, this is an isomorphism, i.e. an invertible endomorphism from $\mathcal{B}(V)$.

It’s straightforward to see that both $\varphi$ and $\bar{f}$ are bounded, hence their coproduct $g$ is also. Moreover, $f - g$ vanishes off of $\ker(f)$, so $\operatorname{im}(f - g) \subseteq W$, which is finite dimensional; that is, $f - g \in \mathcal{F}(V)$ and $f = g + (f - g)$. Hence the claim.
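
The construction in the proof can be mirrored in a tiny finite-dimensional example (my own, not from the note): take a singular square matrix $f$ of index zero, patch $\ker(f)$ onto the complement $W$ of the image via an isomorphism $\varphi$, and check that $g = \varphi \oplus \bar{f}$ is invertible while $f - g$ has finite rank.

```python
import numpy as np

# f: projection of R^3 onto the first two coordinates.  Its kernel is
# span{e_2} and its image is span{e_0, e_1}, so ind(f) = 1 - 1 = 0.
f = np.diag([1.0, 1.0, 0.0])

# phi: an isomorphism ker(f) -> W, where W = span{e_2} complements im(f).
# Any nonzero scalar works; we use 5 to emphasize g need not be the identity.
phi = np.zeros((3, 3))
phi[2, 2] = 5.0

g = f + phi   # g = phi (+) f-bar in the proof's notation

print(np.linalg.det(g))                   # nonzero: g is invertible
print(np.linalg.matrix_rank(f - g))       # 1: f - g is finite rank
```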

I have a feeling that this proof is too easy, but I can’t put my finger on what is wrong with it. Any ideas? My only thought is that possibly the redefinition of $V$ as $\ker(f) \oplus V/\ker(f)$ involves a basis change which makes $g$ not bounded? But I’m not sure about that…
