# Category Archives: Topology

# Simplices and Simplicial Complexes

One thing that comes up a lot in homology is the idea of simplices and simplicial complexes. They’re interesting in their own right, and they’re one more thing that we can talk about that will help make understanding homology and homological chain complexes easier when we get to them.

A simplex is a member of an interesting family of *filled* geometric figures. Basically, a simplex is an N-dimensional analogue of a triangle. So a 1-simplex is a line segment; a 2-simplex is a triangle; a 3-simplex is a tetrahedron; a 4-simplex is a pentachoron. (That cool image to the right is a projection of a rotating pentachoron from Wikipedia.) If the lengths of the sides of the simplex are equal, it’s called a *regular simplex*.
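The counting behind these figures is easy to sketch: an N-simplex has N+1 vertices, and every choice of k+1 of them spans a k-dimensional face. Here’s a minimal Python illustration of that count (the function name `face_count` is just for this example):

```python
from math import comb

def face_count(n: int, k: int) -> int:
    """Number of k-dimensional faces of an n-simplex.

    An n-simplex has n+1 vertices, and every choice of k+1 of them
    spans a k-dimensional face, so the count is C(n+1, k+1).
    """
    return comb(n + 1, k + 1)

# A tetrahedron (3-simplex): 4 vertices, 6 edges, 4 triangular faces.
print([face_count(3, k) for k in range(3)])  # [4, 6, 4]
```

Applying the same count to the 4-simplex gives 5 vertices, 10 edges, 10 triangles, and 5 tetrahedral cells.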

# Homotopy

I’ve been working on a couple of articles talking about homology, which is an interesting (but difficult) topic in algebraic topology. While I was writing, I used a metaphor with a technique that’s used in homotopy, and realized that while I’ve referred to it obliquely, I’ve never actually talked about homotopy.

When we talked about homeomorphisms, we talked about how two spaces are homeomorphic (aka topologically equivalent) if and only if one can be *continuously deformed* into the other – that is, roughly speaking, transformed by bending, twisting, stretching, or squashing, so long as nothing gets torn.

Homotopy is a formal equivalent of homeomorphism for *functions* between topological spaces, rather than between the spaces themselves. Two continuous functions f and g are *homotopic* if and only if f can be continuously transformed into g.

The neat thing about the formal definition of homotopy is that it finally gives us a strong formal handle on what this *continuous deformation* stuff means in strictly formal terms.

So, let’s dive in and hit the formalism.

Suppose we’ve got two topological spaces, **S** and **T**, and two continuous functions f,g:**S**→**T**. A homotopy is a *continuous* function *h* which associates every value in the unit interval [0,1] with a function from **S** to **T**. So we can treat *h* as a continuous function from **S**×[0,1]→**T**, where ∀x:h(x,0)=f(x) and h(x,1)=g(x). For any given value x, then, h(x,·) is a curve from f(x) to g(x).
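As a concrete illustration, when both functions map into ℜ^n there’s an especially simple homotopy: just slide each image point along the straight line from f(x) to g(x). A minimal Python sketch, representing points as tuples (purely for illustration):

```python
def straight_line_homotopy(f, g):
    """Given two maps f, g from some space into R^n (represented
    here as functions returning tuples), build the straight-line
    homotopy h(x, t) = (1 - t)*f(x) + t*g(x), so that
    h(x, 0) = f(x) and h(x, 1) = g(x)."""
    def h(x, t):
        return tuple((1 - t) * a + t * b for a, b in zip(f(x), g(x)))
    return h

# Deform the parabola (x, x^2) into the line (2x, 0):
f = lambda x: (x, x ** 2)
g = lambda x: (2 * x, 0.0)
h = straight_line_homotopy(f, g)
assert h(1.0, 0.0) == (1.0, 1.0)   # at t = 0 we recover f
assert h(1.0, 1.0) == (2.0, 0.0)   # at t = 1 we recover g
```

For each fixed x, `h(x, ·)` traces exactly the curve from f(x) to g(x) described above.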

Expressed simply, the homotopy is a function that precisely describes the transformation between the two homotopic functions. Homotopy defines an *equivalence relation* between continuous functions: two continuous functions between topological spaces are homotopic if and only if there is a homotopy between them. *(This paragraph originally included an extremely confusing typo – in the first sentence, I repeatedly wrote “homology” where I meant “homotopy”. Thanks to commenter elspi for the catch!)*

We can also define a type of homotopy equivalence between topological spaces. Suppose again that we have two topological spaces **S** and **T**. **S** and **T** are homotopically equivalent if there are continuous functions f:**S**→**T** and g:**T**→**S** where gºf is homotopic to the identity function for S, 1_{S}, and fºg is homotopic to the identity function for T, 1_{T}. The functions f and g are called *homotopy equivalences*.

This gives us a nice way of really formalizing the idea of continuous deformation of *spaces* in homeomorphism – every homeomorphism is also a homotopy equivalence. But the implication doesn’t run both ways – there are homotopy equivalences that are *not* homeomorphisms.

The reason why is interesting: if you look at our homotopy definition, the equivalence is based on continuous deformations – *including* contraction. So, for example, a ball is not homeomorphic to a point – but it *is* homotopically equivalent. The contraction all the way from the ball to the point doesn’t violate anything about the homotopical equivalence. In fact, there’s a special name for the set of topological spaces that are homotopically equivalent to a single point: they’re called *contractible* spaces. *(Originally, I erroneously wrote “sphere” instead of “ball” in this paragraph. I can’t even blame it on a typo – I just screwed up. Thanks to commenter John Armstrong for the catch.)*
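That contraction can even be written down explicitly: it’s the straight-line homotopy between the identity map and the constant map to the origin. A tiny Python sketch (the name `contraction` is just illustrative):

```python
def contraction(x, t):
    """The straight-line homotopy h(x, t) = (1 - t)*x between the
    identity map (t = 0) and the constant map to the origin (t = 1),
    witnessing that the ball (or all of R^n) is contractible."""
    return tuple((1 - t) * a for a in x)

p = (0.5, -0.25, 0.125)
assert contraction(p, 0) == p                 # start: the identity map
assert contraction(p, 1) == (0.0, 0.0, 0.0)   # end: everything at the origin
```

Every intermediate time t just shrinks the ball toward its center, which is exactly the kind of deformation homeomorphism forbids but homotopy equivalence allows.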

**Addendum:** Commenter elspi mentioned another wonderful example of a homotopy equivalence that isn’t a homeomorphism, and I thought it was a good enough example that I wish I’d included it in the original post, so I’m promoting it here. The Möbius band is homotopically equivalent to a circle – contract the band down to its central circle, and the twist “disappears” and you’ve got a circle. But it’s pretty obvious that the Möbius band is *not* homeomorphic to a circle! Thanks again, elspi – great example!

# Building Towards Homology: Vector Spaces and Modules

One of the more advanced topics in topology that I’d like to get to is *homology*. Homology is a major topic that goes beyond just algebraic topology, and it’s really very interesting. But to understand it, it’s useful to have an understanding of some basics that I’ve never written about. In particular, homology uses *chains* of *modules*. Modules, in turn, are a generalization of the idea of a *vector space*. I said a little bit about vector spaces when I was writing about the gluing axiom, but I wasn’t complete or formal in my description of them. (Not to mention the amount of confusion that I caused by sloppy writing in those posts!) So I think it’s a good idea to cover the idea in a fresh setting here.

So, what’s a vector space? It’s yet another kind of abstract algebra. In this case, it’s an algebra built on top of a field (like the real numbers), whose values are a set of objects equipped with two operations: addition of two vectors, and *scaling* a vector by a value from the field.

To define a vector space, we start by taking something like the real numbers: a set whose values form a *field*. We’ll call that basic field F, and the elements of F we’ll call *scalars*. We can then define a *vector space over F* as a set **V** whose members are called *vectors*, and which has two operations:

- **Vector addition**: an operation mapping two vectors to a third vector, +:**V**×**V**→**V**
- **Scalar multiplication**: an operation mapping a scalar and a vector to another vector, *:F×**V**→**V**

Vector addition forms an abelian group over **V**, and scalar multiplication is distributive over vector addition and multiplication in the scalar field. To be complete, this means that the following properties hold:

**(V,+) is an Abelian group:**

- Vector addition is associative: ∀a,b,c∈**V**: a+(b+c)=(a+b)+c
- Vector addition has an identity element *0*: ∀a∈**V**: a+0=0+a=a
- Vector addition has inverse elements: ∀a∈**V**: ∃b∈**V**: a+b=0. The additive inverse of a vector a is normally written -a. *(Up to this point, this defines (**V**,+) as a group.)*
- Vector addition is commutative: ∀a,b∈**V**: a+b=b+a. *(The addition of this commutative rule is what makes it an Abelian group.)*

**Scalar multiplication is distributive:**

- Scalar multiplication is distributive over vector addition: ∀a∈F, ∀b,c∈**V**: a*(b+c)=a*b+a*c
- Scalar multiplication is distributive over addition in F: ∀a,b∈F, ∀c∈**V**: (a+b)*c=(a*c)+(b*c)
- Scalar multiplication is associative with multiplication in F: ∀a,b∈F, ∀c∈**V**: (a*b)*c=a*(b*c)
- The multiplicative identity for multiplication in F is also the identity element for scalar multiplication: ∀a∈**V**: 1*a=a

So what does all of this mean? It really means that a vector space is a structure over a field where the elements can be added (vector addition) or scaled (scalar multiplication). Hey, isn’t that exactly what I said at the beginning?

One obvious example of a vector space is a Euclidean space. Vectors are arrows from the origin to some point in the space, so they can be represented as ordered tuples. For example, ℜ^{3} is the three-dimensional euclidean space; points (x,y,z) are vectors. Vector addition is (a,b,c)+(d,e,f)=(a+d,b+e,c+f), and scalar multiplication is x*(a,b,c)=(x*a,x*b,x*c).
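To make that example concrete, here’s a minimal Python sketch of ℜ^{3} as a vector space, with assertions checking a few of the axioms above (the helper names `vadd` and `smul` are just for illustration):

```python
def vadd(u, v):
    """Vector addition in R^3: componentwise sum."""
    return tuple(a + b for a, b in zip(u, v))

def smul(x, v):
    """Scalar multiplication: scale every component by x."""
    return tuple(x * a for a in v)

u, v, w = (1, 2, 3), (4, 5, 6), (7, 8, 9)
zero = (0, 0, 0)

assert vadd(u, v) == vadd(v, u)                        # commutative
assert vadd(u, vadd(v, w)) == vadd(vadd(u, v), w)      # associative
assert vadd(u, zero) == u                              # additive identity
assert vadd(u, smul(-1, u)) == zero                    # additive inverse
assert smul(2, vadd(u, v)) == vadd(smul(2, u), smul(2, v))  # distributive
assert smul(1, u) == u                                 # scalar identity
```

The assertions only spot-check the axioms on particular vectors, of course – the point is just to see each axiom in action.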

Following the same basic idea as the euclidean spaces, we can generalize to the matrices of any particular size, which also form a vector space. There are also ways of creating vector spaces using polynomials, various kinds of functions, differential equations, etc.

In homology, we’ll actually be interested in *modules*. A module is just a generalization of the idea of a vector space: instead of taking its scalars from a field, the way a vector space does, a module takes its scalars from a general ring, which is less constrained. A field is a commutative ring with multiplicative inverses for all values except 0, and distinct additive and multiplicative identities. So a module does *not* require multiplicative inverses for the scalars, nor does it require scalar multiplication to be commutative.
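To make the difference concrete, here’s a small illustrative sketch treating integer tuples as a module over the ring of integers: we can still add and scale, but since most integers have no multiplicative inverse, a scaling generally can’t be undone:

```python
def madd(u, v):
    """Module addition in Z^2: componentwise integer sum."""
    return tuple(a + b for a, b in zip(u, v))

def msmul(r, v):
    """Scaling by a ring element r drawn from Z (not a field!)."""
    return tuple(r * a for a in v)

u = (1, 2)
doubled = msmul(2, u)
assert doubled == (2, 4)

# In a vector space over the rationals we could undo the scaling
# with the scalar 1/2; in Z no such scalar exists, so no integer r
# recovers u from its double:
assert all(msmul(r, doubled) != u for r in range(-100, 101))
```

The module axioms all still hold here; what’s lost, compared to a vector space, is exactly the ability to divide by scalars.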

# Twisted Spaces: Fiber Bundles

It’s been a while since I’ve written a topology post. Rest assured – there’s plenty more topology to come. For instance, today, I’m going to talk about something called a *fiber bundle*. I like to say that a fiber bundle is a cross between a product and a manifold. (There’s a bit of a geeky pun in there, but it’s too pathetic to explain.)

The idea of a fiber bundle is very similar to the idea of a manifold. Remember, a manifold is a topological space where every point is inside of a neighborhood that *appears* to be euclidean, but the space as a whole may be very *non-*euclidean. There are all sorts of interesting things that you can do in a manifold because of that property of being *locally* almost-euclidean – things like calculus.

A fiber bundle is based on a similar sort of idea: a *local* property that does *not* necessarily hold globally. But instead of the local property being a property of individual points, it’s a property of *regions* of the space.

So what is a fiber bundle, and why should we care? It’s something that looks *almost* like a product of two topological spaces. The space can be divided into regions, each of which is a small piece of a product space – but the space as a whole may be twisted in all sorts of ways that would be impossible for a true product space.
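The classic concrete example of that twisting is the Möbius band: over any small arc of the base circle it looks just like the product of the arc with a line segment, but globally the segment flips as you go around. A small, purely illustrative Python sketch comparing the true product (a cylinder) with the twisted bundle:

```python
from math import cos, sin, pi

def cylinder_point(theta, s):
    """Point on the true product space S^1 x [-1, 1]: the fiber
    segment points straight up everywhere, with no twist."""
    return (cos(theta), sin(theta), s)

def mobius_point(theta, s):
    """Point on the Mobius band: locally it looks like the same
    product, but the fiber rotates by theta/2 as theta goes around
    the base circle, so a full loop brings the fiber back flipped."""
    r = 1 + s * cos(theta / 2)
    return (r * cos(theta), r * sin(theta), s * sin(theta / 2))

# After one full trip around the base circle, the fiber has
# reversed: the point at fiber coordinate s lands (up to floating
# point error) where fiber coordinate -s started.
p0 = mobius_point(0.0, 0.5)
p1 = mobius_point(2 * pi, -0.5)
assert all(abs(a - b) < 1e-9 for a, b in zip(p0, p1))
```

On the cylinder no such flip happens, which is exactly the difference between a true product space and a twisted bundle over the same base with the same fiber.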

# Another Example Sheaf: Vector Fields on Manifolds

There’s another classic example of sheaves; this one is restricted to manifolds, rather than general topological spaces. But it provides the key to why we can do calculus on a manifold. For any manifold, there is a sheaf of *vector fields* over the manifold.

# Examples of Sheaves

Since the posts on sheaves have been more than a bit confusing, I’m going to take the time to go through a couple of examples of real sheaves that are used in algebraic topology and related fields. Today’s example will be the most canonical one: the sheaf of continuous functions over a topological space. This can be done for *any* topological space, because continuous functions on a topological space always restrict and glue together in exactly the way a sheaf requires.

# A Second Stab at Sheaves

I’ve mostly been taking it easy this week, since readership is way down during the holidays, and I’m stuck at home with my kids, who don’t generally give me a lot of time for sitting and reading math books. But I think I’ve finally got time to get back to the stuff I originally messed up about sheaves.

I’ll start by talking about the intuition behind the idea of sheaves. The basic idea of a sheaf is to provide a way of taking some local property of a topological space, and demonstrating that it holds everywhere. The classic example of this is manifolds, where the *local* property of being almost euclidean around a point is expanded to being almost euclidean around *all* points.

# Stepping Back a Moment

The topology posts have been extremely abstract lately, and from some of the questions I’ve received, I think it’s a good idea to take a moment and step back, to recall just what we’re talking about. In particular, I keep saying “a topological space is just a set with some structure” in one form or another, but I don’t think I’ve adequately maintained the *intuition* of what that means. The goal of today’s post is to try to bring back at least some of the intuition.

# Big to Small, Small to Big: Topological Properties through Sheaves (part 2)

Continuing from where we left off yesterday…

Yesterday, I managed to describe what a *presheaf* was. Today, I’m going to continue on that line, and get to what a full sheaf is.

A sheaf is a presheaf with two additional properties. The more interesting of those two properties is something called the *gluing axiom*. Remember when I was talking about manifolds, and described how you could describe manifolds by [*gluing*][glue] other manifolds together? The gluing axiom is the formal underpinnings of that gluing operation: it’s the one that justifies *why* gluing manifolds together works.

[glue]: http://scienceblogs.com/goodmath/2006/11/better_glue_for_manifolds.php