Monthly Archives: February 2007

Another Revolution in Physics Crackpottery: Electromagnetic Gravity

It’s that time again – yes, we have yet another wacko reinvention of physics that pretends to have math on its side. This time, it’s “The Electro-Magnetic Radiation Pressure Gravity Theory”, by “Engineer Xavier Borg”. (Yes, he signs all of his papers that way – it’s always with the title “Engineer”.) This one is as wacky as Neal Adams and his PMPs, except that the author seems to be less clueless.

At first I wondered if this were a hoax – I mean, “Engineer Borg”? It seems like a deliberately goofy name for someone with a crackpot theory of physics… But on reading through his web-pages, the quantity and depth of his writing have me leaning towards believing that this stuff is legit.

Continue reading

The Order of the Science Scouts of Exemplary Repute and Above Average Physique

Many of my fellow ScienceBloggers have recently declared their membership in the Order of the Science Scouts of Exemplary Repute and Above Average Physique. I’ve been busy, so I haven’t been able to get around to signing up until now. That’s a shame, since some of the badges appear to have been designed specifically for me!

Continue reading

Carnival of Mathematics is coming soon!

Just a quick reminder: the second Carnival of Mathematics is coming up this Friday, to be hosted here at GM/BM. If you’ve written any math-related articles, get me a link by Thursday at the latest. You can either send it to me here at markcc at gmail.com, or via the carnival submission form.

Building Towards Homology: Vector Spaces and Modules

One of the more advanced topics in topology that I’d like to get to is homology. Homology is a major topic that goes beyond just algebraic topology, and it’s really very interesting. But to understand it, it’s useful to have some understanding of some basics that I’ve never written about. In particular, homology uses chains of modules. Modules, in turn, are a generalization of the idea of a vector space. I said a little bit about vector spaces when I was writing about the gluing axiom, but I wasn’t complete or formal in my description of them. (Not to mention the amount of confusion that I caused by sloppy writing in those posts!) So I think it’s a good idea to cover the idea in a fresh setting here.

So, what’s a vector space? It’s yet another kind of abstract algebra. In this case, it’s an algebra built on top of a field (like the real numbers), whose values are a set of objects with two operations: addition of two vectors, and scaling of a vector by a value from the field.

To define a vector space, we start by taking something like the real numbers: a set whose values form a field. We’ll call that basic field F, and the elements of F we’ll call scalars. We can then define a vector space over F as a set V whose members are called vectors, and which has two operations:

Vector Addition
An operation mapping two vectors to a third vector, +: V×V→V
Scalar Multiplication
An operation mapping a scalar and a vector to another vector, *: F×V→V

Vector addition forms an abelian group over V, and scalar multiplication is distributive over both vector addition and addition in the scalar field. To be complete, this means that the following properties hold:

  • (V,+) is an abelian group
    • Vector addition is associative: ∀a,b,c∈V: a+(b+c)=(a+b)+c
    • Vector addition has an identity element 0: ∀a∈V: a+0=0+a=a.
    • Every vector has an additive inverse: ∀a∈V, ∃b∈V: a+b=0. The additive inverse of a vector a is normally written -a. (Up to this point, this defines (V,+) as a group.)
    • Vector addition is commutative: ∀a,b∈V: a+b=b+a. (Adding this commutativity rule is what makes the group abelian.)
  • Scalar Multiplication is Distributive
    • Scalar multiplication is distributive over vector addition: ∀a∈F, ∀b,c∈V: a*(b+c)=a*b+a*c.
    • Scalar multiplication is distributive over addition in F: ∀a,b∈F, ∀c∈V: (a+b)*c = (a*c) + (b*c).
    • Scalar multiplication is compatible with multiplication in F: ∀a,b∈F, ∀c∈V: (a*b)*c = a*(b*c).
    • The multiplicative identity for multiplication in F is also the identity element for scalar multiplication: ∀a∈V: 1*a=a.

So what does all of this mean? It really means that a vector space is a structure over a field where the elements can be added (vector addition) or scaled (scalar multiplication). Hey, isn’t that exactly what I said at the beginning?

One obvious example of a vector space is a Euclidean space. Vectors are arrows from the origin to some point in the space – and so they can be represented as ordered tuples. So for example, ℝ³ is the three-dimensional Euclidean space; points (x,y,z) are vectors. Vector addition is (a,b,c)+(d,e,f)=(a+d,b+e,c+f); and scalar multiplication is x(a,b,c)=(xa,xb,xc).
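
To make that concrete, here’s a minimal sketch in Python – my own illustration, not anything from the original post – of ℝ³ as a vector space, with tuples as vectors and floats as scalars:

```python
# A minimal sketch of R^3 as a vector space: vectors are 3-tuples of
# floats, and the scalars are the real numbers (Python floats).

def vec_add(u, v):
    """Vector addition: componentwise sum, +: V x V -> V."""
    return (u[0] + v[0], u[1] + v[1], u[2] + v[2])

def scalar_mul(x, v):
    """Scalar multiplication: scale each component, *: F x V -> V."""
    return (x * v[0], x * v[1], x * v[2])

u, v = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)

# Spot-check a couple of the axioms on these particular values:
assert vec_add(u, v) == vec_add(v, u)  # commutativity of vector addition
assert scalar_mul(2.0, vec_add(u, v)) == vec_add(
    scalar_mul(2.0, u), scalar_mul(2.0, v)
)  # distributivity of scalar multiplication over vector addition
```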

Following the same basic idea as the Euclidean spaces, we can generalize to matrices: for each fixed size, the set of matrices of that size forms a vector space. There are also ways of creating vector spaces using polynomials, various kinds of functions, differential equations, etc.

In homology, we’ll actually be interested in modules. A module is just a generalization of the idea of a vector space. But instead of taking its scalars from a field the way a vector space does, a module takes its scalars from a general ring; so the scalars are less constrained: a field is a commutative ring with multiplicative inverses for all values except 0, and distinct additive and multiplicative identities. So a module does not require multiplicative inverses for the scalars; nor does it require scalar multiplication to be commutative.
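
For a concrete feel – again my own illustration, not from the post – the integers mod 6 under addition form an abelian group, and any abelian group is a module over the ring of integers ℤ, with scalar multiplication defined as repeated addition:

```python
# A minimal sketch: the integers mod 6 as a module over the ring Z.
# Z/6Z is not a field (2 has no multiplicative inverse mod 6), so this
# is a module rather than a vector space.

MOD = 6

def add(a, b):
    """The abelian group operation: addition mod 6."""
    return (a + b) % MOD

def scale(n, g):
    """Scalar multiplication by an integer n: repeated addition of g."""
    return (n * g) % MOD  # same as adding g to itself n times, mod 6

# Distributivity over scalar addition: (2+3)*4 == 2*4 + 3*4  (mod 6)
assert scale(2 + 3, 4) == add(scale(2, 4), scale(3, 4))
# Distributivity over "vector" addition: 2*(4+5) == 2*4 + 2*5  (mod 6)
assert scale(2, add(4, 5)) == add(scale(2, 4), scale(2, 5))
```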

Basics: Optimization

Yet another term that we hear frequently, but which is often not properly understood, is optimization. What is optimization? And how does it work?

The idea of optimization is quite simple. You have some complex situation, where some variable of interest (called the target) is based on a complex relationship with some other variables. Optimization is the process of trying to find an assignment of values to those other variables (called the parameters) that produces a maximum or minimum value of the target variable, called the optimum or optimal value.

The practice of optimization is quite a bit harder. It depends greatly on the relationships between the variables. The general process of finding an optimum is called programming – not like programming a computer; the term comes from an older sense of “programming”, meaning planning or scheduling.
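
As a toy illustration – my own example, with a made-up target function – here’s the simplest possible optimization method, a brute-force grid search over a single parameter:

```python
# A toy illustration of optimization: find the parameter value x that
# minimizes a target function, here f(x) = (x - 3)^2 + 1, by searching
# a grid of candidate parameter values and keeping the best one seen.

def target(x):
    return (x - 3) ** 2 + 1

candidates = [i / 100.0 for i in range(-1000, 1001)]  # -10.00 .. 10.00
best_x = min(candidates, key=target)

# Prints roughly 3.0 and 1.0: the optimal parameter and the optimum.
print(best_x, target(best_x))
```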

Continue reading

Friday Random Ten, Feb 16

  1. Frameshift, “Walking through Genetic Space”: a track from an album inspired by the writings of Stephen Jay Gould about genetics and evolution. The leader of the project is the lead singer of Dream Theater; the end result has a very DT-like feeling to it. The album overall is quite good; but this track is a slow ballad, and a ballad about genetics just doesn’t really work.
  2. Robert Fripp and David Sylvian, “Jean the Birdman”: Fun, interesting piece of work, from a project that David Sylvian and Robert Fripp did a few years back. Sylvian’s usual crooning voice, over his and Fripp’s guitar work. Very cool.
  3. King Crimson, “Starless and Bible Black”. A track from one of my all-time favorite albums – free improv from King Crimson in the “Red” days.
  4. Gordian Knot, “Muttersprache”: instrumental neo-prog rock from Sean Malone and whoever he can get to work with him. This track features a solo by Steve Hackett, the guitarist from the early days of Genesis.
  5. Jonathan Coulton, “Mandelbrot Set”: One of the greatest math geek songs of all time. What math geek could not love a rock song that literally includes the procedure for computing the Mandelbrot set as part of the lyrics (there’s a quick code sketch of that procedure just after this list): “Take a point called Z in the complex plane/
    Let Z1 be Z squared plus C/
    And Z2 is Z1 squared plus C/
    And Z3 is Z2 squared plus C and so on/
    If the series of Z’s should always stay/
    Close to Z and never trend away/
    That point is in the Mandelbrot Set”
  6. Väsen, “Slunken”: Traditional Swedish music, prominently featuring the nyckelharpa – aka the keyed violin. Väsen are absolutely amazing if you get a chance to hear them live.
  7. Tony Trischka, “Doggy Salt”: a track off of Tony’s latest, which is mostly duets played with other banjo players, including Earl Scruggs, Bela Fleck, and Steve Martin. Pure fun – exuberant music played by amazing musicians having the time of their lives.
  8. Tan Dun, “Water Passion after St. Matthew, 1st Movement”. A new operatic passion by the Chinese composer Tan Dun. Tan Dun is one of the finest composers working today, with a great range in his composing style. If you’ve seen the movie “Hero”, the soundtrack is also his work. The Water Passion is an extremely ambitious work, and damned if it isn’t completely successful. He manages to merge bits of traditional Chinese opera, modern semitone composition, and Bach-style fugues into a coherent and beautiful piece of music.
  9. Mogwai, “Moses? I Amn’t”: You didn’t think you were going to get through one of my friday random tens without any post-rock, now did you?
  10. Igor Stravinsky, “Concertino”: chamber music from Stravinsky, one of the musical geniuses of the 20th century. It’s very interesting listening to this shortly after Tan Dun; you can hear the influence that Stravinsky had.
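
Since Coulton’s lyric literally spells out the iteration, here’s a minimal sketch of it in Python – my own illustration, not anything from the song or a post. (One nitpick worth hedging: the textbook definition starts the iteration at z = 0 and uses the point itself as the constant C, which differs slightly from the song’s recipe.)

```python
# A minimal sketch of the Mandelbrot iteration the song describes:
# iterate z -> z^2 + c; a point c is in the set if the series of z's
# stays bounded -- "close... and never trend away".

def in_mandelbrot_set(c, max_iter=100, bound=2.0):
    z = 0 + 0j
    for _ in range(max_iter):
        z = z * z + c       # "Z1 is Z squared plus C", and so on
        if abs(z) > bound:  # escaped: the series trends away
            return False
    return True             # stayed bounded for max_iter steps

print(in_mandelbrot_set(0 + 0j))  # True: the origin is in the set
print(in_mandelbrot_set(1 + 0j))  # False: 0, 1, 2, 5, 26, ... diverges
```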

Crazy Stack Games: Programming in Kipple

Insane Stacking

Today’s pathology is playing with stacks. Lots and lots of stacks. Stacks for data. Stacks for control. Stacks out the wazoo. It’s called Kipple, for no particularly good reason that I know of.

Kipple happens to be one of the pathological languages that I highly recommend trying to write some programs in. It’s crazy enough to be a challenge, but there is a basic logic to how you program in it – which makes figuring out how to write programs rewarding rather than just frustrating.

Continue reading

Not quite Basics: The Logician's Idea of Calculus

In yesterday’s basics post, I alluded to the second kind of calculus – the thing that computer scientists like me call a calculus. Multiple people have asked me to explain what our kind of calculus is.

In the worlds of computer science and logic, calculus isn’t a particular thing: it’s a kind of thing. A calculus is a sort of a logician’s automaton: a purely symbolic system with a set of rules about how to perform transformations of any valid string of symbols. The classic example is lambda calculus, which I’ve written about before, but there are numerous other calculi.
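
To make that concrete, here’s a tiny sketch in Python – my own illustration, not code from the post – of the heart of one such calculus: lambda calculus terms as nested tuples, with the single transformation rule of beta reduction:

```python
# A tiny sketch of a calculus: lambda calculus terms as nested tuples,
# with one rewrite rule (beta reduction). Terms are:
#   ('var', name)        -- a variable
#   ('lam', name, body)  -- an abstraction, "lambda name. body"
#   ('app', func, arg)   -- an application
# Variable capture is ignored for simplicity; assume all bound
# variable names are distinct.

def substitute(term, name, value):
    """Replace free occurrences of the variable `name` in `term`."""
    kind = term[0]
    if kind == 'var':
        return value if term[1] == name else term
    if kind == 'lam':
        if term[1] == name:  # `name` is rebound here; don't descend
            return term
        return ('lam', term[1], substitute(term[2], name, value))
    return ('app', substitute(term[1], name, value),
                   substitute(term[2], name, value))

def beta_step(term):
    """One beta step at the root: (lambda x. body) arg -> body[x:=arg]."""
    if term[0] == 'app' and term[1][0] == 'lam':
        _, param, body = term[1]
        return substitute(body, param, term[2])
    return term

identity = ('lam', 'x', ('var', 'x'))
print(beta_step(('app', identity, ('var', 'y'))))  # -> ('var', 'y')
```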

Continue reading

Basics: Calculus

Calculus is one of the things that’s considered terrifying by most people. In fact, I’m sure a lot of people will consider me insane for trying to write a “basics” post about something like calculus. But I’m not going to try to teach you calculus – I’m just going to try to explain very roughly what it means and what it’s for.

There are actually two different things that we call calculus – but most people are only aware of one of them. There’s the standard pairing of differential and integral calculus; and then there’s what we computer science geeks call a calculus. In this post, I’m only going to talk about the standard one; the computer science kind of calculus I’ll write about some other time.

Continue reading

Basics: Limits

One of the fundamental branches of modern math – differential and integral calculus – is based on the concept of limits. In some ways, limits are a very intuitive concept – but the formalism of limits can be extremely confusing to many people.

Limits are basically a tool that allows us to get a handle on certain kinds of equations or series that involve some kind of infinity, or some kind of value that is almost defined. The informal idea is very simple; the formalism is also pretty simple, but it’s often obscured by so much jargon that it’s hard to relate it to the intuition.
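
For reference, here’s the formalism being alluded to – the standard ε–δ definition of a limit (my addition; the excerpt above doesn’t state it):

```latex
% The epsilon-delta definition of a limit: f(x) approaches L as x
% approaches a iff f(x) can be forced arbitrarily close to L by
% taking x sufficiently close (but not equal) to a.
\lim_{x \to a} f(x) = L
  \iff
  \forall \epsilon > 0,\ \exists \delta > 0:\
  0 < |x - a| < \delta \implies |f(x) - L| < \epsilon
```

A classic “almost defined” example: (x²−1)/(x−1) is undefined at x=1, but it equals x+1 everywhere else, so its limit as x approaches 1 is 2.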

Continue reading