Thursday, October 22, 2009

Fast Inverse Square Root...

If you're not a geeky math type, there's nothing to see here.  Move along.

Wikipedia has a very readable article on the fast inverse square root algorithm, which I first heard about in connection with Quake III Arena's graphics engine.  Turns out its roots go back a bit further than I'd thought.
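For anyone who hasn't seen it, here's the heart of the trick, lightly adapted from the Quake III version the Wikipedia article walks through (I've swapped the original pointer casts for memcpy so the behavior stays well-defined on modern compilers):

```c
#include <stdint.h>
#include <string.h>

/* Approximate 1/sqrt(x): a bit-level hack produces a surprisingly good
   initial guess, and one step of Newton's method refines it. */
float fast_inv_sqrt(float x)
{
    float half = 0.5f * x;
    uint32_t i;
    memcpy(&i, &x, sizeof i);        /* reinterpret the float's bits */
    i = 0x5f3759df - (i >> 1);       /* the famous magic constant */
    memcpy(&x, &i, sizeof x);
    x = x * (1.5f - half * x * x);   /* one Newton-Raphson iteration */
    return x;
}
```

A second Newton-Raphson step (another `x = x * (1.5f - half * x * x);`) tightens the accuracy further if you need it, at the cost of a few more multiplies.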

This is a great example of a phenomenon that has largely disappeared: small, clever algorithms that found very fast ways to perform difficult (and ordinarily time-consuming) computations.  The need for these algorithms has faded with the development of ever more powerful computing platforms.  The inverse square root is now effectively a single-clock-cycle operation on gigahertz GPUs, with pipelining and multiple cores reducing the effective cost even further.

The need for these clever algorithms hasn't completely disappeared – you'll still find them in areas like cryptography and compression.  But those are far more complex algorithms than something like an inverse square root.

In my earliest days as an engineer, in the late '70s or early '80s (I forget which!), I remember working with a company here in San Diego that had a product needing 16-bit pseudo-random numbers.  One of the principal engineers there had developed a teensy little algorithm – just a few lines of assembly language, something like the sketch below – that did the job for them.  Nobody there knew how or why the algorithm worked (despite its simplicity!).  All they really knew was that (a) it passed all their testing, and (b) it was very, very fast.  That made it quite valuable to them.
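I don't recall the algorithm itself, so here's a purely hypothetical stand-in that fits the description: a 16-bit Galois LFSR compiles down to a handful of instructions, runs very fast, and cycles through all 65,535 nonzero states before repeating.

```c
#include <stdint.h>

/* Hypothetical stand-in -- NOT the actual algorithm from that firm.
   A 16-bit Galois LFSR: shift right, and whenever a 1 falls off the
   end, XOR in the tap mask.  Seed with any nonzero value. */
uint16_t lfsr16_next(uint16_t state)
{
    uint16_t lsb = state & 1u;   /* the bit about to shift out */
    state >>= 1;
    if (lsb)
        state ^= 0xB400u;        /* taps 16,14,13,11: maximal-length sequence */
    return state;
}
```

Called in a loop (state = lfsr16_next(state)), it emits one new 16-bit value per call and doesn't repeat until 65,535 steps have gone by – plenty good enough to pass the kind of testing that firm was doing.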

Today one can find many off-the-shelf pseudo-random number generators, some of them well understood and documented.  On modern hardware they're all vastly faster than the one that firm was so proud of, and fast enough for all but the most unusual, demanding applications.

That story has been repeated many times, and one consequence is that these days it's actually a little unusual to find a software engineer (especially a younger one) who spends much time thinking about algorithmic efficiency at all.  Most of the time this makes no difference – in fact, I suspect it helps those engineers produce better code in less time (optimization is a notorious complexity adder).

But I can't help but pine (a little bit) for the days when even the most pedestrian accounting application could be made visibly more responsive through the work of a clever software engineer who could squeeze more computational power out of the slow computers of the day...

1 comment:

  1. Interesting that you bring this up. I was just discussing with a team member yesterday how so much of the software development going on for our projects (and others) consists of plugging together open source components. And it seems there are so few programmers around who have actually written assembly, had to worry about efficiency, or walked an execution stack in a debugger. The result is incredible inefficiency and code that doesn't scale, so it has to be revisited as soon as it sees any kind of real load. Granted, the trade-off is that we have much faster time-to-market, since we don't have to invent all the technologies ourselves, but that doesn't stop me from reminiscing about the good ol' days.

    Larry
