> How about "qualitative leap"?
makes sense to me, the old quantity-to-quality thing. in this case, while there was always a steady effort at improving algorithm speed, a 52,000-fold jump, coupled with the development of the transistor --> digital circuits, meant new possibilities leapt into being that hadn't been thought of during the steady incremental improvements (obviously oversimplified).
Chuck Grimes added:
> Cantor's dust?
hehehe.....
in this vein, more like the devil's staircase (the Cantor function):
http://mathworld.wolfram.com/DevilsStaircase.html
a function which is constant (flat) almost everywhere on the interval 0 to 1, yet manages to rise from 0 to 1 over that same interval.
or maybe turned on its side? then the qualitative leaps take place at the (dyadic) rationals where the flat steps sit.
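
for anyone who wants to poke at it numerically, here's a minimal python sketch of the cantor function (my own toy code, not from any of the links): write x in base 3, cut off at the first digit 1, turn the remaining 2s into 1s, and read the result back in base 2.

def cantor(x, depth=40):
    # devil's staircase: write x in base 3, stop at the first digit 1,
    # map remaining 2s to 1s, and read the result back in base 2
    if x >= 1.0:
        return 1.0
    total, scale = 0.0, 0.5
    for _ in range(depth):
        x *= 3
        digit = int(x)
        x -= digit
        if digit == 1:
            # x fell inside a removed middle third: the staircase is flat here
            return total + scale
        total += scale * (digit // 2)
        scale /= 2
    return total

# flat almost everywhere, yet climbing from 0 to 1
for x in (0.0, 0.1, 1/3, 0.5, 2/3, 0.9, 1.0):
    print(x, cantor(x))
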
a little more history (didn't know this about Gauss till today):
from:
http://www.usyd.edu.au/su/geosciences/geology/people/postgrad/kritski/Teaching/signal2/signal2.html
4.4.7 The fast Fourier transform
In order to speed up the computation of discrete Fourier transforms, it was found that a special Fast Fourier Transform algorithm is advantageous if the number of data samples N is a power of 2, i.e. N = 2^p. This method came into the world of signal analysis in 1965, and was known as the Cooley-Tukey algorithm until it was realized that its use goes far back in history, to the beginning of the 19th century. Now it is generally referred to as the Fast Fourier Transform (FFT). In about 1805, C.F. Gauss, who was then 28, was computing orbits by a technique of trigonometric sums equivalent to today's discrete Fourier synthesis. To obtain the coefficients from a set of a dozen regularly spaced data points, he could, if he wished, explicitly implement the formula that we recognize today as the discrete Fourier transform. To do this he would multiply the N data values g(t) by the weighting factors exp(-i 2 pi f t), sum the products, and repeat these N multiplications N times, once for each value of f. But he found that, in the case where N is a composite number with factors such that N = n1*n2, a computing advantage was gained by partitioning the data into n2 sets of n1 terms. Where N was composed of three or more factors, a further advantage could be obtained.
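
to make the comparison concrete, here's a rough python sketch (mine, not from the page above) of the two approaches the quote describes: the direct N^2 sum Gauss could have ground through, and the radix-2 Cooley-Tukey split for N = 2^p. the composite-N factoring Gauss actually used works the same way, just with factors other than 2.

import cmath

def dft(g):
    # the direct formula: multiply the N samples by exp(-i 2 pi f t / N),
    # sum the products, and repeat for each frequency f -- N^2 multiplications
    N = len(g)
    return [sum(g[t] * cmath.exp(-2j * cmath.pi * f * t / N) for t in range(N))
            for f in range(N)]

def fft(g):
    # radix-2 Cooley-Tukey: for N = 2^p, split into even- and odd-indexed
    # halves, transform each, and recombine with twiddle factors -- N log N work
    N = len(g)
    if N == 1:
        return list(g)
    even, odd = fft(g[0::2]), fft(g[1::2])
    twiddled = [cmath.exp(-2j * cmath.pi * k / N) * odd[k] for k in range(N // 2)]
    return ([even[k] + twiddled[k] for k in range(N // 2)] +
            [even[k] - twiddled[k] for k in range(N // 2)])

# quick agreement check on a small power-of-2 data set
data = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
print(max(abs(a - b) for a, b in zip(dft(data), fft(data))))  # should be ~1e-14
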
see also:
http://oban.esat.kuleuven.ac.be/iroptiondna/knipsels/10algorithms.htm
les schaffer