Another really nice one is Lanczos interpolation, although that only works well if you have at least four points (or 2 + the filter size). I wrote a demo for Smooth.js a while back that shows how that filter works in 2D: http://osuushi.github.com/plotdemo016.html
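For anyone curious what that kernel actually looks like, here's a minimal sketch of Lanczos interpolation in plain Python (the function names are mine; the a=2 case is the four-point one mentioned above):

```python
import math

def lanczos_kernel(x, a=2):
    """Lanczos kernel: sinc(x) * sinc(x/a) for |x| < a, zero elsewhere."""
    if x == 0.0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

def lanczos_interpolate(samples, t, a=2):
    """Interpolate a list of samples at fractional position t,
    using the 2*a nearest points (clipped at the array edges)."""
    i0 = math.floor(t) - a + 1
    total = 0.0
    for i in range(i0, i0 + 2 * a):
        if 0 <= i < len(samples):
            total += samples[i] * lanczos_kernel(t - i, a)
    return total
```

At integer positions the kernel collapses to a delta, so the original samples pass through unchanged; in between, each output is a weighted sum of the 2a surrounding points.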
The most useful thing I ever learned about interpolation was how it relates to resampling to different rates, and the specific techniques used to resample while respecting the Nyquist limit.
If you're removing samples, the biggest concern is whether the original signal has been bandlimited to the target rate, which can be achieved in the interpolator or by preparing the signal elsewhere.
If you're adding samples, then the interpolation function is burdened with reconstruction and with avoiding artifacts in the process. This almost inevitably leads to a windowed sinc.
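A minimal sketch of what "windowed sinc" means in practice: take the ideal sinc lowpass, truncate it, and shape it with a window (Hann here, but the choice of window is exactly the tradeoff discussed below; the function name and parameters are mine):

```python
import math

def windowed_sinc_lowpass(cutoff, num_taps):
    """FIR lowpass taps: ideal sinc truncated to num_taps and shaped
    by a Hann window. cutoff is normalized frequency in (0, 0.5),
    as a fraction of the sample rate."""
    m = num_taps - 1
    taps = []
    for n in range(num_taps):
        x = n - m / 2
        # Ideal (infinite) sinc lowpass, sampled at this tap position.
        h = 2 * cutoff if x == 0 else math.sin(2 * math.pi * cutoff * x) / (math.pi * x)
        # Hann window tapers the truncation to reduce ripple.
        w = 0.5 - 0.5 * math.cos(2 * math.pi * n / m)
        taps.append(h * w)
    s = sum(taps)
    return [t / s for t in taps]  # normalize to unity DC gain
```

More taps buy you a sharper transition band at the cost of a longer impulse response and more computation, which is the tradeoff the rest of the thread keeps returning to.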
For this reason, scaling down images and raising the pitch of sound samples work out to be easier tasks than scaling up or lowering the pitch.
Of course, for the application shown in the article, the use of interpolation is in a different realm entirely.
I'd like to add that the reason we use windowed sinc is because we're lazy and it works well. There are techniques for making better FIR filters, e.g., Parks-McClellan/Remez for small filters or other techniques for larger ones.
Also, there is no inherent difference between upsampling and downsampling. Lowering the pitch of an audio sample is no harder than raising it, and in both cases you make the same tradeoff when designing the filter (aliasing, bandwidth, impulse response, and processing time).
The problem is with resizing images, because using the windowed sinc function makes the (incorrect) assumption that the desired result can be directly sampled from the band-limited input. The fact that we use Lanczos interpolation as the standard "high-quality" image interpolation algorithm shows that this assumption is false. (I should elaborate: Lanczos is still windowed sinc, but is subjectively better even though it has demonstrably higher amounts of aliasing than competing windows. Or rather, the assumption that is false is the assumption that the same priorities we use for designing filters for signal analysis can be used for resampling images.)
Well, yes and no. Mipmaps just make the process faster for a given level of quality by making the resampling ratios closer to 1.0. The closer the ratio gets to 1.0, the fewer filter coefficients you need.
The same thing is done in audio. If you want to upsample 64x, you can do it faster by upsampling 8x twice. Since the signal fed to the second stage is already bandlimited, its filter can have a much gentler falloff (a wider transition band) and therefore needs fewer coefficients.
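A toy version of one 2x stage, assuming the usual zero-stuff-then-lowpass structure (cascading it gives 4x, 8x, and so on; the three-tap filter below is just linear interpolation, standing in for a proper windowed sinc):

```python
def upsample2(x, taps):
    """Upsample by 2: insert a zero after every sample, then apply an
    FIR lowpass. Gain of 2 compensates for the energy lost to the zeros."""
    stuffed = []
    for s in x:
        stuffed.extend([s, 0.0])
    out = []
    for n in range(len(stuffed)):
        acc = 0.0
        for k, t in enumerate(taps):
            if 0 <= n - k < len(stuffed):
                acc += 2.0 * t * stuffed[n - k]
        out.append(acc)
    return out

# Linear-interpolation taps; a real resampler would use a longer
# windowed-sinc design here.
linear_taps = [0.25, 0.5, 0.25]
```

Each stage only has to clean up the images introduced by its own zero-stuffing, which is why the later stages get away with cheaper filters.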
This is exactly why you want to use a nice filter to create the mipmaps, but it's okay to use a simple bilinear filter for the final render.
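A sketch of the mipmap-chain idea, halving the image at each level. I'm using a crude 2x2 box average here for brevity; as the comment says, a nicer filter at this stage is where the quality comes from:

```python
def build_mipmaps(img):
    """img: square 2D list of floats with power-of-two side length.
    Returns the chain of levels, each half the size of the previous,
    down to 1x1. Each level is a 2x2 box average of the one above."""
    levels = [img]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        n = len(prev) // 2
        nxt = [[(prev[2 * y][2 * x] + prev[2 * y][2 * x + 1] +
                 prev[2 * y + 1][2 * x] + prev[2 * y + 1][2 * x + 1]) / 4.0
                for x in range(n)]
               for y in range(n)]
        levels.append(nxt)
    return levels
```

At render time you pick the level whose resampling ratio is closest to 1.0 and let the cheap bilinear filter do the rest.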
(So what I'm saying is that you can also use mipmaps for upsampling, but nobody likes the storage requirements.)
Imagine you are watching the ball, but instead of traveling along a line, it is tracing a circle; however, you are viewing it edge-on, from a one-dimensional vantage point. Is that smoothstep, or would that be a different equation? They look similar to my naked eye.
The reason smoothstep looks similar to a sinusoid is that it is a polynomial approximation. I remember Ken Perlin discussing it in his (book|paper|website), but googling around doesn't seem to show anything that I remember. For Perlin noise, a sinusoid was the highest-quality interpolation because it was continuous everywhere, even at the endpoints. Smoothstep is discontinuous at the endpoints, but it is close enough for discrete computer graphics.
Yes, thank you. After reading a bit more, smoothness is what I'm looking for. One of the _derivatives_ of smoothstep is discontinuous at the endpoints.
I wouldn't call smoothstep an approximation to a sinusoid - both are sigmoidal, but that's about it. This smoothstep in particular is just a natural cubic spline interpolator, arguably the simplest sigmoidal interpolating function.
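For what it's worth, the two curves are easy to compare directly. A quick sketch (the half-cosine below is the 1D projection of circular motion from the earlier question):

```python
import math

def smoothstep(t):
    """Cubic ease on [0, 1]: 3t^2 - 2t^3."""
    return t * t * (3.0 - 2.0 * t)

def sin_ease(t):
    """Sinusoidal ease on [0, 1]: the projection of uniform circular motion."""
    return 0.5 - 0.5 * math.cos(math.pi * t)

# Sample both on [0, 1] and find the largest gap between them.
max_diff = max(abs(smoothstep(i / 100) - sin_ease(i / 100)) for i in range(101))
```

The two never differ by more than about 1% over the interval, which is why they're hard to tell apart by eye even though only one is a polynomial.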