
Mathematics Thread

What is the hardest math you learned?

  • Calculus (specify in comments): 67 votes (35.6%)
  • Trigonometry: 24 votes (12.8%)
  • (Enriched) Geometry: 17 votes (9.0%)
  • Equations (specify): 7 votes (3.7%)
  • Graphing Equations (specify): 4 votes (2.1%)
  • Basic Algebra (specify): 9 votes (4.8%)
  • I can do better!: 47 votes (25.0%)
  • What kind of math is that?!: 13 votes (6.9%)

Total voters: 188

Not sure whether you put this thread in "Help/Suggestions" intentionally, but it doesn't belong there.


Factorial!

Factorial! = Factorial * (Factorial - 1)!
= Factorial * (Factorial - 1) * (Factorial - 2)!
= Factorial * (Factorial - 1) * (Factorial - 2) * ... * 3 * 2 * 1
=
uhh
great, I just broke something :(
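(For the record, the expansion only bottoms out if you remember the base case 0! = 1; without it, the joke above really does recurse forever. A minimal Python sketch of the same recursion:)

[code]
def factorial(n: int) -> int:
    # n! = n * (n - 1)!, exactly the expansion above
    if n <= 1:
        return 1  # base case: 0! = 1! = 1; without it the recursion never stops
    return n * factorial(n - 1)

print(factorial(5))  # 120
[/code]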
 

Christopher Mowla

Premium Member
I just published a video titled: Easiest (Single Variable and Multivariate) Data Interpolation Method ever made? (May 2021).

(For anyone curious.)

I won't apologize for the length, given that other videos on multivariate interpolation run close to an hour and show zero concrete examples. (Mine shows concrete examples and derives the method from scratch, and I think anyone at the college level should be able to understand it.)

Not saying it's a good or bad method. It is what it is.
 

xyzzy

Member
Ooooh, interpolation. That happens to be something I've studied a bit. I think your method has quite a few major drawbacks that limit its practicality.

1. The multivariate version doesn't work.

You can't always decompose a multivariate function into a sum of univariate functions. Consider the following. Let \( f: (x,y)\mapsto xy \) be a bivariate function over \( x,y\in\{0,1\} \), so \( f(0,0)=f(0,1)=f(1,0)=0 \) and \( f(1,1)=1 \). If this were decomposable into univariate functions \( f(x,y)=g(x)+h(y) \), then we would have
\( 1=f(1,1)\\
=g(1)+h(1)\\
=g(1)+h(0)-h(0)-g(0)+g(0)+h(1)\\
=f(1,0)-f(0,0)+f(0,1)\\
=0, \)
a contradiction. This problem can arise when two input samples share at least one coordinate. (If all the duplicate coordinates are for only one of the axes, then the version of the interpolation method you presented is buggy, but it can at least be repaired.)

(Related: The Kolmogorov-Arnold superposition theorem says that you can always decompose continuous multivariate functions into a finite number of continuous univariate functions, but it's a bit more involved than just adding up one function per component. The surprising part of the theorem is that the process is only a bit more involved!)
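(A quick numeric check of the contradiction, as a Python sketch: any decomposition \( f(x,y)=g(x)+h(y) \) forces the mixed difference \( f(1,1)-f(1,0)-f(0,1)+f(0,0) \) to vanish, but for \( f(x,y)=xy \) it doesn't.)

[code]
# Any f(x, y) = g(x) + h(y) makes this mixed difference cancel to zero:
#   [g(1)+h(1)] - [g(1)+h(0)] - [g(0)+h(1)] + [g(0)+h(0)] = 0
def f(x, y):
    return x * y

print(f(1, 1) - f(1, 0) - f(0, 1) + f(0, 0))  # 1, not 0 => no such g, h exist
[/code]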

2. The handling of nonintegers is very unnatural.

The method as described in the video can only work if all the sampling locations lie on decadic rationals (the real numbers that have terminating decimal expansions). For example, having one of the independent variables at 1/3 (= 0.333…) would break it, because you can't multiply this by any power of 10 to get an integer. So maybe you repair this method by just multiplying by the LCM of the denominators… but what about irrational numbers? You can't multiply \( \sqrt2 \) by any integer to get an integer.

Even if we stick to just the decadic rationals, it's still problematic because the interpolation doesn't behave nicely under small perturbations of the sampling locations. Shifting a sampling location from 5 to 5.000001 dramatically changes the interpolated function, for example.
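(To illustrate the sensitivity, here's a hypothetical Python sketch of just the scaling step that the method depends on: the smallest power of 10 that turns a sampling location into an integer. The tiny perturbation changes the scale factor by six orders of magnitude.)

[code]
from decimal import Decimal

def scale_exponent(x: str) -> int:
    # Smallest m with x * 10**m an integer; only defined for decadic rationals.
    return max(0, -Decimal(x).as_tuple().exponent)

print(scale_exponent("5"))         # 0  -> scale factor 10**0 = 1
print(scale_exponent("5.000001"))  # 6  -> scale factor 10**6 = 1000000
[/code]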

3. It's actually not simpler than Lagrange interpolation.

The cosines are only easy to evaluate on the integer multiples of \( \pi/2 \). If, say, I wanted to evaluate your interpolated function at 0.5, I'd end up having to calculate things like \( \cos(\pi\sqrt2) \), which don't really have a nice closed form.
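(For reference, Lagrange interpolation itself is only a few lines. A minimal Python sketch, evaluating the unique polynomial through the given points:)

[code]
def lagrange(xs, ys, x):
    # Evaluate the Lagrange interpolating polynomial through (xs[i], ys[i]) at x.
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        basis = 1.0  # basis polynomial L_i: equals 1 at xi, 0 at every other xj
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        total += yi * basis
    return total

print(lagrange([0, 1, 2], [1, 3, 2], 0.5))  # 2.375
[/code]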

The way sine and cosine are usually calculated is with a polynomial approximation. If the desired precision is known ahead of time, the coefficients of the optimally approximating polynomial can be precomputed; otherwise, a Taylor series could be used. But in the end, you're still calculating polynomials.
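(A minimal sketch of the Taylor-series route, to make the point concrete; note that it's literally a polynomial in \( x \):)

[code]
import math

def cos_taylor(x: float, terms: int = 20) -> float:
    # cos(x) ~= 1 - x**2/2! + x**4/4! - ... : a polynomial in x.
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
               for k in range(terms))

x = math.pi * math.sqrt(2)
print(cos_taylor(x), math.cos(x))  # both ~ -0.26626, agreeing to ~13 digits
[/code]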

4. It doesn't serve the purpose of what people normally use interpolation for.

Why do people use interpolation? The key use case is when you can only collect a finite number of samples of a function but still want to describe what happens between the samples you collected.

Some interpolation methods have convergence guarantees: as you increase the number of samples you take (and assuming your sampling isn't clustered in a small part of the whole domain), then under some smoothness assumptions on the ground truth (e.g. twice continuously differentiable), the interpolated reconstruction will converge to the ground truth (in sup-norm/L2-norm/whatever) at a certain speed.

As an extreme example, consider nearest neighbour interpolation in 1D. Let \( f:[0,1]\to\mathbb R \) be the underlying function. Suppose that \( f \) is continuous (this is the smoothness assumption); by the Heine-Cantor theorem, it's also uniformly continuous. Then as the mesh of the samples gets finer, the nearest neighbour interpolation of those samples \( \tilde f \) will converge in sup-norm to the underlying function \( f \). If we tack on an additional assumption that \( f \) is Lipschitz continuous (a stronger smoothness assumption), then we can further say something about the convergence rate: \( \lVert\tilde f-f\rVert_\infty=O(\text{mesh}) \).
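(A runnable illustration of that convergence, as a Python sketch; the sup-norm error over a fine test grid shrinks roughly in proportion to the mesh width:)

[code]
import math

def nearest_neighbour(xs, ys, x):
    # 1D nearest-neighbour interpolation over sample locations xs.
    i = min(range(len(xs)), key=lambda k: abs(xs[k] - x))
    return ys[i]

f = math.sin  # Lipschitz-continuous ground truth on [0, 1]
for n in (10, 100, 1000):
    xs = [k / n for k in range(n + 1)]        # uniform mesh of width 1/n
    ys = [f(xk) for xk in xs]
    grid = [t / 2000 for t in range(2001)]    # fine test grid on [0, 1]
    err = max(abs(nearest_neighbour(xs, ys, t) - f(t)) for t in grid)
    print(n, err)  # error ~ 1/(2n): about 0.05, 0.005, 0.0005
[/code]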

Your interpolation method creates highly oscillatory functions, which makes it an inherently poor fit for smooth underlying functions. The oscillation also follows a very specific pattern, so it's a poor fit even for oscillatory functions, unless they happen to have exactly the same pattern.

I think the disconnect here comes from what you're using interpolation for; it looks like you're using it as a means for condensing disparate pieces of information into a single formula that isn't "defined piecewise". This is… a sometimes questionable goal.

5. If restricted to integer sampling points and integer inputs, this is zero-order hold.

… Which is almost the same thing as nearest neighbour interpolation. This has its uses, but if you're going to use nearest neighbour, you might as well use it directly instead of going through a whole load of cosines.
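(A sketch of the difference, for the record: zero-order hold keeps the last sample to the left, while nearest neighbour snaps to whichever sample is closer. They agree at the sample points and differ between them:)

[code]
def zero_order_hold(xs, ys, x):
    # Hold the most recent sample: largest xs[i] <= x (xs sorted ascending).
    i = max(k for k, xk in enumerate(xs) if xk <= x)
    return ys[i]

def nearest_neighbour(xs, ys, x):
    # Snap to the closest sample location.
    i = min(range(len(xs)), key=lambda k: abs(xs[k] - x))
    return ys[i]

xs, ys = [0, 1, 2, 3], [10, 20, 30, 40]
print(zero_order_hold(xs, ys, 1), nearest_neighbour(xs, ys, 1))      # 20 20
print(zero_order_hold(xs, ys, 1.9), nearest_neighbour(xs, ys, 1.9))  # 20 30
[/code]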
 

Christopher Mowla

Premium Member
1. The multivariate version doesn't work.
You obviously meant that it doesn't work for all cases, but this was a good catch! Your counterexample was plain and simple. Thanks!

2. The handling of nonintegers is very unnatural.
From what you mentioned, I wasn't aware that the values of a continuous function should be able to take the exact fractional form of a rational number. I figured computers round numbers anyway, so I didn't see a difference (we could just take m = the number of digits we want 1/3 to have, so that it registers as 1/3 in whatever we're using to compute). But fair enough.

3. It's actually not simpler than Lagrange interpolation.
I mentioned in that part of the video that you don't have to calculate any cosine whose argument isn't one of the standard unit-circle angles. But fair enough.

4. It doesn't serve the purpose of what people normally use interpolation for.
Can't say I disagree with you. (I believe I expressed this opinion throughout the video.)

5. If restricted to integer sampling points and integer inputs, this is zero-order hold.
Very cool comparison. And again, fair enough.
 