(07/25/2021, 11:58 PM)tommy1729 Wrote: An interesting idea is this :
Are all these "beta methods" equivalent, as James asks?
And
Is there a way to accelerate the convergence of the iterations?
Series acceleration is well known, but iteration acceleration not so much.
Those are 2 nice questions, but what is the interesting idea, you might ask?
Well, that those 2 questions are related!
let
f_1(s) = exp( t(1*s) * f_1(s-1) )
f_2(s) = exp( t(2*s) * f_2(s-1) )
and the respective analytic tetrations from them: F_1(s) and F_2(s).
Remember that tetration(s + theta(s)) is also a tetration, where theta(s) is a suitable analytic, real, 1-periodic function.
So F_2(s) = F_1(s + theta_1(s)) and F_1(s) = F_2(s + theta_2(s)).
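Spelled out, that reminder is a one-line check using the 1-periodicity \( \theta(s+1) = \theta(s) \):
\(
\text{tet}(s+1+\theta(s+1)) = \text{tet}\big( (s+\theta(s)) + 1 \big) = \exp\big( \text{tet}(s+\theta(s)) \big)
\)
so \( s \mapsto \text{tet}(s+\theta(s)) \) satisfies the same functional equation as \( \text{tet} \).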
BUT THIS ALSO IMPLIES THAT
f*_2(s) = exp( t( 2*(s + theta_2(s)) ) * f*_2(s-1) )
(using the periodicity theta_2(s-1) = theta_2(s)), resulting in F*_2(s) actually being equal to
F*_2(s) = F_2(s + theta_2(s)) = F_1(s).
HENCE using t(1*s) = t(s) gives the same tetration as using t( 2*s + 2*theta_2(s) )!!
So this relates to the main questions posed:
when are 2 solutions equal?
How to accelerate convergence?
As for the acceleration, of course the complexity and difficulty of theta_2, and of computing it, are key.
But numerically, using t( 2*s + 2*theta_2(s) ) is expected to converge faster (because using t(2*s) converges faster than using t(s)).
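A rough numerical illustration of that speed-up (a sketch, not anyone's actual implementation: since t is left unspecified above, I assume the logistic multiplier t(s) = 1/(1+exp(-s)), and the names f, F, the seed value, and the depth are mine). It builds f by the backward recursion f(s) = exp(t(v*s) * f(s-1)), takes n logarithms of f(s+n), and compares the self-consistency gap between successive n for v = 1 versus v = 2:

```python
import math

def t(s):
    # Logistic transition multiplier -- an assumption; the post leaves t unspecified.
    return 1.0 / (1.0 + math.exp(-s))

def f(s, v, depth=50):
    # Backward recursion f(s) = exp(t(v*s) * f(s-1)), seeded far to the left,
    # where t(v*s) ~ 0 and hence f ~ exp(0 * f) ~ 1.
    val = 1.0
    for j in range(depth - 1, -1, -1):
        val = math.exp(t(v * (s - j)) * val)
    return val

def F(s, v, n):
    # n-fold logarithm of f(s+n): the candidate tetration, for small n
    # (f grows tetrationally, so float overflow limits n to about 3 here).
    val = f(s + n, v)
    for _ in range(n):
        val = math.log(val)
    return val

# Self-consistency gap |F_n - F_{n-1}| at s = 0: a smaller gap means the
# log-limit has converged further at the same n.
gap_v1 = abs(F(0, 1, 3) - F(0, 1, 2))
gap_v2 = abs(F(0, 2, 3) - F(0, 2, 2))
print(gap_v1, gap_v2)  # the v = 2 gap is markedly smaller
```

With these (assumed) choices the v = 2 gap comes out roughly twenty times smaller than the v = 1 gap at n = 3, consistent with the claim that t(2s) converges faster than t(s).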
---
Tom(s,v) = exp( t(v*s) * Tom(s-1,v) )
resulting in
tet(s+1,v) = exp( tet(s,v) ).
I like that notation.
---
regards
tommy1729
Oh you must've posted this right as I posted mine, I missed it.
Yes, I agree with you entirely here. I think it's similar to what Kouznetsov did when he constructed his general form of the superfunction equation, where Kouznetsov chose an asymptotic function,
\(
f_M(z) = L + \sum_{n=1}^M a_n e^{nLz}\\
\text{tet}_{K}(z) = \lim_{n\to\infty} \exp^{\circ n} f_M(z-n)
\)
Where \( M \) was just a degree of "how well we are approximating." But, it had no effect on the final tetration--it still created Kneser.
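A minimal numerical sketch of that limit, with the crudest truncation (both \( M = 1 \) and the normalization \( a_1 = 1 \) are my choices here; Kouznetsov tunes the \( a_n \)): take the fixed point \( L \) of \( \exp \), seed with \( f_1(z-n) = L + e^{L(z-n)} \), and apply \( \exp \) \( n \) times:

```python
import cmath

# Fixed point L of exp (exp(L) = L, L ~ 0.3181 + 1.3372i) via Newton's method.
L = 0.3 + 1.3j
for _ in range(50):
    L -= (cmath.exp(L) - L) / (cmath.exp(L) - 1)

def superf(z, n=64):
    # exp applied n times to the truncated asymptotic f_1(z - n) = L + e^{L(z-n)}.
    # With M = 1, a_1 = 1 this is a crude seed, but the limit still converges,
    # since the seed error shrinks faster than the iterates amplify it.
    w = L + cmath.exp(L * (z - n))
    for _ in range(n):
        w = cmath.exp(w)
    return w

# Check the superfunction equation F(z + 1) = exp(F(z)) up to truncation error.
print(abs(cmath.exp(superf(0)) - superf(1)))  # small
```

As far as I can tell, this particular limit gives the regular superfunction at \( L \) (period \( 2\pi i / L \)) rather than Kneser itself; the point being illustrated is only that the truncation degree \( M \) washes out in the limit.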
I think we are in a similar situation here, where all these asymptotic tetrations are going to be \( \text{tet}_\beta \). And they are characterized by the fact \( \lim_{\Im(s) \to \infty} \text{tet}(s) = \infty \). I can't think of an obvious uniqueness condition though. Kneser has the benefit of being normal at infinity; non-normality tends to mean there's lots of room for errors and slight adjustments. Plus, we don't have the added benefit of a unique Fourier theta mapping--where we can just call on the uniqueness of Fourier coefficients (like what Paulsen and Cowgill did).
My only thought on how we might do this hit a dead-end as I tried to write it up. It doesn't feel natural. But if we talk about,
\(
F_\lambda(s) = \lim_{n\to\infty} \log^{\circ n} \beta_\lambda(s+n)\\
\)
Which is the unique tetration with period \( 2\pi i / \lambda \) and holomorphy on an almost cylinder \( \mathbb{T} \) (which just means \( \overline{\mathbb{T}} \simeq \mathbb{C}/2\pi\mathbb{Z} \)). And then using a different kind of mapping we can transform between tetrations by creating a \( 1 \)-periodic function \( \lambda(s+1) = \lambda(s) \); then,
\(
\text{tet}_{WEIRD}(s) = F_{\lambda(s)}(s)\\
\)
Is a tetration function; and we'd be able to find a \( \lambda \) for Kneser, or any tetration really. But I can't think of a uniqueness condition that would guarantee that there exists a unique \( \lambda \) such that,
\(
\text{tet}(s) = F_{\lambda(s)}(s)\\
\lim_{\Im(s) \to \infty} \text{tet}(s) = \infty\\
\)
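For what it's worth, the check that such a \( \text{tet}(s) = F_{\lambda(s)}(s) \) really satisfies the tetration equation is one line, using only \( \lambda(s+1) = \lambda(s) \) together with the superfunction equation for each fixed \( \lambda \):
\(
F_{\lambda(s+1)}(s+1) = F_{\lambda(s)}(s+1) = \exp\big( F_{\lambda(s)}(s) \big)
\)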
I just made it to the point where there exist \( \lambda^+, \lambda^- \), holomorphic in the upper/lower half planes (resp.), for which,
\(
\text{tet}_\beta(s) = F_{\lambda^+(s)}(s)\,\,\text{for}\,\,\Im(s) > 0\\
\text{tet}_\beta(s) = F_{\lambda^-(s)}(s)\,\,\text{for}\,\,\Im(s) < 0\\
\lim_{|\Im(s)| \to \infty} \lambda^{\pm}(s) = 0\\
\)
So that we have a Fourier series,
\(
\lambda^+(s) = \sum_{k=1}^\infty c_k e^{2\pi i ks}\\
\lambda^-(s) = \sum_{k=1}^\infty \overline{c_k} e^{-2\pi i ks}\\
\)
I couldn't think of any obvious arguments that the sequence \( c_k \) is unique though... So I gave up on that paper and focused on investigating the programming side more thoroughly. And no matter how you change the initial asymptotic tetration function--they all seem to give \( \text{tet}_\beta \). So I'm at least reinforcing the numerical evidence, lol.
Regards, James.

