That's a good idea, Mphlee, but I guess people just don't like my questions on MO. Maybe I'm asking wrong, but more often than not I never get an answer. Or if I do get an answer, it doesn't answer the question. Out of all the questions I've asked on MO, I can count on one hand the number of times it's actually helped me.
That being said, I've been trying to solve this problem from a more analytic perspective. The trouble I've been having now is the slow decay of a quantity I was hoping would decay much faster.
If I write:
\[
F(x,y,s,\varphi_1,\varphi_2,\varphi_3) = x \langle s\rangle_{\varphi_1} \langle s+1\rangle_{\varphi_2} y - \left(x \langle s+1\rangle_{\varphi_3} y+1\right) = 0\\
\]
Then:
\[
\begin{align}
\varphi_3(x,y,s) &= C(x,y,s) + \rho_1(x,y,s) \varphi_1(x,y,s) + \rho_2(x,y,s) \varphi_2(x,y,s)\\
\rho_1 &= \frac{\partial \varphi_3}{\partial \varphi_1} \Big|_{\varphi_1 = \varphi_2 = 0}\\
\rho_2 &= \frac{\partial \varphi_3}{\partial \varphi_2} \Big|_{\varphi_1 = \varphi_2 = 0}\\
\end{align}
\]
This serves as a fantastic first-order approximation--as it's the tangent plane about zero. Now, I cannot solve this system of equations as well as I'd like, but I can set up a solution fairly well. Firstly, we can ignore \(x\): so long as \(x > e\), everything follows the same way (so drop \(x\) from the picture). The first requirement we need is that:
\[
\varphi_3(y,s) = \varphi_2(y+1,s)\\
\]
So we can rewrite this equation as:
\[
\varphi_3(y,s) = C(y,s) + \rho_1(y,s) \varphi_1(y,s) + \rho_2(y,s) \varphi_3(y-1,s)\\
\]
Now, I can only partially prove the following; but even where I can't prove it, numerical experimentation gives us good asymptotics in \(y\). Here, we don't really need to talk about \(s\), so drop it from the picture too. \(s\) only makes an appearance when we're talking about \(\varphi_1\), and even then it only appears as a shift \(s \mapsto s-1\). So let's write this out again:
\[
\varphi_3(y) = C(y) + \rho_1(y) \varphi_1(y) + \rho_2(y) \varphi_3(y-1)\\
\]
The first term \(C(y)\) has some interesting asymptotics. My best guess is that it looks like \(1/\log(y)\). So it tends to zero, but does so very slowly. It's difficult to suss out the exact growth, because my code starts to get inaccurate around \(y \approx 1E10\): at that point it's computing the super-exponential at base \(1E10^{1/1E10} \approx 1\), and inaccuracy is a natural artifact of working near base \(1\). Nonetheless, I am confident it tends to zero, and should do so like \(1/\log(y)\); if not that, maybe something a bit slower, but I doubt it.
The second term \(\rho_1(y)\) is our knight in shining armor. When I started feeling doubtful of this working, \(\rho_1\) brought me back in the game... The value \(\rho_1(y) \le 1/y^e\), so it decays at least that fast. If I were to wager a guess, it's something like \(\rho_1(y) \approx 1/y^{x-\delta}\); it's probably asymptotically about \(1/y^x\) though.
The third term is a bit trickier, and doesn't behave as nicely as one would like. But it's not as bad as the first term, which is our trouble value; the third term still has nice behaviour: \(\rho_2(y) \approx 1+\frac{1}{y}\).
What does this mean?
Well, I can't prove it, but we can expect \(\varphi_3(y) \to 0\) just like \(C(y)\). If \(C(y)\) converged faster, I could box everything in and the problem would be solved. But because it decays so slowly, it's really making things difficult. So I'm going to have to do some kind of change of variables, not sure where yet, but I need to somehow factor out this \(C(y)\).
What this means is that for very large \(y\), we can expect our difference equation to look like this:
\[
\varphi_3(y) = -\frac{1}{\log(y)} + \frac{\varphi_1(y)}{y^3} + \left(1+\frac{1}{y}\right) \varphi_3(y-1)\\
\]
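As a sanity-check scaffold (not the real \(\varphi\)'s!): here's a minimal sketch that iterates this model recurrence forward from a chosen seed, using the stand-in \(\varphi_1(y) = y^{-e}\), which is an assumption based on the decay guessed above. It's handy for playing with how sensitive the orbit is to the seed value and to the \(C(y)\) term.

```python
import math

def model_phi3(y_max, y0=10, seed=0.0):
    """Iterate the large-y model recurrence
        phi3(y) = -1/log(y) + phi1(y)/y^3 + (1 + 1/y) * phi3(y-1)
    forward from phi3(y0) = seed, with the stand-in phi1(y) = y^(-e).
    This is a toy model of the recurrence, not the actual phi_3."""
    phi3 = seed
    for y in range(y0 + 1, y_max + 1):
        phi1 = y ** (-math.e)                  # assumed decay of phi_1
        phi3 = -1.0 / math.log(y) + phi1 / y**3 + (1.0 + 1.0 / y) * phi3
    return phi3
```

The exact coefficients here are only the conjectured asymptotic forms, so the toy orbit should only be trusted qualitatively.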
The value \(\varphi_1\) shoots to zero so fast that for large values it becomes inconsequential. WHICH IS REALLY REALLY GOOD. We can think of this in terms of the tangent plane of \(F\) about \(\varphi_1 = \varphi_2 = 0\). For large \(y\), this essentially just becomes \(\varphi_3 = \frac{-1}{\log(y)} + \varphi_2\); it looks like a run-of-the-mill \(y=x\) graph with a small offset.
Another good thing about \(\varphi_1\) shooting to zero so fast is that it implies that in the region \(0 \le s \le 1\), the value of \(\varphi\) for \(x \langle s\rangle_\varphi y\) is very, very small, whereas for \(1 \le s \le 2\) we may have merely small values (relative to the other interval). We only care about \(\varphi_1\) when \(s=1\) though, which allows us to glue \(0 \le s \le 2\) together. Not much progress on that part, but knowing that this value almost flatlines here is a very good sign.
Which means, for very large \(y\), \(\varphi_3\) does not depend on \(\varphi_1\), or its dependence is negligible. So for large values of \(y\), we are really just trying to solve the difference equation:
\[
f(y) = \frac{-1}{\log(y)} + \left(1+\frac{1}{y}\right) f(y-1)\\
\]
The solution of which, as \(y \to \infty\), is zero. So theoretically the solution can be found, if we can massage this equation to be solvable despite the slow decay of \(\frac{1}{\log(y)}\). I've made a bit of progress, but I'm not sure yet.
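For what it's worth, a first-order difference equation like this has a standard summation-factor (variation-of-parameters) handle; writing it generically as \(f(y) = g(y) + a(y) f(y-1)\) and unrolling from a base point \(y_0\):

\[
f(y) = \left(\prod_{k=y_0+1}^{y} a(k)\right) f(y_0) + \sum_{j=y_0+1}^{y} \left(\prod_{k=j+1}^{y} a(k)\right) g(j)
\]

and for \(a(k) = 1+\frac{1}{k}\) the products telescope, \(\prod_{k=j+1}^{y}\left(1+\frac{1}{k}\right) = \frac{y+1}{j+1}\), so the homogeneous part grows linearly in \(y\). That makes precise exactly how the slowly decaying \(\frac{1}{\log(y)}\) forcing term has to be fought.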
But what I am sure of, is that one can show that:
\[
\begin{align}
\varphi_3(y) &\to 0\,\,\text{as}\,\,y\to\infty\\
\varphi_2(y) &\to 0\,\,\text{as}\,\,y\to\infty\\
\varphi_1(y) &\to 0\,\,\text{as}\,\,y\to\infty\\
\end{align}
\]
Where their respective decays (using the glue condition \(\varphi_2(y) = \varphi_3(y-1)\)) are something like:
\[
\begin{align}
\varphi_3(y) &\approx \frac{1}{\log(y)}\\
\varphi_2(y) &\approx \frac{1}{\log(y-1)}\\
\varphi_1(y) &\approx y^{-e}\\
\end{align}
\]
So ultimately, this means the convergence we get now looks like:
\[
\sum_{n=0}^\infty \frac{1}{\log(y+n+1)} - \frac{1}{\log(y+n)}\\
\]
Which, yes, it converges. But for fuck's sake, that's the worst convergence possible. It's wayyyyyyyyyy too slow to be feasible. So I have to speed this up somehow.
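To put a number on how bad it is, a quick sketch: the partial sum telescopes exactly, so after \(N\) terms the remaining error is \(1/\log(y+N)\), which means you need on the order of \(e^{1/\epsilon}\) terms to get error \(\epsilon\). A minimal check (the function names here are just for illustration):

```python
import math

def partial_sum(y, N):
    """Partial sum of sum_{n=0}^{N-1} [1/log(y+n+1) - 1/log(y+n)].
    Telescopes exactly to 1/log(y+N) - 1/log(y)."""
    return sum(1.0 / math.log(y + n + 1) - 1.0 / math.log(y + n)
               for n in range(N))

def terms_needed(y, eps):
    """Terms N needed so the tail error 1/log(y+N) drops below eps."""
    return max(0, math.ceil(math.exp(1.0 / eps) - y))
```

Even a modest tolerance of \(0.05\) demands about \(e^{20} \approx 4.9 \times 10^8\) terms, which is "worst convergence possible" territory in practice.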
Why do we care about this?
Well, this means that if I write Bennet's operators:
\[
x [s] y = \exp^{\circ s}_{y^{1/y}}\left(\log^{\circ s}_{y^{1/y}}(x) + y\right)\\
\]
Then:
\[
x[s]\left(x[s+1] y\right) - x\langle s+1 \rangle_{C(y)}(y+1) =0\\
\]
And \(C(y) \to 0 \) like \(-1/\log(y)\).
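For integer \(s \ge 0\), the operator definition above is straightforward to sketch numerically (this is just the definition transcribed, not any of the \(\varphi\)-corrected versions). A nice sanity check falls out of the base choice \(b = y^{1/y}\): since \(b^y = y\), we get \(x[0]y = x+y\) and \(x[1]y = x\,b^y = xy\), i.e. addition and multiplication on the nose.

```python
import math

def bennet(x, y, s):
    """Bennet's operator x[s]y = exp_b^(s)(log_b^(s)(x) + y)
    with base b = y^(1/y), for integer s >= 0.  Assumes x > 1, y > 1."""
    b = y ** (1.0 / y)
    log_b = math.log(b)
    t = x
    for _ in range(s):          # apply log base b, s times
        t = math.log(t) / log_b
    t += y                      # ... add y in the s-th "logarithmic" layer
    for _ in range(s):          # apply exp base b, s times
        t = b ** t
    return t
```

Of course, this integer-\(s\) version says nothing about the hard part (real \(s\) via Schröder iteration); it's only a floating-point crutch for checking identities at small integer ranks.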
I can't prove this yet, and the most I've been able to test is up to \(y \approx 1E10\), and there it is only \(0\) up to about \(20\) digits. Again, that's because we start to get too close to doing tetration for \(b \approx 1+\delta\), and that is insufferably hard using Schröder. I might have to switch to the beta method here; that's a whole other mess of problems though...
But all signs are pointing to how you put it, MphLee
Bennet WANTS to become Goodstein.