The modified Bennet Operators, and their Abel functions
(07/19/2022, 09:48 AM)MphLee Wrote: I feel very stupid... I'm racking my brain over it... I'm realizing that I have some weak point in truly understanding that implicit function thing... I believed it was 80% clear to me, but it's not.

Let \(f_\phi(s,y):=x[s]_\phi y\) for some fixed suitable \(x\). Starting from the assumption of the existence of a function \(\varphi(s,y)=\phi\) that satisfies
\[\alpha_{\phi}(s+1,f_\phi(s,y))= \alpha_\phi(s+1,y) + 1\]
how do you derive, step by step, the f.eq.
\[\varphi(s,f_{\varphi(s,y)}(s,y)) = \varphi(s,y)\\\]
Please, can you explain it very slowly, step by step, as if you were an algorithm, or as if I were a high school student learning how to solve 2nd-degree polynomial equations?

Because if I start from the assumption I derive the following
\[\alpha_{\varphi(s+1,f_{\varphi(s,y)}(s,y))}(s+1,f_{\varphi(s,y)}(s,y) )=\alpha_{\varphi(s+1,y)}(s+1,y)+1\]
but from here I'm lost...


Hey, so I haven't written out exactly how to derive this yet--I have a sketch of a proof--but numerical evidence is confirming it. I'll explain first with the case \(\varphi_2 = \varphi_1\) in the Abel equation, and call this common parameter \(\varphi\).

Essentially, you have to prove this with a few tricks from implicit function theory; I'll try to explain as best I can. Write it like this: fix the value \(\varphi_0 = \varphi(s,y)\) such that

\[
\alpha_{\varphi_0}(s+1, x[s]_{\varphi_0}y) = \alpha_{\varphi_0}(s+1,y) + 1
\]

Now we know that:

\[
\alpha_{\varphi_0}(s+1,x[s]_{\varphi_0}y) + 1 = \alpha_{\varphi_0}(s+1,y) + 2
\]

Additionally, we know there is a solution \(\varphi_0'\) to the equation:

\[
\alpha_{\varphi'_0}(s+1,x[s]_{\varphi'_0} x[s]_{\varphi'_0} y) = \alpha_{\varphi'_0}(s+1,x[s]_{\varphi'_0} y) + 1\\
\]

And there is a solution \(\varphi''_0\) such that:

\[
\alpha_{\varphi''_0}(s+1,x[s]_{\varphi''_0} x[s]_{\varphi''_0} y) =\alpha_{\varphi''_0}(s+1,y) + 2\\
\]

Then to prove the identity you're asking about, we have to show that \(\varphi_0 = \varphi'_0 = \varphi''_0\).
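As a toy numerical illustration of pinning down such a \(\varphi_0\), here is a sketch where the actual \(\alpha_\varphi\) and \(x[s]_\varphi\) of this thread are replaced by simple stand-ins (every function below is an assumption for illustration, not the real construction; in this toy the residual also happens to be independent of \(y\)):

```python
import math

# Toy stand-ins, NOT the actual modified Bennet operators:
# the operator y -> x[s]_phi y is replaced by y -> 3*y, and the
# candidate Abel functions alpha_phi(s+1, .) by log(y)/log(2 + phi).
def op(y):
    return 3.0 * y

def alpha(phi, y):
    return math.log(y) / math.log(2.0 + phi)

# Residual of the Abel equation alpha_phi(op(y)) = alpha_phi(y) + 1.
# In this toy it equals log(3)/log(2 + phi) - 1, so the root is phi = 1.
def residual(phi, y=5.0):
    return alpha(phi, op(y)) - alpha(phi, y) - 1.0

# Simple bisection on [0, 5], an interval where the residual changes sign.
def solve_phi(lo=0.0, hi=5.0, tol=1e-12):
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if residual(lo) * residual(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

phi0 = solve_phi()  # -> 1.0 in this toy
```

In the actual construction the residual does depend on \(y\), which is exactly why \(\varphi_0 = \varphi(s,y)\) carries a \(y\)-dependence.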

This is the tricky part I haven't ironed out yet, but the code appears to confirm it. I'll sketch the proof I have in mind. As \(s \to 0\), all these values can be shown to define the same function. This is essentially because there exists one unique solution to the equation:

\[
\alpha_{\varphi_0}(1,x[s]_{\varphi_0} y ) = \alpha_{\varphi_0}(1,y) + 1\\
\]

And this unique solution also satisfies the equations of \(\varphi'_0\) and \(\varphi''_0\) at the value \(s=0\). But additionally it satisfies this equation in the complex derivative (which is enough to give analyticity and, again, uniqueness). EDIT: I mean the complex derivative in \(\varphi\) is equivalent for each solution at \(s=0\). So since \(\varphi(0,y) = 0\) and \(\lim_{s\to 0}\frac{d}{d\varphi} \alpha_\varphi\) is the same for each of these solutions, by analytic continuation they must be the same solution. I'd have to justify this further, but that's like 70-80% of a proof, lol. It would just be uniqueness of a differential equation, nothing too complicated. Too lazy to work through the details, and since this method fails at about \(s = 0.2\), I'm not going to bother.


This is still a little sketchy as a proof, but this line of reasoning should work: the implicit solution at \(s=0\) is non-degenerate, and hence unique, and at \(s=0\) all of these equations are equivalent.

This then gives us the formula:

\[
\alpha_{\varphi_0}\left(s+1,x[s]_{\varphi_0} x[s]_{\varphi_0}y\right) = \alpha_{\varphi_0}(s+1,y) + 2\\
\]

This means that \(\varphi(s,y) = \varphi_0 = \varphi'_0 = \varphi(s,x[s]_{\varphi_0} y)\).
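To spell out the step MphLee asked about (conditional on the uniqueness \(\varphi_0 = \varphi'_0 = \varphi''_0\) sketched above, so this is a restatement, not an independent proof):

\[
\begin{aligned}
&\text{Let } y' := x[s]_{\varphi_0} y. \text{ The } \varphi'_0\text{-equation reads } \alpha_{\varphi'_0}(s+1, x[s]_{\varphi'_0}\, y') = \alpha_{\varphi'_0}(s+1, y') + 1,\\
&\text{so } \varphi'_0 \text{ solves the defining equation of } \varphi(s,y'). \text{ If that solution is unique, then}\\
&\varphi(s, y') = \varphi'_0 = \varphi_0 = \varphi(s,y), \quad\text{i.e.}\quad \varphi(s, f_{\varphi(s,y)}(s,y)) = \varphi(s,y).
\end{aligned}
\]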

This is still a little shaky though. And I'm probably not going to pursue this argument further, because it becomes a truly local solution that fails at about \(s = 0.2\), so it's only good near zero.


Quote: About finding creative ways... I'm not sure I can help you, but when you said you wanted to switch to the Abel presentation... I was thinking you would consider what I call the inverse Goodstein equation, something that gives rise not to a hyperoperations sequence but to a hyperlogarithms sequence.
\[a_0\circ a_{s+1}=a_{s+1}\circ a_s\]
Assume the seed is decrementation by one, and assume \(\alpha_{\varphi,s}\) is the bi-indexed family of Abel functions you defined.

Why don't we just ask for parameters \(\varphi_i\) for \(i=0,1,2\) that solve
\[S^{-1}\circ \alpha_{\varphi_0,s+1}=\alpha_{\varphi_1,s+1}\circ \alpha_{\varphi_2,s}\]
i.e. a function \(\varphi(s)\) that solves
\[\forall y.\,\,  \alpha_{\varphi(s+1),s+1}(y)-1=\alpha_{\varphi(s+1),s+1}( \alpha_{\varphi(s),s}(y))\]
this will take care of all the \(y\) at the same time... even if this probably means that such triples cannot exist globally, i.e. they exist only locally for some \(y\in U\); but we should prove this.

This probably speaks to my ignorance of implicit functions... so let's stick to your story of the parameter depending on two variables... why don't you consider the fully Abel-like presentation?
\[\alpha_{\varphi_0}(s+1,y)-1=\alpha_{\varphi_1}(s+1, \alpha_{\varphi_2}(s,y))\]

Hmmm, that's interesting.

The reason I don't want to use the full Abel presentation is that it reduces to exactly the same problem I've considered before, with a triple of \(\varphi\)'s which relate to each other in some codified way. The reason I like the Abel presentation with a single \(\varphi\) is that it does seem to be working near \(s = 0\), even if it fails past \(s = 0.2\). And then, I like the case with \(\varphi_1,\varphi_2\) because it reduces from 3 variables to 2 rather than to 1. And incidentally it simplifies the equation, because it's asking for a periodic function in \(y\). This is a little tricky, and may very well not work, but I think it could.

This means we would have:

\[
x [s]_{\varphi_1} \left(x [s+1]_{\varphi_2} y\right) = x [s+1]_{\varphi_2} (y+1)\\
\]

This implies that \(\varphi_2(y+1) = \varphi_2(y)\), i.e. \(\varphi_2\) would be \(1\)-periodic, which presents no obvious contradiction and greatly simplifies the problem. Additionally, when reduced into the Abel form, it looks precisely like an equation of the form:

\[
x [s]_{\varphi_1} y = f(s+1,f^{-1}(s+1,y) + 1)\\
\]
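For intuition, the classical undeformed instance of this conjugation is multiplication recovered from exponentiation: with \(f(s+1,y) = x^y\) we get \(f(s+1, f^{-1}(s+1,y)+1) = x^{\log_x y + 1} = x\cdot y\). A quick check in plain Python (a toy with no \(\varphi\)-dependence, not the deformed operators):

```python
import math

x = 2.0

# Classical superfunction at the next level: f(y) = x**y,
# with inverse f_inv(y) = log_x(y).
def f(y):
    return x ** y

def f_inv(y):
    return math.log(y) / math.log(x)

# The conjugation x[s]y = f(f_inv(y) + 1) reproduces multiplication.
def op(y):
    return f(f_inv(y) + 1.0)  # equals x * y
```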

Returning to the three-variable case would mean we are taking:

\[
x [s]_{\varphi_1} y = g(s+1,f^{-1}(s+1,y) + 1)\\
\]

Which is much less symmetric.
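As a sanity check on the quoted inverse-Goodstein equation \(a_0\circ a_{s+1}=a_{s+1}\circ a_s\) with decrement seed: the classical hyperlogarithm ladder (subtract, divide, log--used here as an undeformed stand-in for the \(\alpha_{\varphi,s}\) family) satisfies it exactly:

```python
import math

x = 2.0

# Classical hyperlogarithm ladder for base x (no varphi deformation):
# a_0 = decrement (the seed), a_1 = subtract x, a_2 = divide by x,
# a_3 = log base x.
a = [
    lambda y: y - 1.0,
    lambda y: y - x,
    lambda y: y / x,
    lambda y: math.log(y) / math.log(x),
]

# Defect of the inverse Goodstein equation a_0(a_{s+1}(y)) = a_{s+1}(a_s(y));
# it vanishes identically for s = 0, 1, 2 on this ladder.
def goodstein_defect(s, y):
    return a[0](a[s + 1](y)) - a[s + 1](a[s](y))
```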

Honestly, I jumped the gun thinking I could easily reduce 3 variables into one; so I'm going to try 3 variables into two; if that fails, then I'll go back to 3 variables. I get frustrated because the implicit solution exists, but there's no obvious limit formula except for large \(y\), where you should be able to make a kind of iterated-log argument.

Quote:
I feel that to go deeper into this matter we need to further develop the theory of Goodstein maps... we need to compare and manipulate them as we do with superfunctions... maybe introducing an analogue of the \(\theta\) mapping, but for measuring the difference between solutions to the Goodstein f.eq.

Your discussion about normal families should be part of it but I'm slow and I still have to absorb the idea.


Normal families are going to play a huge role. The only theorem related to this that I've rigorously proven is that:

\[
\alpha(s+1,x[s] y) - \alpha(s+1,y) - 1 = o(y^{\epsilon})\\
\]

So if you define a neighborhood of functions around \(x[s]y\), call them \(x[s]_{\varphi}y\) (which exist), in which:

\[
\alpha_\varphi(s+1,x[s]_{\varphi} y) - \alpha_\varphi(s+1,y) - 1 = o(y^{\epsilon})\\
\]
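As a baseline for the numerics: in the classical multiplication case (a toy with \(x[s]y = x\cdot y\) and \(\alpha(s+1,\cdot) = \log_x\), not the deformed operators) the left-hand side is not merely \(o(y^{\epsilon})\) but identically zero:

```python
import math

x = 2.0

def alpha(y):
    # alpha(s+1, y) = log_x(y) in the classical toy case
    return math.log(y) / math.log(x)

def op(y):
    # x[s]y = x * y in the classical toy case
    return x * y

# Defect alpha(op(y)) - alpha(y) - 1; identically zero here, hence
# trivially o(y^epsilon) as y grows.
def defect(y):
    return alpha(op(y)) - alpha(y) - 1.0
```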

Then these can be turned into a normal family! It's a little difficult to explain, and I'm still fuzzy on the details. I've been mostly running numerical evidence lately and trying to make graphs, etc., so I haven't gone back to doing the hard math yet. Largely because I know I'm missing something crucial, but I don't know what yet...

But if we think of \(\varphi(y+1) = \varphi(y)\), then it is very much related to \(\theta\) mappings. \(\varphi\) was intended as the semi-operators version of \(\theta\) mappings, lol--but it's similar to the \(\tau\) error from the \(\beta\) method. Any solution to semi-operators must be a \(\varphi\)-mapping. The question is, how the fuck do you find the right \(\varphi\) mapping, lmao.
RE: The modified Bennet Operators, and their Abel functions - by JmsNxn - 07/20/2022, 09:46 PM
