(05/31/2022, 02:32 PM)MphLee Wrote:
Quote:(05/29/2022, 06:11 AM)JmsNxn Wrote: Now, we can continue this operation:
\[
x\,[s]_n\,y = x\,[s+1]_{n-1}\,\left( (x\,[s+1]_{n-1}^{-1}\,y) +1\right)\\
\]
Up to this point everything seems right. Using the inversion in the monoid of binary operations under left bracketing, your formula translates as \([s]_{n+1}=[s+1]_n\,S\,[s+1]_n^{-1}\).
Quote:The really weird part now is that this solution doesn't work on its own. You have to solve for \([s+1]_n\) while you solve for \([s]_n\). The thing is... we can solve for:
\[
\begin{align}
x\,[s]_{n}\,y &= f(y)\\
x\,[s+1]_n\,y &= f^{\circ y}(q)\\
\end{align}
\]
You can actually do this pretty fucking fast... It just looks like iterating a linear function.
EDIT: Ack! I made a small mistake here; as written this is an idempotent iteration. The actual iteration is a little more difficult; I'll write it up when I can make sense of controlling the convergence of this...
This last formula looks suspicious. Maybe it is related to your small mistake.
We should have something like \(x\,[s]_{n+1}\, y=f(y)\) and \(x\,[s+1]_{n}\, y=f^y(q)\)
Yes, I screwed up here, lol. I got it now though. I want to wait a bit before I describe the algorithm I have in mind. I want it all working well.
And! The reason this won't converge to the seed is that at \(s=0\) we still have addition, and at \(s=1\) we still have multiplication. The algorithm I have in mind will keep these endpoints fixed.
This can be seen even in the first iteration:
\[
\begin{align*}
x[0]_1y &= x+y\\
x[1]_1 y &= x\cdot y\\
\end{align*}
\]
These remain constant in the iteration I have planned. I'm currently trying to combine this with the \(\varphi\) method, and so far progress is slow, but this seems like a much better method.
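Just to spell out the \(s=0\) endpoint from the recursion above (with the seed \([1]_0\) being ordinary multiplication), it comes out exactly:
\[
x\,[0]_1\,y = x\,[1]_0\left((x\,[1]_0^{-1}\,y)+1\right) = x\cdot\left(\frac{y}{x}+1\right) = x+y\\
\]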
Essentially, my argument that it doesn't converge to the seed is that a solution exists! It's close enough to Bennet that I'm confident the modified Bennet will converge. The trouble is doing it accurately; so far I've only managed to get a couple of digits, but it does look like it's working.
As to inverting \(x [s]^{-1} y\): this took a while to code, and it's slow as hell, but it does work. I think a formal solution would be unheard of, and would probably converge more slowly than the method I have. Yes, it would probably use the Lambert W function nested a couple of times; it'd be hopeless to do formally. I just wrote a quick polynomial inversion of the Taylor series of Ibennet(s,z,y+q), which inverts in q; setting q=0 then gives the inverse at \(y\). Luckily my code is polynomially based, so grabbing Taylor expansions is ezpz.
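To give the flavour of that inversion trick (a bare-bones sketch only, not my InvBennet; the name local_inverse and the toy function are made up for illustration): expand the function around a base point, revert the truncated series with serreverse, and evaluate the reverted series at the target offset.
Code:
\\ local inversion by series reversion (illustrative sketch only)
\\ F is any analytic closure with F'(y0) != 0; solves F(y) = w near y0
local_inverse(F, y0, w, ord = 20) =
{
  my(q = 'q + O('q^ord), g, h);
  g = F(y0 + q) - F(y0);    \\ Taylor series in q, vanishing at q = 0
  h = serreverse(g);        \\ compositional inverse of that series
  y0 + subst(truncate(h), 'q, w - F(y0))
}

\\ toy check (NOT Ibennet): local_inverse((y -> y*exp(y)), 1, 3) gives y ~ 1.0499,
\\ the solution of y*exp(y) = 3 near the base point y0 = 1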
I know what you mean by the domain shrinking, and that making sure you can take the inverse is tricky. What works is that:
\[
x[s]^{-1}y
\]
is invertible so long as \(y > Y(x,s)\) for \(0 \le s \le 1\). We don't get this quite as well for \(1 \le s \le 2\). But, the thing is, the inversion algorithm will converge (which is further evidence that this method will extend beyond \(y > e\): the algorithm is converging on a larger domain than it should).
For example, you can calculate:
\[
3\,[1.5]^{-1}\,10 = 2.5418865714965084351384601699161614103869528768610\\
\]
Which is clearly nonsensical using our current formulas (\(x[s]y\) isn't defined for \(y<e\)), but we only need this value \(+1\), which is \(>e\). So if I run:
\[
3\,[0.5]_1\, 10 = 3\,[1.5]\,3.5418865714965084351384601699161614103869528768610 = 17.654344068794716937147222216314762100132579882903\\
\]
Where:
\[
3 [0.5] 10 = 16.841884762465586893000032633581733619306022613397
\]
So they are pretty close to each other, and they differ by less than 1. So the iteration is doing something. After a few more iterations it calms down. This is strong evidence in my mind that this object will continue to converge for \(y > 0\), and that the restriction \(y>e\) is a little arbitrary, but necessary for the time being. I'm confident that we can probably say that:
\[
3[1.5]2.5418865714965084351384601699161614103869528768610 = 10\\
\]
Think of it as the inverse being defined on a larger domain than the actual function, and using that to analytically continue the function.
Now I'm mostly interested in very large \(y\), because this is where you can estimate the behaviour of \(\boldsymbol{\varphi} = (\varphi_1,\varphi_2,\varphi_3)\) very well. And for \(0 \le s \le 1\) this inverse function always works. Since it's analytic, it still spits out values for \(1 \le s \le 2\), despite not being a "true" inverse, because it leaves the domain (\(y>e\))... which you noted. It is really finicky for \(1\le s \le 2\), and to make it not finicky you have to run a Newtonian root-finding algorithm, which is annoying.
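For concreteness, the Newton refinement I mean is just the usual one-variable polish (a sketch only, nothing specific to Ibennet; the name newton_refine and the stand-in F are for illustration):
Code:
\\ polish a rough guess y0 so that F(y) = w (illustrative sketch only)
newton_refine(F, y0, w, iters = 25) =
{
  my(y = y0);
  for (k = 1, iters,
    y -= (F(y) - w) / derivnum(t = y, F(t))   \\ numerical derivative at y
  );
  y
}

\\ e.g. newton_refine((y -> y*exp(y)), 1, 3) polishes toward the root of y*exp(y) = 3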
But everything is still creating a viable inverse function that is defined on a larger domain than the original Bennet operation.
Code:
InvBennet(0.5,3,100)
%266 = 81.879845404599395355269425245459181009934218442993
Ibennet(0.5,3,%266)
%267 = 100.00000000000000000000100997700361890445814464017

This is just one example. It gets troublesome for \(1 \le s \le 2\) though, and you need larger values/iterations for it to be manageable. Also, I have written in a flag for these values, which adds a Newtonian root finder. The initial algorithm only kind of gets us close, and even then it can screw up (the code is still a little wonky, I'll have to find a more efficient way in the future), but:
Code:
InvBennet(1.5,3,20,,1)
%53 = 3.8013626638813936726849674551236741743250837414234
Ibennet(1.5,3,%53)
%54 = 20.000000000000000906030985117114149927827111941439

Now you can try the first step of your iteration:
Code:
Ibennet(1.5,3,InvBennet(1.5,3,20,,1)+1)
%55 = 30.682945689981177590653956610929950299856487473611
Ibennet(0.5,3,20)
%56 = 29.332503986494964228494669746491051063337196366026

You can see that these things start to change slightly; I'm betting it will converge very well. It's going to be tricky though, and I definitely have to bridge the gap between using \(\varphi\) and this iterative formula; I think the answer lies somewhere in between both of these ideas.
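If you want to play with that step yourself, it's just the expression above wrapped as a one-liner (the name Step is only for convenience; Ibennet and InvBennet behave as in the sessions above):
Code:
\\ one step of the recursion: x [s]_1 y = x [s+1] ( (x [s+1]^{-1} y) + 1 ),
\\ with the Newton flag switched on for the inverse, as in the run above
Step(s, x, y) = Ibennet(s + 1, x, InvBennet(s + 1, x, y, , 1) + 1);

\\ Step(0.5, 3, 20) reproduces the 30.682... value; compare Ibennet(0.5, 3, 20) = 29.332...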
I don't want to spoil too much. I have a lot of code that is still very mix-and-match, and wonky. I have to streamline everything, and also implement Taylor series, which aren't present in the trial run of InvBennet. It turns out to be stupidly hard to implement Newtonian root finders in two variables, when it really shouldn't be. So I'll have to find a workaround for that.
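Just so it's clear what I mean by the two-variable version, the bare update itself is only the standard Jacobian step (a generic sketch; newton2 and the toy functions are made up, and this isn't my code):
Code:
\\ generic two-variable Newton: solve F1(a,b) = 0 and F2(a,b) = 0 (sketch only)
newton2(F1, F2, a, b, iters = 20) =
{
  my(J, v);
  for (k = 1, iters,
    J = [derivnum(t = a, F1(t, b)), derivnum(t = b, F1(a, t));
         derivnum(t = a, F2(t, b)), derivnum(t = b, F2(a, t))];
    v = matsolve(J, [F1(a, b), F2(a, b)]~);   \\ solve J * step = residual
    a -= v[1]; b -= v[2]
  );
  [a, b]
}

\\ e.g. newton2(((a,b) -> a^2 + b^2 - 4), ((a,b) -> a - b), 1.0, 2.0) converges to [sqrt(2), sqrt(2)]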
I'll try to keep you guys posted, but I kind of want to stay silent for a bit until I have a better grasp of a working model with working code.
Regards, James
Also, I'd like to add that you definitely deserve credit for this, MphLee. When you see the algorithm I have planned, it will make much more sense. And this is absolutely fucking fascinating. And it uses your idea. Iterating Bennet's definitely reduces to \(S : y \mapsto y+1\), because it has a nice decay to it. Iterating the modified Bennet will probably not quite work how you'd like. Iterating the modified Bennet while you massage it with \(\varphi\) absolutely will work. I'm going to tread lightly now though, until I have a strong working model.

