On extension to "other" iteration roots
#11
(08/12/2022, 11:34 PM)JmsNxn Wrote: I would say the only disparaging fact about this expansion is that if you are going to expand in \(s\) (if we want the multiplier to be locally holomorphic) we'd have to be more clever. I think it would make sense to generalize this to \(s^x \theta_s(x)\) to try and get holomorphy in \(s\), which shouldn't be too hard if you fiddle around.

Don't let yourself be fooled by \(|s|\); it actually just means \(-s\). It's more to make it clear that \(-s\) is positive!
#12
(08/12/2022, 11:50 PM)bo198214 Wrote:
(08/12/2022, 11:34 PM)JmsNxn Wrote: I would say the only disparaging fact about this expansion is that if you are going to expand in \(s\) (if we want the multiplier to be locally holomorphic) we'd have to be more clever. I think it would make sense to generalize this to \(s^x \theta_s(x)\) to try and get holomorphy in \(s\), which shouldn't be too hard if you fiddle around.

Don't let yourself be fooled by \(|s|\); it actually just means \(-s\). It's more to make it clear that \(-s\) is positive!

Yes, I was fooled at first; but then it clicked.
#13
(08/12/2022, 11:34 PM)JmsNxn Wrote: Ooo that is nice! Very straight to the point. I was going to say it wouldn't be analytic at first, but then I realized \(s\) is constant. I've never seen that before.

I would say the only disparaging fact about this expansion is that if you are going to expand in \(s\) (if we want the multiplier to be locally holomorphic) we'd have to be more clever. I think it would make sense to generalize this to \(s^x \theta_s(x)\) to try and get holomorphy in \(s\), which shouldn't be too hard if you fiddle around.

This would also make the real valued superfunction for base \(b = \eta^- - \delta\) for \(\delta > 0\)! And you could probably get holomorphy in \(b\) if you played your cards right! (with some kind of problem at \(b=1\) and \(b = \eta^-\)).

Hey James, no intention to disappoint you, but if you're trying to build a real-valued superfunction for bases less than \(e^{-e}\) as tetration, you should be careful about which method to use.
1. The "P method" doesn't work here anymore. The "P method" is only a merging technique, and it requires that the merged function has only one limiting value as a fixed point; if you take the lower bases, there will be 2 fixed points, so the asymptotic expansion will not work.
2. Any real tetration for such bases generated by asymptotic expansions (at infinity) cannot be continuous. This is easy to prove; it follows the ideas below:
  I. If you're using an asymptotic expansion (at infinity), there must be 2 limiting values, which is contradictory unless you consider an oscillating function.
  II. If you're considering an oscillating function, there can only be 2 limiting values together with a discontinuity, because it always satisfies \(T(z+1)=b^{T(z)}\); if any continuity were assumed, it would contradict \(T(z+1)=b^{T(z)}\), because there can only be 2 limiting values, not an interval as for \(\sin(z)\).
So you must get a discontinuity when generating from an asymptotic expansion.
3. Even if Kneser's double-dagger method or other contour-integral-like methods can work, it will be difficult to choose the initial guess and also to make the function satisfy \(\text{tet}_b(0)=1\).
This is because such a real-valued tet function must be oscillating, and Kneser-like methods won't discern where the starting point is; the initial conditions \(T(0)=1, T(0)=b^b, T(0)=b^{b^b},\cdots\) can't be discerned easily. Furthermore, there must be tons of branch cuts, because the 2 fixed points make the tet semi-periodic with respect to a real period. So: Very Hard. Besides, its viability isn't proven or demonstrated, because this time the method also walks across the "2 limiting values" case.
Maybe your beta method works; looking forward to it.
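As a sanity check on point 1, the pair of extra fixed points is easy to exhibit numerically: below \(\eta^-=e^{-e}\approx0.066\), the real fixed point \(L\) of \(b^z\) turns repelling, so \(b^z\) acquires a 2-cycle and \(b^{b^z}\) gains two extra fixed points. A minimal Python sketch (b = 0.06 as in the pics further down the thread; the bisection bracket [0.5, 0.9] is my own choice, found by inspection):

```python
import math

b = 0.06  # below eta^- = e^(-e) ~ 0.066

f = lambda x: b ** x            # f(x) = b^x
g = lambda x: b ** (b ** x)     # g(x) = b^(b^x)

def bisect(h, lo, hi, iters=200):
    """Plain bisection; assumes h(lo) > 0 > h(hi)."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if (h(lo) > 0) == (h(mid) > 0):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Real fixed point L of b^x (b^x - x is strictly decreasing on [0, 1])
L = bisect(lambda x: f(x) - x, 0.0, 1.0)
multiplier = math.log(b) * L    # f'(L) = ln(b) * b^L = ln(b) * L

# |f'(L)| > 1: L is repelling, so b^x has a 2-cycle {p, q},
# i.e. b^(b^x) has two fixed points besides L.
p = bisect(lambda x: g(x) - x, 0.5, 0.9)   # bracket chosen by inspection
q = f(p)
print(L, multiplier, p, q)
```

Here \(|f'(L)|\approx1.017\), only barely repelling, which is why the cycle values converge so slowly under naive iteration.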
Regards, Leo :)
#14
(08/13/2022, 05:29 AM)Leo.W Wrote:
(08/12/2022, 11:34 PM)JmsNxn Wrote: Ooo that is nice! Very straight to the point. I was going to say it wouldn't be analytic at first, but then I realized \(s\) is constant. I've never seen that before.

I would say the only disparaging fact about this expansion is that if you are going to expand in \(s\) (if we want the multiplier to be locally holomorphic) we'd have to be more clever. I think it would make sense to generalize this to \(s^x \theta_s(x)\) to try and get holomorphy in \(s\), which shouldn't be too hard if you fiddle around.

This would also make the real valued superfunction for base \(b = \eta^- - \delta\) for \(\delta > 0\)! And you could probably get holomorphy in \(b\) if you played your cards right! (with some kind of problem at \(b=1\) and \(b = \eta^-\)).

Hey James, no intention to disappoint you, but if you're trying to build a real-valued superfunction for bases less than \(e^{-e}\) as tetration, you should be careful about which method to use.
1. The "P method" doesn't work here anymore. The "P method" is only a merging technique, and it requires that the merged function has only one limiting value as a fixed point; if you take the lower bases, there will be 2 fixed points, so the asymptotic expansion will not work.
2. Any real tetration for such bases generated by asymptotic expansions (at infinity) cannot be continuous. This is easy to prove; it follows the ideas below:
  I. If you're using an asymptotic expansion (at infinity), there must be 2 limiting values, which is contradictory unless you consider an oscillating function.
  II. If you're considering an oscillating function, there can only be 2 limiting values together with a discontinuity, because it always satisfies \(T(z+1)=b^{T(z)}\); if any continuity were assumed, it would contradict \(T(z+1)=b^{T(z)}\), because there can only be 2 limiting values, not an interval as for \(\sin(z)\).
So you must get a discontinuity when generating from an asymptotic expansion.
3. Even if Kneser's double-dagger method or other contour-integral-like methods can work, it will be difficult to choose the initial guess and also to make the function satisfy \(\text{tet}_b(0)=1\).
This is because such a real-valued tet function must be oscillating, and Kneser-like methods won't discern where the starting point is; the initial conditions \(T(0)=1, T(0)=b^b, T(0)=b^{b^b},\cdots\) can't be discerned easily. Furthermore, there must be tons of branch cuts, because the 2 fixed points make the tet semi-periodic with respect to a real period. So: Very Hard. Besides, its viability isn't proven or demonstrated, because this time the method also walks across the "2 limiting values" case.
Maybe your beta method works; looking forward to it.

It looks like my improved/simplified version of your "P method" yields the same result as your "P method" and can also handle the case \(b<\eta_-\), except for the condition \(\text{tet}_b(0)=1\), because it is a repelling fixed point and if you start to iterate you cannot use 1.
#15
(08/13/2022, 09:43 AM)bo198214 Wrote: It looks like my improved/simplified version of your "P method" yields the same result as your "P method" and can also handle the case \(b<\eta_-\), except for the condition \(\text{tet}_b(0)=1\), because it is a repelling fixed point and if you start to iterate you cannot use 1.

It's pretty nice to have a simplification. I see that you probably use a first-order approximation of the superfunction T and its inverse, so you get a simplification. Great idea!
And it should be very easy to show it's a special case of the "P method" (this name is so weird lol).

The difference is that it's not easy to determine which function to use as a theta mapping here. Also, the "P method" uses a kind of theta mapping as well; it's
\(T_{\text{merged}}\approx T(z+\theta(z))=\lim\left(L+s^{z+\theta(z)}\right)=\lim\left(L+\theta_1(z)s^z\right)\) where the two thetas are 1-periodic.
I'm not trying to act stubborn or pretentious, but simplification is one good thing while a detailed construction can be helpful: the P method can accurately determine what value a merged function takes at specific points, while with a simple theta mapping it's hard to find the right theta mapping.
And the simplified version is indeed awesome.
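That identity is easy to verify in a toy setting: if \(\theta\) is 1-periodic, then \(s^{z+\theta(z)}=\theta_1(z)s^z\) with \(\theta_1(z)=s^{\theta(z)}\) also 1-periodic, and precomposing a superfunction with \(z+\theta(z)\) preserves the functional equation. A sketch using the exact superfunction \(T(z)=L+s^z\) of the linear model \(x\mapsto L+s(x-L)\) (all constants here are arbitrary choices of mine):

```python
import math

L_fix, s = 0.3, 0.5     # toy fixed point and multiplier (arbitrary values)
g = lambda x: L_fix + s * (x - L_fix)   # linear model map; T(z+1) = g(T(z))
T = lambda z: L_fix + s ** z            # its exact superfunction: L + s^z

theta = lambda z: 0.1 * math.sin(2 * math.pi * z)   # any 1-periodic theta
theta1 = lambda z: s ** theta(z)                    # induced 1-periodic factor

Tm = lambda z: T(z + theta(z))   # reparametrized ("merged"-style) superfunction

z = 0.37
print(abs(Tm(z) - (L_fix + theta1(z) * s ** z)))   # the identity: ~0
print(abs(Tm(z + 1) - g(Tm(z))))                   # still a superfunction: ~0
```

The same algebra goes through for the real maps in the thread; the linear model just makes the two-theta bookkeeping transparent.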



I personally defined the method as a merging technique for 2 different superfunctions that have the same limiting value at some point, only with a naive, open and indefinite initial guess. This is why I say yours is a special case, consisting of:
1. an initial guess
2. merging or theta-mapping

Leaving this aside, I have to point out that the second superfunction you're making is in fact a merged superfunction of \(\log_b(z)\) and the 2 hidden different superfunctions (the limit, ~0.2528, is met at negative infinity). Besides, your function will never meet tet(0)=1, so it's still a special P method.
And I insist that the P method, or your simplified version, won't work for such bases. If you want to get a \(\text{tet}_b\), let \(f(z)=b^{b^z}\); you have to take a decreasing superfunction of \(f(z)\) which passes through 1 but also has a logarithmic singularity at -2, and another increasing superfunction that goes from 0 at -1; merging these 2 together will get you the desired tetration \(T(z)\).

However, because you certainly want continuity for \(\text{tet}_b\), when you merge these 2, your initial guess \(T_0(z)\) must cross the fixed point \(L\) of \(b^z\) somewhere in [0,1], and that fixed point must be repelling (easy to check). Let's say the crossing point is \(T_0(c)=L\), \(c\in[0,1]\), and write the expansion \(T_0(z)=L+s_0(z-c)+O((z-c)^2)\); denote by \(T_n(z)\) the nth approximation.
Because of the branch-cut issue, you can only use \(T_n(z)=\log_b(T_{n-1}(z+1))\) to approximate the final T; you must not use \(T_n(z)=\exp_b(T_{n-1}(z-1))\).
(It doesn't matter, though; both directions fail, as shown below.)
Denote by \(c_n\) the solution of \(T_n(z)=L\) in [0,1], and by \(s_n\) the derivative of \(T_n\) there. I'll show you a specific example for b=0.06 in the pics.

1. If you use \(T_n(z)=\log_b(T_{n-1}(z+1))\) and focus on the expansion at \(c_n\), we must obtain \(s_n=s_0\,s^{-n}\) (with \(s\) the repelling multiplier of \(b^z\) at \(L\)), so you can verify that \(s_n\to0\); and because this time T gets 3 limiting values at infinity (L and the 2 other fixed points of \(f(z)=b^{b^z}\)), \(T_n\) behaves like a constant on a big neighborhood of \(c_n\), but makes a sudden leap at the integers. Thus for very large n, T converges to a function which has discontinuities at the integers and remains constantly equal to L otherwise.

And pic1 shows this behavior:
blue: initial guess \(T_0\), with the values at the integers correct
yellow: after 40 iterations, \(T_{40}\) tends to be constant around \(c_n\)
green: after 80 iterations, \(T_{80}\) is even more "constant" around \(c_n\)
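The flattening seen in the pics can be quantified: under \(T_n=\log_b(T_{n-1}(z+1))\), each step divides the slope at the crossing point by the multiplier \(s=\ln(b)\,L\), and for b = 0.06 we have \(|s|\approx1.017\), so the decay is geometric but slow. A small sketch (pure Python; the decay factors are my own computation, not read off the pics):

```python
import math

b = 0.06
# Real fixed point L of b^x via bisection (b^x - x decreases on [0, 1])
lo, hi = 0.0, 1.0
for _ in range(100):
    mid = (lo + hi) / 2
    if b ** mid - mid > 0:
        lo = mid
    else:
        hi = mid
L = (lo + hi) / 2

s = math.log(b) * L        # multiplier f'(L); repelling, |s| barely above 1
# Under T_n = log_b(T_{n-1}(z+1)) the slope at the crossing point obeys
# s_n = s_0 * s**(-n), a geometric decay.
decay_40 = abs(s) ** -40   # remaining slope fraction after 40 iterations
decay_80 = abs(s) ** -80   # ... after 80 iterations
print(abs(s), decay_40, decay_80)
```

After 80 iterations only about a quarter of the initial slope remains, which matches the green curve looking "more constant" than the yellow one.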



2. If you use \(T_n(z)=b^{T_{n-1}(z-1)}\) (even ignoring the branch cuts), we still focus on the expansion at \(c_n\); this time we'll have \(s_n=s_0\,s^{n}\), and \(|s_n|\to\infty\). Thus as n gets very big, T remains constant at the integers and has infinitely many jumps at \(c_n, c_n+1, c_n+2\), and so on; still not working.

No pic on [-1,10] this time, because the branch cuts make \(T_n\) complex on [-1,n-1].
Pic2 shows how \(T_n(z)=b^{T_{n-1}(z-1)}\) behaves:
blue: initial guess \(T_0\) on [80,90]
yellow: after 40 iterations, \(T_{40}\) seems to begin jumping at the \(c_n\), i.e. the solutions of \(T_{40}(z)=L\approx0.36158\) (for base 0.06)
green: after 80 iterations, \(T_{80}\) stays constant at the 2 exotic fixed points of \(b^{b^z}\), and jumps at the \(c_n\)


If you've got a great idea for such a real-valued tet, please call me up.


Attached Files Thumbnail(s)
Regards, Leo :)
#16
(08/13/2022, 05:29 AM)Leo.W Wrote:
(08/12/2022, 11:34 PM)JmsNxn Wrote: Ooo that is nice! Very straight to the point. I was going to say it wouldn't be analytic at first, but then I realized \(s\) is constant. I've never seen that before.

I would say the only disparaging fact about this expansion is that if you are going to expand in \(s\) (if we want the multiplier to be locally holomorphic) we'd have to be more clever. I think it would make sense to generalize this to \(s^x \theta_s(x)\) to try and get holomorphy in \(s\), which shouldn't be too hard if you fiddle around.

This would also make the real valued superfunction for base \(b = \eta^- - \delta\) for \(\delta > 0\)! And you could probably get holomorphy in \(b\) if you played your cards right! (with some kind of problem at \(b=1\) and \(b = \eta^-\)).

Hey James, no intention to disappoint you, but if you're trying to build a real-valued superfunction for bases less than \(e^{-e}\) as tetration, you should be careful about which method to use.
1. The "P method" doesn't work here anymore. The "P method" is only a merging technique, and it requires that the merged function has only one limiting value as a fixed point; if you take the lower bases, there will be 2 fixed points, so the asymptotic expansion will not work.
2. Any real tetration for such bases generated by asymptotic expansions (at infinity) cannot be continuous. This is easy to prove; it follows the ideas below:
  I. If you're using an asymptotic expansion (at infinity), there must be 2 limiting values, which is contradictory unless you consider an oscillating function.
  II. If you're considering an oscillating function, there can only be 2 limiting values together with a discontinuity, because it always satisfies \(T(z+1)=b^{T(z)}\); if any continuity were assumed, it would contradict \(T(z+1)=b^{T(z)}\), because there can only be 2 limiting values, not an interval as for \(\sin(z)\).
So you must get a discontinuity when generating from an asymptotic expansion.
3. Even if Kneser's double-dagger method or other contour-integral-like methods can work, it will be difficult to choose the initial guess and also to make the function satisfy \(\text{tet}_b(0)=1\).
This is because such a real-valued tet function must be oscillating, and Kneser-like methods won't discern where the starting point is; the initial conditions \(T(0)=1, T(0)=b^b, T(0)=b^{b^b},\cdots\) can't be discerned easily. Furthermore, there must be tons of branch cuts, because the 2 fixed points make the tet semi-periodic with respect to a real period. So: Very Hard. Besides, its viability isn't proven or demonstrated, because this time the method also walks across the "2 limiting values" case.
Maybe your beta method works; looking forward to it.

I made a terrible mistake in that post: I meant to write \(\eta^- + \delta\). I agree with everything you just posted. I meant this would work within the Shell-Thron region. Yes, outside of Shell-Thron, at \(\eta^- - \delta\), you get a pair of fixed points. My bad, just a dumb typo. Don't mind me, just observing. Great work as always :P
#17
(08/14/2022, 03:50 AM)JmsNxn Wrote:
(08/13/2022, 05:29 AM)Leo.W Wrote: 1. if you're taking the lower bases, there'll be 2 fixed points,

Yes, outside of Shell-Thron, at \(\eta^- - \delta\), you get a pair of fixed points.

Which two fixed points are you talking about? If the base moves through \(\eta_-\), there is no fixed-point split or anything like that.
There is *one* real fixed point, whether the base is left of \(\eta_-\) or right of \(\eta_-\) (with b<1).
AFAIK there are only two cases in which a fixed point comes into existence or vanishes:
1. at \(b=\eta\), two real fixed points merge into 1 fixed point and then immediately split into two again
2. at \(b=1\), the right real fixed point vanishes to infinity (coming from eta)
I mean, there are two additional complex-conjugate fixed points a bit to the left for b at eta minor, but these fixed points are continuous in b: a small vicinity around eta minor maps to a small vicinity around these fixed points. So nothing comes into existence below eta minor.
In this post is an overview of the complex fixed points on the STB.

Or is Leo talking about the *additional* fixed points of \(b^{b^x}\)?
Like shown in this post? That has to do with the construction of the P method, which merges the even and odd iterates.
#18
(08/14/2022, 06:15 AM)bo198214 Wrote:
(08/14/2022, 03:50 AM)JmsNxn Wrote:
(08/13/2022, 05:29 AM)Leo.W Wrote: 1. if you're taking the lower bases, there'll be 2 fixed points,

Yes, outside of Shell-Thron, at \(\eta^- - \delta\), you get a pair of fixed points.

Which two fixed points are you talking about? If the base moves through \(\eta_-\), there is no fixed-point split or anything like that.
There is *one* real fixed point, whether the base is left of \(\eta_-\) or right of \(\eta_-\) (with b<1).
AFAIK there are only two cases in which a fixed point comes into existence or vanishes:
1. at \(b=\eta\), two real fixed points merge into 1 fixed point and then immediately split into two again
2. at \(b=1\), the right real fixed point vanishes to infinity (coming from eta)
I mean, there are two additional complex-conjugate fixed points a bit to the left for b at eta minor, but these fixed points are continuous in b: a small vicinity around eta minor maps to a small vicinity around these fixed points. So nothing comes into existence below eta minor.
In this post is an overview of the complex fixed points on the STB.

Or is Leo talking about the *additional* fixed points of \(b^{b^x}\)?
Like shown in this post? That has to do with the construction of the P method, which merges the even and odd iterates.

Well, I can clarify what I would have meant had I not made my typo. We can write a superfunction:

\[
F(t) = \Psi^{-1}\left(\cos(\pi t)\, |s|^t\, \Psi(z_0)\right)
\]

When \(b = \eta^- + \delta\). And what I presumed Leo meant is that \(\eta^- - \delta\) (which I stupidly wrote) implies that the nearest fixed points are the conjugate pairs (which are still there for \(b = \eta^- + \delta\), but aren't the "primary" fixed point/points). But once you go outside of the Shell-Thron region, you get (when forcing real-valued solutions) two complex-conjugate fixed points. That's what I interpreted from Leo's comment. Perhaps I read him wrong.

My original comment was poisoned by a typo, lmao, I should've just written \(b = \eta^- + \delta\), and this whole situation would've been avoided. Hope I'm not being obtuse or anything, lol.

Let's say: nearest fixed point to the real line. Then \(b = \eta^- + \delta\) has a real fixed point, and \(b = \eta^- - \delta\) has a conjugate fixed-point pair as the closest.
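As a numerical sanity check of the formula above (with \(\Psi\) read as the Koenigs/Schröder coordinate at the attracting fixed point, computed by truncating its defining limit at depth N; both N and the sample base b = 0.5 are my own choices, b = 0.5 simply being a convenient base with a negative real multiplier):

```python
import math

b = 0.5                    # sample base with negative real multiplier (my choice)
f = lambda x: b ** x
finv = lambda y: math.log(y) / math.log(b)   # log_b, the inverse of f

# Attracting real fixed point L of b^x and its multiplier s = ln(b)*L, -1 < s < 0
x = 0.5
for _ in range(2000):
    x = f(x)
L = x
s = math.log(b) * L

N = 24  # truncation depth for the Koenigs/Schroeder limits (my choice)

def Psi(x):
    # Koenigs coordinate: Psi(x) = lim_n (f^n(x) - L) / s^n
    for _ in range(N):
        x = f(x)
    return (x - L) / s ** N

def Psi_inv(w):
    # Inverse: start at L + s^N * w and pull back with N applications of log_b
    x = L + s ** N * w
    for _ in range(N):
        x = finv(x)
    return x

z0 = 0.5  # arbitrary real seed

def F(t):
    # F(t) = Psi^{-1}( cos(pi t) * |s|^t * Psi(z0) ), real for real t
    return Psi_inv(math.cos(math.pi * t) * abs(s) ** t * Psi(z0))

print(F(0), F(1))                  # F(0) ~ z0 and F(1) ~ b^z0
print(abs(F(1.3) - b ** F(0.3)))   # functional equation at non-integer t: ~0
```

The last line checks \(F(t+1)=b^{F(t)}\) at a non-integer t; F stays real on the real line precisely because \(\cos(\pi t)|s|^t\) stands in for the complex \(s^t\) when \(s<0\).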
#19
(08/13/2022, 05:44 PM)Leo.W Wrote: And it shall be very easily to show it's a special case of "P method"(this name so wierd lol)
The name of the method is in your hands; you even called it the re-construction method once.
I would say: choose a name wisely in the beginning, and don't say things like "call it what you want".
We have enough confusion about naming, so it would really improve things to start with good names in the first place.

(08/13/2022, 05:44 PM)Leo.W Wrote: The difference is it's not easy to determine which function to use as a theta mapping here, also exactly "P method" uses a kinda theta-mapping as well, it's

Actually, your expertise and technical arsenal in creating superfunctions is a bit wasted on me.
I am still puzzled how a superfunction can not come out of an iteration (semi)group.
And I'm rather interested in criteria that make a superfunction unique, instead of crafting superfunctions for all kinds of purposes.
So my simplification was a pure coincidence: I was just inspired by how it was done with the Fibonacci extension,
and I wondered why he was constructing something that is not a superfunction and has to be "post-iterated" to become one, instead of just having a superfunction right away.
And I could construct a superfunction from this method, which you claimed was not possible with the P method (because of the additional fixed points of \(b^{b^x}\)).
So I thought it could simplify/improve dealing with the superfunctions.
But well, if it doesn't fit into your arsenal of techniques, I will not cry ;)
#20
(08/14/2022, 06:30 AM)JmsNxn Wrote: that the nearest fixed points are the conjugate pairs (which are still there for \(b = \eta^- + \delta\), but aren't the "primary" fixed point/points). But once you go outside of the Shell-Thron region, you get (when forcing real-valued solutions) two complex-conjugate fixed points.
What's not primary about them? They are as primary as it can get inside and outside the STR:



