real slog developed at a fixed point
#1
Following an idea of Dmitrii we make the ansatz:
\( \text{slog}(z)=\frac{1}{\el}\left(\log(z-\el) + r(z-\el)\right) \)
where \( \el \) is the fixed point of \( \log \) in the upper half plane and \( r \) is a power series developed at \( \el \).

\( \text{slog} \) must satisfy:
\( \text{slog}(\exp(z))=\text{slog}(z)+1 \)
and hence:
\( \log(e^z-\el)+r(e^z-\el)=\log(z-\el)+r(z-\el)+\el \)
\( \log\left(\frac{e^z-\el}{z-\el}\right)=r(z-\el)-r(e^z-\el)+\el \)

The left side has no singularity at \( \el \) hence we do the transformation \( z\mapsto z+\el \):
\( \log\left(\frac{\el(e^z-1)}{z}\right)=r(z)-r(\el(e^z-1))+\el \)
Since \( \log(\el)=\el \), this simplifies to:
(*) \( \log\left(\frac{e^z-1}{z}\right)=r(z)-r(\el(e^z-1)) \)
The left side can be developed as:
\( \log\left(\frac{e^z-1}{z}\right)=
\frac{1}{2}z+\frac{1}{24}z^2-\frac{1}{2880}z^4+\frac{1}{181440}z^6 -\frac{1}{9676800}z^8+\frac{1}{479001600}z^{10}+\dots \)
Let \( \lambda_k \) be the coefficients of this series.

And let \( \eta_k \) be the coefficients of the series \( \el(e^z-1)=\el z + \frac{\el}{2} z^2 + \frac{\el}{6} z^3 + \dots \).

Then on the right hand side of (*) we have:
\( (r-r\circ \eta)_n = r_n - \sum_{j=0}^n r_j (\eta^{\cdot j})_n \)
by the composition rule for power series (Carleman matrix multiplication), where \( \eta^{\cdot j} \) means the \( j \)-th power (not iteration) of \( \eta \).

This yields a recursive formula for \( r_n \):
\( r_n(1-(\eta^{\cdot n})_n)=\lambda_n + \sum_{j=0}^{n-1} r_j (\eta^{\cdot j})_n \)
Because \( \eta^{\cdot 0}=1 \) contributes nothing at order \( n>0 \), the \( j=0 \) term of the right hand sum vanishes and the sum can start at 1, so:
\( r_n = \frac{\lambda_n + \sum_{j=1}^{n-1} r_j (\eta^{\cdot j})_n}{1-(\eta^{\cdot n})_n} \) for \( n>0 \).

Now we know that \( \eta^{\cdot j}(z)=\el^j (e^z-1)^j=\el^j\sum_{k=0}^j \binom{j}{k}e^{kz}(-1)^{j-k} \)
\( \left(\eta^{\cdot j}\right)_n=\el^j \sum_{k=0}^j \binom{j}{k}\frac{k^n}{n!}(-1)^{j-k}=\el^j \sum_{k=1}^j \binom{j}{k}\frac{k^n}{n!}(-1)^{j-k} \)
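This recursion is easy to run numerically. Below is a minimal sketch in Python (the numerical value of \( \el \) is the one quoted later in this thread; the truncation order and the helper name eta_pow are mine). The \( \lambda_n \) are obtained from \( p(z)=(e^z-1)/z \) via the logarithmic-derivative recursion \( n\lambda_n = n p_n - \sum_{k=1}^{n-1} k\lambda_k p_{n-k} \):

```python
from math import comb, factorial

# Fixed point of log in the upper half plane: exp(L) = L
L = 0.31813150520476413 + 1.3372357014306895j
N = 10  # truncation order (my choice)

# p(z) = (e^z - 1)/z has Taylor coefficients p_k = 1/(k+1)!
p = [1.0 / factorial(k + 1) for k in range(N + 1)]

# lam = log(p): from p * lam' = p' one gets
#   n*lam_n = n*p_n - sum_{k=1}^{n-1} k*lam_k*p_{n-k}
lam = [0.0] * (N + 1)
for n in range(1, N + 1):
    lam[n] = p[n] - sum(k * lam[k] * p[n - k] for k in range(1, n)) / n

def eta_pow(j, n):
    """n-th Taylor coefficient of eta(z)^j = L^j * (e^z - 1)^j."""
    return L**j * sum(comb(j, k) * k**n * (-1)**(j - k)
                      for k in range(1, j + 1)) / factorial(n)

# r_n = (lam_n + sum_{j=1}^{n-1} r_j (eta^j)_n) / (1 - (eta^n)_n);
# r_0 stays free and is set to 0 here
r = [0.0] * (N + 1)
for n in range(1, N + 1):
    r[n] = (lam[n] + sum(r[j] * eta_pow(j, n) for j in range(1, n))) \
           / (1 - eta_pow(n, n))
```

In particular, `lam[4]` reproduces \( -\frac{1}{2880} \) and `r[1]` reproduces \( \frac{1/2}{1-\el} \).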

So for example:
\( r_1=\frac{\lambda_1}{1-\eta_1}=\frac{1/2}{1-\el} \)
\( r_2=\frac{\lambda_2 + r_1\eta_2}{1-(\eta^{\cdot 2})_2}=\frac{\frac{1}{24}+\frac{1}{2-2\el}\cdot\frac{\el}{2}}{1-\el^2}=\frac{\el}{4\el^3-4\el^2-4\el+4} + \frac{1}{24-24\el^2} \)
\( r_3=\frac{\el}{-12\el^6+12\el^5+12\el^4-12\el^2-12\el+12} + \frac{\el^2}{24\el^5-24\el^3-24\el^2+24}+\frac{\el^3}{-6\el^6+6\el^5+6\el^4-6\el^2-6\el+6} \)
#2
bo198214 Wrote:...
\( \log(e^z-\el)+r(e^z-\el)=\log(z-\el)+r(z-\el)+\el \)
\( \log\left(\frac{e^z-\el}{z-\el}\right)=r(z-\el)-r(e^z-\el)+\el \)
...
Let
\( f_1=\log(e^z-\el)-\log(z-\el) \)
\( f_2=\log\left(\frac{e^z-\el}{z-\el}\right) \)
Perhaps, it is supposed that \( f_1=f_2 \). I plot both functions below.
Levels \( \Re(f)=-2,-1,0,1,2,3,4 \) are shown with thick black curves.
Levels \( \Re(f)=-1.8,-1.6,-1.4,-1.2,-0.8,-0.6,-0.4,-0.2 \) are shown with thin red curves.
Levels \( \Re(f)= 0.2, 0.4, 0.6, 0.8, 1.2, 1.4, 1.6, 1.8 \) are shown with thin red curves.
Levels \( \Im(f)=-2,-1 \) are shown with thick red curves.
Levels \( \Im(f)=1,2 \) are shown with thick blue curves.
Levels \( \Im(f)=\pm \pi,\pm 3\pi \) are shown with thick pink curves.
Level \( \Im(f)=-4 \) is shown with a cyan curve.
Levels \( \Im(f)=-1.8,-1.6,-1.4,-1.2,-0.8,-0.6,-0.4,-0.2 \) are shown with thin red curves.
Levels \( \Im(f)= 0.2, 0.4, 0.6, 0.8, 1.2, 1.4, 1.6, 1.8 \) are shown with thin red curves.
#3
Dear Henryk. I have already expressed some doubts, but now I see my misunderstanding begins even earlier.
I do not understand even the title: how can the slog be real at the fixed point?

On the other hand, I suggest an asymptotic development of sexp at large values of the imaginary part of the argument.
Let \( \varepsilon=\varepsilon(z)=\exp(\el z +R) \), where \( \el \approx 0.3+1.3 i \) is the fixed point of the logarithm and \( R \) is some complex constant. Then the equation \( f(z+1)=\exp(f(z)) \) allows an asymptotic expansion, using \( \varepsilon \) as small parameter:
\( \mathrm{sexp}(z)=\sum_{n=0}^N a_n \varepsilon^n +\mathcal{O}(\varepsilon^{N+1}) \).
The first coefficients of this expansion are:
\( a_0=\el \),
\( a_1=1 \),
\( a_2=\frac{1/2}{\el-1}\approx -0.151314 - 0.2967488 \mathrm{i}~,~ \)
\( a_3=\frac{a_2+1/6}{\el^2-1}\approx -0.036976 + 0.098731 \mathrm{i}~, \)
\( a_4=\frac{a_2/2+a_2^2/2+a_3+1/24}{\el^3-1}
\approx 0.025811 -0.017387 \mathrm{i} \).
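Since \( e^{\el}=\el \), we have \( \varepsilon(z+1)=\el\,\varepsilon(z) \), so the coefficients can be checked directly against the tetration equation \( \mathrm{sexp}(z+1)=\exp(\mathrm{sexp}(z)) \): the residual of the truncated expansion must be \( \mathcal{O}(\varepsilon^5) \). A sketch in Python (the values of \( \el \) and \( R \) are the ones quoted in this thread; the function name sexp_asym is mine):

```python
import cmath

# Fixed point of log: exp(L) = L
L = 0.31813150520476413 + 1.3372357014306895j
R = 1.077961437527 - 0.946540963949j  # fitted constant quoted in the thread

# Coefficients of the asymptotic expansion sexp(z) ~ L + sum a_n eps^n
a2 = 0.5 / (L - 1)
a3 = (a2 + 1/6) / (L**2 - 1)
a4 = (a2/2 + a2**2/2 + a3 + 1/24) / (L**3 - 1)

def sexp_asym(z):
    """Truncated asymptotic expansion of sexp, for large Im(z)."""
    eps = cmath.exp(L * z + R)
    return L + eps + a2 * eps**2 + a3 * eps**3 + a4 * eps**4

# High in the upper half plane eps is small (|eps| ~ 1e-3 at z = 6i),
# so sexp(z+1) and exp(sexp(z)) should agree to roughly |eps|^5
z = 6j
err = abs(sexp_asym(z + 1) - cmath.exp(sexp_asym(z)))
print(err)
```

Note that the residual check does not depend on the value of \( R \): the matching of the coefficients holds for any choice of the constant.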
I evaluate the parameter \( R\approx 1.077961437527-0.946540963949 i \) by fitting tetration sexp at large values of the imaginary part, with the approximation at \( N=4 \). I plot the deviation of tetration from the asymptotic approximation with \( N=0,1,2,3 \) in the figures below.
The four plots correspond to the four asymptotic approximations. The deviations
\( f=\mathrm{sexp}(z)-L \),
\( f=\mathrm{sexp}(z)-\Big(L+\varepsilon(z)\Big) \),
\( f=\mathrm{sexp}(z)-\Big(L+\varepsilon+a_2 \varepsilon^2 \Big) \),
\( f=\mathrm{sexp}(z)-\Big(L+\varepsilon+a_2 \varepsilon^2 + a_3 \varepsilon^3\Big) \)
are shown in the complex plane with lines of constant phase and constant modulus.
Levels \( \arg(f)=-2,-1 \) are shown with thick red lines.
Level \( \arg(f)=0 \) is shown with thick black lines.
Levels \( \arg(f)=1,2 \) are shown with thick blue lines.
Levels \( \arg(f)=\pm \pi \) are shown with dashed lines (these lines reveal the step of sampling used by the plotter).

Levels \( |f|=\exp(-0.8),\exp(-0.6),\exp(-0.4),\exp(-0.2) \) are shown with thin red lines.
Levels \( |f|=\exp(0.2),\exp(0.4),\exp(0.6),\exp(0.8) \) are shown with thin blue lines.
Levels \( |f|=\exp(3), \exp(2), \exp(1),\exp(0), \exp(-\!1), \exp(-\!2), \exp(-\!3), \exp(-\!4), \exp(-\!5), \exp(-\!6), \exp(-\!7), \exp(-\!8) \) are shown with thick black lines.
Level \( |f|=\exp(-10) \) is shown with thick red line.
Levels \( |f|= \exp(-12),\exp(-14), \exp(-16),\exp(-18) \) are shown with thick black lines.
Level \( |f|=\exp(-20) \) is shown with thick red line.
Levels \( |f|= \exp(-22),\exp(-24), \exp(-26),\exp(-28) \) are shown with thick black lines.
Level \( |f|=\exp(-30) \) is shown with thick green line.
Level \( |f|=\exp(-31) \) is shown with thick black line.
The plotter also tried to draw
Level \( |f|=\exp(-32) \) with a thick black line and
Level \( |f|=\exp(-33) \) with a thin dark green line, which are seen at the upper left hand corners of the two last pictures, but the precision of evaluation of tetration is not sufficient to plot smooth lines; for the same reason, the curve for
\( |f|=\exp(-31) \) in the last picture, at the upper right hand side, looks a little bit irregular; also, the pattern in the upper left corner of the last two pictures looks chaotic; the plotter cannot distinguish the function from its asymptotic approximation.

The figure indicates that, at \( \Im(z)>4 \), \( \Re(z)<4\Im(z) - 25 \), the asymptotic approximation
\( \mathrm{sexp}(z)\approx L+\varepsilon+a_2 \varepsilon^2 + a_3 \varepsilon^3 \)
gives at least 14 correct significant figures. At large values of the imaginary part, this approximation is more precise than the evaluation of tetration through the contour integral.

Questions.
1. Could anybody confirm this result with some independent way of evaluating the superexponential?
2. Can we invert the series and get the expansion for the slog? Is it the same as Henryk has posted?
This may not be so straightforward. I see there is different behavior on the left hand side of the figure and on the right hand side. This may mean that we should add more exponential terms, which are not just integer powers of \( \varepsilon \). Similarly, we may need some additional logarithmic terms in the expansion of slog for a robust approximation.
#4
Henryk, I continue expressing my doubts.
bo198214 Wrote:.. So for example:
\( r_1=\frac{\lambda_1}{1-\eta_1}=\frac{1/2}{1-\el} \)
\( r_2=... \)
Could you type your expression for the
\( r_0 \)
?
#5
Kouznetsov Wrote:
bo198214 Wrote:.. So for example:
\( r_1=\frac{\lambda_1}{1-\eta_1}=\frac{1/2}{1-\el} \)
\( r_2=... \)
Could you type your expression for the
\( r_0 \)?

Well, \( r_0 \) has to be chosen such that \( \text{slog}(0)=-1 \), i.e.
\( -1=\text{slog}(0)=\frac{1}{\el}\left(\log(-\el) + r_0+\sum_{n=1}^\infty r_n (-\el)^n\right) \)
i.e.
\( r_0=-\el-\log(-\el)-\sum_{n=1}^\infty r_n (-\el)^n \)
*headscratch, is that correct?*
#6
bo198214 Wrote:\( r_0 \) has to be chosen such that \( \text{slog}(0)=-1 \), i.e
\( -1=\text{slog}(0)=\frac{1}{\el}\left(\log(-\el) + r_0+\sum_{n=1}^\infty r_n (-\el)^n\right) \)
i.e.
\( r_0=-\el-\log(-\el)-\sum_{n=1}^\infty r_n (-\el)^n \)
*headscratch, is that correct?*
Well, if \( r_0 \approx -1.0779614375277+0.9465409639482 i \)
then, can we plot the pics?

P.S. Henryk, I have a computational indication that the series above, even if it converges, does not converge to the values of the slog function.
I calculated the jump of the slog function along its cut. The code:
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <complex>
using namespace std;
#define DB double
#define DO(x,y) for(x=0;x<y;x++)
#define z_type complex<double>
#define Re(x) x.real()
#define Im(x) x.imag()
#define I z_type(0.,1.)
#include "f4natu.cin"
#include "f4d1natu.cin"
#include "g4natu.cin"
int main(){ int m; DB x; z_type z,c,d,cu,cd;
z_type L=z_type(.31813150520476413, 1.3372357014306895);

DO(m,31)
{ x=Re(L)-.1*m;
z=z_type(x,Im(L)+1.e-14); cu=G4natu(z);
z=z_type(x,Im(L)-1.e-14); cd=G4natu(z); d=cu-cd; c=d*L;
printf("%5.2f %6.3f %15.12f %15.12f %15.12f %15.12f\n",.1*m,x,Re(d),Im(d),Re(c),Im(c));
}
return 0;
}
Below is the output.
Zeroth column: t, the distance from the fixed point L.
First column: x, the real part of the argument of slog.
Next two columns: real and imaginary parts of the jump, evaluated as
d = slog(L - t + i*1.e-14) - slog(L - t - i*1.e-14).
Last two columns: real and imaginary parts of d*L (as computed by the code; the last column is close to \( 2\pi \)).
0.00 0.318 2.204629320832 0.510377772480 0.018866665677 3.110476285219
0.10 0.218 4.446950517660 1.057939822526 0.000000161107 6.283184982867
0.20 0.118 4.446949163231 1.057934723290 0.000006549101 6.283181549449
0.30 0.018 4.446950722702 1.057908310360 0.000042365531 6.283175232044
0.40 -0.082 4.446968541263 1.057833866255 0.000147583492 6.283175376644
0.50 -0.182 4.447023057115 1.057676880467 0.000374853702 6.283198335062
0.60 -0.282 4.447137525037 1.057397004506 0.000785529681 6.283262368294
0.70 -0.382 4.447333728986 1.056950220488 0.001445403878 6.283382603147
0.80 -0.482 4.447628633453 1.056291468498 0.002420128961 6.283567390167
0.90 -0.582 4.448032384297 1.055377506697 0.003770757173 6.283816540166
1.00 -0.682 4.448547681229 1.054169596920 0.005549949441 6.284121339465
1.10 -0.782 4.449170275877 1.052635649573 0.007799265569 6.284465898277
1.20 -0.882 4.449890217828 1.050751616712 0.010547697790 6.284829260148
1.30 -0.982 4.450693461716 1.048502095034 0.013811375676 6.285187742835
1.40 -1.082 4.451563518823 1.045880232677 0.017594216201 6.285517117241
1.50 -1.182 4.452482939627 1.042887105440 0.021889229526 6.285794391492
1.60 -1.282 4.453434513703 1.039530746214 0.026680198603 6.285999106707
1.70 -1.382 4.454402151619 1.035824994068 0.031943498780 6.286114150167
1.80 -1.482 4.455371465040 1.031788294715 0.037649886408 6.286126149438
1.90 -1.582 4.456330089184 1.027442545470 0.043766145990 6.286025536119
2.00 -1.682 4.457267802332 1.022812043024 0.050256535271 6.285806370905
2.10 -1.782 4.458176496661 1.017922564733 0.057084004499 6.285466012314
2.20 -1.882 4.459050048272 1.012800594128 0.064211190742 6.285004696498
2.30 -1.982 4.459884125528 1.007472688141 0.071601203094 6.284425079632
2.40 -2.082 4.460675965723 1.001964975531 0.079218222344 6.283731779706
2.50 -2.182 4.461424141950 0.996302771896 0.087027941621 6.282930942304
2.60 -2.282 4.462128335191 0.990510295126 0.094997874414 6.282029845292
2.70 -2.382 4.462789121224 0.984610465224 0.103097554447 6.281036550200
2.80 -2.482 4.463407777924 0.978624773560 0.111298649224 6.279959602927
2.90 -2.582 4.463986115531 0.972573208132 0.119575005978 6.278807783003
3.00 -2.682 4.464526330455 0.966474223171 0.127902645797 6.277589898421

In the first row, the last number pretends to be \( \pi\approx 3.14 \), but my algorithm evaluating slog gives only 14 digits. As the distance from the branch point becomes macroscopic (for example 0.1, id est, much larger than the error of evaluation of slog), the jump of slog is similar to \( 2\pi i /L \), but it grows with increasing distance from the branch point. The next-to-last column can be interpreted as an indication of the error of the approximation with log + polynomial, because log has a constant jump.
For this reason I expect that the series has radius of approximation zero.
(However, the series still may be very useful for the precise numerical evaluation in the vicinity of the branch point.)
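The comparison with \( 2\pi i/L \) is easy to reproduce: the term \( \frac{1}{L}\log(z-L) \) alone jumps by exactly \( 2\pi i/L \) across the cut. A minimal sketch, taking the row at distance 0.10 of the table above as the measured value:

```python
import cmath

# Fixed point of log: log(L) = L
L = 0.31813150520476413 + 1.3372357014306895j

# Jump of log(z-L)/L across the cut
expected = 2j * cmath.pi / L

# Measured jump d of slog at distance 0.10 from the branch point
# (second row of the table above)
measured = 4.446950517660 + 1.057939822526j

print(abs(expected - measured))  # small near the branch point
```

Consistently, multiplying the measured jump by L recovers the last two columns of the table, which are close to \( 2\pi i \).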

Can anybody suggest a holomorphic function with some branch point and similar behavior of the jump along the cut line?
#7
I just see that Kneser himself made the following ansatz:

\( \Psi(z)=\frac{1}{c}\log(z-c)+\sum_{0\le m\le n } c_{m,n} (z-c)^{m+\frac{2\pi i n}{c}} \)

or, with the powers expressed directly:
\( \Psi(z)=\frac{1}{c}\log(z-c)+\sum_{0\le m\le n} c_{m,n} \exp\left(\log(z-c)\left(m+\frac{2\pi i n}{c}\right)\right) \)
#8
bo198214 Wrote:\( \Psi(z)=\frac{1}{c}\log(z-c)+\sum_{0\le m\le n } c_{m,n} (z-c)^{m+\frac{2\pi i n}{c}} \)
I do not understand why \( m\le n \). The sum of the first 5 terms with \( n=0 \) gives the approximation of slog by the function \( f \)
\( f=f(z)=\frac{1}{c}\log(z-c)+\sum_{m=0}^4 c_{m,0} (z-c)^{m} \)
shown in the figure below.
Levels \( \Re(f)=0,\pm 1, \pm2 \) are shown with thick black lines.
Levels \( \Re(f)=-1.9, .. -0.1 \) are shown with thin red lines.
Levels \( \Re(f)=0.1, .. 1.9 \) are shown with thin blue lines.
Levels \( \Im(f)=0.1, .. 1.9 \) are shown with thin dark green lines.
Level \( \Im(f)=0 \) is shown with a thick green line.
(Deviation of this line from the real axis indicates the error of the approximation.)
Level \( \Im(f)=-1 \) is shown with a thick red line.
Levels \( \Im(f)=1,2,3 \) are shown with thin dark blue lines.
For comparison, dashed lines show the precise evaluation of some of the levels above, using the robust implementation of the slog function as the inverse of tetration. While \( |z-L|<1 \), the deviation of these dashed lines from the levels of the function \( f \) is not seen even at strong zooming-in of the central part of the figure.

Here is the code I used for the evaluation of the coefficients:
fog := z -> log((exp(z)-1)/z)
M := 6
Digits := 41
slo := z-> sum(v[n]*z^n, n = 1 .. M)
F := series(slo(z)-slo((exp(z)-1)*L)-fog(z), z, M)
Le := solve(log(x) = x, x)
L := conjugate(evalf(Le, 40))
for m to M-1 do v[m] := solve(coeff(F, z^m) = 0, v[m]) end do
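For what it is worth, here is a Python transcription of this computation (a sketch: instead of Maple's symbolic solve it determines the v[m] one order at a time by convolution of truncated series; variable names follow the Maple code, the helper mul is mine):

```python
from math import factorial

L = 0.31813150520476413 + 1.3372357014306895j  # fixed point of log
M = 8  # truncation order

def mul(a, b):
    """Cauchy product of two series truncated at order M."""
    c = [0.0] * M
    for i, ai in enumerate(a):
        for j in range(M - i):
            c[i + j] += ai * b[j]
    return c

# fog(z) = log((e^z-1)/z); from p*fog' = p' with p_k = 1/(k+1)!
p = [1.0 / factorial(k + 1) for k in range(M)]
fog = [0.0] * M
for n in range(1, M):
    fog[n] = p[n] - sum(k * fog[k] * p[n - k] for k in range(1, n)) / n

# eta(z) = L*(e^z - 1) and its powers eta^j, built by convolution
eta = [0.0] + [L / factorial(k) for k in range(1, M)]
etapow = [[1.0] + [0.0] * (M - 1)]  # eta^0 = 1
for j in range(1, M):
    etapow.append(mul(etapow[-1], eta))

# Solve slo(z) - slo(eta(z)) - fog(z) = 0 order by order for v[m]
v = [0.0] * M
for m in range(1, M):
    v[m] = (fog[m] + sum(v[j] * etapow[j][m] for j in range(1, m))) \
           / (1 - etapow[m][m])
```

The values v[m] agree with the coefficients \( r_m \) of the recursion posted above, e.g. `v[1]` gives \( \frac{1/2}{1-\el} \).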

Can anybody suggest a similar code for the evaluation of the "fractional-power" coefficients, for example \( c_{0,1} \), \( c_{1,1} \)?
#9
Kouznetsov Wrote:I do not understand why \( m\le n \). The sum with first 5 terms with \( n=0 \) gives the approximation of slog with function \( f \)
\( f=f(z)=\frac{1}{c}\log(z-c)+\sum_{m=0}^4 c_{m,0} (z-c)^{m} \)
shown in the figure below.
Didn't you say that the series does not converge? Or not to the values of slog?

Beautiful picture btw.
#10
bo198214 Wrote:
Kouznetsov Wrote:I do not understand why \( m\le n \)
Didn't you say that the series does not converge? Or not to the values of slog?
Not to the values of slog. The radius of convergence of the sub-series \( S(z) \) of terms with integer powers of \( z \) seems to be \( |L| \).
This subseries gives the approximation \( S_{+}(z) \) of \( \mathrm{slog}(z) \), which does not seem to be better than that with the sum of the first 5 terms (from m=0 to m=4).
I include below the similar figure for the approximation of slog with \( \frac{1}{L}\Big(\log(z-L) + \text{polynomial of degree 64 in }(z-L) \Big) \).
Looking at the pic, I expect the subseries has a singularity at \( z=0 \). If we holomorphically extend the subseries \( S \), I expect \( \Re(S(0))=-1 \). But it is difficult to think that \( \Im(S(0))=0 \).

Kneser claims that it "converges uniformly and absolutely" ("gleichmässig und absolut konvergiert"). Any subseries of an absolutely convergent series should converge too. Therefore, I guess, the radius of convergence of Kneser's expansion is not larger than \( |L| \).

I tried to calculate the coefficients in the terms with fractional powers of \( z-L \).
I failed to force Maple to do the asymptotic analysis with fractional powers.
I should try the same in Mathematica.