Now I have compared Daniel's method with Andrew's method for the base \( b=\sqrt{2}<\eta \) (the hyperbolic case).
Daniel's approach is to consider the fixed point of \( f(x)=b^x \), to determine the unique hyperbolic iterate \( f^{\circ t} \) there, and then to set \( {}^tb=f^{\circ t}(1) \). The lower fixed point of \( f \) for \( b=\sqrt{2} \) is 2 (the upper fixed point, 4, is not reachable from 1 by iteration, so it is of no importance for our method \( {}^tb=\exp_b^{\circ t}(1) \)).
Because I have no actual formula to compute the series expansion of the hyperbolic iterate, I used an iterative formula (which can be found in [1] and bears quite some similarity to Jay's approach):
\( f^{\circ t}(x)=\lim_{n\to\infty} f^{\circ -n}(c^t f^{\circ n}(x)) \)
where \( f \) is assumed to have its fixed point at 0 and \( c \) is the derivative at the fixed point, which in our case is
\( c=f'(2)=\log(b)\,b^2 = \log(2^{1/2})\cdot 2=\log(2) \). Of course there are some conditions on the function \( f \) for the formula to be valid, but they are satisfied by our \( f \); in particular \( 0<c<1 \).
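The fixed point and the multiplier can be verified numerically; here is a minimal Python check (nothing beyond the formulas above, variable names are my own):

```python
import math

b = math.sqrt(2)

# Lower fixed point of f(x) = b**x: f(2) = sqrt(2)**2 = 2.
assert abs(b ** 2 - 2) < 1e-12

# Multiplier c = f'(2) = log(b) * b**2 = log(2) ~ 0.6931,
# so 0 < c < 1 and the fixed point is hyperbolic (attracting).
c = math.log(b) * b ** 2
assert abs(c - math.log(2)) < 1e-12
assert 0 < c < 1
```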
In the usual way we can move the fixed point to 0 by conjugation and, after iteration, move it back to its original place by the inverse conjugation. In this case that results in the formulas
\( f_2(x)=b^{x+2}-2, g_2(x)=\log_b(x+2)-2 \) and
\( f^{\circ t}(x)= \lim_{n\to\infty} g_2^{\circ n}(c^t f_2^{\circ n}(x-2))+2 \), i.e.
\( {}^t\left(\sqrt{2}\right) = \lim_{n\to\infty} g_2^{\circ n}\left(\log(2)^t f_2^{\circ n}(-1)\right)+2 \)
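The limit can be sketched numerically as follows (a Python sketch of the formula above; the names `f2`, `g2`, `tet` and the truncation depth `n=50` are my own choices, and the limit is simply cut off at a finite depth):

```python
import math

b = math.sqrt(2)
c = math.log(2)  # multiplier f'(2) at the lower fixed point 2

def f2(x):
    # conjugated map: fixed point moved from 2 to 0
    return b ** (x + 2) - 2

def g2(x):
    # inverse of f2
    return math.log(x + 2, b) - 2

def tet(t, n=50):
    # approximate {}^t sqrt(2) = g2^n(c**t * f2^n(1 - 2)) + 2
    y = 1 - 2  # start at x = 1, shifted into fixed-point-at-0 coordinates
    for _ in range(n):
        y = f2(y)
    y *= c ** t
    for _ in range(n):
        y = g2(y)
    return y + 2

print(tet(0), tet(1), tet(2))  # approximately 1, sqrt(2), sqrt(2)**sqrt(2)
```

At integer \( t \) the limit reproduces the ordinary iterates exactly, which gives a convenient sanity check; the interesting values are the fractional ones, e.g. `tet(0.5)`.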
And now guess how Andrew's and Daniel's slog compare! (At least in the picture; I haven't started more exact numerical computations.)
[1] M. C. Zdun, Regular fractional iterations, Aequationes Mathematicae 28 (1985), 73–79.
