In this post I describe a current problem of mine: computing the derivative of \( \operatorname{asum}(x) \) with respect to x with my matrix ansatz.
Remark: It might look like a step back behind Sheldon's power-series solution and the possibilities of computations/analyses derived from there, and so might seem superfluous or even ignorant. But it is not meant as such - I just want to complete the formulation of the Carleman/Neumann matrix ansatz for this group of questions too. But here I stumble over a tiny but mind-bending problem. To expose everything as cleanly as possible I split the planned post into two: first the simple, but more complete than before, restatement of the matrix representation, and then a second post with the problem/question concerning the derivatives.
I begin with a restatement of the general problem of this iteration series: first its usual computation by Cesàro/Euler summation of the iteration terms, and then in terms of two power series, using the Bell matrices developed at the two fixpoints and the Neumann versions of those Bell matrices for the iteration series.
We begin with the formulation
\( \operatorname{asum}(x) = \ldots + x_{-2} - x_{-1} + x_0 - x_1 + x_2 - \ldots \)
Here \( x_k \) means \( x_k = b^{x_{k-1}}-1 \), so increasing k means iterating by \( x_{k+1}=f(x_k)=b^{x_k}-1 \), and decreasing k means \( x_{k-1} = g(x_k-t_1)+t_1 \), where \( g(x) = \log(1+x+t_1)/\log(b) -t_1 \) and \( t_1 \) is the upper fixpoint.
Thanks to our powerful computers this can be approximated(!) in reasonable time - say, about a second using sumalt() in Pari/GP at the standard 200 digits precision.
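For readers without Pari/GP at hand, here is a rough Python sketch of this direct evaluation. All names (`euler_sum`, `asum`, ...) are my own, plain floats stand in for the 200-digit precision, a simple Euler summation stands in for sumalt(), and the base \( b=\sqrt2 \) is just an assumed example:

```python
import math

b  = math.sqrt(2)        # example base (assumption); fixpoints t0 = 0 and t1 > 0
lb = math.log(b)

def f(x):                # forward iteration x_{k+1} = f(x_k), towards t0 = 0
    return b**x - 1

def g(x):                # backward iteration x_{k-1} = log(1 + x_k)/log(b), towards t1
    return math.log(1 + x) / lb

def euler_sum(terms):
    """Euler summation of sum_{h>=0} (-1)^h * terms[h]; this also assigns a
    value to the divergent alternating series of the backward iterates."""
    total, row = 0.0, list(terms)
    for k in range(len(terms)):
        total += row[0] / 2**(k + 1)
        row = [row[i] - row[i + 1] for i in range(len(row) - 1)]
    return total

def asum(x, N=48):
    """Alternating iteration sum ... + x_{-2} - x_{-1} + x_0 - x_1 + x_2 - ..."""
    fwd, bwd = [x], [x]
    for _ in range(N):
        fwd.append(f(fwd[-1]))
        bwd.append(g(bwd[-1]))
    # both halves contain x_0 = x, so one instance is subtracted
    return euler_sum(fwd) + euler_sum(bwd) - x
```

A quick consistency check: shifting the whole bi-infinite series by one iteration flips every sign, so \( \operatorname{asum}(f(x)) = -\operatorname{asum}(x) \) should hold numerically.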
Converting this into a problem using power series (besides the advantage of having it in common analytical terms) provides a much faster computation; it needs only about a twentieth of the time.
The principle of the idea is to separate the iteration series into three parts
\( \begin{array} {llll}
\operatorname{asum}(x) &=& \ldots +x_{-2} - x_{-1} &+ x_0 \\
& & &+ x_0 - x_1 + x_2 - \ldots \\
& & & - x_0
\end{array}
\)
We then find power series for the two infinite partial iteration series and correct for the doubly occurring term \( x_0 \) by subtracting one instance.
I had already solved this problem; I repeat the statement here because the open problem which I want to address below is directly associated with it.
First, let us denote the partial iteration series with positive indices at the \( x \), whose terms go towards the lower fixpoint, as " \( \operatorname{asuma}(x) \) ", and that with the negative indices, going to the upper fixpoint, as " \( \operatorname{asumb}(x) \) ".
Second, we denote the Bell matrix for \( f(x) = b^x - 1 \), developed at the lower fixpoint \( t_0=0 \), as \( U_0 \), and that for \( g(x)=\log(1+x+t_1)/\log(b)-t_1 \), developed at the upper fixpoint \( t_1>0 \), as \( U_1 \).
Third, we identify the matrices which provide the power series for the partial iteration series towards the lower fixpoint, \( \operatorname{asuma}(x) \), and towards the upper fixpoint, \( \operatorname{asumb}(x) \), as
\( A_0 = (I + U_0)^{-1} \)
and
\( A_1 = (I + U_1)^{-1} \)
Then -in principle- we can compute:
\( \begin{array} {rcl} \operatorname{asuma}(x) &=& \sum_{k=0}^\infty a_{0,k}*x^k \\
\operatorname{asumb}(x) &=& \frac {t_1}2 + \sum_{k=0}^\infty a_{1,k}*(x-t_1)^k \end{array} \)
(using the \( a_{0,k} \) from the second column of \( A_0 \) and the \( a_{1,k} \) from the second column of \( A_1 \); the constant term \( \frac {t_1}2 \) is the Euler/Abel sum of the alternating series \( t_1 - t_1 + t_1 - \ldots \) which the fixpoint itself contributes to the backward iterates) and then the \( \operatorname{asum}(x) \) by
\( \operatorname{asum}(x) = \operatorname{asuma}(x) + \operatorname{asumb}(x) - x \)
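To make the construction concrete, here is a numpy sketch of the whole matrix ansatz. The base \( b=\sqrt2 \) and the truncation size 32 are ad-hoc assumptions, and all names (`carleman`, `asuma`, `asumb`, ...) are my own:

```python
import numpy as np
from math import log, sqrt

b, N = sqrt(2), 32                # example base and truncation size (assumptions)
lb = log(b)

t1 = 1.0                          # upper fixpoint, found by iterating the inverse map
for _ in range(200):
    t1 = log(1 + t1) / lb

def carleman(coef):
    """Truncated Bell/Carleman matrix U with V(x)*U = V(h(x)) for the series
    h(x) = sum_n coef[n]*x^n (coef[0] = 0, i.e. fixpoint at 0): column k
    holds the power-series coefficients of h(x)^k."""
    U = np.zeros((N, N))
    col = np.zeros(N)
    col[0] = 1.0                  # h(x)^0 = 1
    U[:, 0] = col
    for k in range(1, N):
        col = np.convolve(col, coef)[:N]   # multiply by h(x) once more
        U[:, k] = col
    return U

# series of f(x) = b^x - 1 at t0 = 0:  sum_{n>=1} (lb)^n x^n / n!
fcoef, fact = np.zeros(N), 1.0
for n in range(1, N):
    fact *= n
    fcoef[n] = lb**n / fact

# series of g at the upper fixpoint, i.e. of log(1 + t1 + x)/lb - t1 at x = 0
gcoef = np.zeros(N)
for n in range(1, N):
    gcoef[n] = (-1)**(n + 1) / (n * lb * (1 + t1)**n)

A0 = np.linalg.inv(np.eye(N) + carleman(fcoef))   # A_0 = (I + U_0)^{-1}
A1 = np.linalg.inv(np.eye(N) + carleman(gcoef))   # A_1 = (I + U_1)^{-1}
a0, a1 = A0[:, 1], A1[:, 1]                       # the two second columns

def asuma(x):
    return sum(a0[k] * x**k for k in range(N))

def asumb(x):
    return t1 / 2 + sum(a1[k] * (x - t1)**k for k in range(N))

def asum(x):                      # valid where both series converge well
    return asuma(x) + asumb(x) - x
```

The defining functional equations \( \operatorname{asuma}(x) + \operatorname{asuma}(f(x)) = x \) and \( \operatorname{asumb}(x) + \operatorname{asumb}(g(x-t_1)+t_1) = x \) make a convenient numerical check of the two second columns.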
The two occurring power series seem to be entire, but that is not yet proven. Anyway, to get convergence to reasonable precision with finitely many terms, say 64, the argument x should be near the respective fixpoint. Fortunately this can be achieved by exact computations: a couple of iterations of x using the original functions \( f(x) \) and \( g(x) \). So if we find that we need n iterations to shift x sufficiently near the lower fixpoint (by iterating \( f(x) \)) and m iterations to shift it sufficiently near the upper fixpoint (by iterating \( g(x) \)), then we can rewrite the three parts of \( \operatorname{asum}(x) \) in the following way. Using
\( \operatorname{asumae}(x_m,x_n) = \sum_{h=m}^n (-1)^h x_{h} \)
we get for the whole expression
\( \operatorname{asum}(x) = \operatorname{asuma}(x_{n}) + \operatorname{asumae}(x_{n-1},x_{-m+1}) + \operatorname{asumb}(x_{-m}) \)
where, by the implemented procedure, m and n are determined such that the two power series converge sufficiently fast (for instance, I have taken them such that the resulting x is less than 0.1 away from the respective fixpoint). Note that m and n should be taken even here, so that no extra sign factors \( (-1)^m, (-1)^n \) appear in front of the two partial series.
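As a plausibility check of this decomposition (not of the speed-up itself), here is a Python sketch that evaluates the three parts separately, with Euler summation standing in for the two partial power series. Taking n and m in steps of 2 keeps both even; all names are my own, the base \( b=\sqrt2 \) is an assumed example, and the sketch assumes \( 0 < x < t_1 \):

```python
import math

b  = math.sqrt(2)                          # example base (assumption)
lb = math.log(b)

def f(x): return b**x - 1                  # towards t0 = 0
def g(x): return math.log(1 + x) / lb      # towards t1

t1 = 1.0                                   # upper fixpoint, by iterating g
for _ in range(200):
    t1 = g(t1)

def euler_sum(terms):                      # Euler summation of sum (-1)^h terms[h]
    total, row = 0.0, list(terms)
    for k in range(len(terms)):
        total += row[0] / 2**(k + 1)
        row = [row[i] - row[i + 1] for i in range(len(row) - 1)]
    return total

def asuma(x, N=48):                        # partial series towards t0
    xs = [x]
    for _ in range(N):
        xs.append(f(xs[-1]))
    return euler_sum(xs)

def asumb(x, N=48):                        # partial series towards t1
    xs = [x]
    for _ in range(N):
        xs.append(g(xs[-1]))
    return euler_sum(xs)

def asum_shifted(x):
    """asum(x) = asuma(x_n) + asumae(x_{n-1}, x_{-m+1}) + asumb(x_{-m}),
    with n, m even so that no sign factors appear; assumes 0 < x < t1."""
    xn, n = x, 0
    while n == 0 or abs(xn) > 0.1:         # shift towards t0, two steps at a time
        xn, n = f(f(xn)), n + 2
    xm, m = x, 0
    while m == 0 or abs(xm - t1) > 0.1:    # shift towards t1, two steps at a time
        xm, m = g(g(xm)), m + 2
    mid, xk = x, x                         # middle part; the h = 0 term is x itself
    for h in range(1, n):                  # h = 1 .. n-1 (forward iterates)
        xk = f(xk)
        mid += (-1)**h * xk
    xk = x
    for h in range(1, m):                  # h = 1 .. m-1 (backward iterates)
        xk = g(xk)
        mid += (-1)**h * xk                # sign (-1)^{-h} = (-1)^h
    return asuma(xn) + mid + asumb(xm)
```

asum_shifted(x) should agree with the unshifted value asuma(x) + asumb(x) - x.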
In my practical computations this speeds up the evaluation by a factor of about 20, so that bulk analyses can be done in reasonable time.
The next step is to use this scheme also for the computation of the derivative (for instance to find the zero and the extrema of \( \operatorname{asum}(x) \) as a function of x via Newton iteration).
Everything (except time consumption and the limitation in precision) is fine when I compute the derivatives by numerical differentiation and the serial/sumalt evaluation of \( \operatorname{asum}(x) \). But trying the same matrix ansatz as above for the derivatives, using the technique of the \( \operatorname{asumae}(x_{n-1},x_{1-m}) \), fails as soon as that sum contains more than the "trivial" term x itself. See the next posting.
Gottfried Helms, Kassel

