Well, I've now worked out a method to find analytic formulae for the first and second derivatives of asum(x) in terms of power series. One always has
\( \operatorname{asum}(x) =
\operatorname{asum}_0(x,p) +
\operatorname{asum}_c(x,p,q) +
\operatorname{asum}_1(x,q)
\)
\( \operatorname{asum}'(x) =
\operatorname{asum}'_0(x,p) +
\operatorname{asum}'_c(x,p,q) +
\operatorname{asum}'_1(x,q)
\)
\( \operatorname{asum}''(x) =
\operatorname{asum}''_0(x,p) +
\operatorname{asum}''_c(x,p,q) +
\operatorname{asum}''_1(x,q)
\)
where
* the \( \operatorname{asum}_0(x,p) \) and \( \operatorname{asum}_1(x,q) \) are expressed as power series in x around the respective fixpoint,
* the parameters p and q indicate initial shifts by integer iterations towards the respective fixpoint and
* the \( \operatorname{asum}_c(x,p,q) \) contains the remaining finite alternating sum over the integer iterates \( x_{-(q-1)} \cdots x_{p-1} \) around the center \( x_0=x \) .
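To make the central part concrete, here is a minimal sketch, assuming an iteration function f with two real fixpoints (I take \( b^x \) with \( b=\sqrt 2 \), fixpoints 2 and 4, as a stand-in) and the sign convention \( (-1)^k \); both are illustrative assumptions, not necessarily the actual setup of the asum:

```python
from math import sqrt, log

b = sqrt(2)

def f(x):
    return b ** x            # forward iterate: x_{k+1} = f(x_k)

def finv(x):
    return log(x) / log(b)   # backward iterate: x_{k-1} = f^{-1}(x_k)

def asum_c(x, p, q):
    """Finite alternating sum over the iterates x_{-(q-1)}, ..., x_{p-1},
    centered at x_0 = x (sign convention (-1)^k is an assumption)."""
    total = 0.0
    xk = x
    for k in range(p):            # k = 0, 1, ..., p-1 (forward iterates)
        total += (-1) ** k * xk
        xk = f(xk)
    xk = x
    for k in range(-1, -q, -1):   # k = -1, ..., -(q-1) (backward iterates)
        xk = finv(xk)
        total += (-1) ** k * xk
    return total
```

Increasing p and q shifts more of the sum into the two power-series tails \( \operatorname{asum}_0 \) and \( \operatorname{asum}_1 \), which converge the faster the closer the shifted arguments are to the fixpoints.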
This requires computing the first and second derivatives of \( x_{-q},x_{-q+1},\ldots,x_{-1},x,x_1,x_2,\ldots,x_p \), and, for the second derivative \( \operatorname{asum}''(x) \), some rule of combination - I can provide the details if this is of interest. After that the derivatives can be computed recursively, so the amount of computation is small. (For the recursion for the first derivatives I was amazed to find an early reference in Ramanujan's notebooks, but not yet for the second derivatives, so all of this remains based on pattern recognition so far; the inductive proofs must follow another day...)
The point of this part of the investigation is to be able to invoke the Newton iteration for the zeros and the extrema of the asum without the basic, but costly, limit formula \( \lim_{h \to 0} { \operatorname{asum}(x+h/2)-\operatorname{asum}(x-h/2) \over h} \), which seemed unsatisfactory to me.
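The Newton step itself is then the standard one; a minimal sketch (the function g below is a generic stand-in, not the actual asum):

```python
import math

def newton(g, g1, x0, tol=1e-14, maxit=50):
    """Newton iteration for a zero of g, given its analytic derivative g1 --
    no central-difference quotient (g(x+h/2) - g(x-h/2))/h is needed."""
    x = x0
    for _ in range(maxit):
        step = g(x) / g1(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Stand-in example: the zero of sin near 3 is pi.
root = newton(math.sin, math.cos, 3.0)
```

For the extrema one would pass the first and second derivatives instead, i.e. newton(asum1, asum2, x0); this is exactly where the analytic \( \operatorname{asum}''(x) \) pays off.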
I'd now like to relate this to Sheldon's earlier posted solution as a single power series, where some reservation was expressed concerning the achievable accuracy (roughly 32 decimal digits). Can that power series be made arbitrarily precise (at least in principle)? If so, what would the amount of computation be? And does this include the possibility of a power series for the inverse of asum(x)?
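Regarding the last question: in principle, any formal power series with vanishing constant term and nonzero linear coefficient can be reverted term by term (by undetermined coefficients); whether this is practical for the actual series in question I cannot judge. A generic sketch of that reversion, with the example series \( x/(1-x) \) chosen purely for illustration:

```python
from fractions import Fraction

def compose(a, b, n):
    """Coefficients of a(b(x)) up to x^n; a[0] = b[0] = 0 assumed."""
    result = [Fraction(0)] * (n + 1)
    cur = [Fraction(1)] + [Fraction(0)] * n   # running power b(x)^j, j = 0
    for j in range(1, n + 1):
        new = [Fraction(0)] * (n + 1)         # cur * b(x), truncated at x^n
        for i in range(n + 1):
            if cur[i] == 0:
                continue
            for k in range(1, n + 1 - i):
                new[i + k] += cur[i] * b[k]
        cur = new
        for i in range(n + 1):
            result[i] += a[j] * cur[i]
    return result

def revert(a, n):
    """Series b with a(b(x)) = x + O(x^(n+1)); needs a[0] = 0, a[1] != 0."""
    b = [Fraction(0)] * (n + 1)
    b[1] = Fraction(1) / a[1]
    for k in range(2, n + 1):
        c = compose(a, b, k)      # coefficient of x^k should vanish
        b[k] -= c[k] / a[1]       # adjusting b[k] shifts c[k] by a[1]*b[k]
    return b
```

For example, reverting \( x + x^2 + x^3 + \cdots = x/(1-x) \) recovers \( x - x^2 + x^3 - \cdots = x/(1+x) \). The cost grows polynomially with the truncation order, so "arbitrary precision in principle" seems plausible; how the truncation error of the particular asum series behaves is a separate question.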
Gottfried Helms, Kassel

