08/26/2009, 04:01 PM
(08/23/2009, 03:23 PM)bo198214 Wrote:
(08/23/2009, 02:45 PM)tommy1729 Wrote:
bo198214 Wrote:For \( k\ge 1 \) the constant -1 vanishes and we make the following calculations:
notice in the last line bo wrote s(x) instead of s(0).
\( s^{(k)}(x)=(t(b^x))^{(k)}=\left(\sum_{i=0}^\infty \nu_i \frac{b^{xi}}{i!}\right)^{(k)}=\sum_{i=0}^\infty\frac{\nu_i}{i!}(b^{xi})^{(k)} \)
The derivative of \( b^{xi} \) is easily determined to be
\( (b^{xi})'=b^{xi}\text{ln}(b) i \) and so the k-th derivative is \( (b^{xi})^{(k)} = b^{xi}(\text{ln}(b)i)^k \), which gives us in turn
\( \nu_k=s(x)^{(k)}= \text{ln}(b)^k\sum_{i=0}^\infty\nu_i\frac{i^k}{i!} \) for \( k\ge 1 \).
which is obviously a misprint. The corresponding equation system for arbitrary \( x_0 \) is:
\( \nu_k(x_0)=s^{(k)}(x_0)= \text{ln}(b)^k\sum_{i=0}^\infty\nu_i \cdot \frac{ b^{x_0 i}\cdot i^k}{i!} \) for \( k\ge 1 \).
Right, that's what I meant.
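For what it's worth, the truncated version of that equation system at \( x_0 = 0 \) can be solved numerically. A minimal sketch, assuming base b = 2, a truncation order N, the rewrite in Taylor coefficients \( a_i = \nu_i/i! \), and the normalization slog(0) = -1 (the choice of N, base, and normalization are mine for illustration, not from the posts above):

```python
# Sketch: solve the truncated slog equation system at x0 = 0.
# The quoted system  nu_k = ln(b)^k * sum_i nu_i i^k / i!  (k >= 1)
# becomes, in Taylor coefficients a_i = nu_i / i!:
#   a_k = sum_i a_i (i ln b)^k / k!
# The k = 0 row comes from slog(b^0) = slog(0) + 1, i.e. sum_i a_i = 1.
# Base b = 2, order N, and normalization slog(0) = -1 are illustrative.
import math

b = 2.0
lnb = math.log(b)
N = 25  # truncation order (illustrative choice)

def solve(M, y):
    """Gaussian elimination with partial pivoting (pure stdlib)."""
    n = len(y)
    A = [row[:] + [y[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for j in range(c, n + 1):
                A[r][j] -= f * A[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][j] * x[j] for j in range(r + 1, n))) / A[r][r]
    return x

# Unknowns a_1..a_N; rows k = 0..N-1 of the truncated system.
B = [[0.0] * N for _ in range(N)]
rhs = [0.0] * N
B[0] = [1.0] * N            # k = 0: sum_i a_i = 1
rhs[0] = 1.0
for k in range(1, N):       # k >= 1: a_k - sum_i a_i (i ln b)^k / k! = 0
    for i in range(1, N + 1):
        B[k][i - 1] = (1.0 if i == k else 0.0) - (i * lnb) ** k / math.factorial(k)
a = solve(B, rhs)

def slog(x):
    """Degree-N truncation of the slog series at 0, with slog(0) = -1."""
    return -1.0 + sum(a[i - 1] * x ** i for i in range(1, N + 1))

print(slog(1.0))                   # ≈ 0
print(slog(b ** 0.1) - slog(0.1))  # ≈ 1
```

By construction the truncated solution satisfies the defining relation slog(b^x) = slog(x) + 1 up to the truncation order, so the defect near x = 0 is tiny.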
bo198214 Wrote:
tommy1729 Wrote:it is an expansion at x = 0.
Now, if we consider expansions at both x = 0 and x = 1, and get the same coefficients for x = 0 by computing them from
1) the coefficients expanded at x = 1
2) solving the modified equation (see below)
I doubt this. I guess we get different superlogarithms for different development points \( x_0 \). One should perhaps check this with a complex plot.
The idea is to arrive at the same superlogarithm by choosing an appropriate (of possibly many) solution for the development points \( x_0 \) and \( x_1 \).
If that is possible for all points on the (real) interval \( [x_0, x_1] \),
then I assume THAT PARTICULAR SLOG has a radius of convergence of at least \( x_1 - x_0 \) when developed at \( x_0 \).
bo198214 Wrote:I further guess that for \( x_0 \) converging to the lower fixed point, the solution converges to the regular tetration.
And 3. I guess that Andrew's slog corresponds to the inverse of the matrix power sexp (which also depends on a development point).
I don't know why you believe that.
Anyway, I am only thinking about real bases > eta.
A proof of those statements would be very interesting though!
bo198214 Wrote:If you have a function f(x) with power series coefficients \( v_k \) at 0, then f(x+d) has the power series coefficients \( v_k(d) \) (provided that f has convergence radius > d at 0):
\( v_k(d) = \sum_{n=k}^\infty \binom{n}{k} d^{n-k} v_n \) and vice versa \( v_k = \sum_{n=k}^\infty \binom{n}{k} (-d)^{n-k} v_n(d) \).
Yes, I wanted to add that comment yesterday, but you were first.
That is a vital part of my idea.
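The recentering formulas just quoted are easy to check numerically. A quick sketch, using f = exp purely as a test case of my choosing (its coefficients at 0 are \( v_n = 1/n! \), so the shifted coefficients must come out as \( e^d/k! \)):

```python
# Check the recentering formula: if f has Taylor coefficients v_n at 0,
# then f(x + d) has coefficients v_k(d) = sum_{n>=k} C(n,k) d^(n-k) v_n,
# and shifting by -d inverts the map ("vice versa").
# f = exp is an illustrative test case: v_n = 1/n!, v_k(d) = e^d / k!.
import math

def shift_coeffs(v, d):
    """Recenter truncated Taylor coefficients from 0 to d."""
    N = len(v)
    return [sum(math.comb(n, k) * d ** (n - k) * v[n] for n in range(k, N))
            for k in range(N)]

v = [1.0 / math.factorial(n) for n in range(30)]  # exp developed at 0
w = shift_coeffs(v, 0.5)                          # exp developed at 0.5
print(w[0], math.exp(0.5))   # both ≈ 1.64872

back = shift_coeffs(w, -0.5)  # the inverse formula recovers v
print(abs(back[3] - v[3]))    # ≈ 0
```

Note that for a truncated polynomial the inverse shift is exact (it is a polynomial identity), so the round trip loses nothing beyond floating-point rounding.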
bo198214 Wrote:
tommy1729 Wrote:bo mentioned the potential non-uniqueness for v_k when expanded at x = 0.
maybe this could be the extra condition we(?) are looking for.
The demand that the solution at different development points should give the same function does not determine how to solve the equation system in a different way. If you mean that.
I think it does... (if such a solution exists and we have the same function, analytic at both development points, no matter where we expand), although I didn't give a method or a proof.
I consider it as a solvable system of coupled infinite simultaneous equations?
Sorry, gotta run.
regards
tommy1729

