(05/01/2015, 01:57 AM)marraco Wrote: This is a numerical example for base a=e
(...)
So you get this systems of equations (blue to the left, and red to the right):
Code:
[1 1 1 1 1  1  1  1  1]   [ 1]   [e]
[0 1 2 3 4  5  6  7  8]   [a₁]   [e·a₁]
[0 0 1 3 6 10 15 21 28]   [a₂]   [e·a₂ + (e/2)·a₁²]
[0 0 0 1 4 10 20 35 56]   [a₃]   [e·a₃ + e·a₁·a₂ + (e/6)·a₁³]
[0 0 0 0 1  5 15 35 70] * [a₄] = [e·a₄ + (e/2)·a₂² + e·a₃·a₁ + (e/2)·a₂·a₁² + (e/24)·a₁⁴]
[0 0 0 0 0  1  6 21 56]   [a₅]   [...]
[0 0 0 0 0  0  1  7 28]   [a₆]   [...]
[0 0 0 0 0  0  0  1  8]   [a₇]   [...]
[0 0 0 0 0  0  0  0  1]   [a₈]   [...]
Quote:It is a non-linear system of equations, and the solution for this particular case is:
This is perhaps a good starting point to explain the use of Carleman matrices in my (Pari/GP-supported) matrix toolbox, because you have just applied things analogously to how I do them — you only didn't express it in matrix formulae.
(...)
Code:
a₀ =  1.00000000000000000
a₁ =  1.09975111049169000
a₂ =  0.24752638354178700
a₃ =  0.15046151104294100
a₄ =  0.12170896032120000
a₅ =  0.16084324512292400
a₆ = -0.02254254634348470
a₇ = -0.10318144159688800
a₈ =  0.06371479195361670
To explain the basic idea of a Carleman matrix, consider a power series
\( \hspace{100} \small f(x) = a_0 + a_1 x + a_2 x^2 + ... \)
We express this in terms of the dot product of two vectors of infinite size
\( \hspace{100} \small V(x) \cdot A_1 = f(x) \)
where the column vector \( \small A_1=[a_0,a_1,a_2,...] \) contains the coefficients and the row vector is \( \small V(x)=[1,x,x^2,x^3,x^4,...] \).
Now, to make that idea valuable for function composition and iteration, it would be good if the output of such an operation were not simply a scalar, but of the same type ("Vandermonde vector") as the input \( \small V(x) \).
This leads to the idea of Carleman matrices: we collect the vectors \( \small A_0,A_1,A_2,A_3,A_4,... \), where the vector \( \small A_k \) contains the coefficients for the k'th power of f(x), such that \( \small V(x) \cdot A_k = f(x)^k \), into a matrix \( \small A \), getting the operation:
\( \hspace{100} \small V(x) \cdot A = [ 1, f(x), f(x)^2, f(x)^3 ,... ] \) or
\( \hspace{100} \small V(x) \cdot A = V(f(x)) \)
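As a minimal numerical sketch (plain Python rather than Pari/GP; the example function f(x) = 1 + 2x and the truncation size n are my own illustrative choices, not from the post), this Carleman property can be checked directly:

```python
from math import comb

n = 6  # truncation size

# Carleman matrix of f(x) = 1 + 2x: column k holds the coefficients of f(x)^k,
# i.e. A[j][k] = coefficient of x^j in (1 + 2x)^k = C(k, j) * 2^j
A = [[comb(k, j) * 2**j for k in range(n)] for j in range(n)]

def V(x):
    # truncated "Vandermonde" row vector [1, x, x^2, ...]
    return [x**j for j in range(n)]

x = 0.5
lhs = [sum(V(x)[j] * A[j][k] for j in range(n)) for k in range(n)]  # V(x) . A
rhs = V(1 + 2*x)                                                    # V(f(x))
print(lhs)
print(rhs)
```

Because f here is a degree-1 polynomial, each column k ≤ n-1 of the truncated matrix carries the full expansion of f(x)^k, so the two vectors agree exactly.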
Having this general idea, we can fill our toolbox with Carleman matrices for the composition of functions over a fairly wide range of algebra.
For instance the operation INC
\( \hspace{100} \text{INC := } \hspace{50} \small V(x) \cdot P = V(x+1) \)
and its h'th iteration ADD
\( \hspace{100} \text{ADD(h) := INC ^h:= } \hspace{50} \small V(x) \cdot P^h = V(x+h) \)
is then only a matter of powers of P.
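A small sketch of INC and ADD in plain Python (truncated matrices; since the Pascal matrix is triangular, truncation introduces no error here, and the size n is an arbitrary choice of mine):

```python
from math import comb

n = 6

# Pascal matrix P: column k holds the binomial coefficients of (x + 1)^k,
# so that V(x) . P = V(x + 1)
P = [[comb(k, j) for k in range(n)] for j in range(n)]

def matmul(M, N):
    return [[sum(M[i][m] * N[m][k] for m in range(n)) for k in range(n)]
            for i in range(n)]

def V(x):
    return [x**j for j in range(n)]

# ADD(3) = INC^3: V(x) . P^3 should equal V(x + 3)
P3 = matmul(matmul(P, P), P)
x = 0.25
lhs = [sum(V(x)[j] * P3[j][k] for j in range(n)) for k in range(n)]
print(lhs)       # V(x) . P^3
print(V(x + 3))  # V(x + 3)
```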
The operation MUL needs the Vandermonde vector arranged as a diagonal matrix:
\( \hspace{100} \text{MUL(w) := } \hspace{50} \small V(x) \cdot ^dV(w) = V(x*w) \)
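MUL can be sketched the same way (plain Python, same truncated setup as above): the diagonal of V(w) simply rescales each entry x^k by w^k.

```python
n = 6
w, x = 3.0, 0.5

def V(t):
    # truncated Vandermonde vector [1, t, t^2, ...]
    return [t**j for j in range(n)]

# MUL(w): multiply V(x) entrywise by the diagonal of V(w)
lhs = [V(x)[k] * V(w)[k] for k in range(n)]  # V(x) . diag(V(w))
print(lhs)
print(V(x * w))
```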
The operation DEXP (= exp(x)-1) needs the matrix of Stirling numbers of the 2nd kind, similarity-scaled by factorials:
\( \hspace{100} \text{DEXP := } \hspace{50} \small V(x) \cdot S2 = V(e^x -1) \)
As an exercise we see that, if we right-compose this with the INC operation, we get the ordinary EXP operator, for whose matrix I use the name B:
\( \hspace{100} \text{EXP := } \hspace{50} \small V(x) \cdot S2 \cdot P = V( e^x -1) \cdot P = V(( e^x -1) +1) = V(e^x) \)
\( \hspace{100} \text{EXP := } \hspace{50} \small V(x) \cdot B = V( e^x) \)
Of course, iterations of the EXP require then only powers of the matrix B.
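The identity S2 · P = B can be verified exactly on truncated matrices, since all three are triangular and no truncation error enters. A sketch in plain Python with exact rational arithmetic (the Stirling recurrence is standard; the size n = 7 is an arbitrary choice of mine):

```python
from fractions import Fraction
from math import comb, factorial

n = 7

# Stirling numbers of the 2nd kind: S(j, m) = m*S(j-1, m) + S(j-1, m-1)
S = [[0] * n for _ in range(n)]
S[0][0] = 1
for j in range(1, n):
    for m in range(1, j + 1):
        S[j][m] = m * S[j - 1][m] + S[j - 1][m - 1]

# S2: factorially scaled Stirling-2 matrix, the Carleman matrix of e^x - 1:
# S2[j][m] = coefficient of x^j in (e^x - 1)^m = m! * S(j, m) / j!
S2 = [[Fraction(factorial(m) * S[j][m], factorial(j)) for m in range(n)]
      for j in range(n)]

# P: Pascal matrix, the Carleman matrix of x + 1
P = [[comb(k, m) for k in range(n)] for m in range(n)]

# B: Carleman matrix of e^x: B[j][k] = coefficient of x^j in e^(k*x) = k^j / j!
B = [[Fraction(k**j, factorial(j)) for k in range(n)] for j in range(n)]

S2P = [[sum(S2[j][m] * P[m][k] for m in range(n)) for k in range(n)]
       for j in range(n)]
print(S2P == B)  # True
```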
To see that this is really useful, we need a lemma on the uniqueness of power series. In the new matrix notation:
If a power series \( \small V(x) \cdot A_1 = f(x) \) converges on an (even small) interval of the argument x, then the coefficients in A_1 are uniquely determined by f.
That uniqueness of the coefficients in A_1 is the key that lets us look at compositions of the Carleman matrices alone, without reference to the dot product with V(x); for instance, we can make use of the analysis of Carleman-matrix decompositions like
\( \hspace{100} \small V(x) \cdot S2 \cdot P = V(x) \cdot B \)
and can analyze
\( \hspace{100} \small S2 \cdot P = B \)
directly, for instance to arrive at the operation LOGP (= log(1+x)):
\( \hspace{100} \small S2 \cdot P = B \\
\hspace{100} \small P = S2^{-1} \cdot B \qquad \text{ where } \qquad S2^{-1} = S1 \\
\hspace{100} \small S2 = B \cdot P^{-1} \\
\)
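These identities can be checked numerically: building S1 from the signed Stirling numbers of the 1st kind (factorially scaled, just like S2), one finds S1 · S2 = I exactly on truncated matrices, and column 1 of S1 carries the series of log(1+x). A plain-Python sketch with exact rationals (recurrences and size are my own assumptions, not toolbox code):

```python
from fractions import Fraction
from math import factorial

n = 7

# signed Stirling numbers of the 1st kind: s(j, k) = s(j-1, k-1) - (j-1)*s(j-1, k)
s = [[0] * n for _ in range(n)]
s[0][0] = 1
for j in range(1, n):
    for k in range(1, j + 1):
        s[j][k] = s[j - 1][k - 1] - (j - 1) * s[j - 1][k]

# Stirling numbers of the 2nd kind: S(j, m) = m*S(j-1, m) + S(j-1, m-1)
S = [[0] * n for _ in range(n)]
S[0][0] = 1
for j in range(1, n):
    for m in range(1, j + 1):
        S[j][m] = m * S[j - 1][m] + S[j - 1][m - 1]

# factorially scaled: S1 is the Carleman matrix of log(1+x), S2 that of e^x - 1
S1 = [[Fraction(factorial(k) * s[j][k], factorial(j)) for k in range(n)]
      for j in range(n)]
S2 = [[Fraction(factorial(k) * S[j][k], factorial(j)) for k in range(n)]
      for j in range(n)]

prod = [[sum(S1[j][m] * S2[m][k] for m in range(n)) for k in range(n)]
        for j in range(n)]
I = [[Fraction(int(j == k)) for k in range(n)] for j in range(n)]
print(prod == I)  # True
# column 1 of S1 holds the coefficients of log(1+x) = x - x^2/2 + x^3/3 - ...
print([S1[j][1] for j in range(1, n)])
```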
<hr>
Now I relate this to that derivation which I've quoted from marraco's post.
(05/01/2015, 01:57 AM)marraco Wrote: This is a numerical example for base a=e
(...)
So you get this systems of equations (blue to the left, and red to the right):
Code:
[1 1 1 1 1  1  1  1  1]   [ 1]   [e]
[0 1 2 3 4  5  6  7  8]   [a₁]   [e·a₁]
[0 0 1 3 6 10 15 21 28]   [a₂]   [e·a₂ + (e/2)·a₁²]
[0 0 0 1 4 10 20 35 56]   [a₃]   [e·a₃ + e·a₁·a₂ + (e/6)·a₁³]
[0 0 0 0 1  5 15 35 70] * [a₄] = [e·a₄ + (e/2)·a₂² + e·a₃·a₁ + (e/2)·a₂·a₁² + (e/24)·a₁⁴]
[0 0 0 0 0  1  6 21 56]   [a₅]   [...]
[0 0 0 0 0  0  1  7 28]   [a₆]   [...]
[0 0 0 0 0  0  0  1  8]   [a₇]   [...]
[0 0 0 0 0  0  0  0  1]   [a₈]   [...]
First we see the Pascal matrix P in action on the lhs, applied to the coefficients \( \small [1,a_1,a_2,...] \) of the Abel function \( \small \alpha(x) \), collected in a vector, say, A_1. So the left-hand side is
\( \hspace{100} \small P \cdot A_1 \)
To make things smoother, we first take A as the complete Carleman matrix expanded from A_1. If we "complete" that left-hand side in order to discuss it in terms of power series, we have
\( \hspace{100} \small V(x) \cdot P \cdot A = V(x+1) \cdot A = V( \alpha (x+1)) \)
It is very likely that the author wanted to derive the solution of the equation \( \small \alpha(x+1) = e^{\alpha(x)} \); so we would have for the right-hand side
\( \hspace{100} \small V(x) \cdot A \cdot B = V(\alpha(x)) \cdot B = V( e^{\alpha (x)}) \)
and indeed, expanding the terms using the matrices as created in Pari/GP with, let's say, size 32×32 or 64×64, we get very nice approximations to the descriptions in the rhs of the quoted matrix formula.
What we can now do depends on the above uniqueness lemma: we can discard the V(x) reference and just write \( \small A \cdot B \). Looking only at the second column, \( \small A \cdot B[,1] = Y \), we get \( \small y_1 = e \cdot a_1 , \; y_2 = e \cdot (...) \) as shown in the quoted post.
So indeed, that system of equations of the initial post is expressible by
\( \hspace{100} \small P \cdot A = A \cdot B \)
and the OP seeks a solution for A.
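As a plausibility check of \( \small P \cdot A = A \cdot B \), the first two rows of the quoted system can be tested against marraco's numerical coefficients (a plain-Python sketch, not part of the original post; with only nine coefficients the match is approximate):

```python
from math import comb, e

# marraco's quoted coefficients, forming the vector A_1 = [1, a_1, ..., a_8]
a = [1.0, 1.09975111049169, 0.247526383541787, 0.150461511042941,
     0.1217089603212, 0.160843245122924, -0.0225425463434847,
     -0.103181441596888, 0.0637147919536167]
n = len(a)

# rows 0 and 1 of the Pascal matrix applied to A_1
row0 = sum(comb(k, 0) * a[k] for k in range(n))  # should be close to e
row1 = sum(comb(k, 1) * a[k] for k in range(n))  # should be close to e * a_1
print(row0, e)
print(row1, e * a[1])
```

With these nine terms, row0 already matches e to about 7 decimal places.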
<hr>
While I do not, at the moment, have a solution for A along this way yet, we can note, for instance, that if A is invertible, then the equation can be brought into a Jordan form:
\( \hspace{100} \small P = A \cdot B \cdot A^{-1} \)
which means that B can be decomposed by similarity transformation into a triangular Jordan block, namely the Pascal matrix. Having a Jordan solver for finite matrix sizes, one could then test whether the Jordan solutions converge to some limit matrix A as the matrix size increases.
For the alternative: looking at the "regular tetration" and the Schröder function (including recentering the power series around the fixpoint), one gets a simple solution just from the diagonalization formulae for triangular Carleman matrices. These follow the same formal analysis using the "matrix toolbox" and can, for finite size and numerical approximations, nicely be constructed using the matrix features of Pari/GP.
Gottfried Helms, Kassel

