*** I just tried my first LaTeX code in the hope of improving readability ***
Matrix method for tetration
Well - there is not much special here.
1) -----------------------------------------------------
Assume you write the sum of a powerseries
\( \hspace{24} s(x) = \sum_{k=0}^\infty x^k * a_k \)
as a vector-product:
\( \hspace{24} s(x) = rowvector(1,x,x^2,x^3,...) * colvector(a_0,a_1,a_2,a_3,...) \)
then it may be useful for the further analysis to give names
to such vectors. I write
\( \hspace{24} V(x) = colvector(1,x,x^2,x^3,...) \)
Let the other vector be denoted as
\( \hspace{24} A = colvector(a_0,a_1,a_2,a_3,...) \)
then
\( \hspace{24} s(x) = V(x)\sim * A \)
Example: if
\( \hspace{24} A = colvector(a_0,a_1,a_2,a_3,...) = (1,1,\frac1{2!},\frac1{3!},\frac1{4!},...) \)
then V(x)~ * A represents simply the exponential-series in x, so
\( \hspace{24} e^x = V(x)\sim * A \)
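This can be checked numerically with truncated vectors; a minimal Python sketch (the truncation size N is my choice, not part of the method):

```python
import math

N = 32  # truncation size (an arbitrary choice; the text's vectors are infinite)

def V(x, n=N):
    """Truncated Vandermonde-vector (1, x, x^2, ..., x^(n-1))."""
    return [x**k for k in range(n)]

# the exponential-series coefficients a_k = 1/k!
A = [1.0 / math.factorial(k) for k in range(N)]

x = 0.7
sx = sum(vk * ak for vk, ak in zip(V(x), A))  # the product V(x)~ * A
print(abs(sx - math.exp(x)) < 1e-12)          # approximates e^x
```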
2) -------------------------------------------------------
Now, if we want iteration, to get e^(e^x), that means we
simply need to set e^x = y and use y as parameter for the
"powerseries"-vector (or better: "Vandermonde"-vector, hence
the letter "V" in V(x))
\( \hspace{24} e^y = V(y)\sim * A = e^{e^x} \)
But what is V(y) now? It contains the powers of y, or the powers
of e^x. But by the previous formula
\( \hspace{24} y = e^x = V(x)\sim * A \)
we only have the "first power" of e^x. How to obtain the
other powers too?
To put this into one formula, we should - instead of a single
vector-product - use a full matrix-product, which provides all
required powers of e^x as well, in one shot.
Something like
Code:
.
V(y)~ = V(x)~ * [A0, A1, A2, A3 , ... ]
      = [1 , e^x , (e^x)^2, (e^x)^3, ... ]

where A1 is our already known vector A.
A2, for instance, is then simply
\( \hspace{24} A2 = colvector(1, 2, \frac{2^2}{2!}, \frac{2^3}{3!}, \frac{2^4}{4!},...) \)
since
Code:
.
(e^x)^2 = e^(2x) = sum (k=0..inf) (2x)^k/k!
                 = sum (k=0..inf) 2^k * x^k / k!
                 = sum (k=0..inf) 2^k/k! * x^k

and written as a vector-product it is
\( \hspace{24} \left(e^x\right)^2 = V(x)\sim * colvector(\frac{2^0}{0!},\frac{2^1}{1!},\frac{2^2}{2!}, ....) \)
This is completely analogous for all powers of (e^x).
So
Code:
.
A0 = colvector(0^0/0!, 0^1/1!, 0^2/2!,...)
A1 = colvector(1^0/0!, 1^1/1!, 1^2/2!,...)
A2 = colvector(2^0/0!, 2^1/1!, 2^2/2!,...)
A3 = colvector(3^0/0!, 3^1/1!, 3^2/2!,...)
...
A_k= colvector(k^0/0!, k^1/1!, k^2/2!,...)

Call this collection of vectors B
\( \hspace{24} B:=[A0,A1,A2,...] \)
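As a sanity check, one can build a truncated version of B and verify that V(x)~ * B reproduces the row of powers of e^x; a Python sketch (the truncation sizes are my choices):

```python
import math

N, COLS = 32, 6   # truncation sizes (arbitrary choices)

def V(x, n=N):
    return [x**k for k in range(n)]

# column j of B is A_j = colvector(j^0/0!, j^1/1!, j^2/2!, ...)
# (Python's 0**0 == 1 matches the 0^0/0! = 1 entry of A0)
B = [[j**k / math.factorial(k) for j in range(COLS)] for k in range(N)]

x = 0.5
y = math.exp(x)
row = [sum(V(x)[k] * B[k][j] for k in range(N)) for j in range(COLS)]  # V(x)~ * B
print(all(abs(row[j] - y**j) < 1e-9 for j in range(COLS)))
```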
3) ----------------------------------------------------------
If we collect all the A_k into the matrix B,
then we may also extract the common factorial denominators
into a diagonal "factorial"-matrix F
Code:
.
F = diagonal( 0! , 1! , 2! , 3! ,... )
F^-1 = diagonal(1/0!,1/1!,1/2!,1/3!,... )

then
Code:
.
A0 = F^-1 * V(0)
A1 = F^-1 * V(1)
A2 = F^-1 * V(2)
A3 = F^-1 * V(3)
...
A_k = F^-1 * V(k)
...

so we have
Code:
.
V(y)~ = V(x)~ * F^-1 * [V(0), V(1), V(2) , V(3) , ....]
      = [1 , e^x , (e^x)^2, (e^x)^3, ... ]

and for obvious reasons I denote the above collection of V-vectors
of the consecutive integer parameters by VZ, so we have:
Code:
.
V(y)~ = V(x)~ * F^-1 * VZ
      = V(e^x)~

and for more brevity I call the product F^-1 * VZ
\( \hspace{24} B := F^{-1} * VZ \)
and we have the iterable matrix-operator:
\( \hspace{24} V(x)\sim * B = V(e^x)\sim \)
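Both constructions of B - the direct columns A_j and the factorization F^-1 * VZ - agree entry-wise; a quick Python check (the truncation size is my choice):

```python
import math

N = 8  # truncation size (arbitrary choice)

V = lambda x, n=N: [x**k for k in range(n)]
Finv = [1.0 / math.factorial(k) for k in range(N)]  # diagonal of F^-1

# B via the factorization F^-1 * VZ, with VZ = [V(0), V(1), V(2), ...]
B_fact = [[Finv[k] * V(j)[k] for j in range(N)] for k in range(N)]
# B via the direct column definition A_j = colvector(j^0/0!, j^1/1!, ...)
B_cols = [[j**k / math.factorial(k) for j in range(N)] for k in range(N)]

print(all(abs(B_fact[k][j] - B_cols[k][j]) < 1e-12
          for k in range(N) for j in range(N)))
```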
4) -------------------------------------------------------------
Introducing another diagonal matrix, built from the Vandermonde-vector
of powers of the logarithm of a parameter s, parametrizes this for s;
I call the resulting matrix Bs:
\( \hspace{24} diag(V(\log{(s)})) * B = B_s \)
and
\( \hspace{24} V(x)\sim * B_s = V(s^x)\sim \)
the iterable matrix-expression for obtaining
\( \hspace{24} x,\; s^x,\; s^{s^x},\; s^{s^{s^x}},\; \dots \)
I call the matrix B a "constant", since it is independent
of the parameter s.
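A numeric sketch of the parametrized operator: build a truncated Bs, check V(x)~ * Bs against V(s^x)~, and iterate the product to climb the tower (the sizes and the choice s = sqrt(2) are mine):

```python
import math

N, COLS = 40, 8   # truncation sizes (arbitrary choices)
s = math.sqrt(2)
ls = math.log(s)

def V(x, n=N):
    return [x**k for k in range(n)]

# Bs[k][j] = log(s)^k * j^k / k!, i.e. diag(V(log s)) * B
Bs = [[ls**k * j**k / math.factorial(k) for j in range(COLS)] for k in range(N)]

x = 1.0
row = [sum(V(x)[k] * Bs[k][j] for k in range(N)) for j in range(COLS)]  # V(x)~ * Bs
print(abs(row[1] - s**x) < 1e-12)   # entry 1 of V(s^x)~ is s^x itself

# iterating with column 1 climbs the tower x, s^x, s^(s^x), ...
t = x
for _ in range(3):
    t = sum(V(t)[k] * Bs[k][1] for k in range(N))
print(abs(t - s**(s**(s**x))) < 1e-9)
```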
====================================================================
The rationale in short:
. We need a powerseries-vector in x and an exponential-series-vector
. to get one scalar result for s^x.
.
. But to make it iterable, the result should not be a scalar
. alone, but again a full vector of powers of s^x; so we need not
. only one exponential-series-vector, but one for each power, thus a
. full matrix of such vectors.
.
. That matrix is B (resp. Bs) in the tetration case.
-------------------
Once having introduced a matrix as an operator for tetration
(the analogous holds for other iterated operations and
functions as well; just take the appropriate matrices), we
are in a position to discuss iteration in terms of powers
of Bs, and if Bs has an accessible eigensystem, we are also
in a position to define fractional iteration by fractional
and even complex powers of Bs, thus the continuous
version of tetration.
In my first heuristic approach I used the matrix-logarithm
of Bs instead, and defined arbitrary powers by
\( \hspace{24} B_s ^y = \exp \left( y * \log(B_s) \right) \)
I found that this also provides numerically stable approximations
(with the same results, of course) but didn't investigate this
path further, since I thought that the eigensystem-decomposition
is the more general or fundamental approach.
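Under truncation, this matrix-logarithm heuristic can be sketched with scipy's matrix functions; the size, the base s = sqrt(2), and the half-iterate check are my choices, not the original computation:

```python
import math
import numpy as np
from scipy.linalg import expm, logm

N = 8                       # truncation size (arbitrary choice)
s = math.sqrt(2)
ls = math.log(s)

# truncated Bs with entries log(s)^k * j^k / k!
Bs = np.array([[ls**k * j**k / math.factorial(k) for j in range(N)]
               for k in range(N)])

# Bs^(1/2) via the matrix-logarithm, as in  Bs^y = exp(y * log(Bs))
H = expm(0.5 * logm(Bs))
print(np.allclose(H @ H, Bs, atol=1e-6))  # squaring the half-iterate recovers Bs
```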
-------------------
Hope I made it readable/understandable...
Gottfried
Gottfried Helms, Kassel

