Taylor polynomial. System of equations for the coefficients. - Printable Version

Tetration Forum (https://tetrationforum.org)
Forum: Tetration and Related Topics (https://tetrationforum.org/forumdisplay.php?fid=1)
Forum: Mathematical and General Discussion (https://tetrationforum.org/forumdisplay.php?fid=3)
Thread: Taylor polynomial. System of equations for the coefficients. (/showthread.php?tid=993)
RE: Taylor polynomial. System of equations for the coefficients. - marraco - 05/06/2015

(05/05/2015, 07:40 AM)Gottfried Wrote: P*A = A*Bb

I think that we are speaking of different things. Obviously, there should be a way to demonstrate the equivalence of both, because they are trying to solve the same problem, looking for the same solution.

But as I understand it, the Carleman matrix A only contains powers of the a_i coefficients, yet if you look at the red side, it cannot be written as a matrix product A*Bb, because it needs to have products of a_i coefficients (like \( a_1^3 \cdot a_3^2 \cdot a_5^8 \cdots \)). Maybe it is a power of A*Bb, or something like A^Bb?

The Pascal matrix on the blue side is the exponential of a much simpler matrix

\( \exp \left ( \left [ \begin{matrix} . & 1 & . & . & . & . & . \\ . & . & 2 & . & . & . & . \\ . & . & . & 3 & . & . & . \\ . & . & . & . & 4 & . & . \\ . & . & . & . & . & 5 & . \\ . & . & . & . & . & . & 6 \\ . & . & . & . & . & . & . \end{matrix} \right ] \right ) = \left [ \begin{matrix} 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ . & 1 & 2 & 3 & 4 & 5 & 6 \\ . & . & 1 & 3 & 6 & 10 & 15 \\ . & . & . & 1 & 4 & 10 & 20 \\ . & . & . & . & 1 & 5 & 15 \\ . & . & . & . & . & 1 & 6 \\ . & . & . & . & . & . & 1 \end{matrix} \right ] \)

Maybe the equation can be greatly simplified by taking a logarithm of both sides.

RE: Taylor polynomial. System of equations for the coefficients. - Gottfried - 05/06/2015

(05/06/2015, 02:42 PM)marraco Wrote:
(05/05/2015, 07:40 AM)Gottfried Wrote: P*A = A*Bb
I think that we are speaking of different things.

No, no ... In your convolution formula you have, inside the double sum, powers of power series (the red-colored formula \( a^{ \;^x a} \) in your first posting) built from the coefficients of the a()-function (not from its single coefficients), and if I decode this correctly, then this matches perfectly the composition

V(x)*A * Bb = (V(x)*A) * Bb = [1, a(x), a(x)^2, a(x)^3, ...] * Bb = V(a(x))*Bb

Only, after removing the left V(x)-vector, we do things in a different order:

V(x)*A * Bb = V(x)*(A * Bb)

and I discuss that remaining matrix in the parentheses on the rhs.

That V(x) can be removed on the rhs and on the lhs of the matrix equation must be justified; if divergent series occur anywhere this becomes difficult, but as far as we have nonzero intervals of convergence for all dot products, this exploitation of associativity can be done / should be possible (as far as I think). (The goal of all this is of course to improve the computability of A, for instance by diagonalization of P or Bb and algebraic manipulation of the occurring matrix factors.)

Anyway - I hope I didn't actually misread you (which is always possible given the lot of coefficients...)

Gottfried

RE: Taylor polynomial. System of equations for the coefficients. - marraco - 05/07/2015

I misinterpreted what the Carleman matrix was. I thought that it contained the powers of the derivatives of a function (evaluated at zero), but it contains the derivatives of the powers of a function, so it actually has the products of the aᵢ coefficients (of the bᵢ in your notation).

________________

I tried to use this method to find the coefficients for exponentiation: bˣ = Σ bᵢ·xⁱ

The condition is b⁽ˣ⁺¹⁾ = b·Σ bᵢ·xⁱ, which translates into P·[bᵢ] = b·[bᵢ], or [P - b·I]·[bᵢ] = 0

The solution should be bᵢ = ln(b)ⁱ / i!

I found bᵢ = c·(ln(b)ⁱ/i!), where c is an arbitrary constant, because, obviously, c·b⁽ˣ⁺¹⁾ = b·(c·bˣ)

I was bugged by the fact that any equation for solving tetration I tried seems to have at least one degree of freedom.
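A quick numerical check of that last eigenvector relation may be useful here. The following is a small Python sketch (my own addition, not part of the thread; the base b = 2 and the truncation size N = 40 are arbitrary choices), verifying that bᵢ = ln(b)ⁱ/i! satisfies P·[bᵢ] = b·[bᵢ] up to truncation error:

```python
# Check that b_i = ln(b)^i / i! is an eigenvector of the Pascal matrix with eigenvalue b.
# Row r of P.[b_i] is the r-th Taylor coefficient of b^(x+1), which should equal b * b_r.
import numpy as np
from math import comb, factorial, log

b, N = 2.0, 40                                   # arbitrary base and truncation size
P = np.array([[comb(i, r) for i in range(N)] for r in range(N)], dtype=float)  # P[r, i] = C(i, r)
v = np.array([log(b)**i / factorial(i) for i in range(N)])                      # b_i = ln(b)^i / i!

lhs, rhs = P @ v, b * v
print(np.max(np.abs(lhs[:20] - rhs[:20])))       # ~1e-16: the early rows match to machine precision
                                                 # (rows near the truncation edge are less reliable)
```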
I think now that it should be explained by (at least) one arbitrary constant in the solution. This looks analogous to the constants found in the solution of differential equations, so I wonder if the evolvent of the curves generated by the constant is also a solution, and what its meaning is.

RE: Taylor polynomial. System of equations for the coefficients. - marraco - 01/13/2016

So, we want the vector \( {[a_i]} \), from the matrix equation:

\( {\left [ {{i} \choose {r}} \right ]\cdot \left [a_i \right ] = \left [ \sum_{n=1}^{P(i)} \frac{a\cdot\ln(a)^{\sum_{j=1}^{i}c_{n,j}}}{ \prod_{j=1}^{i} c_{n,j}!}\prod_{j=1}^{i}a_j^{c_{n,j}} \right ]} \)

where r is the row index of the first matrix on the left, and i is its column index. Note that in the last equation both r and i start counting from zero for the first row and column.

______________________________________

P(i) is the partition function. The first few values of the partition function are (starting with P(0)=1): 1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42, 56, 77, 101, 135, 176, 231, 297, 385, 490, 627, 792, 1002, 1255, 1575, 1958, 2436, 3010, 3718, 4565, 5604, … (sequence A000041 in the OEIS; the entry has valuable information about the partition function).

______________________________________

\( {c_{n,j}} \) is the number of repetitions of the integer j in the \( {n^{th}} \) partition of the number i.

______________________________________

Solving the equation

If we make the substitution \( {a_i=\frac {b_i} {\ln(a)} } \), the first equation simplifies to:

\( {\left [ {{i} \choose {r}} \right ]\cdot \left [b_i \right ] = \ln(^2a) \, \left [ \sum_{n=1}^{P(i)} \frac{1}{ \prod_{j=1}^{i} c_{n,j}!}\prod_{j=1}^{i}b_j^{c_{n,j}} \right ]} \)

______________________________________

Special base.

This equation suggests a special number, which is m = 1.7632228343518967102252017769517070804...

m is defined by \( {^2m=e} \), so that \( \ln(^2m)=m\cdot\ln(m)=1 \).

For the base a = m, the equation simplifies to:

\( {\left [ {{i} \choose {r}} \right ]\cdot \left [b_i \right ] = \left [ \sum_{n=1}^{P(i)} \frac{1}{ \prod_{j=1}^{i} c_{n,j}!}\prod_{j=1}^{i}b_j^{c_{n,j}} \right ]} \)

But let's forget about m for now.

______________________________________

We are now very close to the solution. The only obstacle remaining is the product:

\( { \frac{1}{ \prod_{j=1}^{i} c_{n,j}!} } \)

If we can find a substitution that gets rid of it, we have the solution:

\( {\left [ {{i} \choose {r}} \right ]\cdot \left [b_i \right ] = \ln(^2a) \, \left [ \sum_{n=1}^{P(i)}\prod_{j=1}^{i}b_j^{c_{n,j}} \right ]} \)

At this point we only need to substitute \( {b_i=f^i} \), where f is arbitrary, to get:

\( {\left [ {{i} \choose {r}} \right ]\cdot \left [f^i \right ] = \ln(^2a) \, \left [ \sum_{n=1}^{P(i)} f^i \right ] \,=\, \ln(^2a) \cdot [P(i) \cdot f^i]} \)

... and we get:

\( { \left [f^i \right ] = \left [ {{i} \choose {r}} \right ]^{-1} \cdot [ \ln(^2a) \cdot P(i) \cdot f^i]} \)

The choice of f, very probably, determines the value for °a, and the branch of tetration.

RE: Taylor polynomial. System of equations for the coefficients. - marraco - 01/14/2016

^^ Sorry. I made a big mistake. We cannot substitute \( {b_i=f^i} \), of course.

Maybe \( {b_i=f^{k\cdot i}} \) would work as an approximation, because we know that \( {b_i} \) tends very rapidly to a line on a logarithmic scale. Anyway, it would be of little use.
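As a side note on the special base m mentioned above: here is a minimal numerical check (my own Python sketch, not from the thread; the starting guess is arbitrary) that solves m·ln(m) = 1, i.e. ²m = e, by Newton's method and recovers the quoted digits:

```python
# Newton iteration for the base m with m^m = e, equivalently m*ln(m) = 1,
# which is exactly the condition that makes the ln(^2 a) factor above equal to 1.
from math import log

m = 1.8                                   # arbitrary starting guess
for _ in range(30):
    m -= (m * log(m) - 1) / (log(m) + 1)  # Newton step for h(m) = m*ln(m) - 1
print(m)        # 1.7632228343518967...
print(m ** m)   # 2.718281828459045... = e
```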
We know that the \( {a_i} \) are the derivatives of \( {^xa}\big|_0 \) (divided by i!), so a Fourier or Laplace transform would turn the derivatives into products. But that would mess with the rest of the equation.

RE: Taylor polynomial. System of equations for the coefficients. - marraco - 01/14/2016

Here I expand a row, in the hope that it helps somebody to digest the equation.

(01/13/2016, 04:32 AM)marraco Wrote: We are now very close to the solution. The only obstacle remaining is the product:

The product is what I called "the integer divisor" in my post of 05/03/2015, 04:35 AM.

Here I expanded the row for i=9 of the equation:

\( { \left [ \sum_{n=1}^{P(9)} \frac{a\cdot\ln(a)^{\sum_{j=1}^{9}c_{n,j}}}{ \prod_{j=1}^{9} c_{n,j}!}\prod_{j=1}^{9}a_j^{c_{n,j}} \right ]} \)

after the substitution \( {a_i=\frac {b_i} {\ln(a)} } \):

\( = \ln(^2a) \, \left [ \sum_{n=1}^{P(9)} \prod_{j=1}^{9}\frac{b_j^{c_{n,j}}} { c_{n,j}!} \right ] \,=\, \ln(^2a) \, \left [\frac {b_9^1} {1!}+\frac {b_1^1} {1!}\frac {b_8^1} {1!}+\frac {b_2^1} {1!}\frac {b_7^1} {1!}+\frac {b_3^1} {1!}\frac {b_6^1} {1!}+\frac {b_4^1} {1!}\frac {b_5^1} {1!}+\frac {b_1^2} {2!}\frac {b_7^1} {1!}+\frac {b_1^1} {1!}\frac {b_2^1} {1!}\frac {b_6^1} {1!}+\frac {b_1^1} {1!}\frac {b_3^1} {1!}\frac {b_5^1} {1!}+\frac {b_1^1} {1!}\frac {b_4^2} {2!}+\frac {b_2^2} {2!}\frac {b_5^1} {1!}+\frac {b_2^1} {1!}\frac {b_3^1} {1!}\frac {b_4^1} {1!}+\frac {b_3^3} {3!}+\frac {b_1^3} {3!}\frac {b_6^1} {1!}+\frac {b_1^2} {2!}\frac {b_2^1} {1!}\frac {b_5^1} {1!}+\frac {b_1^2} {2!}\frac {b_3^1} {1!}\frac {b_4^1} {1!}+\frac {b_1^1} {1!}\frac {b_2^2} {2!}\frac {b_4^1} {1!}+\frac {b_1^1} {1!}\frac {b_2^1} {1!}\frac {b_3^2} {2!}+\frac {b_2^3} {3!}\frac {b_3^1} {1!}+\frac {b_1^4} {4!}\frac {b_5^1} {1!}+\frac {b_1^3} {3!}\frac {b_2^1} {1!}\frac {b_4^1} {1!}+\frac {b_1^3} {3!}\frac {b_3^2} {2!}+\frac {b_1^2} {2!}\frac {b_2^2} {2!}\frac {b_3^1} {1!}+\frac {b_1^1} {1!}\frac {b_2^4} {4!}+\frac {b_1^5} {5!}\frac {b_4^1} {1!}+\frac {b_1^4} {4!}\frac {b_2^1} {1!}\frac {b_3^1} {1!}+\frac {b_1^3} {3!}\frac {b_2^3} {3!}+\frac {b_1^6} {6!}\frac {b_3^1} {1!}+\frac {b_1^5} {5!}\frac {b_2^2} {2!}+\frac {b_1^7} {7!}\frac {b_2^1} {1!}+\frac {b_1^9} {9!} \right ] \)

The problematic terms come from the factors \( {\frac {b_i^q}{q!}} \). The q! divisors may emerge not from the term raised to the q-th power; the q! could emerge from the absence of the other terms \( {b_i^0} \). For example, the term

\( {\frac {b_1^2} {2!} \cdot\frac {b_2^2} {2!} \cdot\frac {b_3^1} {1!} } \)

is actually

\( {\frac {b_1^2} {2!} \cdot\frac {b_2^2} {2!} \cdot\frac {b_3^1} {1!} \,\cdot\, \frac {b_4^0} {0!}\cdot\frac {b_5^0} {0!}\cdot\frac {b_6^0} {0!}\cdot\frac {b_7^0} {0!}\cdot\frac {b_8^0} {0!}\cdot\frac {b_9^0} {0!}} \)
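To make the partition bookkeeping of the last two posts concrete, here is a small Python sketch (my own, not from the thread) that enumerates the partitions of i, evaluates the red-side row exactly as in the formula above, and cross-checks it against the i-th Taylor coefficient of \( a^{g(x)} \) obtained by direct series manipulation. The base a = 1.5 and the test coefficients a₀..a₉ (with a₀ = 1) are made-up placeholders, not a tetration solution:

```python
# Cross-check of the partition / Faà di Bruno form of the red side:
# row i should equal the i-th Taylor coefficient of a^(g(x)), g(x) = sum_n a_n x^n, a_0 = 1.
from math import log, factorial, prod

def partitions(i, largest=None):
    """Yield the partitions of i as multiplicity dicts {part j: c_j}."""
    if largest is None:
        largest = i
    if i == 0:
        yield {}
        return
    for part in range(min(i, largest), 0, -1):
        for rest in partitions(i - part, part):
            p = dict(rest)
            p[part] = p.get(part, 0) + 1
            yield p

def red_side(i, a, coeff):
    """Row i of the red side: sum over the P(i) partitions of i."""
    lna, total = log(a), 0.0
    for p in partitions(i):
        m = sum(p.values())                                   # sum_j c_{n,j}
        total += (a * lna**m / prod(factorial(c) for c in p.values())
                  * prod(coeff[j]**c for j, c in p.items()))
    return total

def compose_coeff(i, a, coeff):
    """i-th Taylor coefficient of a^(g(x)) = a * exp(ln(a)*(g(x)-1)) by series arithmetic."""
    h = [log(a) * c for c in coeff]
    h[0] = 0.0                                                # h(x) has no constant term
    out = [0.0] * (i + 1); out[0] = 1.0                       # running sum of h^k / k!
    powh = [0.0] * (i + 1); powh[0] = 1.0                     # coefficients of h(x)^k
    for k in range(1, i + 1):
        newp = [0.0] * (i + 1)
        for u in range(i + 1):
            for v in range(min(len(h), i + 1 - u)):
                newp[u + v] += powh[u] * h[v]
        powh = newp
        for d in range(i + 1):
            out[d] += powh[d] / factorial(k)
    return a * out[i]

a = 1.5
coeff = [1.0, 0.9, 0.1, 0.05, -0.02, 0.01, 0.003, -0.001, 0.0005, 0.0002]  # made-up a_0..a_9
for i in (3, 6, 9):
    print(i, red_side(i, a, coeff), compose_coeff(i, a, coeff))            # the two columns agree
```

For i = 9 the partition loop runs over the same 30 terms as the expansion above.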
RE: Taylor polynomial. System of equations for the coefficients. - marraco - 03/13/2016

(01/13/2016, 04:32 AM)marraco Wrote: So, we want the vector \( {[a_i]} \) ...

Thanks to Daniel's advice, it is easy to see that the red side can be derived as a direct application of Faà di Bruno's formula

\( {d^i \over dx^i} f(g(x)) =\sum \frac{i!}{m_1!\,m_2!\,\cdots\,m_i!} \cdot f^{(m_1+\cdots+m_i)}(g(x)) \cdot \prod_{j=1}^i\left(\frac{g^{(j)}(x)}{j!}\right)^{m_j} \)

In the blue/red equation,

(04/30/2015, 03:24 AM)marraco Wrote: We want the coefficients aᵢ of this Taylor expansion: ...

it is easy to see on the red side that

\( {m_j=c_{n,j}} \)

\( {f(x)=a^x} \)

\( {g(x)={^xa}=\sum_{n=0}^{\infty}{a_n \cdot x^n}} \)

\( f^{(m_1+\cdots+m_i)}(g(0))=a\cdot\ln(a)^{\sum_{j=1}^{i}c_{n,j}} \)

\( {\frac{g^{(j)}(0)}{j!}=a_j} \)

RE: Taylor polynomial. System of equations for the coefficients. - Gottfried - 08/23/2016

(05/01/2015, 01:57 AM)marraco Wrote: This is a numerical example for base a=e
Quote: It is a non-linear system of equations, and the solution for this particular case is:

This is perhaps a good starting point to explain the use of Carleman matrices in my (Pari/GP-supported) matrix toolbox, because you've just applied things analogously to how I do this, only you didn't express it in matrix formulae.

To explain the basic idea of a Carleman matrix: consider a power series

\( \hspace{100} \small f(x) = a_0 + a_1 x + a_2 x^2 + ... \)

We express this in terms of the dot product of two infinite-sized vectors

\( \hspace{100} \small V(x) \cdot A_1 = f(x) \)

where the column vector A_1 contains the coefficients \( \small A_1=[a_0,a_1,a_2,...] \) and the row vector \( \small V(x)=[1,x,x^2,x^3,x^4,...] \).

Now, to make that idea valuable for function composition / iteration, it would be good if the output of such an operation were not simply a scalar, but of the same type ("Vandermonde vector") as the input \( \small V(x) \). This leads to the idea of Carleman matrices: we collect the vectors \( \small A_0,A_1,A_2,A_3,A_4,... \), where the vector \( \small A_k \) contains the coefficients of the k-th power of f(x), such that \( \small V(x) \cdot A_k = f(x)^k \), into a matrix \( \small A \), getting the operation:

\( \hspace{100} \small V(x) \cdot A = [ 1, f(x), f(x)^2, f(x)^3 ,... ] \)

or

\( \hspace{100} \small V(x) \cdot A = V(f(x)) \)

Having this general idea, we can fill our toolbox with Carleman matrices for the composition of functions for a fairly wide range of algebra.

For instance the operation INC

\( \hspace{100} \text{INC := } \hspace{50} \small V(x) \cdot P = V(x+1) \)

and its h'th iteration ADD

\( \hspace{100} \text{ADD(h) := INC}^h \text{ := } \hspace{50} \small V(x) \cdot P^h = V(x+h) \)

is then only a problem of powers of P.

The operation MUL needs a diagonal Vandermonde vector:

\( \hspace{100} \text{MUL(w) := } \hspace{50} \small V(x) \cdot {}^dV(w) = V(x*w) \)

The operation DEXP (= exp(x)-1) needs the matrix of Stirling numbers of the 2nd kind, similarity-scaled by factorials:

\( \hspace{100} \text{DEXP := } \hspace{50} \small V(x) \cdot S2 = V(e^x -1) \)

and as an exercise we see that, if we right-compose this with the INC operation, we get the ordinary EXP operator, for which I give the matrix the name B:

\( \hspace{100} \text{EXP := } \hspace{50} \small V(x) \cdot S2 \cdot P = V( e^x -1) \cdot P = V(( e^x -1) +1) = V(e^x) \)

\( \hspace{100} \text{EXP := } \hspace{50} \small V(x) \cdot B = V( e^x) \)

Of course, iterations of the EXP then require only powers of the matrix B.
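To make the operations just listed concrete, here is a small Python sketch (my own; Gottfried's toolbox is in Pari/GP). It uses the same convention, V(x) a row vector of powers of x and column k of a Carleman matrix holding the coefficients of f(x)^k, builds truncated P and S2, and checks the identity S2·P = B together with the INC and EXP actions; the truncation size N = 12 is arbitrary:

```python
# Truncated Carleman matrices for INC (P), DEXP (S2) and EXP (B = S2.P),
# in the convention V(x).M = V(f(x)) with V(x) = [1, x, x^2, ...] a row vector.
import numpy as np
from math import comb, factorial

N = 12                                        # arbitrary truncation size

# Pascal matrix: column k holds the coefficients of (x+1)^k, so V(x).P = V(x+1)
P = np.array([[comb(k, i) for k in range(N)] for i in range(N)], dtype=float)

# Stirling numbers of the 2nd kind S(i, k)
S = np.zeros((N, N)); S[0, 0] = 1
for i in range(1, N):
    for k in range(1, N):
        S[i, k] = k * S[i - 1, k] + S[i - 1, k - 1]

# factorially scaled: column k of S2 holds the coefficients of (exp(x)-1)^k
S2 = np.array([[factorial(k) * S[i, k] / factorial(i) for k in range(N)] for i in range(N)])

# exact EXP-Carleman matrix: column k holds the coefficients of exp(k*x), i.e. k^i / i!
B = np.array([[k**i / factorial(i) for k in range(N)] for i in range(N)])

print(np.allclose(S2 @ P, B))                 # True: S2 . P = B, as stated in the post

x = 0.3
V = np.array([x**i for i in range(N)])
print(V @ P[:, 1], x + 1)                     # INC: column 1 of P gives x+1
print(V @ B[:, 1], np.exp(x))                 # EXP: column 1 of B gives e^x (truncated series)
```

Because S2 is lower triangular and P is upper triangular, the identity S2·P = B holds exactly even for the finite truncation; the EXP check in the last line is only as accurate as the truncated exponential series.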
To see that this is really useful, we need a lemma on the uniqueness of power series. That is, in the new matrix notation: if a function \( \small V(x) \cdot A_1 = f(x) \) is continuous for an (even small) continuous range of the argument x, then the coefficients in A_1 are uniquely determined.

That uniqueness of the coefficients in A_1 is the key: we can look at the compositions of Carleman matrices alone, without reference to the notation with the dot product by V(x), and, for instance, we can make use of the analysis of Carleman-matrix decompositions like

\( \hspace{100} \small V(x) \cdot S2 \cdot P = V(x) \cdot B \)

and can analyze

\( \hspace{100} \small S2 \cdot P = B \)

directly, for instance to arrive at the operation LOGP : log(1+x)

\( \hspace{100} \small S2 \cdot P = B \\ \hspace{100} \small P = S2^{-1} \cdot B \qquad \text{ where } \qquad S2^{-1} = S1 \\ \hspace{100} \small S2 = B \cdot P^{-1} \\ \)

______________________________________

Now I relate this to the derivation which I've quoted from marraco's post.

(05/01/2015, 01:57 AM)marraco Wrote: This is a numerical example for base a=e

First we see the Pascal matrix P on the lhs in action, then the coefficients \( \small [1,a_1,a_2,...] \) of the Abel function \( \small \alpha(x) \) in the vector, say, A_1. So the left hand side is

\( \hspace{100} \small P \cdot A_1 \)

To make things smoother, we first assume A is the complete Carleman matrix, expanded from A_1. If we "complete" that left hand side to discuss this in power series, we have

\( \hspace{100} \small V(x) \cdot P \cdot A = V(x+1) \cdot A = V( \alpha (x+1)) \)

It is very likely that the author wanted to derive the solution for the equation \( \small \alpha(x+1) = e^{\alpha(x)} \); so we would have for the right hand side

\( \hspace{100} \small V(x) \cdot A \cdot B = V(\alpha(x)) \cdot B = V( e^{\alpha (x)}) \)

and indeed, expanding the terms using the matrices as created in Pari/GP with, let's say, size 32x32 or 64x64, we get very nice approximations to the description on the rhs of the quoted matrix formula.

What we can now do depends on the above uniqueness lemma: we can discard the V(x)-reference, just writing \( \small A \cdot B \), and looking at the second column of B only,

\( \small A \cdot B[,1] = Y \)

we get \( \small y_1 = e \cdot a_1 , y_2 =e \cdot (... ) \) as shown in the quoted post. So indeed, that system of equations of the initial post is expressible by

\( \hspace{100} \small P \cdot A = A \cdot B \)

and the OP searches for a solution for A.

______________________________________

While I have, at the moment, not yet a solution for A this way, we can, for instance, note that if A is invertible, then the equation can be put in a Jordan form:

\( \hspace{100} \small P = A \cdot B \cdot A^{-1} \)

which means that B can be decomposed by similarity transformations into a triangular Jordan block, namely the Pascal matrix - and, having a Jordan solver for finite matrix sizes, one could try whether, as the matrix sizes increase, the Jordan solutions converge to some limit matrix A.

For the alternative: looking at the "regular tetration" and the Schröder function (including recentering the power series around the fixpoint), one gets a simple solution just by the diagonalization formulae for triangular Carleman matrices, which follow the same formal analysis using the "matrix toolbox" and which can, for finite size and numerical approximations, nicely be constructed using the matrix features of Pari/GP.
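To illustrate that closing remark about "regular tetration" via recentering at a fixpoint and diagonalizing a triangular Carleman matrix, here is a rough Python sketch (my own; Gottfried describes doing this in Pari/GP). The base b = √2 with real fixpoint t = 2, the truncation size N = 16 and the test point x = 0.1 are arbitrary choices; the code computes a half-iterate of the recentered function and checks it by composing it with itself:

```python
# Regular half-iterate of f(x) = b^x near its fixpoint t (b^t = t), via a truncated
# Carleman matrix of the recentered function g(x) = f(t+x) - t = t*(b^x - 1).
import numpy as np
from math import log, factorial

b, t, N = 2 ** 0.5, 2.0, 16                   # base sqrt(2), fixpoint 2, truncation size
lnb = log(b)

# Taylor coefficients of g(x) = t*(exp(x*ln b) - 1): no constant term, g'(0) = t*ln(b) = ln 2
g = np.array([0.0] + [t * lnb**k / factorial(k) for k in range(1, N)])

# Carleman matrix M, column k = coefficients of g(x)^k  (triangular, since g(0) = 0)
M = np.zeros((N, N))
col = np.zeros(N); col[0] = 1.0               # coefficients of g(x)^0 = 1
M[:, 0] = col
for k in range(1, N):
    new = np.zeros(N)                          # multiply the previous column by g(x), truncated
    for i in range(N):
        for j in range(N - i):
            new[i + j] += col[i] * g[j]
    col = new
    M[:, k] = col

# distinct diagonal (t*lnb)^k  ->  diagonalizable; the matrix square root gives the half-iterate
evals, Q = np.linalg.eig(M)
M_half = (Q @ np.diag(evals ** 0.5) @ np.linalg.inv(Q)).real

def series(Mat, x):
    """Evaluate the power series stored in column 1 of Mat at the point x."""
    return sum(Mat[i, 1] * x**i for i in range(N))

x = 0.1
y = series(M_half, x)                          # half-iterate of g at x
print(series(M_half, y), t * (b**x - 1))       # composing twice should reproduce g(x)
```

The two printed numbers should agree to several digits, and the agreement improves as N grows; for fractional iterates of the original f one would shift back by the fixpoint t at the end.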