Matrix Operator Method - Printable Version

+- Tetration Forum (https://tetrationforum.org)
+-- Forum: Tetration and Related Topics (https://tetrationforum.org/forumdisplay.php?fid=1)
+--- Forum: Mathematical and General Discussion (https://tetrationforum.org/forumdisplay.php?fid=3)
+--- Thread: Matrix Operator Method (/showthread.php?tid=17)
RE: Matrix Operator Method - Gottfried - 08/27/2007

bo198214 Wrote: Hm, then I somewhere must have an error in my computation.

Hmm, I can crosscheck such a computation in the evening. But for a quick reply: did you observe the partial sums of the entries for increasing n, s_n(i,j) = sum(k=1..n) term_k(i,j)? If their signs oscillate, we have a candidate for Euler summation (for each entry separately!).

Another try may be the alternate series for the logarithm: let f = (a-1)*(a+1)^-1, then log(a) = 2*(f/1 + f^3/3 + f^5/5 + ...) (I don't have it at hand whether this series actually has alternating signs, but I think it's correct). The same formula can be used for matrices A, where (A+I) is invertible and *all* eigenvalues of (A+I) are in the admissible range for this series.

Anyway, it is worth looking at the partial sums; Euler summation of the low order 2 can sum, for instance, 1 - 2 + 4 - 8 + 16 - ..., which is even more divergent than many of our sums, requiring only a few terms for a good approximation of the final result. For a more detailed discussion I can do better in the evening.

Gottfried


RE: Matrix Operator Method - bo198214 - 08/27/2007

Gottfried Wrote: If their sign oscillate ...

No, their signs don't oscillate. The reason is simple: some of the eigenvalues are greater than 1, and hence the logarithm series no longer converges. This seems fixable with the series

\( \log(A)=\sum_{n=0}^\infty \frac{2}{2n+1} \left((A-I)(A+I)^{-1}\right)^{2n+1} \)

which then properly converges.


RE: Matrix Operator Method - Gottfried - 08/27/2007

bo198214 Wrote: No, their signs don't oscillate.

Well, I have it in my "Hütte, Mathematische Tafeln":

\( \log( \frac{1+x}{1-x}) =2\left({x + \frac{x^3}{3} + \frac{x^5}{5} +... }\right) \hspace{100} for |x|<1 \\ \log( \frac{x+1}{x-1})=2\left({x + \frac{x^3}{3} + \frac{x^5}{5} +... 
}\right) \hspace{100} for |x|>1 \)

To apply one of these series to a matrix, all eigenvalues must satisfy the same bound separately. I think this settles the question for the most interesting cases. For matrices with eigenvalues both <1 and >1, which occurs for the Bs-matrices when s is outside the range 1/e^e ... e^(1/e), we still need workarounds, like the techniques for divergent summation. But that is then a completely different chapter. I'm happy that we have now arrived at a convergence of understanding of (one of?) the core points of the concepts.

Gottfried


RE: Matrix Operator Method - bo198214 - 08/29/2007

Gottfried Wrote: With A1 = A-I then log(A) = A1 - A1*A1/2 + A1*A1*A1/3 - ...

Are you sure about this? For me it rather looks as if they converge. The eigenvalues are quite different depending on the truncation. Even in the case \( b<\eta \), where you can compute the logarithm via the infinite matrix power series, it should depend on where you truncate the matrix.


RE: Matrix Operator Method - Gottfried - 09/03/2007

bo198214 Wrote: Are you sure about this?

Henryk - please excuse the delay. I was a bit exhausted after my search for the eigensystem solution. Well, I think I did not choose the best wording. What I wanted to say was that if A1 = A-I is nilpotent, then the series log(A) = A1 - A1*A1/2 + A1*A1*A1/3 - ... is finite. But A1 is only nilpotent if the base is e (the diagonal of A is 1, so the diagonal of A1 = A-I is zero) - I had in mind that we were talking about this base. Then the entries of the partial sums up to the power d do not change for rows < d.

Call B = S2 - I, where S2 is the matrix of Stirling numbers of the 2nd kind. The diagonal of B is zero. The entries of the partial sums of a series, for instance I, I+B/1!, I+B/1!+B^2/2!, ..., are constant up to rows 0, 1, 2, ... accordingly, since the additional terms do not contribute to those rows due to the nilpotency. This detail was all I wanted to point out; it's surely nothing much important there... 
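As a side note for readers who want to try this: the series bo198214 gives above, \( \log(A)=\sum_{n=0}^\infty \frac{2}{2n+1} \left((A-I)(A+I)^{-1}\right)^{2n+1} \), is easy to test numerically. A minimal numpy sketch (the function name `gregory_logm` and the toy diagonal matrix are my own choices, just for illustration):

```python
import numpy as np

def gregory_logm(A, terms=60):
    """Matrix logarithm via log(A) = 2 * sum_{n>=0} C^(2n+1)/(2n+1),
    with C = (A - I)(A + I)^{-1}.  The eigenvalues of C are (lam-1)/(lam+1),
    so the series converges even when A has eigenvalues above 1, where the
    plain log(I + X) series diverges."""
    I = np.eye(len(A))
    C = (A - I) @ np.linalg.inv(A + I)
    C2 = C @ C
    term = C                          # holds C^(2n+1)
    S = np.zeros_like(C)
    for n in range(terms):
        S = S + term / (2 * n + 1)
        term = term @ C2
    return 2 * S

# diagonal test matrix with eigenvalues on both sides of 1
A = np.array([[2.0, 0.0], [0.0, 0.5]])
L = gregory_logm(A)
```

For this A the result is diag(log 2, log 1/2); the eigenvalues of C here are ±1/3, so 60 terms are far more than enough.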
Regards - Gottfried


RE: Matrix Operator Method - bo198214 - 09/03/2007

Ah, you meant the nilpotent case. Surely the eigenvalues do not change there.


RE: Matrix Operator Method - Gottfried - 10/08/2007

Hi friends - I'm at a point to ask "where does the matrix method stand?", so I'll take the occasion to provide a wider resume. I'll use the two definitions for the related iterations, where I like the Big-T notation, and also use Big-U:

Code: .

Also I call 1/e^e < b < e^(1/e) the "safe range" for b in the following.

------------- Integer tetration --------------------------------

(I assume my matrix notation is known here, but I switched to the parameter b for the base, which is more common here, where I had earlier used s.)

Implementation of T(b,x,1), value in the second column of the result:

Code: .

The matrix approach is just an elementary reformulation here. The coefficients needed for the transformation x --> b^x, or T(b,x,0) --> T(b,x,1), are the coefficients of the appropriate exponential series and are just collected in a matrix scheme. This is obviously iterable:

Code: .

Code: .

For the easier version U() one finds a discussion of this, for instance, in Aldrovandi/Freitas [AF], which refers to the triangular form of the required operator matrix; the useful property of row-finiteness is also addressed, for instance, in Berkolaiko [B], who proves the existence of the matrix operator for any similar transformation. I'm not sure whether I can use Berkolaiko's arguments for square matrices (and thus the original tetration iteration T()) as well, so I must leave open "whether the collection of coefficients can indeed be used as a matrix".

But the numerical results, which are always approximations based on the finite truncation, suggest that integer iteration and matrix powers are interchangeable for the T()- and U()-transformations as well:

a) numerical results by iteration and by matrix powers coincide, 
b) linear combinations of different V(x)-vectors result in linear combinations of the corresponding V(y)-vectors, as expected,

c) infinite sums of various V(x)-vectors give the expected results (tetra-geometric series). For a subset of parameters the result can be crosschecked by conventional scalar evaluation (possibly Euler summation required) and agrees with these results,

d) even infinite sums of powers of Bb give results which are compatible with conventional scalar evaluation when the parameter b is in the safe range, and this can also be extended to parameters b outside this range (b > e^(1/e)). (Analytical arguments which back that observation are based on the eigensystem hypothesis, see below.)

------ Continuous tetration ------------------------

The continuous version then depends completely on the possibility of interpreting the collection of coefficients used in the integer version as an actual matrix, including the option to take matrix logarithms and diagonalization (eigensystem decomposition), since we need the concept of fractional iteration and thus of fractional matrix powers.

Numerically

Again the numerical results agree with the expected results for the safe range of the base parameter b and manageable height parameters h. Already numerical approximations for the matrix logarithm and for the diagonalization, using finite dimensions up to dim=32 and dim=64, show the expected behaviour in a certain range of the parameters, although the empirical eigenvalues have an unknown degree of approximation to the "true" eigenvalues, which are unknown. Anyway, approximate results can be found even for some b outside the "safe range" (if not too far) and h not too high. Just use your favorite eigen-solver and apply fractional powers...

Analytically

Based on a hypothesis about the structure of the eigenvalues, a formal and analytical solution for the eigenvectors was found for the parameter b in the "safe range". 
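The claim in a) above - that integer iteration and truncated matrix powers coincide - can be illustrated with a small sketch. The truncation size, the base, and the helper name `carleman` are my own choices, not from the posts:

```python
import numpy as np
from math import log, factorial

N = 32                                # truncation size

def carleman(b):
    """Truncated N x N operator matrix for x -> b^x: column k holds the
    Taylor coefficients of (b^x)^k = e^(k*log(b)*x) in powers of x, so the
    row vector V(x) = [1, x, x^2, ...] satisfies V(x) @ Bb ~ V(b^x)."""
    u = log(b)
    return np.array([[(k * u) ** j / factorial(j) for k in range(N)]
                     for j in range(N)])

b = 2 ** 0.5                          # inside the safe range 1/e^e .. e^(1/e)
Bb = carleman(b)
V1 = np.ones(N)                       # V(1) = [1, 1, 1, ...]
y1 = (V1 @ Bb)[1]                     # one iteration:  b^1
y2 = (V1 @ Bb @ Bb)[1]                # two iterations: b^(b^1)
```

Here y1 and y2 match b and b^b to many digits; the truncation error only becomes visible for larger x or b.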
Again, the results for these parameters agree with the expected values when approximated by scalar operations, and fractional iterates were computed for some examples.

It was also possible to apply the hypothesis about the eigensystem to values b > e^(1/e) when the required parameters were taken from the set of complex fixpoints for those b-parameters. The matrices for some example parameters could be reproduced perfectly, for instance for b=3 and b=7, h=1, which matched the matrices given by the simple integer approach. Integer powers were also correctly reproduced.

Still one problem: fractional powers for b > eta

A problem occurred with non-integer powers here. The fractional powers of matrices, when constructed based on the eigensystem hypothesis, were different from the matrices computed by a numerical eigensystem solver. Possibly the results are just complex rotations of each other (but I didn't confirm this yet), or the hypothesis must be modified to use complex conjugates or the like.

--------------------------------------------------------

The formula for computation of the eigensystem makes use of the following decomposition. Let D be the diagonal matrix containing the eigenvalues, dV(u), and W the matrix of eigenvectors, such that

Code: Bb = W^-1 * D * W

Code: W = X * P~ * dV(t)

Code: Bb = (dV(t^-1) * P^-1~ * X^-1) * dV(u) * (X * P~ * dV(t))

Code: D = dV(u)

Here X and P are triangular (and thus invertible); X, dV(t) and dV(u) depend on t, but only dV(u) is modified by the tetration-height parameter h, such that we must insert D^h = dV(u^h). The coefficients in X are finite polynomials in t and u, having also denominators of products of (1-u), (1-u^2), (1-u^3), ..., which indicates singularities inherent to this method (see also Daniel's recent post; I'll add the reference later, I'm writing this in a notepad without the http:-references at hand). 
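The purely numerical route mentioned above - feed Bb to your favorite eigen-solver, exponentiate the eigenvalues, recompose - can be sketched in a few lines. This is exactly the heuristic variant, with no claim about the quality of the empirical eigenvalues; the helper name and the toy matrix are mine:

```python
import numpy as np

def fractional_power(M, h):
    """M^h via the eigendecomposition M = W D W^{-1}, so M^h = W D^h W^{-1}.
    For truncated operator matrices this is only heuristic: we do not know
    how well the empirical eigenvalues approximate the true ones."""
    d, W = np.linalg.eig(M)
    Mh = W @ np.diag(d.astype(complex) ** h) @ np.linalg.inv(W)
    return Mh.real if np.allclose(Mh.imag, 0.0) else Mh

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
half = fractional_power(A, 0.5)       # a matrix "half-iterate" of A
```

For a diagonalizable matrix with positive eigenvalues, `half @ half` reproduces A, which is the consistency check used for the integer powers in the posts.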
While all coefficients of the individual matrices are then finitely computable, the analytical computation of W^-1 still involves the evaluation of infinite series, because P^-1~ is not row-finite (though for numerical computation it may simply be approximated by numerical inversion of W). In practice this means there will be competing methods for the numerical evaluation of the complete matrix product, in the sense of choosing the best order of evaluation by exploiting the associativity of the matrix products.

-----------------------------------------

So everything is still based on heuristics and hypotheses, which should now be proven to close the case. For me it is now nearly without doubt that this method - with its implicit and underlying definition of tetration - will come out as a formally coherent/consistent method, at least as a formal skeleton/description. But - besides the need for formal proofs - I still have two problems:

1) The described matrix approach allows dimensions of 24, 32, 64, and thus only as many terms for the final series. This is way too few for a general implementation; even for tests of approximations this is often too few. Jay announced series with up to 700 terms - so since this seems possible, the method should be reviewed in this respect.

2) Fractional powers of the analytically and eigensystem-constructed Bb-matrices, for b > eta. I could exploratively try to find what's going wrong with the result and look for a remedy this way. But I would like first to find a hypothesis for the reason (and the remedy) of this problem. It surely has to do with the different branches of the complex logarithm, but I don't see in which way this is precisely involved in the current case. The integer powers come out fine...

The U()-problem seems to be simpler than the T()-problem, since we deal only with triangular matrices, with a more obvious eigensystem. 
Possibly the "closing of the case" will be easier if first attempted via the analysis of the U()-transformation. Anyway, if it cannot be done in the near future, I think I'm taking a longer break - I'm already feeling a bit exhausted and tired of the subject.

Gottfried

[AF] Continuous iteration of dynamical maps; R. Aldrovandi and L.P. Freitas; physics/9712026, 16 Dec 1997

[B] Analysis of Carleman Representation of Analytical Recursions; G. Berkolaiko; Journal of Mathematical Analysis and Applications 224, 81-90, 1998; Article No. AY985986


RE: Matrix Operator Method - Gottfried - 10/14/2007

One encouraging result for fractional iterates with non-real base

Well, perhaps I was too scared because of the difficult approximations of the matrix-operator method for the cases where not e^-e < b < eta. With a careful computation of the half-iterate of the complex base I, I got

Code: y1 = {I,1}^^0.5 ~ 1.16729812784 + 0.735996102206*I

Remember my hypothetical formula for continuous tetration y = {b,x}^^h, implemented by

V(y)~ = V(x)~ * (dV(log(b))*B)^h = V(x)~ * Bb^h and y = V(y)[1]

To arrive at the desired result, Bb^h must be constructed by the analytical description. Let W^-1 * D * W = Bb be the eigendecomposition of Bb, and W^-1 * D^h * W = Bb^h the h'th power of Bb. This can simply be approximated with any eigensystem solver, when fed with the matrix Bb: exponentiate the eigenvalues and recompose... But the result is then only heuristic, and we don't know the degree of approximation.

-------------

Following my hypothesis about the further structure of the eigen-matrices, we can compute this structure analytically. I assumed:

W^-1 = dV(1/t) * P^-1~ * X^-1 and W = X * P~ * dV(t)

P is the known lower triangular Pascal matrix, X is a lower triangular matrix depending on t and u, and dV(t) is a diagonal matrix containing the consecutive powers of t. The eigenvalues in D are the consecutive powers of u. 
Here t and u depend on the base parameter b, such that

t^(1/t) = b and u = log(t)

With my fixpoint-tracer I can first find a solution for t and b, at least for some values of b outside the range e^-e < b < e^(1/e), and it seems even for complex values of b (I have to make sure of this for the general case of complex b).

The entries of X can be composed of finite polynomials in t and u, and since X is triangular, its inverse as well as X^-1 * D^h * X can be computed exactly/symbolically without need of approximation. As I have already indicated in a previous post, the direct computation of W^-1 is *not* possible without approximation, since P^-1~ is not row-finite; but the order of computation of the whole matrix expression can be optimized so that there is only one place where we need approximative summation, giving inexact values. This can be the last step.

The whole formula in the analytical decomposition is

V(y)~ = V(x)~ * Bb^h
V(y)~ = V(x)~ * W^-1 * D^h * W = V(x)~ * (dV(1/t) * P^-1~ * X^-1) * D^h * (X * P~ * dV(t))

Exploiting associativity, we implement this by

V(y)~ = (V(x)~ * dV(1/t) * P^-1~) * X^-1 * D^h * X * (P~ * dV(t))
V(y/t-1)~ = V(x/t-1)~ * X^-1 * D^h * X

Denote M(t,h) = X^-1 * D^h * X; then

V(y/t-1)~ = V(x/t-1)~ * M(t,h)

M(t,h) is computed with exact terms, and only its second column is needed for the final computation. Call these terms m_r, using r for the row index. Let (x/t-1) = z; then

y/t-1 = sum(r=0..inf) m_r * z^r

and finally

y = ((sum(r=0..inf) m_r * z^r) + 1)*t

The terms of the series m_r * z^r still do not converge over the short run of 32 terms which I have available. But they are far better summable by Euler summation than the terms which occur if I apply the naive form of summation using the second column of the precomputed Bb^h. 
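Since Euler summation carries a lot of weight in this thread (it already appeared for 1 - 2 + 4 - 8 + 16 - ... earlier), here is a small sketch of the classical Euler transform, sum_n (-1)^n Delta^n a_0 / 2^(n+1), in exact rational arithmetic; the helper name `euler_sum` is mine:

```python
from fractions import Fraction

def euler_sum(a, terms=20):
    """Euler transform of the alternating series sum_n (-1)^n a[n]:
    the regularized value is sum_n (-1)^n Delta^n a[0] / 2^(n+1),
    where Delta is the forward difference.  Needs len(a) > terms."""
    row = [Fraction(x) for x in a]
    total = Fraction(0)
    for n in range(terms):
        total += (-1) ** n * row[0] / 2 ** (n + 1)
        row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
    return total

a = [2 ** n for n in range(40)]       # the series 1 - 2 + 4 - 8 + ...
s = euler_sum(a)                      # close to the Abel value 1/3
```

For a_n = 2^n every iterated difference Delta^n a_0 equals 1, so the transformed series is plainly geometric and converges to 1/3 with only a handful of terms - the "order 2" behaviour described in the first post.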
Possibly the problems which remain for the matrix method can be reduced to mere acceleration of convergence (or regular summation of alternating divergent series), and are then just a matter of technical improvements of the numerical methods. That would be a very nice outcome, because we would then have a firm base from which only optimizations are required... (This is worth a good glass of wine tonight :-) )

Gottfried


RE: Matrix Operator Method - Gottfried - 04/04/2008

Hi all - just as a note: I'm preparing a compilation for my matrix method. I've not yet finished it. But because classes started recently, I'll possibly not have much time to continue at the same pace and/or intensity. So I thought to provide the first half, which at least explains the "naive" method (and, for instance, relates it to the Bell matrices). If you like, see Matrix-approach and send useful comments.

Gottfried


RE: Matrix Operator Method - Gottfried - 04/17/2008

Hi - although I wanted to stay a bit detached for some weeks and look into the basics of the divergent-summation thing, I've just made a big step toward backing up my eigensystem decomposition by a completely elementary iterative method, and I want to share this. The symbolic eigensystem decomposition is enormously resource-consuming and also not yet well documented and verified in a general manner. Here I propose a completely elementary (while still matrix-based) way to arrive at the same result as by eigensystem analysis.

----------------------------------------------------

As you might recall from my "continuous iteration" article, I've found a matrix representation for the coefficients of the power series for the U-tetration \( \hspace{24} U_t^{^oh}(x) \), where \( U_t(x)= U_t^{^o1}(x) = t^x - 1 \). Here I'll extend the notation for the power series a bit, compared to the text in the article:

\( \hspace{24} U_t^{^oh}(x) = a_1(t,h)* b_1/d_1 * x/1! + a_2(t,h)*b_2/d_2 * x^2/2! + ... \)

Let me, as earlier, use the notation u = log(t) for this text. 
Then

* the b_k are powers of u (but of no special interest here),
* the d_k are products of (u^j - 1), like \( (u-1)*(u^2-1)*...*(u^{k-1}-1) \), and
* the a_k(t,h) are bivariate polynomials, whose numerical coefficients are the sole mysteries in this power series.

Since we have two formal variables in each a_k, the coefficients can be arranged in matrices, whose row- and column-multipliers are the consecutive powers of the parameters. For instance, a_3(t,h) looks like the following:

\( \hspace{24} u=\log(t) \\ v=u^h \\ \begin{array}{rll} a_3(t,h)*b_3/d_3/3! &=& ((u^3 + 2*u^2)/(6*u^3 - 6*u^2 - 6*u + 6))*v^3 \\ & & + (u^2/(-2*u^2 + 4*u - 2))*v^2 \\ & & + ((2*u^3 + u^2)/(6*u^3 - 6*u^2 - 6*u + 6))*v \end{array} \)

With the 3! removed from the denominator this becomes

\( \hspace{24} \begin{array}{rll} a_3(t,h)*b_3/d_3 &=& ((u^3 + 2*u^2)/(u^3 - u^2 - u + 1))*v^3 \\ && + (3*u^2/(-u^2 + 2*u - 1))*v^2 \\ && + ((2*u^3 + u^2)/(u^3 - u^2 - u + 1))*v \end{array} \)

d_3 is the common denominator; it is

\( \hspace{24} d_3 = (u^2-1)*(u-1) = u^3 - u^2 - u + 1 \)

With this removed it is

\( \hspace{24} a_3(t,h)*b_3 =(u^3 + 2*u^2)*v^3 + (-3*u^3 - 3*u^2)*v^2 + (2*u^3 + u^2)*v \)

With b_3 = u^2 also removed, this is

\( \hspace{24} a_3(t,h) =(u + 2)*v^3 + (-3*u - 3)*v^2 + (2*u + 1)*v \)

Now the numerical coefficients of this polynomial can be arranged in a matrix

Code: `

Code: `

Now v is u^h, and we have the interesting relation, depending on h, that any integer height h provides a column-shift by h rows. So if h=0 we have no shift

Code: `

If h=1 we have

Code: `

Code: `

and we have another sum, where d_3 again cancels the denominator. With h=2 we have the most interesting case (concerning a_3)

Code: `

where again d_3 cancels the denominator. From inspection of the first few A-matrices I concluded that for each integer h the numerical coefficients of all A-matrices cancel the denominator. This even allows u=1, t=exp(1), and via fixpoint-shift the tetration-base b=e^(1/e), for integer heights, since the zero denominators cancel. 
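The cancellation claim for integer h can be checked mechanically for the a_3 example above: substitute v = u^h into a_3(t,h)*b_3 and reduce modulo d_3 = u^3 - u^2 - u + 1. A small self-contained sketch with hand-rolled polynomial arithmetic over ascending coefficient lists (all helper names are mine):

```python
def shift(p, k):
    """Multiply polynomial p (ascending coefficient list) by u^k."""
    return [0] * k + list(p)

def padd(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

def prem(p, d):
    """Remainder of p modulo the monic polynomial d (ascending coeffs)."""
    p = list(p)
    while True:
        while p and p[-1] == 0:
            p.pop()
        if len(p) < len(d):
            return p              # empty / all-zero result means: divisible
        c, k = p[-1], len(p) - len(d)
        for i, di in enumerate(d):
            p[k + i] -= c * di

d3 = [1, -1, -1, 1]               # u^3 - u^2 - u + 1, ascending

def a3b3(h):
    """a_3(t,h)*b_3 with v = u^h substituted, as a polynomial in u:
    (u^3+2u^2)*u^(3h) + (-3u^3-3u^2)*u^(2h) + (2u^3+u^2)*u^h."""
    return padd(padd(shift([0, 0, 2, 1], 3 * h),
                     shift([0, 0, -3, -3], 2 * h)),
                shift([0, 0, 1, 2], h))
```

For every integer h this leaves remainder zero; for instance a3b3(1) = u^6 - u^5 - u^4 + u^3 = u^3 * d_3. For fractional h no such polynomial identity exists, in line with the half-integer observation below.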
Concerning fractional heights it is interesting that, for instance if h is half-integer, all this column-shifting has a special consequence. For h=1/2 we have

Code: `

where the cancelling of the denominators cannot occur. This may give a feeling for the mysteries of fractional iteration.

========================================================================

But this was only to recall some known things. This message is intended to cover another aspect. The symbolic eigen-decomposition which gives us the A-matrices is extremely costly in time and memory, so I was searching for another method to determine the A-matrices independently. My starting point was the striking property that the coefficients seem to guarantee that for integer heights the denominator is always a factor of the numerator. So the A-matrices must be expressible in some multiples of the denominators d; let's express their polynomial coefficients as a vector D. If these multiples can be determined a priori, then the costly eigenvalue decomposition is not needed.

Yesterday I found the solution (with the help of members of the seqfan mailing list), and it is an extremely simple routine.

We use the same example A_3 here, but since we don't want to use the eigendecomposition, all numerical coefficients in A are unknown. So we have

Code: `

and the row-sums must equal an integer composition of D3 = column([1,-1,-1,1]). We know from the 3rd and 4th rows that the row-sums RS are zero, so in

Code: `

the unknown k0 must be zero. The next condition exploits the properties at h=1, since we have

Code: `

and we have that

Code: `

but have no information about k0 here. 
From the eigenanalysis we know that k0=1. The next condition, using h=2, is

Code: `

and since this must again be an integer composition of the polynomial d_3, we may express this by the matrix multiplication

Code: `

where MD is the matrix built from D3 with column-shift

Code: `

and K3 must now be a vector with integer weights for each column in MD3. If K3 were known, the result MD3 * K3 must equal RS, and from RS we can uniquely determine the unknown coefficients a0 to b2. So, if the analogous K-vectors for all A were known, all numerical coefficients in the A-matrices would also be immediately known. This is the concept of my new solution.

I collected some K-vectors into a matrix (from the known eigensystem-based solutions) to find a recursive generation rule for this matrix, and posted the first nontrivial K-matrix to the mailing list; a member (Richard Mathar) indeed found a computation rule for this first K-matrix (there called "H"-matrices), which I could systematize and generalize to a very simple iteration process for all K- and then subsequently all A-matrices analogously. Here is an excerpt of my concluding mail to the seqfan list (I'm a bit tired of typing, but edited it a bit for this message...):

Code: `

(end of mail to seqfan)

So, for those of our members who did not understand my matrix method with eigendecomposition (or did not trust it ;-) ), here is a completely elementary approach, simple to implement. However - two hypotheses are now involved, which need proof:

a) that the property indeed holds that A-matrices using integer values of the height h contain the denominators d_k as a factor,

b) that the Mathar process indeed provides the correct H-matrices. I checked this up to h=20 and did not find an error.

I append also some A-matrices, computed by the above process. 
Note, they are transposes compared to my mentioned text about "continuous iteration".

Gottfried

========================================================================

Code: `

=======================================================================