11/05/2007, 12:12 PM
andydude Wrote:Ok, I understand now. I just did the same thing with Aldrovandi's diagonalization method. Aldrovandi and others have shown that when you diagonalize the Koch/Bell/Carleman matrix of a function, \( M[f] = M[\sigma_f^{-1}] \cdot D \cdot M[\sigma_f] \), the diagonal matrix contains the eigenvalues, i.e. the powers of \( f_1 = f'(0) \), and the diagonalizing matrix is the inverse of the Koch/Bell/Carleman matrix of the Schroeder function. So this got me wondering whether the eigensystem decomposition (or matrix diagonalization) produces a regular Schroeder function.
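For a concrete illustration of that decomposition (a toy example of mine, not from Aldrovandi): take \( f(x) = 2x + x^2 \), which has fixed point 0 with \( f_1 = 2 \) and the closed-form Schroeder function \( \sigma(x) = \ln(1+x) \). A short script can verify \( M[f] = M[\sigma^{-1}] \cdot D \cdot M[\sigma] \) on truncated matrices; the check is exact here because \( f(0)=0 \) makes all three matrices triangular:

```python
from fractions import Fraction as F
from math import factorial

N = 6  # truncate all power series at degree N

def pmul(a, b):
    """Product of two polynomials (coefficient lists), truncated at degree N."""
    c = [F(0)] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j <= N:
                    c[i + j] += ai * bj
    return c

def carleman(coeffs):
    """(N+1)x(N+1) Carleman matrix: row j holds the coefficients of f(x)^j."""
    rows, p = [], [F(1)] + [F(0)] * N   # start with f^0 = 1
    for _ in range(N + 1):
        rows.append(p[:])
        p = pmul(p, coeffs)
    return rows

def matmul(A, B):
    return [[sum(A[i][m] * B[m][k] for m in range(N + 1))
             for k in range(N + 1)] for i in range(N + 1)]

# Toy map: f(x) = 2x + x^2, fixed point 0, f_1 = f'(0) = 2.
# Its Schroeder function is sigma(x) = ln(1+x), because
# ln(1 + f(x)) = ln((1+x)^2) = 2 ln(1+x).
f_c  = [F(0), F(2), F(1)] + [F(0)] * (N - 2)
sig  = [F(0)] + [F((-1) ** (k + 1), k) for k in range(1, N + 1)]   # ln(1+x)
sigi = [F(0)] + [F(1, factorial(k)) for k in range(1, N + 1)]      # e^x - 1

Mf, Ms, Msi = carleman(f_c), carleman(sig), carleman(sigi)
D = [[F(2) ** j if j == k else F(0) for k in range(N + 1)]
     for j in range(N + 1)]

# M[f] = M[sigma^{-1}] . D . M[sigma], exactly on the truncated block
assert matmul(matmul(Msi, D), Ms) == Mf
```

With the row convention \( M[f]_{j,k} = [x^k]\, f(x)^j \) used here, one has \( M[f \circ g] = M[f] \cdot M[g] \), which is exactly what turns \( f = \sigma^{-1} \circ (f_1 \cdot) \circ \sigma \) into the matrix decomposition.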
Yes, this is Gottfried's method. I already posted somewhere that in the case of hyperbolic iteration (a power series developed at a fixed point), Gottfried's method gives the formal power series iteration, which is the regular iteration.
However, this method is also applicable to developments at non-fixed points (in that case \( D \) no longer consists of the powers of \( f_1 \), but it is still a diagonal matrix). I now realize that this method generally depends on the development point. For example, consider \( f(x)=\sqrt{2}^x \) with the fixed points 2 and 4. If we diagonalize at development point 2, we get the regular iteration at 2. If we then move the development point continuously to 4 (whose regular iteration differs from the one at 2), the iterates must have changed along the way ...
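A numeric sketch of those two regular iterations (my own construction via the Koenigs limit, not the matrix method): build the regular half-iterate of \( f(x)=\sqrt{2}^x \) once at the attracting fixed point 2 and once at the repelling fixed point 4, where one iterates \( f^{-1}(y) = 2\log_2 y \), which is attracting at 4. The two constructions differ, but reportedly only by an amount far below double precision, so the check below only confirms that each one is a genuine half-iterate, i.e. \( h(h(x)) \approx f(x) \):

```python
import math

LOG2 = math.log(2.0)

def f(x):
    return 2.0 ** (x / 2.0)          # f(x) = sqrt(2)^x

def finv(y):
    return 2.0 * math.log(y, 2.0)    # f^{-1}(y) = 2 log2(y)

def half_iterate_at_2(x, n=50):
    """Regular half-iterate of f built at the attracting fixed point 2,
    multiplier lam = f'(2) = ln 2, via the Koenigs limit formulas."""
    lam = LOG2
    y = x
    for _ in range(n):                # sigma(x) ~ (f^n(x) - 2) / lam^n
        y = f(y)
    s = (y - 2.0) / lam ** n
    # sigma^{-1}(u) ~ f^{-n}(2 + lam^n u), here with u = sqrt(lam)*sigma(x)
    z = 2.0 + lam ** n * (math.sqrt(lam) * s)
    for _ in range(n):
        z = finv(z)
    return z

def half_iterate_at_4(x, n=50):
    """Regular half-iterate of f built at the repelling fixed point 4,
    multiplier mu = f'(4) = 2 ln 2; iterate finv, which is attracting at 4."""
    mu = 2.0 * LOG2
    y = x
    for _ in range(n):                # sigma(x) ~ (finv^n(x) - 4) * mu^n
        y = finv(y)
    s = (y - 4.0) * mu ** n
    # sigma^{-1}(u) ~ f^n(4 + u / mu^n), here with u = sqrt(mu)*sigma(x)
    z = 4.0 + (math.sqrt(mu) * s) / mu ** n
    for _ in range(n):
        z = f(z)
    return z

# each construction satisfies the half-iterate property h(h(x)) = f(x)
for half in (half_iterate_at_2, half_iterate_at_4):
    assert abs(half(half(3.0)) - f(3.0)) < 1e-5
```

The iteration depth n balances the truncation error of the Koenigs limit against floating-point noise amplified by the backward iterations; n around 50 is roughly the sweet spot in double precision.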
Quote:Oh, see: The regular Abel function is only determined up to an additive constant and the regular Schroeder function is only determined up to a multiplicative constant. In our case of slog we simply fix one Abel function by the condition \( \text{slog}(1)=0 \).
It does, but of course eigenvectors are only unique up to scaling, so I suppose you could think of it as a question of convention rather than uniqueness.
Quote:While I was doing this I noticed something very interesting. We know the relationship between the Abel and Schroeder function is \( \sigma_f(x) = f_1^{\alpha_f(x)} \), which means the inverse relationship is:
\(
\begin{tabular}{rl}
\sigma_f(x) & = (f_1)^{\alpha_f(x)} \\
\sigma_f(\alpha_f^{-1}(x)) & = (f_1)^{x} \\
\alpha_f^{-1}(x) & = \sigma_f^{-1}\left((f_1)^{x}\right) \\
\end{tabular}
\)
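As a sanity check of that last line (a hypothetical closed-form example of mine, not from the post): for \( f(x)=2x+x^2 \) with fixed point 0 and \( f_1 = 2 \), the Schroeder function is \( \sigma(x)=\ln(1+x) \) and the Abel function is \( \alpha(x)=\log_2\ln(1+x) \), so \( \alpha^{-1}(x) = e^{2^x}-1 = \sigma^{-1}\left(2^x\right) \):

```python
import math

# Hypothetical worked example: f(x) = 2x + x^2, fixed point 0, f_1 = 2.
# Schroeder: sigma(x) = ln(1+x), since ln(1 + f(x)) = 2 ln(1+x)
# Abel:      alpha(x) = log2(ln(1+x))   (valid for x > 0)
# hence      alpha^{-1}(x) = exp(2^x) - 1 = sigma^{-1}(2^x)

def f(x):         return 2.0 * x + x * x
def sigma_inv(y): return math.exp(y) - 1.0
def alpha(x):     return math.log(math.log(1.0 + x), 2)
def alpha_inv(x): return math.exp(2.0 ** x) - 1.0

# alpha really is an Abel function of f: alpha(f(x)) = alpha(x) + 1
for x in (0.3, 0.5, 1.7):
    assert abs(alpha(f(x)) - (alpha(x) + 1.0)) < 1e-12

# the inverse relationship derived above
for x in (-1.0, 0.0, 0.5, 1.3):
    assert abs(alpha_inv(x) - sigma_inv(2.0 ** x)) < 1e-12
```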
and replacing f with \( DE_h(x) = h^x-1 \), we get
\(
\begin{tabular}{rl}
\alpha_{DE}^{-1}(x)
& = \sigma_{DE}^{-1}\left(\ln(h)^{x}\right) \\
& = \ln(h)^x + (\ln(h)^x)^2 \frac{\ln(h)}{2(\ln(h)-1)} + (\ln(h)^x)^3 \frac{\ln(h)^2(\ln(h)+2)}{6(\ln(h)-1)^2(\ln(h)+1)} + \cdots \\
& = e^{x\ln(\ln(h))} + e^{2x\ln(\ln(h))} \frac{\ln(h)}{2(\ln(h)-1)} + e^{3x\ln(\ln(h))} \frac{\ln(h)^2(\ln(h)+2)}{6(\ln(h)-1)^2(\ln(h)+1)} + \cdots
\end{tabular}
\)
because \( DE'(0) = \ln(h) \), and because the matrix P represents the inverse Schroeder function. What I find interesting about this is that it is almost a Fourier expansion of the exponential of iteration of DE, and that it is almost easier to compute than the Schroeder function, since you don't even need to invert P!
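Those coefficients can also be reproduced numerically: with \( c=\ln(h) \), the inverse Schroeder function \( g=\sigma_{DE}^{-1} \) is fixed by \( g(cz) = h^{g(z)}-1 = e^{c\,g(z)}-1 \), and matching powers of \( z \) determines each coefficient by a short recurrence (my own sketch of that recurrence, independent of the matrix-P computation in the post):

```python
import math

def inverse_schroeder_coeffs(c, N):
    """Coefficients a[1..N] of g(z) = sigma^{-1}(z) for DE(x) = h^x - 1,
    c = ln h, fixed by the regular-iteration equation g(c z) = exp(c g(z)) - 1."""
    a = [0.0] * (N + 1)
    a[1] = 1.0
    for n in range(2, N + 1):
        # coefficient of z^n in exp(c*g(z)), computed with a[n] still 0;
        # exp of a power series P via the recurrence e_m = (1/m) sum_k k p_k e_{m-k}
        p = [c * a[k] for k in range(n + 1)]
        e = [0.0] * (n + 1)
        e[0] = 1.0
        for m in range(1, n + 1):
            e[m] = sum(k * p[k] * e[m - k] for k in range(1, m + 1)) / m
        # matching z^n in g(c z) = exp(c g(z)) - 1 gives a[n] c^n = c a[n] + e[n]
        a[n] = e[n] / (c ** n - c)
    return a

c = math.log(3.0)                  # e.g. h = 3
a = inverse_schroeder_coeffs(c, 3)
assert abs(a[2] - c / (2 * (c - 1))) < 1e-12
assert abs(a[3] - c ** 2 * (c + 2) / (6 * (c - 1) ** 2 * (c + 1))) < 1e-12
```

The two assertions reproduce the \( z^2 \) and \( z^3 \) coefficients in the expansion above.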
Uff, can you just say what \( h \) and \( DE \) are?
