It seems we have a basic misunderstanding here, though I don't know where yet. OK, let me nail down the facts:
We have something called the matrix operator method: it truncates \( B_b \) (the Carleman matrix of \( b^x \)) to size \( n \),
then decomposes it uniquely via eigenvalues, \( {B_b}_{|n}=W_{|n} D_{|n} W_{|n}^{-1} \), and defines \( {B_b}^t = \lim_{n\to\infty} W_{|n} {D_{|n}}^t W_{|n}^{-1} \). We then get the coefficients of \( {\exp_b}^{\circ t} \) from the first column of \( {B_b}^t \).
I want your acknowledgement on this.
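As I understand it, the recipe can be sketched numerically like this (a minimal sketch; the base \( b=\sqrt 2 \) and the truncation size are arbitrary choices of mine, and I use the convention that column \( k \) of the Carleman matrix holds the coefficients of \( b^{kx} \), so indexing may differ from yours by a transpose):

```python
import numpy as np
from math import factorial, log

def carleman_exp(b, n):
    """n x n truncation of the Carleman matrix of f(x) = b^x:
    entry [j, k] = coefficient of x^j in f(x)^k = b^(k x),
    i.e. (k ln b)^j / j!."""
    lb = log(b)
    return np.array([[(k * lb) ** j / factorial(j) for k in range(n)]
                     for j in range(n)])

def matrix_power(B, t):
    """B^t via the eigendecomposition B = W D W^{-1}."""
    d, W = np.linalg.eig(B)
    return W @ np.diag(d.astype(complex) ** t) @ np.linalg.inv(W)

b, n = 2 ** 0.5, 8            # base sqrt(2), truncation size 8 (my choices)
B = carleman_exp(b, n)
H = matrix_power(B, 0.5)      # "half iterate" of the truncated matrix

# sanity checks: t = 1 reproduces B, and the half power squares back to B
assert np.allclose(matrix_power(B, 1).real, B, atol=1e-6)
assert np.allclose((H @ H).real, B, atol=1e-6)

# coefficients of the (truncated) half-iterate of exp_b sit in the
# column for k = 1 (column 0 corresponds to f^0 = 1)
half_coeffs = H[:, 1].real
```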
If we set aside the finite truncation, the matrices are no longer of much help, as they are simply another way of expressing the Schroeder method:
\( {\exp_b}^{\circ t} = \sigma^{-1}\circ \mu_{c^t} \circ \sigma \)
where \( \sigma \) is the Schroeder function (corresponding to the matrix W), which merely must satisfy \( \exp_b = \sigma^{-1}\circ \mu_c \circ \sigma \) (our Schroeder equation in disguise, \( \sigma(\exp_b(x))=c\sigma(x) \)), and which is usually applied to a function with a fixed point at 0 and \( f'(0)=c \).
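To make the conjugation formula concrete with a case where \( \sigma \) is known in closed form (a toy Möbius example of mine, not \( \exp_b \) itself): take \( f(x)=x/(2-x) \), which has fixed point 0 with multiplier \( c=f'(0)=1/2 \) and Schroeder function \( \sigma(x)=x/(1-x) \):

```python
# Toy check of  f^{o t} = sigma^{-1} o mu_{c^t} o sigma, with
#   f(x) = x/(2-x),  fixed point 0,  multiplier c = f'(0) = 1/2,
#   sigma(x) = x/(1-x)  satisfying  sigma(f(x)) = c * sigma(x).

def f(x):         return x / (2 - x)
def sigma(x):     return x / (1 - x)
def sigma_inv(y): return y / (1 + y)

def f_iter(t, x, c=0.5):
    """t-th iterate of f via the Schroeder conjugation."""
    return sigma_inv(c ** t * sigma(x))

x = 0.3
assert abs(sigma(f(x)) - 0.5 * sigma(x)) < 1e-12        # Schroeder equation
assert abs(f_iter(1, x) - f(x)) < 1e-12                 # t = 1 recovers f
assert abs(f_iter(0.5, f_iter(0.5, x)) - f(x)) < 1e-12  # half iterates compose to f
```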
The problem with this, as with the directly related Abel equation, is non-uniqueness. One can force uniqueness by considering the (analytic) development at a fixed point; this coincides with regular iteration.
Outside a fixed point we can arbitrarily deform solutions (by those sine modifications) to obtain new solutions.
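To spell out one such sine modification: if \( \alpha \) solves the Abel equation \( \alpha(f(x))=\alpha(x)+1 \), then for any small \( \epsilon \) the function \( \beta(x)=\alpha(x)+\epsilon\sin(2\pi\alpha(x)) \) satisfies \( \beta(f(x))=\alpha(x)+1+\epsilon\sin(2\pi\alpha(x)+2\pi)=\beta(x)+1 \), so \( \beta \) is again an Abel function, and \( \beta^{-1}(\beta(x)+t) \) yields a different fractional iterate.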
And there is the additional problem that regular solutions at different fixed points usually don't coincide.
These problems come with the infinite number of coefficients and are carried over 1-1 if we express the problem with matrices instead of analytic functions. So I cannot see what we can get from matrices that we do not already know about the analytic functions. But perhaps you have to explain in more detail what you mean by
Quote:based on an analytical description of each entry, then I actually work with finite truncations of an assumed infinite matrix, which may provide multiple solutions for the same composed theoretical result matrix.
And we have to think about a name for this additional method.
