Well, the problem of non-uniqueness in the infinite case: yes, we settled this already.
The emphasis of my message was the similarity-scaling, which implements the fixpoint-shift in matrix terms: this is correct only at infinite size.
If we have, for the Bell matrix Bb for base b,
Bb = P^(-1)~ * X * P~
or
P~ * Bb * P^(-1)~ = X (so that X can be used via similarity arguments),
then X is triangular only in the infinite case, since the rows of P~ must be multiplied by the full (infinite) columns of Bb to give the correct results.
For the truncated case the approximations get worse towards the lower rows of P~; in the last row we even have only a single term, which gives nearly no approximation at all. The quality of the approximation then depends on the rate of decrease of the terms in the columns of Bb. That decrease is always present, but it may become sufficient only at "late" indexes - too late to give enough approximation in the truncated case. (With b=sqrt(2), however, a truncation of size 32x32 already gives approximations to 6, 8 or even more digits.)
This is not so for the right part Bb * P^(-1)~: there each entry of the result is a finite expression, and we can provide accuracy to arbitrary degree - independent of the chosen truncation size.
So, in the finite case, P~ * Bb * P^(-1)~ = X does not give a triangular X, although X is still exactly "similar" to the truncated Bb, in the sense that its eigenvalues, eigenvectors etc. obey all known rules for a similarity transform.
Again - I didn't want to reconsider the problem of non-uniqueness here.
bo198214 Wrote: So the conjecture is that the eigenvalues of the truncated Carleman/Bell matrix of \( b^x \) converge to the set of powers of \( \ln(a) \), where \( a \) is the lower (the attracting) fixed point of \( b \)?

Yes, for the cases of b in the range of convergence (maybe with some exceptions: b=1 or b=exp(1) or the like).
For b outside this range I found that one part of the eigenvalues of the truncated matrices always converges to those logarithms, while another part varies wildly. This may be explained as follows: a truncated matrix cannot have an infinite eigenvalue, so the set of empirical eigenvalues must somehow be the set which describes "best" the general behaviour of the matrix in operations at finite size.
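For the in-range case the conjecture is easy to check numerically. A sketch (again in my own Carleman-style convention; b=sqrt(2) is chosen so the lower fixed point is a=2 and ln(a)=ln 2):

```python
import numpy as np
from math import factorial, log, sqrt

n = 32               # truncation size (32x32, as mentioned above)
b = sqrt(2.0)        # base inside the convergence range e^-e < b < e^(1/e)
lnb = log(b)

# Lower (attracting) fixed point a of x -> b^x, by simple iteration;
# for b = sqrt(2) this converges to a = 2.
a = 1.0
for _ in range(200):
    a = b ** a
print("fixed point a ~", a)

# Truncated Carleman matrix of b^x: row i = coefficients of exp(i*x*ln b).
B = np.array([[(i * lnb) ** j / factorial(j) for j in range(n)]
              for i in range(n)])

# Empirical eigenvalues, largest first, compared with the powers of ln(a).
eig = np.linalg.eigvals(B)
eig = eig[np.argsort(-abs(eig))]
for k in range(6):
    print(f"eig[{k}] = {eig[k].real:12.9f}   ln(a)^{k} = {log(a) ** k:12.9f}")
```

The leading empirical eigenvalues agree with 1, ln 2, (ln 2)^2, ... to several digits already at this size.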
Here is another example (where I used s instead of b); look at the pages "eigenvalues at critical s" and "eigenvalues for s=e^e^-1". I have some more examples for other bases, showing the variety of sets of occurring eigenvalues with increasing matrix size, and could provide them here if you like.
(P.s.: where I use the simple P in the first part of this msg, this is only a sketch. In fact we need the appropriate t'th power of P, where t^(1/t) = b.)
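For integer t this is easy to see in a small check: since shifts compose additively, the t'th matrix power of the binomial matrix P (the Carleman matrix of x -> x+1, in the convention used in my sketches above) equals the Carleman matrix of x -> x+t. For fractional t one would need a matrix logarithm of P instead of an integer power.

```python
import numpy as np
from math import comb

n = 8
t = 2  # for b = sqrt(2): t^(1/t) = b gives t = 2

# P = Carleman matrix of x -> x+1: row i holds the coefficients of (x+1)^i,
# i.e. the lower-triangular Pascal matrix of binomials.
P = np.array([[comb(i, j) for j in range(n)] for i in range(n)], dtype=float)

# Carleman matrix of x -> x+t built directly: row i = coefficients of (x+t)^i.
Pt = np.array([[comb(i, j) * t ** (i - j) for j in range(n)]
               for i in range(n)], dtype=float)

# The integer matrix power P^t realizes the shift by t.
print(np.allclose(np.linalg.matrix_power(P, t), Pt))  # True
```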
Gottfried Helms, Kassel

