03/25/2009, 07:06 PM
Ansus Wrote:What is \( a \) in this formula?
\( a \) is the fixed point: \( b^a=a \).
regular slog
03/25/2009, 11:16 PM
Ansus Wrote:Is this correct:
I am only familiar with Maple and Sage, so I cannot help you with that. However, in Maple the formula \( \frac{W(-\ln(b))}{-\ln(b)} \) works.
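For readers without Maple: the same value can be computed without a Lambert W implementation. A minimal Python sketch (the function name is my own), using the fact that for bases \( 1<b<e^{1/e} \) the iteration \( x \mapsto b^x \) converges to the lower fixed point, which is exactly the \( W(-\ln b)/(-\ln b) \) value above:

```python
import math

def fixed_point(b, x0=0.0, tol=1e-13, max_iter=10000):
    """Lower real fixed point a of x -> b**x (so b**a == a),
    for bases 1 < b < e**(1/e), found by direct iteration.
    Numerically this equals W(-ln b) / (-ln b)."""
    x = x0
    for _ in range(max_iter):
        x_next = b ** x
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge")

# For base sqrt(2) the fixed point is exactly 2.
a = fixed_point(math.sqrt(2.0))
```

Direct iteration converges here because the derivative at the lower fixed point, \( a\ln b \), has absolute value below 1 for these bases; for bases outside \( (1, e^{1/e}) \) one would need Newton's method or a proper branch of W instead.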
03/26/2009, 01:38 PM
Ansus Wrote:
Maybe you have to specify the proper branch. (But as I said, I cannot test this because I don't have Mathematica available.)
03/26/2009, 04:55 PM
Ansus Wrote:Anyway, with any value of a I cannot get anything close to what is expected.
What shall I say? It worked for me. For base \( \sqrt{2} \) the fixed point is \( 2 \); that's why this base is so preferred, you don't need to compute the fixed point separately.
03/27/2009, 08:03 AM
Ansus Wrote:Great! Now it works, but only for a limited range of bases. In particular, it works for the base \( \sqrt{2} \). I used this formula:
04/02/2009, 01:20 AM
bo198214 Wrote:The Abel function has also a singularity at 0.
Just realized: this holds only if the fixed point is 0.

bo198214 Wrote:\( FS=cS \)
This should be \( SF=cS \), which means you can't simplify the matrix like you did. The formula you give is a matrix representation of \( \sigma(f(x)) = \sigma(cx) \) if those are Bell matrices, or of \( f(\sigma(x)) = c\sigma(x) \) if those are Carleman matrices.

Andrew Robbins
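The Bell/Carleman distinction can be checked numerically. A small sketch (numpy assumed; the function names are mine, not Andrew's): for series with zero constant term, truncated Carleman matrices compose in the same order as the functions, \( C[f\circ g] = C[f]\,C[g] \), while the transposed (Bell) matrices compose in the opposite order, \( B[f\circ g] = B[g]\,B[f] \) — which is why the same matrix equation encodes one composition order in one convention and the reversed order in the other.

```python
import numpy as np

N = 6  # truncation degree

def carleman(coeffs, n=N):
    # Truncated Carleman matrix of a series f with f(0) = 0:
    # row i holds the coefficients of f(x)**i up to degree n.
    f = np.zeros(n + 1)
    f[:len(coeffs)] = coeffs
    M = np.zeros((n + 1, n + 1))
    p = np.zeros(n + 1)
    p[0] = 1.0                         # f**0 = 1
    for i in range(n + 1):
        M[i] = p
        p = np.convolve(p, f)[:n + 1]  # multiply by f, truncate to degree n
    return M

def bell(coeffs, n=N):
    # Bell matrix = transpose of the Carleman matrix.
    return carleman(coeffs, n).T

F = carleman([0, 1, 1])   # f(x) = x + x^2
G = carleman([0, 2])      # g(x) = 2x
FG = carleman([0, 2, 4])  # (f o g)(x) = 2x + 4x^2
```

With these definitions `np.allclose(F @ G, FG)` holds, while for the Bell matrices the order reverses: `bell([0, 2]) @ bell([0, 1, 1])` matches `bell([0, 2, 4])`. The truncation loses nothing here because both series have zero constant term, so degrees above \( n \) never feed back into degrees \( \le n \).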
04/02/2009, 02:31 PM
andydude Wrote:otherwise at the fixed point.
The regular iteration theory always assumes the fixed point at 0. If not, one just considers the function \( f(x+a)-a \), where \( a \) is the fixed point.

bo198214 Wrote:The Abel function has also a singularity at 0.
andydude Wrote:Just realized, this is only if the fixed point is 0.
Actually that's also wrong; however, it is only an intermediate error in my derivation.

bo198214 Wrote:\( FS=cS \)
andydude Wrote:This should be \( SF=cS \)
Let's show the correct equations. We have \( \sigma(f(x))=c\sigma(x) \) or, with \( \mu_c(x)=cx \),
\( \sigma\circ f = \mu_c \circ \sigma. \)
If we take the Bell matrices, this reads \( FS=SM \), where \( M \) is the Bell matrix of \( \mu_c \). This is the diagonal matrix
\( M=\begin{pmatrix} c & 0 & \dots & 0\\ 0 & c^2 & \dots & 0\\ \vdots & & \ddots & \vdots\\ 0 & 0 & \dots & c^n \end{pmatrix}. \)
I think Gottfried calls this the Vandermonde matrix. Multiplying by \( M \) from the right multiplies the \( k \)-th column by \( c^k \). If we truncate \( S \) to its first column \( \vec{\sigma} \), we hence get
\( F\vec{\sigma} = c\vec{\sigma}, \)
which can be rearranged to
\( (F-cI)\vec{\sigma}=0, \)
the equation I used for my further derivations.

07/31/2009, 08:55 AM
(10/07/2007, 10:30 PM)bo198214 Wrote: Now there is the so-called principal Schroeder function \( \sigma_f \) of a function \( f \) with fixed point 0 and slope \( s:=f'(0) \), \( 0<s<1 \), given by:
Sometimes a thing needs a whole life to be recognized... In the matrix method I dealt with the eigen-decomposition of the (triangular) dxp_t() Bell matrix U_t, satisfying the relation
\( \hspace{24} U_t = W * D * W^{-1} \)
While the recursion to compute W and W^-1 efficiently is easy and works well, I did not have a deeper idea about the structure of the columns in W. Now I found that it agrees exactly with the above formula:
\( \hspace{24} W = \lim_{h\to\infty} {U_t}^h * \operatorname{diag}({U_t}^h)^{-1} \)
We can even write this pointwise: if we refer to the second column of U_t^h as F°h, to the second column of W as S, and set s = F[1], so that F°h[1] = F[1]^h = s^h, then we have
\( \hspace{24} S = \lim_{h\to\infty} \frac {F^{\circ h}}{s^h} \)
Something *very* stupid ... <sigh> But, well, now also this detail is explained for me.

<Hmmm, I don't know why the forum software merges my two replies (to two previous posts of Henryk) into one. So here is the second post.>

(04/02/2009, 02:31 PM)bo198214 Wrote: This is the diagonal matrix:
Not exactly. I call the Vandermonde *matrix* VZ (and ZV = VZ~) the *collection* of consecutive Vandermonde vectors V(x):
VZ = [V(0), V(1), V(2), V(3), ...] \\ Vandermonde matrix
Your M is just c*dV(c) in my notation: the Vandermonde vector V(c) used as a diagonal matrix (and since the first entry is not c^0, I noted the additional factor c).

Gottfried

Gottfried Helms, Kassel
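Gottfried's limit \( S = \lim_{h\to\infty} F^{\circ h}/s^h \) is the matrix form of the classical pointwise construction of the principal Schroeder function, which is easy to check numerically. A minimal sketch for base \( \sqrt 2 \) (function names are my own): conjugate \( f(x)=\sqrt2^{\,x} \) to move the fixed point \( a=2 \) to 0, then normalize the iterates by the multiplier \( s=f'(a)=\ln 2 \).

```python
import math

b = math.sqrt(2.0)
a = 2.0                    # fixed point: b**a == a
s = math.log(b) * b ** a   # multiplier s = f'(a) = ln(2), 0 < s < 1

def g(x):
    # f conjugated to put the fixed point at 0: g(x) = f(x + a) - a
    return b ** (x + a) - a

def schroeder(x, h=200):
    # principal Schroeder function as the limit g^{o h}(x) / s^h
    for _ in range(h):
        x = g(x)
    return x / s ** h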