04/15/2008, 12:17 AM
Before we move on, let's review the natural method for a bit.
There are actually 4-5 "natural" methods that I can think of for obtaining the coefficients of superlog(). Each has certain constants (indicated by 0 or the like) and variables (indicated by #) that can be set to almost anything.
- (b=E, z_0=0, n_sol=#, n_est=0) -- Peter Walker's original method
- (b=#, z_0=0, n_sol=#, n_est=0) -- An improved method I described in my first paper
- (b=#, z_0=#, n_sol=#, n_est=0) -- An improved method I described in this post
- (b=#, z_0=0, n_sol=#, n_est=#) -- Jay Fox's accelerated method described here, here, and here.
- (b=#, z_0=#, n_sol=#, n_est=#) -- Unknown
Since the global behavior of the coefficients of the super-logarithm is still not very well known, the only expansion point that can be estimated so far seems to be \( z_0 = 0 \). Maybe Jay can correct me on this. So in order to take advantage of the accelerated superlog, we must use the expansion about zero. All input values must be appropriately exponentiated or logarithmized before applying the series.
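To illustrate what that reduction looks like, here is a hypothetical evaluator in Python. It assumes the coefficients \( x_1, x_2, \ldots \) have already been computed (as described below); the reduction thresholds are illustrative, not tuned.
Code:
from math import log, e

def slog(z, coeffs, b=e):
    # Hypothetical sketch: evaluate slog_b(z) = -1 + sum_k x_k z^k by first
    # moving z toward the expansion point 0 with the Abel equation
    # slog(b^z) = slog(z) + 1.  Thresholds are illustrative, not tuned.
    shift = 0
    while z > 1:
        z = log(z, b)   # slog(z) = slog(log_b(z)) + 1
        shift += 1
    while z < -1:
        z = b ** z      # slog(z) = slog(b^z) - 1
        shift -= 1
    return -1 + sum(x * z ** (k + 1) for k, x in enumerate(coeffs)) + shift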
In general, the natural method uses a matrix that represents the Abel functional equation; the solution of the matrix equation is a vector holding the coefficients of the Abel function (the superlog in this case). To summarize what Jay Fox found: instead of solving for all of the coefficients, we estimate coefficients 50..100, say, and only solve for coefficients 1..50. By estimating the later coefficients, the accuracy of the early coefficients is improved. I will call this the natural_superlog() function. Since it depends on an approximation number (called n), this will be another argument to the function.
I am very impressed with Jay Fox's error bound. It would allow us to ask questions like "What is e^^pi to 14 significant figures?", since we would know what matrix size gives that precision. According to his error bound, 14 digits would require a \( 10^7 \times 10^7 \) matrix using my method. With Jay's method, however, there is a variable that does not exist in my method: the number of estimated/accelerated coefficients. Jay's method can reduce the matrix size down to 2000 or even 200, depending on the ratio between solved/natural coefficients and estimated/accelerated coefficients. It seems that the more estimated coefficients there are, the greater the precision. Tuning this number to increase the speed of the calculations will be most valuable in calculating the fixed points of tetration and pentation, as Ivars and GFR have mentioned.
Here is an example matrix for the accelerated natural super-log approach:
Code:
[ # # # # # # # ] [ X_1 ] [ 1 ]
[ # # # # # # # ] [ X_2 ] [ 0 ]
[ # # # # # # # ] [ X_3 ] [ 0 ]
[ # # # # # # # ] [ X_4 ] = [ 0 ]
[ 0 0 0 0 1 0 0 ] [ X_5 ] [ # ]
[ 0 0 0 0 0 1 0 ] [ X_6 ] [ # ]
[ 0 0 0 0 0 0 1 ] [ X_7 ] [ # ]
What Jay's method does is replace the bottom part of the Abel matrix with the identity matrix, and replace the 1-vector with the estimated coefficients. While this speeds up the solving process, it also seems to increase the accuracy and precision of the final solution. This matrix shows that there are two variables that dictate the matrix (aside from the base b): the number of solved coefficients \( n_{sol} \) and the number of estimated coefficients \( n_{est} \). In the example above these numbers would be \( n_{sol} = 4 \) and \( n_{est} = 3 \), giving a \( 7 \times 7 \) matrix.
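To make this concrete, here is a rough Python sketch of how such a system might be assembled and solved. The Abel rows assume the usual term-by-term expansion of \( \mathrm{slog}(b^z) = \mathrm{slog}(z) + 1 \) with \( \mathrm{slog}(z) = -1 + \sum_k x_k z^k \); the function name and numpy setup are mine, just for illustration, and the estimates themselves would come from Jay's asymptotic analysis.
Code:
import numpy as np
from math import log, factorial

def accelerated_slog_coeffs(b, n_sol, n_est, estimates):
    # Build the (n_sol + n_est) x (n_sol + n_est) accelerated Abel matrix.
    # Rows 0..n_sol-1 enforce slog(b^z) = slog(z) + 1 term by term: the z^m
    # coefficient of slog(b^z) - slog(z) is
    #     sum_k x_k * ((k ln b)^m / m! - [k == m]),
    # which must equal 1 for m = 0 and 0 for m > 0.
    n = n_sol + n_est
    lnb = log(b)
    A = np.zeros((n, n))
    rhs = np.zeros(n)
    for m in range(n_sol):
        for k in range(1, n + 1):
            A[m, k - 1] = (k * lnb) ** m / factorial(m) - (1.0 if k == m else 0.0)
    rhs[0] = 1.0
    # Rows n_sol..n-1 are identity rows pinning the last n_est coefficients
    # to their estimated values (Jay's acceleration).
    for j in range(n_est):
        A[n_sol + j, n_sol + j] = 1.0
        rhs[n_sol + j] = estimates[j]
    return np.linalg.solve(A, rhs)   # the vector (x_1, ..., x_n)
Since the bottom block is the identity, one could equally well substitute the estimates into the top rows and solve a smaller \( n_{sol} \times n_{sol} \) system; the full matrix above just makes the structure explicit.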
Interestingly, Jay has talked about solving the matrix given above, then simply appending more estimated coefficients to the series. This doesn't actually increase the accuracy but decreases it, because instead of obeying the Abel functional equation exactly, the result obeys it only to within 0.000001, for example. This method of first solving, THEN appending estimated coefficients is equivalent to the matrix:
Code:
[ # # # # 0 0 0 ] [ X_1 ] [ 1 ]
[ # # # # 0 0 0 ] [ X_2 ] [ 0 ]
[ # # # # 0 0 0 ] [ X_3 ] [ 0 ]
[ # # # # 0 0 0 ] [ X_4 ] = [ 0 ]
[ 0 0 0 0 1 0 0 ] [ X_5 ] [ # ]
[ 0 0 0 0 0 1 0 ] [ X_6 ] [ # ]
[ 0 0 0 0 0 0 1 ] [ X_7 ] [ # ]
which does not increase the accuracy of the solution, but makes it worse! This is because the very nice estimates that we have for the outer coefficients are not being used in the solution at all. So when you are constructing an algorithm for this, make sure that you implement the first matrix, NOT the second matrix. The first matrix will make things better; the second matrix will make them worse. And if you didn't want to use estimated coefficients at all, the matrix would look like this:
Code:
[ # # # # ] [ X_1 ] [ 1 ]
[ # # # # ] [ X_2 ] [ 0 ]
[ # # # # ] [ X_3 ] [ 0 ]
[ # # # # ] [ X_4 ] = [ 0 ]
as in my original paper. The reason it takes so long for these coefficients to converge with my method is the implicit assumption that \( x_5 = 0 \), when in fact \( x_5 \approx 0.01 \) or so.
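In terms of the hypothetical sketch above, my original method is just the special case \( n_{est} = 0 \):
Code:
from math import e

# Continuing the sketch above: the unaccelerated method is the n_est = 0
# special case (no identity rows, no estimates).
x = accelerated_slog_coeffs(e, 4, 0, [])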
This method can be used to increase the accuracy and precision of the coefficients of the natural super-logarithm, but how do those coefficients affect the accuracy and precision of the final answer? I found a paper that addresses this problem exactly, called The Taylor Polynomial Error Formula, which lets you ask what precision is obtained when using only the first n terms of a Taylor series, i.e. a Taylor polynomial.
\(
f(x) - p_n(x) = \frac{1}{n!} \int_{0}^{x} (x - t)^n f^{(n+1)}(t) dt
\)
This formula can be used to derive the series-precision from the coefficient-precision that Jay has found.
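As a quick sanity check of the remainder formula itself (my own illustration, not from the paper), here it is verified numerically for \( f(x) = e^x \):
Code:
import numpy as np
from math import factorial
from scipy.integrate import quad

# Check the integral form of the Taylor remainder for f = exp, x = 1, n = 3.
x, n = 1.0, 3
p_n = sum(x ** k / factorial(k) for k in range(n + 1))         # Taylor polynomial of e^x
remainder, _ = quad(lambda t: (x - t) ** n * np.exp(t), 0, x)  # the integral above
print(np.exp(x) - p_n, remainder / factorial(n))               # both ~ 0.0516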
Applying this formula to superlog will hopefully be the subject of my next post.
Andrew Robbins

