10/13/2007, 07:13 AM
I need to take stock of where I'm at, because what seemed complicated before is even more complicated, now that I have a better handle on this monster.
First of all, I want to be clear that my conjecture on the slog being like the sum of two complex conjugate parts isn't meant to imply that either part on its own is an Abel function, or whatever the terminology would be. It's easy to see why when we consider a point where either function on its own takes a complex value, such as 0.5+0.2i. The conjugate is 0.5-0.2i, and the average of these two is the real number 0.5. Now, exp(0.5+0.2i) = 1.616+0.328i, and averaged with its conjugate we get 1.616 (keeping a few sig-figs). However, exp(0.5) is 1.649, so clearly it's not just a matter of averaging two otherwise complex-valued continuous tetration solutions. No, a more subtle blending is occurring here.
So yes, I see both primary fixed points being involved, yielding singularities that are very log-like. In fact, to see this in action, consider the following functions of complex z, with \( a_0 \) and \( \overline{a_0} \) as the primary fixed points for base e:
\(
\begin{eqnarray}
F_0(z) & = & \log_{a_0}\left(z-a_0\right)+\log_{\overline{a_0}}\left(z-\overline{a_0}\right) \\
F_1(z) & = & \log_{a_0}\left(e^z-a_0\right)+\log_{\overline{a_0}}\left(e^z-\overline{a_0}\right) \\
F_2(z) & = & \log_{a_0}\left(e^{e^z}-a_0\right)+\log_{\overline{a_0}}\left(e^{e^z}-\overline{a_0}\right) \\
\end{eqnarray}
\)
\( F_0 \) has singularities at the two primary fixed points. Interestingly, \( F_1 \) has singularities at the primary fixed points, as well as at \( 2\pi k i \) offsets of the fixed points. Going a step further, \( F_2 \) has singularities at the natural logarithms of all these points, in all branches (i.e., at \( 2\pi k i \) offsets).
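To make the singularity locations concrete, here is a small numerical sketch. It finds the upper primary fixed point \( a_0 \) by iterating the principal logarithm (which has \( a_0 \) as an attracting fixed point), then verifies that \( e^z \) hits \( a_0 \) at every \( 2\pi k i \) offset of \( a_0 \) — exactly the points where \( F_1 \) blows up:

```python
import cmath

# The primary fixed point a0 of exp (e^{a0} = a0) is an attracting
# fixed point of the principal log, so plain iteration converges to it.
z = 1j
for _ in range(200):
    z = cmath.log(z)
a0 = z  # ≈ 0.318132 + 1.337236j
print(abs(cmath.exp(a0) - a0))  # ≈ 0, confirming e^{a0} = a0

# F_1 is singular wherever e^z equals a fixed point, i.e. at
# a0 + 2*pi*k*i for every integer k -- the "2 pi k i offsets" above.
for k in (-1, 0, 1):
    w = a0 + 2j * cmath.pi * k
    print(abs(cmath.exp(w) - a0))  # ≈ 0 for every k
```

The same reasoning applied once more (where does \( e^{e^z} \) hit a fixed point?) gives the logarithms of all these offset points, in all branches, matching the description of \( F_2 \).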
In fact, you could continue with this process, and you should easily be able to verify for yourself that each of these singularities is a singularity in the slog for base e, if you drill deep enough into the logarithmic branches.
But in the exponential branches, these singularities cannot be "seen". Unlike the plain logarithm, in which each branch looks the same as another, the various branches of the slog have a fractal nature to them, so that we cannot easily define how the slog "looks" at any point without also making explicit which branch we are on. The origin is a fairly special case, which is difficult to describe without pictures.
The slog is not a function, nor is tetration. This is important to understand. The logarithm is not a function, in the sense that analytically continuing it from any point by integrating its derivative yields multiple values for any point. Yet exponentiation is in fact a function, so long as you're clear on how you define the base.
For example, \( f(z)=z^2 \) is a function, but the inverse \( f^{-1}(z) \) is not a function. Exponentiation is like \( f(z) \), while the logarithm is like \( f^{-1}(z) \), with multiple values possible.
On the other hand, the slog and sexp are like trying to find functions y=g(x) and x=h(y) of the relation \( x^2+y^2=4 \). Neither g nor h is a function, though we can limit ourselves to certain domains and find functions for those "parts".
From this frame of reference, x and y are complex numbers, giving four degrees of freedom. That is, we have a 4-D space containing a 2-D fractal surface which represents all points on this slog/sexp relation. We might try to find x=slog(y), or we might try to find y=sexp(x). Either way, we're not really finding "functions", not unless we limit our domains very carefully.
This will also help us to understand why the slog doesn't "look" the same at a given point, depending on which branch we're in. Unlike the logarithm, which is like a simple "screw" in 4-D space, the slog is much more complex. However, it's all one 2-D manifold, as far as I can tell, with continuous derivatives at all points in all branches (except at the singularities), so the entire structure's information can be encoded by the power series developed at the origin in the "backbone".
I've been able to extract a fair amount of precision from the matrix method, but at this point the numerical precision is of less concern to me than uncovering a more fundamental method of deriving the solution. The matrix method yields the correct singularities with the correct structure, but it's still not at all obvious to me that this should ever have worked. It's still "magic" to me, still some sort of voodoo. I'd rather derive it from basic principles (basic being a relative term here).
I suppose none of this is making sense, since I have yet to post source code or decent pictures, other than the few already shared. Those few should be sufficient for those who study them, but more might be needed for those with less time to devote to forming complex mental abstractions.
~ Jay Daniel Fox

