These domains are developed on two separate lines of thought: the regular one and the natural one. I'll start with the regular one.
When considering geometric series, the general rule is:
\( \sum_{k=0}^{n-1} a r^k = \begin{cases}
a n & \text{if } r = 1 \\
a \frac{1 - r^{n}}{1 - r} & \text{otherwise}
\end{cases}
\)
while the limit as n goes to infinity is:
\( \sum_{k=0}^{\infty} a r^k = \begin{cases}
\frac{a}{1 - r} & \text{if } |r| < 1 \\
\infty & \text{otherwise}
\end{cases}
\)
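Both cases of these formulas are easy to check numerically. A minimal Python sketch (the values of \( a \), \( r \), and \( n \) are just for illustration):

```python
# Numerical check of the finite and infinite geometric series formulas.
a, r, n = 3.0, 0.5, 10

# Finite sum: a*(1 - r^n)/(1 - r) when r != 1, and a*n when r == 1.
finite = sum(a * r**k for k in range(n))
print(abs(finite - a * (1 - r**n) / (1 - r)) < 1e-12)   # True
print(sum(a * 1.0**k for k in range(n)) == a * n)        # True

# Infinite sum: for |r| < 1 the partial sums approach a/(1 - r).
partial = sum(a * r**k for k in range(200))
print(abs(partial - a / (1 - r)) < 1e-12)                # True
```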
In Szekeres' and Henryk's discussion of the regular method, the Abel function is defined in terms of the Schroeder function (which I have tried to codify here for future reference) as follows:
\(
\sigma(x) = \lim_{n\rightarrow\infty} \frac{\exp^n(x) - x_0}{\exp'(x_0)^n}
\)
where \( x_0 \) is the fixed point of the exponential. Many series within the regular approach can be defined in terms of the Schroeder function; with inverses, logarithms, and so on, we can get the Abel function, the general iterate, and so forth. The line of thought that prompted this thread was that the Schroeder function is defined as the limiting case of an iterated function. While one could define the Schroeder function in terms of Geisler's coefficients, imposing no limitation of domain, the problem of defining the coefficients rests with Geisler's method, which not many completely understand. I think more people understand the Schroeder function than Geisler's method, so it makes more sense to define things in terms of it. This is where the problem comes in. Geisler's coefficients are finite geometric series. The Schroeder function has coefficients which are also finite geometric series, as can be seen in this post I made regarding them. However, the process of defining the Schroeder function through a limit requires the \( n\rightarrow\infty \) version of the geometric series formula, which means \( |r| < 1 \) instead of \( r \neq 1 \) as in the finite case. So while the coefficients of the Schroeder function may be finite series (\( r \neq 1 \)), the process of finding/defining these coefficients involves a limit to infinity, which is essentially an infinite series (\( |r| < 1 \)). Somewhere in here is my reason for limiting the domain of regular tetration to bases which lie in the convergence region of infinitely iterated exponentials.
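As a concrete illustration of the limit definition (my own sketch, with example values, not anything from Szekeres or Henryk): take \( b = \sqrt{2} \), which lies inside the convergence region \( e^{-e} < b < e^{1/e} \), with lower fixed point \( x_0 = 2 \) and multiplier \( \lambda = \exp_b'(x_0) = \ln(b)\,b^{x_0} = \ln 2 \). The defining Schroeder equation \( \sigma(b^x) = \lambda\,\sigma(x) \) can then be checked numerically:

```python
import math

b = math.sqrt(2.0)            # base inside the convergence region e^(-e) < b < e^(1/e)
x0 = 2.0                      # lower fixed point: b**x0 == x0
lam = math.log(b) * b ** x0   # multiplier exp_b'(x0) = ln(2)

def schroeder(x, n=40):
    """Approximate sigma(x) = lim_{n->inf} (exp_b^n(x) - x0) / lam^n."""
    for _ in range(n):
        x = b ** x            # iterate exp_b
    return (x - x0) / lam ** n

# The functional equation sigma(b^x) = lam * sigma(x), checked at x = 1:
print(abs(schroeder(b ** 1.0) - lam * schroeder(1.0)) < 1e-4)   # True
```

The iterates converge to the fixed point geometrically with ratio \( \lambda \approx 0.693 \), so forty iterations already give the limit to several digits.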
Ok, now for the natural one.
The three major problems I had with natural tetration are proving that the coefficients are solvable, that the coefficients converge, and that the series formed from these coefficients converges. One thing I noticed recently, which I'm still very interested in but have not verified completely, is as follows. Using the notation
\(
\begin{tabular}{rl}
\text{slog}_b(x) & = \sum_{k=0}^{\infty} s_k(b) x^k \\
\text{slog}_b(x)_n & = \sum_{k=0}^{n} s_{nk}(b) x^k
\end{tabular}
\)
I noticed that for all bases for which the coefficients converged, \( |s_1(b)| > |s_k(b)|^{1/k} \) for all \( k > 1 \). I'm still researching this, so don't take my word for it yet... But using that observation, I thought perhaps I could turn it around and say that the bases for which the natural super-logarithm is well defined are exactly those bases which satisfy \( |s_1(b)| > |s_k(b)|^{1/k} \). This would make the proof almost trivial, since all you have to do is divide the super-logarithm by \( |s_1(b)| \), and then the root test of the coefficients would be bounded by a constant.
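To make the observation concrete: with the normalization \( \text{slog}_b(0) = -1 \), the truncated coefficients come from the defining equation \( \text{slog}_b(b^x) = \text{slog}_b(x) + 1 \). Writing \( (b^x)^k = e^{kx\ln b} \) and matching the coefficient of \( x^m \) gives the linear system \( \sum_k s_k\,(k\ln b)^m/m! - s_m = \delta_{m0} \). A sketch of this procedure (my reconstruction under that normalization, solved with NumPy), which computes the truncated coefficients for \( b = e \) and checks the root-test inequality:

```python
import math
import numpy as np

def slog_coeffs(b, n):
    """Solve the n x n truncated system for s_1..s_n from slog_b(b^x) = slog_b(x) + 1.

    Matching the coefficient of x^m (m = 0..n-1) in
        sum_k s_k * b^(k*x) = 1 + sum_k s_k * x^k
    gives: sum_k s_k * (k*ln b)^m / m! - s_m = delta_{m0}.
    """
    L = math.log(b)
    M = np.zeros((n, n))
    for m in range(n):
        for k in range(1, n + 1):
            M[m, k - 1] = (k * L) ** m / math.factorial(m)
        if m >= 1:
            M[m, m - 1] -= 1.0   # the -s_m term on the right-hand side
    rhs = np.zeros(n)
    rhs[0] = 1.0
    return np.linalg.solve(M, rhs)   # s[0] = s_1, s[1] = s_2, ...

n = 14
s = slog_coeffs(math.e, n)
s1 = abs(s[0])
# The m = 0 equation forces sum_k s_k = 1, i.e. slog_e(1) = 0.
print(abs(s.sum() - 1.0) < 1e-8)                               # True
# Root-test observation: |s_1| dominates |s_k|^(1/k) for k > 1.
ok = all(abs(s[k - 1]) ** (1.0 / k) < s1 for k in range(2, n + 1))
print("s_1 =", s[0], "; root test dominated by |s_1|:", ok)
```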
As for the "triangle" above, I have no justification for why it would be a triangle, but my tests for finding where the root formula holds true have indicated that those are the bases for which it is most obvious that it holds. So the question still remains: what are the bases for which the natural tetration/super-logarithm is well-defined?
Andrew Robbins
PS. I'm sorry for the lack of rigor in the previous posts; these ideas are somewhat vague, but I have tried to provide as much rigor as I can, considering these are open problems...

