04/21/2023, 04:26 AM
Hey, everyone!
So I'd like to publish this code, but first I'd like to draw out what it does differently than Sheldon's fatou.gp.
First of all, we are going to rely on Sheldon's code for the construction of a single sequence. We will call this sequence:
\[
a_n = \frac{\text{slog}^{(n)}(1)}{n!}
\]
Where Kneser's super logarithm, expanded about \(z = 1\), can be written as:
\[
\text{slog}(1+z) = \sum_{n=1}^\infty a_n z^n
\]
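As a quick sanity check on the normalization (assuming the standard convention \(\text{sexp}(0) = 1\), i.e. \(\text{slog}(1) = 0\), which is why the sum can start at \(n = 1\)), the functional equation gives:
\[
\text{slog}(e) = \text{slog}(\exp(1)) = \text{slog}(1) + 1 = 1
\]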
Sheldon's code is pre-run for this code; its output is the attached 80 kB file. This is done to grab the sequence to good enough accuracy: 200 terms and about 50-60 digits, which is plenty for our purposes. The series converges for \(|z| < |L|\), but for our purposes we only need it to converge for \(|z| < |\Im(L)|\); for generating the sequence \(a_n\), Sheldon's code is unbelievably accurate.
The reason I wrote this code, and didn't just use the masterpiece that is fatou.gp, is pretty straightforward: Sheldon's fatou.gp does not accept polynomial arguments. It is purely numerical in its arguments. In Pari-GP, we can write something like
Code:
sin(1+z)

And out spits the Taylor series of \(\sin\) about \(z = 1\):
Code:
%1 = 0.84147098480789650665250232163029899962 + 0.54030230586813971740093660744297660373*z - 0.42073549240394825332625116081514949981*z^2 - 0.090050384311356619566822767907162767288*z^3 + 0.035061291033662354443854263401262458318*z^4 + 0.0045025192155678309783411383953581383644*z^5 - 0.0011687097011220784814618087800420819439*z^6 - 0.00010720283846590073757955091417519377058*z^7 + 2.0869816091465687168960871072180034713 E-5*z^8 + 1.4889283120263991330493182524332468136 E-6*z^9 - 2.3188684546072985743289856746866705237 E-7*z^10 - 1.3535711927512719391357438658484061942 E-8*z^11 + 1.7567185262176504350977164202171746391 E-9*z^12 + 8.6767384150722560201009222169769627834 E-11*z^13 - 9.6522995946024749181193209902042562590 E-12*z^14 - 4.1317801976534552476671058176080775159 E-13*z^15 + O(z^16)

Sheldon's code does not do this. Sheldon's code cannot adapt to the polynomial case.
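Those coefficients are nothing mysterious: the \(n\)-th one is \(\sin^{(n)}(1)/n!\). A quick sketch reproducing the first few (in Python rather than Pari, just so it stands alone):

```python
import math

def sin_taylor_at_1(n_terms):
    """Taylor coefficients of sin(1+z) about z = 0, i.e. sin^(n)(1)/n!.
    The derivatives of sin cycle: sin, cos, -sin, -cos, sin, ..."""
    cycle = [math.sin(1.0), math.cos(1.0), -math.sin(1.0), -math.cos(1.0)]
    return [cycle[n % 4] / math.factorial(n) for n in range(n_terms)]

coeffs = sin_taylor_at_1(6)
# coeffs[0] = 0.8414709848..., coeffs[2] = -0.4207354924...,
# matching the %1 output Pari prints above.
```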
Now Sheldon's code has all the tools to actually produce, say, the super-logarithm Taylor series at \(1\). But nowhere in Sheldon's code can we just write
Code:
slog(1+z)

And expect this to print out the Taylor series at \(1\).
This may seem trivial, and from a mathematical point of view it definitely is. In Sheldon's code, you would write:
Code:
slogtaylor(1)

But this is lengthy, and fucking time consuming to say the least. And good luck doing this on the fly with random points.
That is what my code solves. It's a two-birds-one-stone kind of thing. To define \(\text{slog}(z)\), we only need it to be defined in a neighborhood (which is the initial grab from Sheldon's code). From there we can run a recursive algorithm that looks super, super fucking simple. But I've been slaving at this thing for a while, and it uses the divergence of orbits of \(\exp\) in a super clever way (if I do say so myself).
I am going to package code for one thing, and one thing only: Kneser's Super Logarithm. The code I am packaging draws a chi-star branch cut near the fixed point \(L\), and branches at \(\Re(L) + \pi i\) towards positive infinity horizontally (I've included a graph, and a graph explaining some of these points). The chi-star is actually seen completely through recursion. (I've included a graph of the branch cut near \(L\) as well, which has the unmistakable look of the chi-star.)
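I can't reproduce the exp-orbit recursion here without the code itself, but the general shape--a series valid only near one point, plus a functional equation that drags every other argument into that neighborhood--is the same trick that works for the ordinary logarithm. A minimal Python sketch of that analogy (this is NOT my slog recursion, just the pattern; the square-root step plays the role the orbit step plays for slog):

```python
import math

def log_via_recursion(x):
    """Compute log(x) for x > 0 using only a series valid near 1,
    plus the functional equation log(x) = 2*log(sqrt(x)).
    Repeated square roots drag x into the series' neighborhood,
    mirroring how a functional equation drags slog arguments toward
    the neighborhood of 1 where the stored series converges."""
    if abs(x - 1.0) < 1e-4:
        u = x - 1.0
        # Mercator series: log(1+u) = u - u^2/2 + u^3/3 - ...
        return sum((-1) ** (n + 1) * u ** n / n for n in range(1, 8))
    return 2.0 * log_via_recursion(math.sqrt(x))
```

The recursion depth stays small because each square root halves the distance to the neighborhood, and the series only ever gets evaluated where it converges fast.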
This function is \(2 \pi i\) periodic, and we can choose how it branches. There are multiple versions of Kneser's super logarithm; we just have to move the branch cut around. Think of it as moving the branch cut of the logarithm, except it can move much, much more chaotically. I'd love to have more control over it, but for the moment this branch cut looks really good.
I like calling this a T-shaped super logarithm, because it takes its major branch cut along the lines \(\Re(z) > \Re(L)\), \(\Im(z) = (2k+1)\pi\), and it curls towards \(L\) or \(L^*\) the way a fancy calligraphy version of the letter T would. But for the moment, I am just calling this the recursive Sheldon method, because it turns fatou.gp's Taylor series for slog at \(1\) (which, as far as I am concerned, is the hard part) into a workable slog that handles polynomial values.
So, again, I can write something like:
Code:
slog(3+z+z^3)
%5 = (1.0882491361342880006740084286921804608 - 3.0063724780796382184006418466606948781 E-213*I) + (0.29115832796648167809725524322185564842 - 4.2985348618744138323506310791882604230 E-213*I)*z + (-0.073058087280068937319111790527254704389 - 1.8234095242410207856218419746620351633 E-213*I)*z^2 + (0.30906430318032394074383665825674159246 - 4.1897609825248264566947421175187408385 E-213*I)*z^3 + (-0.15003947348997580178695026854441790161 - 3.7375361335197652472952818219673848366 E-213*I)*z^4 + (0.054393160100824104295983990735795975243 + 3.6554042757599526721536442242419098001 E-214*I)*z^5 + (-0.088795793636932988949383072894007777190 - 2.1706608366752410124466129020609338442 E-213*I)*z^6 + (0.057063148885711291737111899080612861627 + 4.9496068494730215180038825093665536939 E-214*I)*z^7 + (-0.023786965031661616616678995786661053535 - 4.3770862104259230226798875141047733361 E-214*I)*z^8 + (0.024433796556719230288603965703938032319 + 3.0396655573025851940848179720683926404 E-214*I)*z^9 + (-0.016199252859544990921842738471640929930 - 2.4210942404539025512507413445001625652 E-215*I)*z^10 + (0.0060308759446497098213760978794565312281 - 2.2763777761389472516107042355740946622 E-214*I)*z^11 + (-0.0042317802061312785718865677904055132077 + 5.9505884466721817452992013498376064760 E-214*I)*z^12 + (0.0020024615039366304497142300439925822512 - 9.3758538853969812664997453501690138299 E-214*I)*z^13 + (0.00055848650419835654512223904676720455130 + 1.0105935609685363361212628308636116681 E-213*I)*z^14 + (-0.0011039988112221366853484752442266692821 - 1.3223560560644804059580500561334890551 E-213*I)*z^15 + O(z^16)

And this will be the Taylor data of the function \(\text{slog}(3+z+z^3)\), the same way you could do with \(\sin\) or \(\cos\) or \(\exp\) or any function in Pari. It takes polynomial data and produces polynomial data--the lack of which is, to me, the only real flaw in Sheldon's fatou.gp.
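For anyone who wants to see what "accepting polynomial arguments" means mechanically: you expand the outer function about \(g(0)\), then substitute the series \(g(z) - g(0)\) and truncate. A hedged Python sketch of that substitution, using \(\sin(3+z+z^3)\) as a stand-in (since it can be checked against math.sin; slog's own coefficients come from the attached file):

```python
import math

def poly_mul(a, b, n):
    """Multiply two truncated power series (coefficient lists) mod z^n."""
    out = [0.0] * n
    for i, ai in enumerate(a[:n]):
        for j, bj in enumerate(b[: n - i]):
            out[i + j] += ai * bj
    return out

def compose(outer_coeffs, g_coeffs, n):
    """Taylor data of f(g(z)) mod z^n, where outer_coeffs[k] = f^(k)(g(0))/k!
    and g_coeffs are the coefficients of g(z) - g(0)."""
    result = [0.0] * n
    power = [1.0] + [0.0] * (n - 1)  # (g - g(0))^0
    for c in outer_coeffs[:n]:
        for i in range(n):
            result[i] += c * power[i]
        power = poly_mul(power, g_coeffs, n)
    return result

N = 8
# f = sin expanded about g(0) = 3; the derivatives of sin cycle with period 4.
cycle = [math.sin(3.0), math.cos(3.0), -math.sin(3.0), -math.cos(3.0)]
sin_at_3 = [cycle[k % 4] / math.factorial(k) for k in range(N)]
g_minus_g0 = [0.0, 1.0, 0.0, 1.0] + [0.0] * (N - 4)  # z + z^3

series = compose(sin_at_3, g_minus_g0, N)
# series now holds the Taylor data of sin(3 + z + z^3) mod z^8;
# evaluating it at small z agrees with math.sin(3 + z + z**3).
```

Pari's series arithmetic does this substitution for you whenever a function is written in terms of series operations, which is exactly the property fatou.gp's purely numerical slog lacks.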
This code works much, much more simply. It is not as good as Sheldon's code, not in any way; not until we can run Sheldon's initialization and adapt the functions I have in the works will it be as good. And even then, I'm only confident for bases \(b > \eta\). I'm going to post my version of Kneser's tetration soon too; it just relies on Sheldon's series at \(0\) again, and then runs a similar recursive protocol to get the value everywhere. But we need slog first...
I'm going to graph the Kneser super logarithm for \(-5 \le \Re(z) \le 5\) and \(-5 \le \Im(z) \le 5\). And then I will draw some labels on the graph to give you an idea of what this means.
Here is a small legend describing what's happening in this graph:
Here is a zoom-in on the chi-star about \(L\):
This just describes the branch points of my code and how I've written the super logarithm. This is pretty much all you need to know about that.
I am releasing my code as a .zip file, and there are two things you need to know. You have two .gp files. The first is "SLOG_SERIES_ABOUT_1.gp"; this 80 kB file just stores Sheldon's generation of \(a_n\). The second file is the actual code, "recursive_sheldon.gp", which is a very, very brief code block. Despite how brief it is, I hope you guys don't underestimate it. This is a very difficult recursion protocol that required many sleepless nights.
I should note now that \(\text{slog}(\exp(z)) - 1 = \text{slog}(z)\) under my code; but this does not mean that \(\text{slog}(\log(z)) + 1 = \text{slog}(z)\) necessarily, because the principal branch of the logarithm may not be the logarithm we want. Which I think has a lot to do with the chi star. The correct statement is that, for some \(k \in \mathbb{Z}\), we have \(\text{slog}(\log(z) + 2\pi i k) + 1 = \text{slog}(z)\)--which is the entire point of Kneser's construction.
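To see why some \(k \in \mathbb{Z}\) is unavoidable: \(\log(z) + 2\pi i k\) runs over all the preimages of \(z\) under \(\exp\), so a super logarithm has to pick the \(k\) matching its branch cut before applying the functional equation. The slog part is my claim; the exp/log part is just complex analysis, checkable in a few lines:

```python
import cmath

# Every branch log(z) + 2*pi*i*k is a genuine preimage of z under exp,
# so slog(w) + 1 = slog(exp(w)) holds along any of them; the branch cut
# decides which one the super logarithm actually uses.
z = -2.0 + 3.0j
preimages = [cmath.log(z) + 2j * cmath.pi * k for k in range(-2, 3)]
for w in preimages:
    assert abs(cmath.exp(w) - z) < 1e-12
```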
But I digress.
Unzip this file in a Pari-GP home folder, or whatever your home folder is, so that this thing works. You should only need to read "recursive_sheldon.gp" to use this program, but "recursive_sheldon.gp" reads the first file as its first action; for that, Pari needs the file "SLOG_SERIES_ABOUT_1.gp" somewhere it can read it.
From there you have the function:
Code:
slog(z)

This will handle polynomial operations in its argument, and it is a specific Kneser super logarithm.
Code is super fucking fast. Go wild.
Recursive Sheldon.zip (Size: 41.47 KB / Downloads: 407)
If you're interested in coding, look how simple my program looks
This has always been my schtick; making recursions within recursions within recursions look clean as fuck!
LET'S GO!
On to getting sexp through slog now. Giving me a headache lately...
God I wish Sheldon already had polynomial arguments built in...
Regards, James

