Posts: 1,214
Threads: 126
Joined: Dec 2010
02/26/2023, 11:52 AM
(This post was last modified: 02/26/2023, 02:48 PM by JmsNxn.)
Okay, so take this post with a grain of salt. I wanted to look deeper into this question, because my suspicion is that we can say that, sometimes, these are natural analytic continuations. I apologize if this is dumb; I'm just spitballing.
I'd like to add my own two cents on:
\[
f(x) = \sum_{n=0}^\infty \frac{x^n}{1+x^n} \frac{1}{2^n}\\
\]
First of all, the word I used to use for these functions is: a function that maps \(\mathbb{C} \to \mathbb{C}\) up to a measure zero set in \(\mathbb{C}\). In this case, the measure zero set is the unit circle \(U\), so that \(f : \mathbb{C}\setminus U \to \mathbb{C}\). There is absolutely zero way to analytically continue these functions across \(U\), or to refer to them as analytic continuations of each other. By the standard literature, they are two functions which are holomorphic on disjoint domains of \(\mathbb{C}\).
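For concreteness: the series genuinely converges on both sides of \(U\); it is only the analytic connection between the two pieces that is lost. A quick numerical sanity check (a Python sketch; the truncation point is arbitrary):

```python
# Partial sums of f(x) = sum_{n>=0} x^n / ((1 + x^n) 2^n), evaluated on
# both sides of the natural boundary |x| = 1.

def f(x, N=200):
    # truncated series; N = 200 terms is ample for x away from |x| = 1
    return sum(x**n / ((1 + x**n) * 2**n) for n in range(N))

inside = f(0.5)   # |x| < 1: terms decay like (x/2)^n
outside = f(2.0)  # |x| > 1: x^n/(1+x^n) -> 1, so terms decay like 2^{-n}
print(inside, outside)  # ≈ 0.7355 and ≈ 1.2645
```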
But yes, ONE is more natural; obviously the one which is just plugging in the number. This is the fallacy that Cauchy called the Generality of Algebra, and it was a stark criticism of Euler. Euler would use the Generality of Algebra to get correct results, but he also got incorrect results.
But this isn't what I want to talk about. I want to use Cauchy's philosophy on "filtering out" when you can use the Generality of Algebra, and when you can't...
Let's take \(|x| > 1\), and let's define a contour \(C\) which is the parameterization of \(|x| = 2\).
\[
F_k = \int_C \frac{f(x)}{x^k}\,dx\\
\]
For \( k \in \mathbb{Z}\). Then this object expands as:
\[
\sum_{n=0}^\infty \int_C \frac{x^n}{1+x^n}\frac{dx}{x^k} \frac{1}{2^n}
\]
So let's evaluate:
\[
a_{kn} = \int_C \frac{x^n}{1+x^n}\frac{dx}{x^k}\\
\]
And:
\[
a_{kn} = 2\pi i b_{kn} + 2\pi i\sum_{q^n = -1} \frac{q^n}{\prod_{p\neq q\,\,\,\,p^n = -1} (q-p)} \frac{1}{q^k}
\]
The product in the denominator, despite looking funky, is actually a simple expression. We write this as:
\[
1+x^n = \prod_{p^n = -1} (x-p)\\
\]
The derivative of this at a root \(q\), with \(q^n = -1\), is simply:
\[
nq^{n-1} = \prod_{q \neq p\,\,\,p^n = -1} (q-p)\\
\]
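This identity is easy to sanity-check numerically (a sketch; for odd \(n\), flipping each factor to \((p-q)\) multiplies the product by \((-1)^{n-1}=+1\), so the factor ordering is moot there):

```python
import cmath

# Check the derivative identity n*q^(n-1) = prod_{p != q, p^n = -1} (q - p),
# obtained by differentiating 1 + x^n = prod_{p^n = -1} (x - p) at x = q.

n = 5
roots = [cmath.exp(1j * cmath.pi * (2*j + 1) / n) for j in range(n)]  # p^n = -1

q = roots[0]
prod = 1 + 0j
for p in roots[1:]:
    prod *= (q - p)

print(n * q**(n - 1), prod)  # the two sides agree
```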
Here the \(b_{kn}\) are the Taylor coefficients appearing in the residue at \(0\):
\[
\frac{x^n}{1+x^n} = \sum_{k=0}^\infty b_{kn} x^k\\
\]
And for \(k < 0\) we know that \(b_{kn} = 0\), because the Laurent series has no negative powers. So when we expand this we get that:
\[
F_k = \sum_{n=0}^\infty \left(2\pi i b_{kn} + 2\pi i\sum_{q^n = -1} \frac{1}{n\,q^{k-1}}\right)\frac{1}{2^n}\\
\]
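A hedged numerical spot-check of the residue count for a single \(a_{kn}\) (Python sketch; I pick \(n=2\), \(k=1\), where the integrand \(x/(1+x^2)\) is analytic at \(0\), so only the roots of \(-1\) contribute):

```python
import cmath

# Check a_{kn} = \int_{|x|=2} x^n/(1+x^n) dx/x^k against the residue theorem.
# At each root q with q^n = -1, the residue of x^{n-k}/(1+x^n) is
# q^{n-k} / (n q^{n-1}).

n, k = 2, 1

# trapezoid rule on the circle x = 2 e^{it} (spectrally accurate here)
N = 4000
total = 0.0
for j in range(N):
    t = 2 * cmath.pi * j / N
    x = 2 * cmath.exp(1j * t)
    dx = 2j * cmath.exp(1j * t) * (2 * cmath.pi / N)
    total += x**n / (1 + x**n) / x**k * dx

roots = [cmath.exp(1j * cmath.pi * (2*m + 1) / n) for m in range(n)]
by_residues = 2j * cmath.pi * sum(q**(n - k) / (n * q**(n - 1)) for q in roots)
print(total, by_residues)  # both ≈ 2πi
```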
Now let's sum this across all \(k\), ignoring convergence (also, I may have screwed up a negative sign here and there). But regardless, it looks something like this:
\[
\sum_{k=-\infty}^\infty \sum_{n=0}^\infty a_{kn}= \int_C f(x) \sum_{k=-\infty}^{\infty} x^k\,dx\\
\]
Now, by "the generality of algebra", as I mentioned before:
\[
\begin{align}
\sum_{k=-\infty}^{\infty} x^k &= \sum_{k=-\infty}^{-1}x^k + \sum_{k=0}^\infty x^k\\
&= \frac{1}{x}\frac{1}{1-\frac{1}{x}} + \frac{1}{1-x}\\
&= 0\\
\end{align}
\]
So we've shown that:
\[
\sum_{k\in \mathbb{Z}} \sum_{n=0}^\infty a_{kn} = 0\\
\]
......... right?
NO!
This may seem a little handwavy, because I used the Generality of Algebra.
And you'd be right to reject that! Instead, check the math "under the integral"...
You can check the sum:
\[
\sum_{k=-\infty}^\infty \sum_{n=0}^\infty \left(2\pi i b_{kn} + 2\pi i\sum_{q^n = -1}\frac{1}{n\,q^{k-1}}\right) \frac{1}{2^n} = 1\\
\]
This is some Gaussian bullshit. But the second summand reduces to a bunch of Fourier sums which vanish. You can see this more obviously by writing:
\[
\sum_{q^n = -1}\frac{1}{q^{k-1}} = \sum_{j=1}^n e^{-\pi i (k-1) \frac{2j-1}{n}} = \phi_n(k)\\
\]
This produces a sum over the \(n\)'th roots of \(-1\), raised to the power \(1-k\). The phases cancel for generic \(k\): \(\phi_n(k) = 0\) whenever \(n\) does not divide \(k-1\) (in the aligned case \(n \mid k-1\), the sum is \(\pm n\) instead).
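The cancellation can be probed directly (a Python sketch; hedged as above, the sum vanishes except in the aligned case \(n \mid k-1\)):

```python
import cmath

# phi_n(k) = sum over the roots q^n = -1 of q^{1-k}.
# The phases cancel whenever n does not divide k-1; when n | k-1 the
# terms align and the sum has modulus n instead of 0.

def phi(n, k):
    roots = [cmath.exp(1j * cmath.pi * (2*j + 1) / n) for j in range(n)]
    return sum(q**(1 - k) for q in roots)

for n in (2, 3, 5):
    for k in (0, 2, 3, 4):
        print(n, k, abs(phi(n, k)))  # ≈ 0 except at (n, k) = (2, 3) and (3, 4)
```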
And the sum:
\[
\sum_{k=-\infty}^\infty \sum_{n=0}^\infty \frac{b_{kn}}{2^n} = f(1) = 1\\
\]
So "Generality of Algebra" predicted \(0\) initially-- remembering to work "under the integral" reminded us it should be \(1\). Which makes sense in a Dirac sense; the sum:
\[
\int_C ...\sum_{k\in\mathbb{Z}}x^k = \int_C \delta\\
\]
But ignore this, it's unnecessary........
Now it gets beautiful!
Let's try to reconstruct \(f(x)\) for \(|x| < 1\) solely using \(f(x)\) for \(|x| > 1\). Well... all we need to do is write:
\[
f(x) = \sum_{k=0}^\infty \frac{F_k}{2\pi i}\, x^k\\
\]
And this sum converges to the original sum, because the residues just disappear!
So, sure, there is a wall of singularities! But the residues of the poles on the wall of singularities cancel to zero as you add them up!
This avoids the "Generality of Algebra" fallacy, which just rearranges sums. Because, for all we knew, given \(f(x)\) for \(|x| > 1\), there might have been a different function \(g(x)\) for \(|x| < 1\) which is actually better than \(f(x)\) for \(|x| < 1\), and is the "not really analytic continuation". But instead, we can take contours \(C_R\) for \(|x| = R\) with \(R \in (0,1) \cup (1,\infty)\), and we get that:
\[
f(x) = \sum_{k=0}^\infty \frac{F_k}{2\pi i}\, x^k = \sum_{k=0}^\infty x^k\sum_{n=0}^\infty \frac{b_{kn}}{2^n}\\
\]
And:
\[
F_k = \int_{C_R} \frac{f(x)}{x^k}\,dx\\
\]
Because even when we hit the wall of singularities, the sum of the residues is \(0\).
Hope this helps. If I'm being an idiot again, be sure to tell me! I'm still not entirely certain what you are trying to do, so calling me an idiot only helps me learn. But to me, this relates deeply to the differential nature of a function, because functions \(f\) which take \(\mathbb{C} \to \mathbb{C}\) almost everywhere tend to be solutions of first order differential equations. I never had a word for it. I would just write:
\[
f: \mathbb{C}\setminus\mathcal{D} \to \mathbb{C}\\
\]
And
\[
\int_{\mathcal{D}} f(x) dA = 0\\
\]
Where this is a double integral with the standard Lebesgue area measure \(dA\)....
I'm happy to be an idiot though, lmao
Sincere Regards, James
Posts: 1,214
Threads: 126
Joined: Dec 2010
02/26/2023, 02:30 PM
(This post was last modified: 02/26/2023, 02:37 PM by JmsNxn.)
Tommy was talking about Riemann mappings. I can give you an idea of the Riemann mapping in question. The one he is searching for.
\[
\begin{align}
f(x) &= \sum_{n=0}^\infty \frac{x^n}{1+x^n}\frac{1}{2^n}\\
f(1) &= 1\\
f'(1) &= 1/2 \\
f(1+t) &= 1 + \frac{t}{2} + O(t^2)\\
\end{align}
\]
This is a standard quadratic Riemann surface branch. It is the only one on the unit circle, so we can think of this as a "quadratic" Riemann surface with a single branch point.
Think of this point as the Panama Canal on the 3d topology of the earth, where the western hemisphere is the unit disk \(|x| < 1\) and the eastern hemisphere is the exterior \(|x|>1\).
We can now think of the Panama Canal as the value \(1 \in \mathbb{C}\) topologically, where \(1\) is the sole point of passage, but we can't encircle this point without passing through a branch; traveling from water to land...
But ignoring this stupid metaphor....
This is what we do when we deal with Abel functions. But the study of Abel functions is explicit: we want to calculate them; we want to break them down to their atoms. But Milnor, in his book, starts with the basic Riemann mapping and Riemann surface, and consistently uses these as a foundation for the existence of such objects. In similar fashion, I am arguing for this Riemann mapping.
Since the topological mapping is quadratic, it looks like \(\sqrt{x}\), which has a single branch point and is quadratic... It's a second order Abel graph, in many ways.
This means the function \(f(1+t)\) is mappable to:
\[
\sqrt{t}\\
\]
For \(t \approx 0\). So that:
\[
h(f(1+h^{-1}(t))) = \sqrt{t}\\
\]
The function:
\[
\sqrt{t} : \mathbb{C}\setminus(-\infty,0] \to \mathbb{C}\setminus(-\infty,0]\\
\]
Thereby we can find a Riemann mapping of:
\[
h:\mathbb{C}\setminus(U\setminus\{1\}) \to \mathbb{C}\setminus(-\infty,0]\\
\]
Where then (again, this is just implicit existence, not construction), we get:
\[
\sqrt{t} = h(f(h^{-1}(t)+1))\\
\]
If anyone has any questions, I will point out that they are called Riemann surfaces because they relate to multidimensional surfaces of the Cauchy-Riemann equations. This stuff gets super advanced, so I can point you in the direction, but I can't teach it to you. And I am not proving a solution; I am simply narrating my memory of the existence of these transforms.
The main point, and the most important part, is that:
\[
h(1+t) = 1 + \frac{t}{2} + O(t^2)\\
\]
Let's take \(q \in U\setminus\{1\}\)... then:
\[
h(q+ t) = \infty\,\,\,\text{or, at best}\,\, h(q) + O(t^2)\\
\]
This is also a common Riemann surface. It relates to the border being the Real Line. But at infinity, both functions merge...
Maybe, because you're interested in Modular functions Caleb, you'd be interested in this...
Nothing but love, Tommy. But this is how I remember these arguments going.
Posts: 1,214
Threads: 126
Joined: Dec 2010
02/26/2023, 03:03 PM
(This post was last modified: 02/26/2023, 03:04 PM by JmsNxn.)
Also, a quick guess of the Tommy Riemann Mapping is:
\[
g(t) = f(1+t) - 1 - \frac{t}{2} = g_2t^2 + O(t^3)\\
\]
This is the only point on the boundary which has a second order expansion. Thereby, using a Böttcher coordinate \(G\), we have that:
\[
G(g(t)) = G(t)^2\\
\]
.......... Again, it's a quadratic Riemann surface......
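To make the Böttcher equation \(G(g(t)) = G(t)^2\) concrete, here is a hedged numerical sketch with a stand-in quadratic germ \(g(t) = t^2 + t^3\) (an illustrative choice, not the actual map from the post): near \(0\), \(G(t) = \lim_n (g^{\circ n}(t))^{1/2^n}\), computed in log-space to dodge underflow, since \(g^{\circ n}(t) \sim t^{2^n}\).

```python
import math

# Bottcher coordinate G for the sample germ g(t) = t^2 + t^3 = t^2 (1 + t),
# valid for small t > 0. We iterate L = log t to avoid floating underflow.

def log_g(L):
    # log of g(t), given L = log t
    return 2 * L + math.log1p(math.exp(L))

def log_G(t, iters=40):
    # log of G(t) = lim_n (g^n(t))^(1/2^n); corrections shrink like 2^{-n}
    L = math.log(t)
    for _ in range(iters):
        L = log_g(L)
    return L / 2**iters

t = 0.1
lhs = log_G(t**2 + t**3)  # log G(g(t))
rhs = 2 * log_G(t)        # log G(t)^2
print(lhs, rhs)  # the functional equation G(g(t)) = G(t)^2 holds
```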
Posts: 1,924
Threads: 415
Joined: Feb 2009
Thank you James !
that is a nice contribution explaining some of my and some of your ideas.
regards
tommy1729
Posts: 1,924
Threads: 415
Joined: Feb 2009
(02/24/2023, 12:55 AM)Caleb Wrote: (02/24/2023, 12:30 AM)tommy1729 Wrote: Apart from the idea of reflection formula's ( which may or may not be a good idea )
I want to take for example the prime zeta function P(s).
It is well known to have formulas that converge for Re(s) > 1 or Re(s) > 0.
There also exists a formula for Re(s) > 1/2 :
Assuming the RH then \[P(s) = s \int_2^\infty \pi(x) x^{-s-1}dx = s \int_2^\infty (\pi(x)-Li(x)) x^{-s-1}dx+ L(s) \\ = s \int_2^\infty (\pi(x)-Li(x)) x^{-s-1}dx+L(2) - \log(s-1)+ \int_2^s \frac{1-2^{1-u}}{u-1}du\] where the latter integrals converge and are analytic for Re(s) > 1/2.
\[ Li(x) = \int_2^x \frac{dt}{\log t}\] \[L(s) = s\int_2^\infty Li(x) x^{-s-1}dx = \int_2^\infty \frac{x^{-s}}{\log x}dx \] \[ = L(2)+\int_2^s L'(u)du = L(2) - \log(s-1)+ \int_2^s \frac{1-2^{1-u}}{u-1}du\]
since \[L'(s) = -\int_2^\infty x^{-s}dx = -\frac{2^{1-s}}{s-1}\]
I guess that is clear to all here.
Now the natural boundary at Re(s) = 0 is made completely of log singularities getting dense.
Maybe we should make distinctions of the type of natural boundaries we are getting.
I mean for instance
g(x) = (1 - x)(1 - x^3)(1 - x^5)(1 - x^7)...
or h(x) = 1 + x^2 + x^(2^2) + x^(2^3) + ...
have "different" natural boundaries, like accumulations of zeros.
i SAID MAYBE lol
But now,
what is the value of P(s) for Re(s) < 0 ??
OR is this type of natural boundary unsuitable because it has logs instead of poles and zero's ??
AND IF UNSUITABLE , WHAT DOES THAT MEAN ?? no continuation for some but for others we do ?
I will definitely have a talk about that with my friend mick.
I want to point out that the derivative of the prime zeta has an infinite amount of poles instead of logs.
and the inverse of the derivative of prime zeta has an infinite amount of zero's on Re(s) = 0.
Im holding back on making conjectures , im a bit confused.
regards
tommy1729 This is a good question. Let me try to answer it by sharing my motivation in studying the series I decided to study in the post.
Analytic continuation beyond natural boundaries is hard! I don't think anyone knows how to do it in general. I suspect it can't be done in general-- because I don't think it would be meaningful to analytically continue a series that has purely random coefficients, for instance. Since the problem is so hard, I'm choosing to study the easiest possible example I could come up with.
The examples I study have a really nice property. For instance, consider the following series
\[f(x)=\sum_{n=0}^\infty \frac{x^n}{1+x^n} \frac{1}{2^n}\]
This series has a natural boundary on \(|x|=1\), since 1+x^n provides a dense set of poles there. So, it cannot be analytically continued past \(|x|=1\). However, the series itself is still well-defined for |x|>1. In particular, I can compute f(2) by just plugging into the series. This gives me a very natural candidate for a definition of \( f(x)\) outside the boundary.
So, I choose to study these much easier functions, and try to analyze what sort of properties they have. My goal looks like this
\[ \text{Get a bunch of very easy examples of functions with natural boundaries } \to \]
\[\text{ Study those examples in depth, and understand the mechanism underlying how those functions behave } \to \]
\[\text{ Try to generalize that mechanism into harder functions }\]
The prime zeta function is definitely in the category of "harder functions." I don't know if logarithmic singularities will cause the continuation to behave differently than regular poles-- that's something I will only find out once I've studied the easy examples in depth!
So, it's not necessarily that the prime zeta function has properties that make it unsuitable for continuation-- it's that I haven't yet figured out the right way to do continuation beyond natural boundaries.
Falsifiability is a deductive standard of evaluation of scientific theories and hypotheses, introduced by the philosopher of science Karl Popper in his book The Logic of Scientific Discovery (1934).
Black swan theory is also interesting.
Why the f do I mention this ? ( I sound like James now lol )
Well unfalsifiable is a major concept in my philosophy thinking and moral ( and extends to politics and other topics and many debates and criticism but that is not the subject here )
I think Karl Popper's ideas are not respected enough these days.
But this is about math.
Proof and falsifiable are somewhat important concept in math.
Now are there statements made that are true or false , falsifiable or unfalsifiable ?
Well maybe.
Here is the thing.
If you define things arbitrarily-- ergo there were multiple ways--
then you basically create something like axioms, rules, tautologies, ZFC etc.
such things are, even when self-consistent, unfalsifiable.
This was once the criticism of summability methods :
you can define those values like you want.
AND MOST IMPORTANTLY : nobody can disprove them.
1 + 1 + 1 + ... = - sqrt (89)
sure why not.
The only " disproof " is using another way to do the summability.
or add additional rules or properties like analytic continuation.
But the problem is clear here.
This " magic "
1 + 1 + 1 + ... = - sqrt (89)
and similar was once considered bad math
" the master forbids it " is a famous quote we all know.
Ok so how does this relate here ?
mick and I created functions that " satisfy " or " should satisfy "
f( -s ) = f( s )
Now we find a natural boundary
and we get a contradiction when we use that " zeta expansion "
So you work around that by adding residues and thereby defining the function for Re(s) < 0.
While losing the property f( -s ) = f(s)
So we end up
f( -s ) =/= f(s).
HOWEVER
we could also just plug in values and then
f( s ) = f( -s )
and then criticize the zeta expansion as invalid.
YOU picked the first.
But Is there proof it is better ??
Does that even make the zeta expansion valid or is it still not valid ? ( after all the boundary still exists ! )
If the zeta expansion is not valid , your motivation is weak.
And even if it is valid , your motivation is still weak.
And what do you do here ( with your function ) ??
YOU SAY JUST PLUG IN THE VALUE !
Now James may find that the residues cancel anyway, but still.
In fact you did not even consider the residues, you just plugged in the value.
Now I hate to use your example function against you.
And I love your ideas and posts.
I use caps and f but Im not hostile.
It is just something that is bugging me.
If sometimes plugging in is ok and sometimes not and both choices are analytic at the same places ...
That makes me feel like I watched a magic show.
It clearly is an unfinished theory.
And that is for the cases where we already have a continuation !!!!
When we dont have one yet and use those ideas it becomes even more ... dubious ?
And then there will be " masters who forbid it "
And critics will say it is unfalsifiable.
Now analytic continuation is however falsifiable when the function is properly defined.
Now I know I was more optimist in the past , but I had this issue from the start. I only express it now.
My summability method might be a way to resolve things,
then again it was intended
1) for entire functions
2) used an interpolation, which is arguably also something "random" that the master forbids
Also, what if there are multiple natural boundaries ??
regards
tommy1729
Posts: 1,924
Threads: 415
Joined: Feb 2009
(02/26/2023, 11:52 AM)JmsNxn Wrote: Okay, so take this post with a Grain of salt. I wanted to look deeper into this question. Because my suspicion is that we can say that sometimes, these are natural analytic continuations. I apologize if this is dumb. I'm just spit balling 
...
Let's define
\[
f(x) = \sum_{n=0}^\infty \frac{x^n}{1+x^n} \frac{1}{2^n}\\
\]
and
\[
g(x) = \sum_{n=0}^\infty \frac{x^n}{1+x^n} \frac{1}{3^n}\\
\]
Now there probably exist entire functions A(z),B(z) such that
A(f(z)) = B(g(z))
The idea was that the Taylor series of f or g would still have the same natural boundary, because their powers do.
(although the multiplicity changes)
This leads to the idea that if we accept the continuation by plug-in for f and g, then
Q(z), as a Taylor expansion in the two variables (a(z), b(z)),
where a(z) = f(z) , b(z) = g(z),
starts to make sense.
Also
\[
t(x) = \sum_{n=0}^\infty \frac{x^n}{1+x^n} 2^n\\
\]
by using 1 + 2 + 4 + 8 + ... = -1
this might make sense. For instance:
\[
t(1) = \sum_{n=0}^\infty \frac{1}{1+1} 2^n\\
\]
so t(1) = - 1/2.
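The only nonstandard step above is the value assigned to the divergent geometric series; a minimal sketch of the bookkeeping (the regularized value is an assumption of the method, not a convergent sum):

```python
# t(1) = -1/2 rests on assigning sum_{n>=0} 2^n the value of the geometric
# continuation 1/(1-z) at z = 2.

def geometric_continuation(z):
    # analytic continuation of sum_{n>=0} z^n beyond |z| < 1
    return 1 / (1 - z)

# each term of t(1) is (1/(1+1)) * 2^n = 2^n / 2
t1 = 0.5 * geometric_continuation(2)
print(t1)  # -0.5
```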
But we must be careful: we do not know if such ideas are self-consistent with each other.
I think this is somewhat in the spirit of what Caleb wanted to do.
regards
tommy1729
Posts: 1,924
Threads: 415
Joined: Feb 2009
also this is still pretty standard.
I mean the Lambert series is very related.
If you turn a Lambert series into a Taylor series, you might get a Taylor series that can be expanded beyond the unit radius.
Also
(1 - x^(2n)) = (1 - x^n)(1 + x^n)
so the + is not so special.
so throwing in another product does not necessarily make it more exotic; it could even be more standard.
regards
tommy1729
Posts: 51
Threads: 6
Joined: Feb 2023
Damn I'm sorry guys, I've done a terrible job of explaining what I'm trying to do here. Let me try to rectify that by explaining my motivation, goals and thoughts.
First, let me share my mindset. I have been curious about natural boundaries for a long time. But when I research them, I usually find the following opinion from other intelligent mathematicians: studying functions beyond natural boundaries is not meaningful-- the answer is not unique, and there is nothing useful that can be done with such objects. Studying functions beyond natural boundaries has no applications-- it is useless.
Many mathematicians have a similar opinion about divergent series in general. But, as I'm sure all of us here are aware, divergent series actually do have fruitful applications and can help solve meaningful problems. For instance, the study of differential equations gives us real problems whose solutions are best understood in terms of divergent series.
However, I am embarrassed to admit that even after being curious about generalized analytic continuation (GAC) (i.e. continuation beyond natural boundaries) for so long, I had never seen any real problem whose solution was best understood in terms of GAC. I had seen a few connections to problems in non-linear differential equations, but that was about it. Therefore, I had gradually become more and more convinced that those people were right-- GAC is useless.
My opinion changed recently, when I saw the MO discussion of the function \( f(x)=\sum (-1)^n x^{2^n} \). If we view this series as part of a more general \( F(a,x)=\sum (-1)^n x^{a^n} \), then finding the value of \( f(x) \) REQUIRES evaluating \( F(a,x) \) beyond its natural boundary. However, in the context of this problem, what we really want is a nicer form of \(f(x)\), one that's much easier to evaluate. So, we want to find a different way to evaluate f(x) outside its natural boundary. Here's a really useful representation of \( F(a,x) \) when \( |a|<1 \)
\[ F(a,x) = \sum_{k=0}^\infty \frac{\ln(x)^k}{k!(1+a^k)}\]
But, as you all saw in the original post, this series doesn't converge to \( f(x) \) if we plug in \( a=2 \). Now, this is very perplexing, and when I saw this, I wondered what the relationship between \( f(x) \) and \(F(2,x) \) is. More precisely, I wanted to know: what is \(f(x)-F(2,x)\)? As you saw in the post, the answer is related to the residues inside \( F(a,x) \).
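Hedged aside: for \(|a|<1\), where the interchange of summations is legitimate, the two representations can be checked numerically (illustrative values \(a=1/2\), \(x=2\); the \(n\)-series has terms tending to \(\pm 1\), so its Abel value is read off by averaging two consecutive partial sums):

```python
import math

# Check F(a, x) = sum_n (-1)^n x^{a^n} against the k-series
# sum_k ln(x)^k / (k! (1 + a^k)) at a = 1/2, x = 2.

a, x = 0.5, 2.0

# left side: averaged partial sums of the n-series (Cesaro-style), since
# its terms tend to +1/-1 rather than to 0
S, prev = 0.0, 0.0
for n in range(60):
    prev = S
    S += (-1)**n * x**(a**n)
lhs = (S + prev) / 2

# right side: the rapidly convergent k-series
rhs = sum(math.log(x)**k / (math.factorial(k) * (1 + a**k)) for k in range(40))

print(lhs, rhs)  # both ≈ 1.2141
```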
Why is this important? Because, this is an application of GAC on a real problem! It would be incredibly hard to understand the relationship between \(f(x)\) and \(F(2,x)\) without appealing to continuation beyond natural boundaries.
Now, I want to make something clear-- I'm not making any claims in this post! I realize now it seems like I'm claiming "This is the right way to continue these series beyond their natural boundaries." That was not my intent. My intent was to say: "Here are two functions that are related to each other. It's a pretty tricky task to figure out exactly what their difference is. However, if we view these functions from the perspective of GAC, then it becomes easy to solve the problem. Therefore, GAC is a useful thing to study, since it helps us solve at least one specific type of problem (that problem being finding the difference between two related sums)."
I should emphasize that this is why I'm choosing series that converge everywhere. It's so that the question can be analyzed without appealing to any GAC. For real values of x, the difference
\[ \sum (-1)^n x^{2^n} -\sum_{k=0}^\infty \frac{\ln(x)^k}{k!(1+a^k)}\]
is just a plain, well-defined expression. No tricks, no magic, no continuation beyond boundaries--no BS. I can plug the expression into a calculator to get the answer. HOWEVER, the point of the post is to say that we can also find the value of this expression in a precise and closed form way by considering GAC and picking up residues in the right way.
Examples such as these shatter the opinion that GAC is useless or hopelessly non-unique. The difference between those two expressions is a real number, so in the context of this problem, the only valid continuation is the one that gives you the correct real number.
Therefore, to Tommy's point,
Quote:So we end up
f( -s ) =/= f(s).
HOWEVER
we could also just plug in values and then
f( s ) = f( -s )
and then criticize the zeta expansion as unvalid.
YOU picked the first.
But Is there proof it is better ??
Does that even make the zeta expansion valid or is it still not valid ? ( afteral the boundary still exists ! )
If the zeta expansion is not valid , your motivation is weak.
And even if it is valid , your motivation is still weak.
And what do you do here ( with your function ) ??
YOU SAY JUST PLUG IN THE VALUE !
I'm NOT claiming either continuation is better. I'm just trying to analyze what we have to do to make the continuation consistent with the approach of just plugging in numbers and evaluating. In other words, when I have the sum
\( \sum (-1)^n x^{2^n} \), I want to know the right way to continue
\[ \sum_{k=0}^\infty \frac{\ln(x)^k}{k!(1+a^k)}\]
so that it agrees with \( \sum (-1)^n x^{2^n} \). This doesn't mean it is the objectively right continuation of \( \sum_{k=0}^\infty \frac{\ln(x)^k}{k!(1+a^k)} \); it's just the one that agrees with \( \sum (-1)^n x^{2^n} \), since the whole point of the original problem was to find a nicer form for \( \sum (-1)^n x^{2^n} \).
Tommy, you are completely right that this is an unfinished theory; this post is only my first exploration. However, I hope you can see that the theory is NOT un-falsifiable. The entire point of this exploration is to find cases like
\[ \sum (-1)^n x^{2^n} -\sum_{k=0}^\infty \frac{\ln(x)^k}{k!(1+a^k)}\]
where I can directly plug into a calculator to figure out whether certain continuations were correct. In fact, in my post, you'll see that in case (1) I get the wrong answer, so there is still more to understand about what is going on in those cases.
Let me know if this makes sense to you guys-- I feel I still haven't done a great job explaining what's happening, so let me know if you have any questions
Posts: 1,924
Threads: 415
Joined: Feb 2009
(02/26/2023, 10:42 PM)Caleb Wrote: Damn I'm sorry guys, I've done a terrible job of explaining what I'm trying to do here. Let me try to rectify that by explaining my motivation, goals and thoughts.
...
yes i see.
that's partially why i mentioned continuity at the 0.
But you also mentioned divergent functions like theta and prime zeta and such where plug-in is problematic.
You then used double sums and made Fubini's head explode.
I will have to think about these.
notice you gave a solution to mick's MSE that included not plugging in despite being well defined locally if we did.
the issue was the resummation of zeta , which you somewhat solved. but not by plug-in. two skools of thought ?
ON THE OTHER HAND those two might be the only ways to make sense of it, unless we resummation overdose maybe.
**
multivariable analogues of your theory could imo help to do number theory. IN FACT I THINK most problems can be restated as such.
However, that requires more insight and might not be easier than the original statements.
I'm talking density conjectures here, not Collatz.
**
also
as you well know, it is not certain that resummation always works to get values.
I'm aware you are aware of the unfinished state of the theory.
thanks for your attention.
regards
tommy1729
Posts: 51
Threads: 6
Joined: Feb 2023
(02/27/2023, 12:52 AM)tommy1729 Wrote: (02/26/2023, 10:42 PM)Caleb Wrote: Damn I'm sorry guys, I've done a terrible job of explaining what I'm trying to do here. Let me try to rectify that by explaining my motivation, goals and thoughts.
First, let me share my mindset. I have been curious about natural boundaries for a long time. But when I research them, I usually find the following opinion held by other intelligent mathematicians: studying functions beyond natural boundaries is not meaningful -- the answer is not unique, there is nothing useful that can be done with such objects, and the subject has no applications. In short: it is useless.
Many mathematicians have a similar opinion about divergent series in general. But, as I'm sure all of us here are aware, divergent series actually do have fruitful applications and can help solve meaningful problems. For instance, the study of differential equations gives us real problems whose solutions are best understood in terms of divergent series.
However, I am embarrassed to admit that even after being curious about generalized analytic continuation (GAC) (i.e. continuation beyond natural boundaries) for so long, I had never seen any real problem whose solution was best understood in terms of GAC. I had seen a few connections to problems in non-linear differential equations, but that was about it. Therefore, I had gradually become more and more convinced that those people were right -- GAC is useless.
My opinion changed recently, when I saw the MO discussion of the function \( f(x)=\sum (-1)^n x^{2^n} \). If we view this series as a special case of the more general \( F(a,x)=\sum (-1)^n x^{a^n} \), then finding the value of \( f(x) \) REQUIRES evaluating \( F(a,x) \) beyond its natural boundary. However, in the context of this problem, what we really want is a nicer form of \( f(x) \), one that's much easier to evaluate. So, we want to find a different way to evaluate \( f(x) \) outside its natural boundary. Here's a really useful representation of \( F(a,x) \) when \( |a|<1 \):
\[ F(a,x) = \sum_{k=0}^\infty \frac{\ln(x)^k}{k!(1+a^k)}\]
But, as you all saw in the original post, this series doesn't converge to \( f(x) \) if we plug in \( a=2 \). Now, this is very perplexing, and when I saw this, I wondered what the relationship between \( f(x) \) and \( F(2,x) \) is. More precisely, I wanted to know: what is \( f(x)-F(2,x) \)? As you saw in the post, the answer is related to the residues inside \( F(a,x) \).
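Both halves of this paragraph can be checked numerically. In the sketch below (Python), the terms of \( \sum (-1)^n x^{a^n} \) tend to 1 when \( |a|<1 \), so I compare the \( k \)-series against the Abel regularization \( \tfrac12+\sum_n (-1)^n\,(x^{a^n}-1) \) -- that regularized form is my own framing, not something stated in the thread:

```python
from math import log, factorial

def k_series(a, x, K=60):
    # sum_k ln(x)^k / (k! (1 + a^k))
    return sum(log(x)**k / (factorial(k) * (1 + a**k)) for k in range(K))

def F_abel(a, x, N=80):
    # Abel-regularized sum_n (-1)^n x^(a^n) for |a| < 1:
    # split off the Grandi part 1 - 1 + 1 - ..., which Abel-sums to 1/2
    return 0.5 + sum((-1)**n * (x**(a**n) - 1) for n in range(N))

def f(x, N=12):
    # sum_n (-1)^n x^(2^n), convergent for |x| < 1
    return sum((-1)**n * x**(2**n) for n in range(N))

x = 0.5
print(F_abel(0.5, x))    # ~0.188643 -- |a| < 1: the two sides agree
print(k_series(0.5, x))  # ~0.188643
print(f(x))              # ~0.308609 -- a = 2: they no longer agree
print(k_series(2, x))    # ~0.311357
```

For \( a = 0.5 \) the two sides agree to machine precision, while plugging \( a = 2 \) into the \( k \)-series misses \( f(x) \) by a small but definite amount.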
Why is this important? Because this is an application of GAC to a real problem! It would be incredibly hard to understand the relationship between \( f(x) \) and \( F(2,x) \) without appealing to continuation beyond natural boundaries.
Now, I want to make something clear -- I'm not making any claims in the post! I realize now it seems like I'm claiming "This is the right way to continue these series beyond their natural boundaries." That was not my intent. My intent was to say: "Here are two functions that are related to each other. It's a pretty tricky task to figure out exactly what their difference is. However, if we view these functions from the perspective of GAC, then it becomes easy to solve the problem. Therefore, GAC is a useful thing to study, since it helps us solve at least one specific type of problem (namely, finding the difference between two related sums)."
I should emphasize that this is why I'm choosing series that converge everywhere: it's so that the question can be analyzed without appealing to any GAC. For real values of \( x \), the difference
\[ \sum (-1)^n x^{2^n} -\sum_{k=0}^\infty \frac{\ln(x)^k}{k!(1+2^k)}\]
is just a plain, well-defined expression. No tricks, no magic, no continuation beyond boundaries -- no BS. I can plug the expression into a calculator and get the answer. HOWEVER, the point of the post is that we can also find the value of this expression in a precise, closed-form way by considering GAC and picking up residues in the right way.
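In that spirit, here is the "calculator check" as a short Python sketch (the sample points and truncation depths are my own choices; the post only asserts that the difference is a definite real number at each \( x \)):

```python
from math import log, factorial

def f(x):
    # sum_n (-1)^n x^(2^n): converges for |x| < 1
    return sum((-1)**n * x**(2**n) for n in range(12))

def g(x):
    # sum_k ln(x)^k / (k! (1 + 2^k)): entire as a series in ln(x)
    return sum(log(x)**k / (factorial(k) * (1 + 2**k)) for k in range(60))

# the difference f(x) - g(x) is a perfectly concrete real number at each x
for x in (0.2, 0.3, 0.5, 0.7):
    print(f"x = {x}:  f - g = {f(x) - g(x):+.6f}")
```

The printed differences are small but clearly nonzero, which is exactly the gap that the residue computation is meant to explain in closed form.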
Examples such as these shatter the opinion that GAC is useless or hopelessly non-unique. The difference between those two expressions is a real number, so in the context of this problem, the only valid continuation is the one that gives you the correct real number.
Therefore, to Tommy's point,
Quote:So we end up
f(-s) =/= f(s).
HOWEVER
we could also just plug in values and then
f(s) = f(-s)
Quote:notice you gave a solution to mick's MSE question that involved not plugging in, despite it being well defined locally if we did.
the issue was the resummation of zeta, which you somewhat solved -- but not by plug-in. two schools of thought?
ON THE OTHER HAND those two might be the only ways to make sense of it, unless we overdose on resummation maybe.
Just to be clear, my "solution" wasn't a continuation. All I did was prove that IF we want to define a MEROMORPHIC continuation, it must have a branch cut. Then, I showed the value of that branch cut, and what the meromorphic continuation looks like.
Basically, the argument in the MSE post says (in the special case of mick's function):
1. We can keep f(s) = f(-s), but we have to introduce a branch cut to make this work.
2. Or, we can give up f(s) = f(-s), but then we have to deal with a natural boundary.
If I'm being honest, I believe the 'true' solution is (2). Or at the very least, I think (2) is bound to be much more interesting. But I don't really know how to do (2) yet, so that's why my solution assumed we wanted to do (1).