Understanding \(f(z,\theta) = e^{e^{i\theta}z} - 1\)
#11
Alright, let's try this ONE LAST TIME! I screwed up my elementary calculus--god, I hate linear substitutions. I'm going to go much slower this time... And I'm going to do the number theory correctly....




The value \(\theta\) is in the set \(\mathbb{H}^*\).

\[
\mathbb{H}^* = \{ \theta \in \mathbb{C}\,|\, \Im(\theta) > 0, \Re(\theta) \in 2\pi\mathbb{Q}\}\\
\]

This means \(\theta\) can be written as:

\[
\theta = 2 \pi \frac{p}{q} + it\\
\]

Naturally, \(p,q \in \mathbb{Z}\) with \(q > 0\) and \(t > 0\). Let's additionally assume that this is a reduced fraction. This means that \((p,q) = 1\)--these two numbers are coprime. So now, we know that:

\[
e^{i\theta q} = e^{2 \pi i p - qt} \in (0,1)\\
\]

Additionally, it is the smallest natural power of \(e^{i\theta}\) to land in this interval: for all \(0 \le j < q\), the value \(e^{i\theta j} \not\in (0,1)\). We should also note the period (in \(s\)) of \(e^{i\theta q s}\), which is \(\ell = \frac{2\pi}{q\theta}\):

\[
e^{i\theta q (s+\ell)} = e^{i\theta q s} e^{i\theta q \ell} = e^{i\theta q s} e^{2 \pi i} = e^{i\theta q s}\\
\]
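These membership and periodicity claims are easy to sanity-check numerically. A minimal sketch in Python, using a hypothetical sample point \(p=3\), \(q=5\), \(t=0.7\) (any reduced fraction with \(t>0\) behaves the same way):

```python
import cmath
import math

# hypothetical sample point: theta = 2*pi*p/q + i*t with (p, q) = (3, 5), t = 0.7
p, q, t = 3, 5, 0.7
theta = 2 * math.pi * p / q + 1j * t

w = cmath.exp(1j * theta * q)                   # e^{i theta q}
assert abs(w.imag) < 1e-12 and 0 < w.real < 1   # lands in the interval (0, 1)

# minimality: no smaller power 0 < j < q lands in (0, 1)
for j in range(1, q):
    assert abs(cmath.exp(1j * theta * j).imag) > 1e-12

# the period l = 2*pi/(q*theta) of s -> e^{i theta q s}
l = 2 * math.pi / (q * theta)
s = 0.3 + 0.2j
lhs = cmath.exp(1j * theta * q * (s + l))
rhs = cmath.exp(1j * theta * q * s)
assert abs(lhs - rhs) < 1e-12
```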

This gives us our set up...

How many exponentials \(E(s)\) are there that satisfy the following?

\[
\begin{align}
E(1) &= e^{i\theta q}\\
E(s_0 + s_1) &= E(s_0) E(s_1)\\
\end{align}
\]


They are given as \(E_k(s) = e^{i\theta q s} e^{2\pi i k s}\) for \(k \in \mathbb{Z}\). These are all possible solutions--which we will write in a more refined manner, as a function of the \(k=0\) solution. This is pretty elementary:

\[
E_k(s) = E_0(s+ k\ell s) = e^{i\theta q (s+k\ell s)}\\
\]
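This identity is easy to check numerically as well; again \(\theta = 2\pi\cdot 3/5 + 0.7i\) is just a hypothetical sample point:

```python
import cmath
import math

p, q, t = 3, 5, 0.7                      # hypothetical theta = 2*pi*p/q + i*t
theta = 2 * math.pi * p / q + 1j * t
l = 2 * math.pi / (q * theta)            # the period from above

def E(k, s):
    # E_k(s) = e^{i theta q s} * e^{2 pi i k s}
    return cmath.exp(1j * theta * q * s) * cmath.exp(2j * math.pi * k * s)

for k in (-2, 1, 3):
    s = 0.4 - 0.1j
    direct = E(k, s)
    via_E0 = cmath.exp(1j * theta * q * (s + k * l * s))   # E_0(s + k*l*s)
    assert abs(direct - via_E0) < 1e-8 * abs(direct)
```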

This is nothing too enlightening. But, let's now switch gears for a moment. I think Mphlee will appreciate this.

Let's try to reconstruct \(e^{i\theta s}\) using \(E_k(s)\). The naive way is to map \(s \to \frac{s}{q}\)... but this doesn't always work. For example, let \(k=1\); then:

\[
E_1(\frac{s}{q}) = e^{i\theta s} e^{2\pi i \frac{s}{q}}\\
\]

This naive operation only works when \(k \in q \mathbb{Z}\)... And so we have now created \(q\) distinct root operations which satisfy the following equations. Let \(H_{jk}(s)\) be holomorphic, for \(0 \le j < q\), where:

\[
\begin{align}
H_{jk}(s) &= E_{j+qk}(\frac{s}{q})\\
H_{jk}(1)^q &= e^{i\theta q}\\
H_{jk}(1) &= \sqrt[q]{e^{i\theta q}}\\
H_{jk}(s_0) H_{jk}(s_1) &= H_{jk}(s_0+s_1)\\
\end{align}
\]
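A quick numerical check of the \(H_{jk}\) identities, under the same hypothetical sample \(\theta = 2\pi\cdot 3/5 + 0.7i\):

```python
import cmath
import math

p, q, t = 3, 5, 0.7                      # hypothetical sample theta as before
theta = 2 * math.pi * p / q + 1j * t

def H(j, k, s):
    # H_{jk}(s) = E_{j+qk}(s/q) = e^{i theta s} * e^{2 pi i s (j+qk)/q}
    return cmath.exp(1j * theta * s) * cmath.exp(2j * math.pi * s * (j + q * k) / q)

for j in range(q):
    k = 1
    root = H(j, k, 1)
    # H_{jk}(1) is a q-th root of e^{i theta q}
    assert abs(root ** q - cmath.exp(1j * theta * q)) < 1e-9
    # the exponential (semigroup) law
    s0, s1 = 0.3, 0.25 + 0.1j
    assert abs(H(j, k, s0) * H(j, k, s1) - H(j, k, s0 + s1)) < 1e-9
```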

Now, here is where things get interesting! Since \(e^{i\theta q} \in (0,1)\), it has a \(q\)'th root \(0 < \lambda < 1\), which in turn has an exponential:

\[
L_0(s) = \lambda^s\\
\]

Additionally, there is a list of \(L_k(s) = L_0(s) e^{2 \pi i k s}\). But!!!!!

\[
\begin{align}
L_k(1)^q &= e^{i\theta q}\\
L_k(s_0)L_k(s_1) &= L_k(s_0 + s_1)\\
\end{align}
\]

As these exhaust the list of solutions, there is some matching game going on between these two families of functions. This is not as hard as you may guess. To begin, we want to find the values \(j\) and \(k\) of \(H_{jk}\) such that \(H_{jk} = L_0\). The trick to remember is that an exponential function is determined up to its period... So...

\[
\lambda^{s + \frac{2\pi i}{\log \lambda}} = \lambda^s\\
\]

The value \(\log \lambda\) is actually a little tricky to find. This is where I fucked up before; I had the general idea, but I screwed up some subtleties... So let's walk the reader through it.

The value \(i\theta\) is complex, and so is \(i\theta q\). But the value \(e^{i\theta q}\) is real and positive, so \(\log e^{i\theta q}\) is real: we can write it using the positive (real) logarithm, and \(\log |e^{i\theta q}|\) is the more correct form. The exponent \(i\theta q\) is complex, but there is some value \(2\pi i m\), with \(m \in \mathbb{Z}\), we can add to \(i\theta q\) to get \(\log |e^{i\theta q}|\). Now we have to divide this nonsense by \(q\)... The trick to all of this is just writing \(\Re(i \theta)\).... So we have that \(\log \lambda = \Re(i\theta) = -t\)!

This means the period is precisely \(\frac{2\pi i}{\Re(i\theta)} = -\frac{2 \pi i}{t}\)--and a period comes with its negative, so up to sign this is \(\frac{2 \pi i}{t}\). Nothing new there really, just reconfirming it. And so, which \(H_{jk}\) has a period of \(\frac{2\pi i}{\Re(i\theta)}\)?

This isn't that hard once we recognize we are asking: which values of \(j,k\) produce a purely imaginary period? So let's remind ourselves of \(H_{jk}\):

\[
H_{jk}(s) = E_{j+qk}(\frac{s}{q}) = e^{i \theta s} e^{2\pi i s\frac{j+qk}{q}}\\
\]

The period of which is \(\mu\), and all we have to do is solve the equation:

\[
\begin{align}
\mu &= \frac{2 \pi}{\theta +2 \pi \frac{j+qk}{q}}\\
\mu &\in i\mathbb{R}\\
\end{align}
\]

Since \(\theta \in \mathbb{H}^*\), with real part \(\Re \theta = 2 \pi \frac{p}{q}\) for the reduced fraction \(\frac{p}{q}\), we are solving the equation:

\[
\frac{p}{q} + \frac{j+kq}{q} = 0\\
\]

Which is just \(j+kq = -p\); in other words, \(j \equiv -p \pmod{q}\). Which gives us our solution...... The easiest solution is given by setting \(k=0\), by which we choose \(j = -p \bmod{q}\). The value \(p\) is coprime to \(q\) (and hence so is \(-p\)), so the solutions are exactly:

\[
j \equiv -p \pmod{q}\\
\]

Where \(p\) and \(q\) are coprime!
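One can brute-force which residue class of \(j\) actually yields a purely imaginary period \(\mu\). A sketch, with the hypothetical sample \(p=3\), \(q=5\), \(t=0.7\); note the search lands exactly on the class of \(-p\) modulo \(q\):

```python
import math

p, q, t = 3, 5, 0.7                      # hypothetical sample point
theta = 2 * math.pi * p / q + 1j * t

solutions = []
for j in range(q):
    for k in range(-3, 4):
        # period mu = 2*pi / (theta + 2*pi*(j+qk)/q)
        denom = theta + 2 * math.pi * (j + q * k) / q
        mu = 2 * math.pi / denom
        if abs(mu.real) < 1e-12:         # purely imaginary period
            solutions.append(j)

# every hit sits in the residue class j = -p (mod q)
assert solutions and all(j % q == (-p) % q for j in solutions)
```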

You can see much more clearly now, by what I mean when I say that Gaussian sums are going to play a large role in this.



NOW! Let's translate all of this into the discussion of the Mellin transform. We begin by writing:

\[
f_\theta(z) = e^{i\theta}z + O(z^2)\\
\]

The ONLY way we can take the Mellin transform is in the case:

\[
f_\theta^{\circ q}(z) = e^{i\theta q} z + O(z^2)\\
\]

If I use the Mellin transform/fractional calculus to iterate this, I am producing \(\lambda^{qs}z + O(z^2)\). I have just detailed how to turn \(\lambda^{qs}\) into \(e^{i\theta s}\)--which arises through linear substitutions. That then tells us how to get:

\[
f_\theta^{\circ s}(z) = e^{i\theta s} z + O(z^2)\\
\]




[Enter Analytic Number Theory]

We have to figure out, for each \(\theta\), the value \(p\) and the value \(q\)--reducing them to coprime factors. Then we have to manage all solutions \(j\) in the relevant residue class mod \(q\). This produces a function:

\[
f_\theta^{\circ s}(z) : \mathbb{H}^* \times \mathbb{C}_{\Re(s) > 0} \times \mathcal{N} \to \mathcal{N}\\
\]

The values are written:

\[
\begin{align}
\theta &\in \mathbb{H}^*\\
s &\in \mathbb{C}_{\Re(s) > 0}\\
z \in \mathcal{N} &= \{z \in \mathbb{C}\,|\,|z| < \delta,\,\delta=\delta(\theta,s)\}\\
\end{align}
\]

This can be extended, through normality theorems and all that bs, to a larger domain in \(\theta\).

Call the domain:

\[
\mathbb{H} = \{\theta \in \mathbb{C}\,|\, \Im(\theta) > 0\}\\
\]

And quite clearly \(\mathbb{H}^*\) is dense in \(\mathbb{H}\). It is the best kind of dense too...

Therefore, take a sequence of values \(\theta_n^* \in \mathbb{H}^*\) which is Cauchy, i.e. satisfies:

\[
\text{For all}\, \epsilon >0\,\text{there exists}\,N\,\text{such that for}\,n,m>N,\,\,|\theta_n^* - \theta_m^*|< \epsilon\\
\]

From this we naturally have \(\theta_n^* \to \theta \in \mathbb{H}\). Now let's do the same thing:

\[
f_{\theta^*_n}(z) \to f_{\theta}(z)\\
\]

And additionally--there is a dense cover of points where, at least approximately,

\[
f_{\theta^*_n}^{\circ s}(z) \to f_{\theta}^{\circ s}(z)\\
\]

BUT!!!

To prove this we're going to have to invoke a distribution theorem on how "tightly packed together" these solutions are... Can you say Prime Number Theorem...? We won't need to actually call upon it; much weaker results will suffice. But we're going to need a weak version of how densely the solutions \(j = j(\theta) \in \mathbb{Z}\) are packed together in the equation:

\[
j \equiv p \pmod{q}\\
\]

Where \(p\) and \(q\) are coprime. Luckily, all of this has been done. This is some Gauss/Dirichlet bullshit. We don't talk about it here often because I don't think anyone's noticed it.

But for instance, a famous result of Dirichlet is that, fixing \(p\) and fixing \(q\), there exist infinitely many prime numbers \(\rho\) which satisfy:

\[
\rho \equiv p \pmod{q}\\
\]

But proving this is one of the harder results in mathematics. Still, we're going to kinda need "density of primes" results like this in order to rigorously prove that we can limit the fractional iterations properly.
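Dirichlet's theorem itself is easy to witness numerically for small moduli; a toy sketch with the hypothetical residue class \(3 \pmod 5\):

```python
# trial-division primality test, stdlib only
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

p, q = 3, 5                       # coprime residue class 3 (mod 5)
primes = [n for n in range(2, 200) if is_prime(n) and n % q == p]
# Dirichlet: this list never dries up as the bound grows
assert primes[:5] == [3, 13, 23, 43, 53]
```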

Working with \(\theta \in \mathbb{H}^*\) is always possible, and relatively easy. When \(\theta \in \mathbb{H}\)... we can't avoid the congruence \(j \equiv p \pmod q\).... We need to know the asymptotics/densities, even if they're weaker estimates, to make sure that \(f_{\theta^*_n} \to f_\theta\).
#12
So, the bonus points of all this.......

\[
\vartheta(x,z) = \sum_{n=0}^\infty f_\theta^{\circ q n}(z) \frac{(-x)^n}{n!}\\
\]

Then:

\[
\frac{d^s}{dx^s}\Big{|}_{x=0} \vartheta(x,z) = \lambda^{qs}z + O(z^2)\\
\]

The value \(\lambda^{s+as} = e^{i\theta s}\), for some \(a\). Then:

\[
\frac{d^{\frac{s+as}{q}}}{dx^{\frac{s+as}{q}}}\Big{|}_{x=0} \vartheta(x,z) = e^{i\theta s}z + O(z^2)\\
\]

The differintegral above, always satisfies a semigroup operation:

\[
D^{s_0} D^{s_1} = D^{s_0 + s_1}\\
\]

And this is carried over in the Mellin transform; from that, it is just like the exponential function. It's just like the semigroups of Gottfried's matrices, but in the integral interpretation.

But as \(\mathbb{H}^* \to \mathbb{H}\), we have that \(a \to \infty\). To ensure convergence, we need to bound the growth of \(a\). Bounding the growth of \(a\) is a deep problem in analytic number theory... It has everything to do with the density of the solutions \( j \equiv p \pmod{q}\)....

This will absolutely produce an integral though.... as \(a \to \infty\), we can have an integral interchange. It's escaping me at the moment, but we're going to get something really nice.
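The interpolation step \(\varphi(n) = \lambda^{qn} \mapsto \lambda^{qs}\) behind the differintegral can be sanity-checked through the Mellin-integral form of Ramanujan's master theorem: for \(\varphi(n) = c^n\) the auxiliary sum is \(\sum \varphi(n)\frac{(-x)^n}{n!} = e^{-cx}\), and \(\int_0^\infty x^{s-1}e^{-cx}\,dx = \Gamma(s)c^{-s}\). A sketch with hypothetical values \(c = \lambda^{q} = 0.5\) and \(s = 0.6\):

```python
import math

# Mellin check: integral_0^inf x^(s-1) exp(-c x) dx should equal Gamma(s) * c**(-s)
c, s = 0.5, 0.6                        # hypothetical values, 0 < c, 0 < s < 1

# substitute x = e^u; the integrand e^{s u} e^{-c e^u} decays fast both ways,
# so a plain Riemann sum over u is extremely accurate
h, lo, hi = 0.01, -40.0, 10.0
total = 0.0
u = lo
while u <= hi:
    total += math.exp(s * u) * math.exp(-c * math.exp(u)) * h
    u += h

expected = math.gamma(s) * c ** (-s)
assert abs(total - expected) < 1e-6 * expected
```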


EDIT:

It's also important to note that:

\[
f_\theta^{\circ s}(z)\\
\]

Will be holomorphic when \(|e^{i\theta s}| < 1\), whereby \(|z| < \delta\) is fixed for \(\delta>0\)--so long as \(\theta \in \mathbb{H}^*\). On the set \(\mathbb{H}\), we can at worst have the boundary behaviour \(|e^{i\theta s} | \le 1\) when \(|z| \le \delta\). This is enough to give us some stability to this construction. But to get everything to work, we need to talk about \(j \equiv p \pmod{q}\)... We don't need the super advanced shit. Just some old-timey guesstimates....
#13
Hi, I hate to waste anyone's time, but in some places I'm really puzzled by really obvious things... my math is not that strong, probably. I have many obstacles in getting this.

1) Let's see if I get this right when you say "we know that \(e^{i\theta q} = e^{2 \pi i p - qt} \in (0,1)\)"...
Let \(\theta \in \mathbb H^*=2\pi\mathbb Q+\mathbb R^+ i\); then there exists a minimal \(q\in\mathbb N\) s.t. we have the form \(\theta q=2\pi p+itq \in 2\pi\mathbb Z+\mathbb R^+ i \). Clearly this \(q\) depends on \(\theta\)... we could call it the d-part of \(\theta\): \(q={\sf d}(\theta)\). We have \({\sf d}:\mathbb H^*\to\mathbb N\).

Note for myself: \(\mathbb H^*\) is closed under addition, but since it does not contain zero it is not a monoid, just a semigroup...

Now you say that \((e^{i\theta})^{{\sf d}(\theta)}\in (0,1)\), but it's not obvious... if I compute it as \(e^{i(\theta {\sf d}(\theta) )}=e^{i(2\pi p+it{\sf d}(\theta))}=e^{i2\pi p}/ e^{t{\sf d}(\theta)} \),

then, maybe, since the first factor has exponent a multiple of \(2\pi i\), we get \[ e^{i(\theta {\sf d}(\theta) )}=1/ e^{t{\sf d}(\theta)}\]

2) Then something seems off with the formula there, imho. Set \(\ell(\theta)=\frac{2\pi}{\theta {\sf d}(\theta) }=\frac{2\pi}{2\pi p +it{\sf d}(\theta) }\) and consider the function \(E(s):=(e^{i\theta  {\sf d}(\theta) })^s\); clearly it is an exponential with \(E(1)=e^{i\theta  {\sf d}(\theta) }\), as you are setting up, and we get some identities:

\[E(-s)=e^{i\theta  {\sf d}(\theta)(-s)}=(e^{t{\sf d}(\theta)})^{s}\]
\[E(s_0+s_1)=E(s_0)\cdot E(s_1)\]

And we get a group homomorphism.... And then the periodicity--maybe it is just a typo in your formula?
Periodicity is about finding elements in the kernel of the homomorphism. If the kernel is nontrivial, it contains a non-zero \(l\in {\rm ker}\,E\) s.t. \(E(l)=1\). This implies that \(E(s+l)=E(s)\). The kernel of a group morphism must itself be a subgroup of the domain group... it measures the injectivity, in a sense. Also, the kernel is closed under scalar multiplication by integers: if \(l\) is in the kernel--call it a period--then \(E(al)=E(l)^a=1\). This means the kernel is a \(\mathbb Z\)-module... does it have a generating set/basis?

\[E(s+\ell(\theta))=e^{i\theta  {\sf d}(\theta)(s+\ell(\theta))}=E(s)\cdot e^{i\theta  {\sf d}(\theta)\frac{2\pi}{\theta {\sf d}(\theta) }}=E(s)\]

.... up until this point I'm almost there, but I feel weak. Skimming ahead, I see I'll need some time and effort to put all the pieces together... so just more questions before I give my try:

3) I don't get the definition of the set \(\mathcal N\). What is \(\delta\)? Is it dependent on the choice of the arguments?

4) At the end I kinda get the big picture... we are working on a subset generated by rational multiples in order to approximate the function on a bigger, complete, set. But I still miss why we are doing this. To obtain a fractional iteration of... the multiplication? You are laying this out as a toy model so that you can apply the whole machinery to different functions?

All of this seems a very long story. Can you also streamline the structure of the construction... the big picture?
For example, I don't remember anymore how this is related to Euler's identity.

Mother Law \(\sigma^+\circ 0=\sigma \circ \sigma^+ \)

\({\rm Grp}_{\rm pt} ({\rm RK}J,G)\cong \mathbb N{\rm Set}_{\rm pt} (J, \Sigma^G)\)
#14
1)

Yes, so \(q_\theta = \mathbf{d}(\theta): \mathbb{H}^* \to \mathbb{N}\). Forgive me, but I will write \(q_\theta\). Note that these variables form a system which all depend on each other.

I apologize that the period is not obvious to you. Since \(q_\theta \in \mathbb{N}\), we have that:

\[
e^{i\theta q_\theta\left(s + \frac{2 \pi}{\theta q_\theta}\right)} = e^{i\theta q_\theta s} e^{i\theta q_\theta \frac{2\pi}{\theta q_\theta}} = e^{i\theta q_\theta s} e^{2 \pi i} = e^{i\theta q_\theta s}\\
\]

This is also the minimal period; every period \(l\), as you wrote, is a multiple: \(l \in \frac{2 \pi}{q_\theta \theta} \mathbb{Z}\)... There's no smaller period, because \(p\) and \(q\) are coprime.

2)

Now, it's natural to be confused at this point; maybe I should have gone slower. You write:

\[
\left(e^{i\theta q_\theta}\right)^s\\
\]

And say this is an exponential satisfying my list of conditions--and you are absolutely correct!

BUT!

\[
e^{i\theta q_\theta} = \lambda_\theta^{q_\theta} \in (0,1)\\
\]

While \(0 < \lambda_\theta < 1\). This is precisely the problem, so your confusion is in the right direction at least. We want to turn \(\lambda_\theta^{q_\theta s}\) into \(e^{i\theta s}\)...

I should also note that when I said "period" I meant "minimal period"; that's standard terminology in complex analysis. So yes, \(4\pi i\) is a period of \(e^{s}\), but it's not THE period, which is \(2 \pi i\). So when I say "the period" I actually mean the minimal period. It's just a poor affectation of language, lol. I apologize.

3) Okay, so the set \(\mathcal{N}\) is difficult to describe fully; I was just giving a sketch. Take a compact set \(\mathcal{K} \subset \mathbb{H}\), and restrict to \(\theta \in \mathcal{K} \cap \mathbb{H}^*\) and \(s \in \mathcal{S}\) such that \(|e^{i\theta s}| < 1\). Then \(\delta\) depends on \(\mathcal{K}\) and \(\mathcal{S}\), and on those alone.

Essentially what I meant, is that for \(\theta\) in a neighborhood \(\mathcal{K}\), and \(|e^{i\theta s}|<1\)--which restricts the variable \(s \in \mathcal{S}=\mathcal{S}(\mathcal{K})\)--there exists a value \(\delta > 0\), such that \(f_\theta^{\circ s}(z) : \mathcal{K} \times \mathcal{S} \times \mathcal{N} \to \mathcal{N}\), where \(\mathcal{N}\) is the domain \(|z| < \delta\)...

I apologize, I was being a tad too implicit here. It can get pretty symbol heavy for this, I was just giving a quick sketch...

4)

Okay, so this is the big reveal....

\[
\frac{d^s}{dx^s} \Big{|}_{x=0} \sum_{n=0}^\infty f_\theta^{\circ q_\theta n}(z) \frac{x^n}{n!} = \lambda^{q_\theta s}z + O (z^2)\\
\]

Where yes, a linear substitution will work fine if \(\theta \in \mathbb{H}^*\)--and \(\lambda^{s + as} = e^{i\theta s}\). But Mphlee....

How the fuck do I do this when \(\mathbb{H}^* \to \mathbb{H}\), where \(q_\theta \to \infty\)??????


The sum will converge to \(0\)! I know this may seem trivial, but it's very much not. We have to be able to make an educated guess on the growth of \(q_\theta\)--this will relate to a lot of number theory...

We have to perform the action \(\theta \to q\theta\) before we perform the Mellin transform. Then we perform the linear-substitution pullback to get \(\theta\) again. But when \(q \to \infty\), we have to understand "how fast it goes to infinity" and "how fast the pullback pulls us back to reality".

As \(\mathbb{H}^* \to \mathbb{H}\), it is not entirely obvious that \(f_\theta^{\circ s}\) converges... We know that there are solution sets--but are they Mellin transformable!? We don't know that yet!




PLEASE ASK MORE QUESTIONS! I'm happy to answer and I think I can explain better when somebody asks me questions...

Also, the way this relates to Euler's formula is a tad difficult to explain again. It only works for \(q = 2\); for arbitrary \(q\) it relates to roots of unity. But:

\[
\sum_{n=0}^\infty f^{\circ n}(z) \frac{x^n}{n!} = \vartheta[f](x,z)\\
\]

Then:

\[
\vartheta[f](ix,z) = \sum_{n=0}^\infty f^{\circ 2n}(z) \frac{(-1)^n x^{2n}}{(2n)!} + i\sum_{n=0}^\infty f^{\circ 2n+1}(z) \frac{(-1)^n x^{2n+1}}{(2n+1)!}\\
\]

Both sums on the right are Mellin transformable, ensuring the object on the left is Mellin transformable. We don't need this entirely--I was mistaken; I thought the Gamma functions would pull out, but they actually cancel out... I think I have a rough idea of how to find square roots though, which look like this...
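For the \(q=2\)/Euler's-formula connection, a toy check with the linear map \(f(z) = cz\) (hypothetical \(c\), \(x\), \(z\)), where \(\vartheta[f](x,z) = z e^{cx}\) exactly:

```python
import cmath
import math

# toy map f(z) = c*z, so f^{on}(z) = c**n * z and theta[f](x, z) = z * exp(c x)
c, x, z = 0.8, 1.3, 0.25          # hypothetical values

lhs = z * cmath.exp(1j * c * x)   # theta[f](i x, z)
even = z * math.cos(c * x)        # the sum over f^{o 2n}: the cosine-type series
odd = z * math.sin(c * x)         # the sum over f^{o 2n+1}: the sine-type series
assert abs(lhs - (even + 1j * odd)) < 1e-12   # Euler's formula, term for term
```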
#15
Sometimes, Mphlee, it helps to work with an example. So let's do exactly that.

Let \(\xi\) be an irrational number, \(\xi \in \mathbb{R}\setminus\mathbb{Q}\)--let's additionally ask that \(0 < \xi < 1\).

Then, to approximate \(\xi\), we take a sequence of rational numbers \(\xi_m = \frac{p_m}{q_m}\), where \((p_m,q_m) = 1\) and additionally \(p_m < q_m\). Now let's move forward and look at:

\[
\mathbb{H}^* \to \mathbb{H}\\
\]

But we will restrict: \(0 < \Re(\theta) < 2 \pi\). Let us write:

\[
\theta_m = 2 \pi \frac{p_m}{q_m} + it\\
\]

Which gives us \(\theta_m \to 2 \pi \xi + it\) as \(m \to \infty\)...

It is not entirely obvious how to show that \(f_{\theta_m}^{\circ s}(z) \to f_\theta^{\circ s}(z)\) under the integral... There are solutions, but it's not obvious which solution the integral chooses, nor in what manner it converges. In order to do this, we need to understand the maximal growth of \(q_m\). Now, there are restricted forms of irrational numbers, for which we write:

\[
|\xi - \frac{p_m}{q_m}| < \frac{M}{q_m^{2+\epsilon}}\\
\]

This is a result that we are constantly finding better and better bounds on. But, we know that a choice \(\epsilon \approx 1/4\) should work fine for all \(0 < \xi < 1\). This is a very deep result, I am not going to elaborate until I have all my jigsaw pieces in order.

So what we want to do is use that \(\theta_m = \theta + O(1/q_m^{2+\epsilon})\). We need to use this growth. This is derived using a lot of number theory/analytic number theory... and has been used in complex dynamics before... Any discussion of non-parabolic neutral fixed points will involve something similar. (Anything that talks about \(g(z) = e^{2 \pi i \xi} z + O(z^2)\) and its dynamics will have some kind of discussion like this...)

So, the question becomes:

\[
f_\theta^{\circ s}(z) = \frac{d^{\tau_\theta s}}{dx^{\tau_\theta s}}\Big{|}_{x=0} \sum_{n=0}^\infty f_\theta^{\circ q_\theta n}(z) \frac{x^n}{n!}\\
\]

As \(q \to \infty\) (because \(m \to \infty\)), and \(\tau \to \infty\) in \(\widehat{\mathbb{C}}\) (because \(q \to \infty\)), does this object converge? Here we can bound \(\theta_m \to \theta\) as \(\frac{1}{q_m^{1+\epsilon}}\)... Everything should work out fine, but I need to get the exact details right. And getting the exact details right involves discussing how we approximate \(\theta \in \mathbb{H}\) using elements \(\theta^* \in \mathbb{H}^*\). And doing such involves looking at Gauss sums and the discussion of \( j \equiv p \pmod{q}\).

Now we can pretty much avoid talking about this, if we just prepackage our discussion to:

\[
|\theta^* - \theta| < \frac{M}{q^{2+\epsilon}}\\
\]

But just know, to properly bound \(\epsilon\)... we need analytic number theory. I'm happy to just reference satisfactory results, but any such result is proved using Gauss sums/analytic number theory. We can comfortably set \(\epsilon = 0\); just know that we needed Dirichlet's number theory to prove that \(\epsilon = 0\) always works.

This relates deeply to number theory, and algebraic number theory also. Liouville's transcendental number is written as:

\[
A = \sum_{n=1}^\infty 10^{-n!}\\
\]

Because this value converges so damn fast, with the partial-sum values \(A_m = \frac{p_m}{q_m}\), we can guess that \(|A_m - A| < \frac{M}{q_m^m}\)... this means it is transcendental, because we cannot bound an algebraic number in this manner. To get every element \(\xi \in \mathbb{R}\), and create a bound for \(\xi^* \in \mathbb{Q}\) so that \(|\xi - \xi^*| < \frac{M}{|q|^{2+\epsilon}}\), we have to be super fucking careful. I believe, for all algebraic/transcendental numbers the correct \(\epsilon\) is about \(1/4\)--but I may be mistaken. It is definitely \(\epsilon > 0\), and that's all I need, lol. But taking \(\epsilon = 0\) produces the boundary level perfectly, and is an old-timey result which will suffice, lol.

So, essentially: for all \(\theta^* \in \mathbb{H}^*\) with \(0<\Re\theta^* < 2\pi\) and \(\Re \theta^* = 2 \pi\frac{p}{q}\), as \(\theta^* \to \theta\) we converge in the exact manner:

\[
|\theta^* - \theta| < \frac{M}{q^{2+\epsilon}}\\
\]

For some \(\epsilon > 0\)..........

(\(M\) is just a constant depending solely on the \(\theta^* \to \theta\)...)

Regards, James

EDIT:

For fuck's sake, I mixed up some stuff. The absolute bound is that:

\[
|\theta^* - \theta| < \frac{M}{q^2}\\
\]

This doesn't affect any of my results; if anything it makes them better. I was just playing it safe by thinking \(\epsilon = 1/4\), when Dirichlet proved that \(\epsilon = 0\) is an absolute bound. I need to refresh myself. I realize that what I was worked up about is the case \(2 + \epsilon\), where we try to find the best \(\epsilon\). If we just fix the exponent at \(2\), we are fine.

So we know that:

\[
|\frac{p_m}{q_m} - \xi| < \frac{1}{q_m^2}\\
\]

This is enough to do everything we need to do... but again you need Dirichlet's analytic number theory to prove this in the first place... lol
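Dirichlet's bound is easy to watch in action via continued-fraction convergents. A sketch with the hypothetical choice \(\xi = \sqrt{2}-1\), whose continued fraction is \([0;2,2,2,\dots]\):

```python
from fractions import Fraction
import math

# hypothetical irrational xi = sqrt(2) - 1, continued fraction [0; 2, 2, 2, ...]
xi = math.sqrt(2) - 1

def convergents(terms):
    # standard recurrence for continued-fraction convergents p_m/q_m
    h0, h1 = 1, terms[0]          # numerators
    k0, k1 = 0, 1                 # denominators
    out = [Fraction(h1, k1)]
    for a in terms[1:]:
        h0, h1 = h1, a * h1 + h0
        k0, k1 = k1, a * k1 + k0
        out.append(Fraction(h1, k1))
    return out

for frac in convergents([0] + [2] * 10):
    p_m, q_m = frac.numerator, frac.denominator
    # Dirichlet-quality approximation: |xi - p/q| < 1/q^2
    assert abs(xi - p_m / q_m) < 1 / q_m**2
```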

But this tells us the "growth of \(q_\theta\)" as a function of \(\theta\)... because we can write it as:

\[
|\Re\theta^* - \Re\theta| < \frac{2\pi}{q^2} = \frac{2\pi}{\mathbf{d}(\theta^*)^2}\\
\]

For \(\Re \theta^* = 2\pi\frac{p}{q}\) and \(\Re\theta = 2\pi \xi\), with \(0 < p < q\) and \((p,q)=1\) coprime. This means we can write it as:

\[
\Re\theta = \Re\theta^* + O(\mathbf{d}(\theta^*)^{-2}) = \Re\theta^* + O(q^{-2})\\
\]

EDIT: For a quick read I suggest https://en.wikipedia.org/wiki/Liouville_..._constant)

Liouville really set the stage and we are using basic bounds developed by Dirichlet...
#16
Mhh, I see there is something really deep in this. Many parts still elude me and I can't do anything about it. That compact-set story and the condition on the domain you sketched are too obscure for me. I'll get back to the mechanism of \(q_\theta\) and how to reconstruct the original function by, basically, applying the \(q_\theta\)-th root of the iteration.

Now I see better what you are trying to do... and it is highly related to some work I started to lay out about how rational iterations must be linked together... Also, the problem of determining \(q_\theta\) is highly number-theoretic... it reminds me of the shit appearing in the math behind the Riemann Hypothesis.
Divisibility plays a subtle and fundamental role here... Being able to reframe your discourse in terms of my notes dump on the iteration theory thread will be essential to me. Remember when tommy noticed that it seemed like number theory?

But I need time and calm to do this. Once I do this--which I think is possible, and I'll do it for sure in the next days, by setting up a clean and minimal algebraic foundation for this discourse, building on my algebraic/functorial approach laid out in [note dump]--I'll be ready to tackle the \(\mathbb Q\to \mathbb R\) story of Liouville and so on.
After that only one thing is missing... the meaning of the Mellin transform... Recently I watched a quick intro to Fourier transforms... using quantum mechanics. The main point I took home was that you can describe the fundamental objects of QM, instead of by points moving and drawing paths in space like in classical dynamics, by a wave (position) function, like a field over the space, or dually by its momentum function (still totally obscure to me). And the Fourier transform gets you from one world to the other and back. Now, the position wave function can be seen as periodic if we think of the space as in the game "snake", repeating forever in every direction (like compactifying the Gauss plane into the Riemann sphere): so a new tool becomes available. We express the position wave function as a sum of "basic elementary waves", a superposition of waves. This is the Fourier series. But we express it in exponential form instead of using sines and cosines. The Fourier transform can extract from the superposition of waves the single contribution, the weight, of each wave--like your differintegral is pulling out the coefficients of the auxiliary series... but interpolating them to complex indices.

It is clear that QM has its fkn role here... and it is months if not years that you have hinted at it. I believe that the algebraic/functorial understanding of iteration and dynamics admits a similar dual description... and it has to do with the Algebra/Geometry duality. But it is too early for me... I need to review/be introduced to a fuckton of basic material before I can handle this.

#17
In Quantum Mechanics, we consider firstly:

\[
\sum_{n=-\infty}^\infty a_n e^{2\pi i n x} = F(x)\\
\]

Whereby the value \(a_n\) is written as:

\[
a_n = \int_0^{1} F(x) e^{-2\pi i n x}\,dx\\
\]

We have a perfect interchange between \(a_n\) and \(F(x)\)... how I've just written it is just Fourier series. Let's advance this to the Fourier transform:

\[
\hat{F}(s) = \int_{-\infty}^\infty F(x) e^{-2\pi i x s}\,dx\\
\]

Where:

\[
F(x) = \int_{-\infty}^\infty \hat{F}(s) e^{2\pi i x s}\,ds\\
\]
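The coefficient/function interchange can be verified numerically: build \(F\) from a few hypothetical coefficients \(a_n\) and recover one of them by the integral (a Riemann sum is exact, up to rounding, for trigonometric polynomials):

```python
import cmath

# build F from three known (hypothetical) coefficients a_n
coeffs = {-1: 0.5, 1: 2.0, 3: -1.25}

def F(x):
    return sum(a * cmath.exp(2j * cmath.pi * n * x) for n, a in coeffs.items())

# a_n = integral_0^1 F(x) e^{-2 pi i n x} dx, approximated by a Riemann sum
N = 4096
a1 = sum(F(k / N) * cmath.exp(-2j * cmath.pi * 1 * (k / N)) for k in range(N)) / N
assert abs(a1 - 2.0) < 1e-9       # recovers a_1 from the superposition
```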



This is just the basic precept of passing from the coefficients to \(F(x)\) and back, but it does so in an unbelievably convenient manner. I could go on about Gottfried's work and how it relates to Heisenberg's work--or how my own work relates deeply to Von Neumann/Schrodinger/Feynman shit. But I won't.



Instead I will write that the Fourier transform acts on \(\mathbb{R}\) as an additive thing. When you evolve into algebraic Fourier transforms, the first thing you get is the Mellin transform. This is the "Fourier transform on the semigroup \(\{\mathbb{R}^+, \times\}\)", which means we can take \(x \mapsto x^m\) instead of \(x \mapsto mx\). The Mellin transform is deeply connected to the Fourier transform, but it relates to "multiplication": it acts upon the group \(\{\mathbb{R}^+, \times\}\), rather than the Fourier transform's \(\{\mathbb{R},+\}\)...




This gets very difficult though. And Ramanujan gave a very beautiful result for the multiplicative case, known as Ramanujan's Master Theorem. It's in many ways a fancy reduction of Poisson's summation formula. But it is exceptional: no such formula exists in additive Fourier analysis--similar results arise, but nothing exactly like it. Nonetheless, this is still Fourier analysis; just multiplicative instead of additive.




It helps at this point to understand Fourier transforms on algebraic groups. We can define a Fourier transform on any group \(\mathbb{G} = \{G, \times\}\), so long as the group satisfies sufficient criteria (locally compact abelian, say). This is the basis of Tate's Thesis--work that eventually earned him the Abel Prize. The "usual Fourier transform" is the Fourier transform on the group \(\{\mathbb{R},+\}\). The Fourier transform on the group \(\{\mathbb{R}^+,\times\}\) is the Mellin transform...


We can write Fourier series as Fourier transforms, so this is a strict generalization.



The Mellin transform is a Fourier transform... it's just a few variable changes atop the Fourier transform. So think of \(\vartheta\) as \(F\), and \(f^{\circ s}\) as \(\widehat{F}\). But \(\vartheta\) exists in the space \(\{\mathbb{R}^+,\times\}\), and so does \(f^{\circ s}\). This just requires a change of variables: \(F(e^x) = \vartheta(x)\) (sorry, this is wrong)

Sorry went too fast, the change of variables is a little trickier:

\[
\begin{align}
\vartheta(e^x) &= F(x)\\
f^{\circ s} &= \int_{-\infty}^\infty F(x) e^{-xs}\,dx\\
s &\mapsto 2\pi i s\\
f^{\circ 2 \pi i s} &= \widehat{F}(s)\\
\end{align}
\]

But still, the quantum physics stuff obviously appears. But not really quantum physics, just the math from quantum physics...

EDIT: If you are interested in the Fourier transform, especially as I use it, I highly suggest Stein & Shakarchi's Complex Analysis, specifically their chapter on the Fourier transform. You can note that they take \(F(x)\) holomorphic for \(a<|\Im(x)| < b\) as a primary identifier (with some kind of decay as \(|\Re x| \to \infty\)). This is no different than how I consider \(\vartheta(x)\) being differintegrable on a sector \(|\arg(-x)| < \kappa\). All we've done is a change of variables \(\vartheta(e^x) = F(x)\). In my case I work on special cases where \(F(-\infty) = 0\) is satisfied on the Riemann sphere, which equates to \(\vartheta(x)\) being holomorphic at \(x=0\). That allows for Ramanujan's master theorem--which is really a super-fancy Poisson-summation-type thing, lol.
#18
I want to describe the following identity--which we will use and abuse. Here we go:

\[
\theta^* = 2 \pi \frac{p}{q} + it\\
\]

Where \(t \in \mathbb{R}^+\), and \((p,q) =1\) are coprime integers. This implies that \(\theta^* \in 2\pi \mathbb{Q} + i\mathbb{R}^+\). Let \(\theta^* \to \theta\), where we assume \(\theta \in 2\pi \left(\mathbb{R}\setminus\mathbb{Q}\right) + i\mathbb{R}^+\). We are going to additionally assume that \(\Re \theta^* \in (0,2\pi)\). This assumption isn't necessary, but it simplifies our language and avoids us talking about "different" branches.

As Mphlee pointed out (which I thought was apparent), \(q = q(\theta^*)\) is a function of \(\theta^*\). It takes \(2\pi \mathbb{Q} + i\mathbb{R}^+ = \mathbb{H}^* \to \mathbb{N}\), and is therefore a type of functional/projection.

By Dirichlet's Approximation Theorem:

https://www.expii.com/t/dirichlets-appro...eorem-2468

This is pretty basic number theory, so you guys should be able to get it (no zeta-function nonsense, lmao). It's literally the Pigeonhole Principle--still, it's an indispensable theorem.

\[
|\theta - \theta^*| < \frac{1}{q^2}\\
\]

Thereby we can write:

\[
\theta = \theta^* + O\left(\frac{1}{q(\theta^*)^2}\right)\\
\]
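A quick stdlib-only illustration of the bound (the target \(\xi = \sqrt{2}-1\) and the continued-fraction machinery are my own example setup, not part of the argument): the convergents \(p/q\) of an irrational \(\xi\) satisfy Dirichlet's inequality \(|\xi - p/q| < 1/q^2\).

```python
# Numerically verify |xi - p/q| < 1/q^2 along continued-fraction
# convergents p/q of the irrational xi = sqrt(2) - 1 (my example).
from fractions import Fraction
import math

def convergents(x, n):
    """First n continued-fraction convergents p/q of x."""
    cs, a = [], x
    p0, q0, p1, q1 = 0, 1, 1, 0   # standard convergent recurrence seeds
    for _ in range(n):
        k = math.floor(a)
        p0, p1 = p1, k * p1 + p0
        q0, q1 = q1, k * q1 + q0
        cs.append(Fraction(p1, q1))
        frac = a - k
        if frac == 0:
            break
        a = 1 / frac
    return cs

xi = math.sqrt(2) - 1
for pq in convergents(xi, 10):
    p, q = pq.numerator, pq.denominator
    assert abs(xi - p / q) < 1 / q**2   # Dirichlet's bound
```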

This means that:

\[
|f_\theta(z) - f_{\theta^*}(z)| = O\left(\frac{1}{q(\theta^*)^2}\right)\\
\]

This O-term looks exactly like:

\[
O\left(\frac{1}{q(\theta^*)^2}\right) = \dfrac{\frac{d}{d\theta^*} f_{\theta^*}(z)}{q(\theta^*)^2} + O(\frac{1}{q(\theta^*)^4})\\
\]

Because:

\[
f_\theta(z) = f_{\theta^*}(z) + \frac{d}{d\theta^*} f_{\theta^*}(z) (\theta - \theta^*) + O(\theta - \theta^*)^2\\
\]

Which is just Taylor's theorem. Equally we have the formula:

\[
f_\theta^{\circ n}(z) = f_{\theta^*}^{\circ n}(z)  + \dfrac{\frac{d}{d\theta^*} f_{\theta^*}^{\circ n}(z)}{q(\theta^*)^2} + O(\frac{1}{q(\theta^*)^4})
\]
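Since everything here rests on that first-order Taylor step, here is a rough numerical check of it (the step \(d = 10^{-4}\), the base point \(\theta^*\), and \(z = 0.1\) are my own example choices):

```python
# Check f_theta(z) = f_{theta*}(z) + (d/dtheta*) f_{theta*}(z) (theta - theta*)
# + O(theta - theta*)^2 for f_theta(z) = exp(e^{i theta} z) - 1.
import cmath

def f(theta, z):
    return cmath.exp(cmath.exp(1j * theta) * z) - 1

def df_dtheta(theta, z):
    # d/dtheta [exp(e^{i theta} z) - 1] = i e^{i theta} z exp(e^{i theta} z)
    w = cmath.exp(1j * theta) * z
    return 1j * w * cmath.exp(w)

theta_star = 2 * cmath.pi * 3 / 7 + 0.5j  # my example base point
d = 1e-4                                  # my example perturbation
z = 0.1
lhs = f(theta_star + d, z) - f(theta_star, z)
rhs = df_dtheta(theta_star, z) * d
assert abs(lhs - rhs) < 100 * d**2        # remainder is second order in d
```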



As for the variable \(z\), we are restricting to \(|z| < \delta\), where \(\delta\) can be chosen uniformly for all \(|\theta - \theta^*| < \epsilon\). This means, quite beautifully, that for:

\[
\vartheta[f_\theta](x,z) = \sum_{n=0}^\infty f_\theta^{\circ n}(z) \frac{x^n}{n!}\\
\]

We have:

\[
\vartheta[f_\theta](x,z) = \vartheta[f_{\theta^*}](x,z) + \frac{d}{d\theta^*} \vartheta[f_{\theta^*}](x,z) \frac{1}{q(\theta^*)^2} + O(\frac{1}{q(\theta^*)^4})\\
\]

The bound on this O-term IS NOT UNIFORM IN \(x\). But it is uniform on integrable domains. So if \(\vartheta[f_{\theta^*}]\) is differintegrable in a sector, we can make this O term uniform for that sector.
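To see the continuity in \(\theta\) concretely, here is a crude truncated-sum experiment (the truncation \(N = 40\) and all parameter values are my own choices; this is an illustration, not a proof of the uniformity claims):

```python
# Rough check that vartheta[f_theta](x, z) varies continuously in theta,
# so a small |theta - theta*| gives a small change in the sum.
import cmath
import math

def f(theta, z):
    """f_theta(z) = exp(e^{i theta} z) - 1."""
    return cmath.exp(cmath.exp(1j * theta) * z) - 1

def vartheta(theta, x, z, N=40):
    """Truncated sum_{n<N} f_theta^{o n}(z) x^n / n!."""
    total = 0
    for n in range(N):
        total += z * x**n / math.factorial(n)
        z = f(theta, z)   # advance to the next iterate
    return total

theta_star = 2 * math.pi * 3 / 7 + 0.4j   # my example values
theta = theta_star + 1e-5
a = vartheta(theta, 1.0, 0.1)
b = vartheta(theta_star, 1.0, 0.1)
assert abs(a - b) < 1e-3   # difference is O(theta - theta*)
```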

But this doesn't give us our answer. Instead we want to write:

\[
\vartheta[f_{\theta^*}^{\circ q(\theta^*)}](x,z) = \sum_{n=0}^\infty f_{\theta^*}^{\circ q(\theta^*)n}(z) \frac{x^n}{n!}\\
\]

This causes instantly one of the \(q(\theta^*)\)'s to cancel out...

\[
\frac{d}{d\theta^*} \vartheta[f_{\theta^*}^{\circ q(\theta^*)}](x,z) \frac{1}{q(\theta^*)^2} = \frac{d}{d\theta^*} \vartheta[f_{\theta^*}](x,z) O\left(\frac{1}{q(\theta^*)}\right)
\]


I need to fine-tune this argument, but we should get something that expands as:

\[
f_{\theta}^{\circ s}(z) = f_{\theta^*}^{\circ s}(z) + O(\frac{1}{q(\theta^*)})\\
\]

This will cause an integral to arise as \(\theta^* \to \theta\) that I'm not sure of yet. It's evading me. I need more mulling time. But we should expect a result very close to this... So essentially, when we differintegrate, we'll probably lose a square on the error (not necessarily, but these are the best bounds I can think of).

This goes in conjunction with the fact that \(\frac{d^s}{dx^s} \frac{d}{d\theta} = \frac{d}{d\theta}\frac{d^s}{dx^s}\)--because I am only considering the best kind of converging integral transforms. None of these weird Caputo-style fractional derivatives. My derivative is just fancy Fourier transforms--differentiating/integrating under a Fourier transform is actually pretty easy if you are in a normal space!
#19
So let's write the function we care about:

\[
Y(\theta^*,x,z) = \sum_{n=0}^\infty f_{\theta^*}^{\circ q(\theta^*)n}(z) \frac{x^n}{n!}\\
\]
Then:

\[
f_{\theta^*}^{\circ s}(z) = \frac{d^{\frac{s}{q}(1+2\pi i p)}}{dx^{\frac{s}{q}(1+2\pi i p)}}\Big{|}_{x=0} Y(\theta^*,x,z)\\
\]
The value \(\Re \theta^* = 2\pi \frac{p}{q}\), so that \(p = \frac{\Re\theta^*\, q(\theta^*)}{2\pi}\). So we reduce everything to:

\[
f_{\theta^*}^{\circ s}(z) = \frac{d^{\frac{s}{q} + i s\Re\theta^*}}{dx^{\frac{s}{q} + i s\Re\theta^*}} \Big{|}_{x=0} Y(\theta^*,x,z)\\
\]
The value \(\frac{s}{q} \to 0\) as \(\theta^* \to \theta\) (\(q(\theta^*) \to \infty\)). The value \(i\Re \theta^* \to 2\pi i \xi\), where \(\xi = \lim_{p,q\to\infty} \frac{p}{q}\), for some \(\xi \in (0,1)\) with \(\xi \in \mathbb{R}\setminus\mathbb{Q}\).

So with these exact incantations and variable dependencies/structures--I'll write what I'm trying to say, and why it's so difficult:
\[
\begin{align}
\lim_{q\to\infty} f_{\theta^*}^{\circ s}(z) &= f_\theta^{\circ s}(z)\\
\frac{d^{\frac{s}{q} + is\Re \theta^*}}{dx^{\frac{s}{q} + is \Re\theta^*}}&= \frac{d^{\frac{s}{q}}}{dx^{\frac{s}{q}}}\frac{d^{is\Re \theta^*}}{dx^{is \Re\theta^*}}\\
&= \frac{d^{\frac{s}{q}}}{dx^{\frac{s}{q}}}\frac{d^{2\pi i s \frac{p}{q}}}{dx^{2\pi i s \frac{p}{q}}}
\end{align}
\]
The exponential \(e^{\lambda^qx}\) is the first-order expansion in \(z\) of \(\vartheta[f_{\theta^*}^{\circ q(\theta^*)}](x,z)\)--for \(\lambda = e^{-\Im(\theta^*)} = e^{-t}\), which is fixed. By which, let's look at:

\[
\frac{d^{\frac{s}{q}}}{dx^{\frac{s}{q}}} e^{\lambda^qx} = \lambda^{s} e^{\lambda^qx}\\
\]
And:

\[
\begin{align}
\frac{d^{\frac{s}{q}}}{dx^{\frac{s}{q}}}\frac{d^{2\pi i s \frac{p}{q}}}{dx^{2\pi i s \frac{p}{q}}} e^{\lambda^qx}&= e^{i\theta^* s} e^{\lambda^qx}\\
\lim_{q\to\infty} \frac{d^{\frac{s}{q}}}{dx^{\frac{s}{q}}}\frac{d^{2\pi i s \frac{p}{q}}}{dx^{2\pi i s \frac{p}{q}}} e^{\lambda^qx} &= e^{i\theta s}\\
\end{align}
\]
The expression on the left turns into a double integral as \(q \to \infty\)... As \(e^{\lambda^q x} \to 1\), the differintegrals tend toward a similar degeneration...

We can write this in a plain manner:
\[
e^{2\pi i s \frac{p}{q} - st} e^{\lambda^qx} = e^{i\theta^* s} e^{\lambda^qx} = \frac{1}{\Gamma(-s\frac{1+2\pi i p}{q})} \int_0^\infty Y(\theta^*,-x,z)\,x^{-s\frac{1+2\pi i p}{q}-1}\,dx\\
\]
Letting \(q \to \infty\) produces an integral; we get something like \(\frac{1}{n} \sum_{j=0}^n a_{jn}\).... To do the nitty-gritty, cold-as-concrete proofs, we will need \(q\)'s growth in relation to \(\theta^*\)--hence Dirichlet's Approximation Theorem is the simplest result from analytic number theory we will need. Hopefully we don't need more; but we might need some finessed results derived from Dirichlet.
Bout to go on my 2-3 month free-time vacation. I will get this out clearly.
#20
So, let's start now with:

\[
f_{\theta^*}^{\circ s}(z) = \frac{d^{\frac{s}{q} + 2\pi is\frac{p}{q}}}{dx^{\frac{s}{q} + 2\pi is\frac{p}{q}}}\Big{|}_{x=0} \sum_{n=0}^\infty f_{\theta^*}^{\circ qn}(z)\frac{x^n}{n!}\\
\]

The value \(\theta^* = 2\pi \frac{p}{q} +it\), for \(p,q \in \mathbb{Z}\), coprime--and \(t>0\). The function:

\[
f_\theta(z) = e^{e^{i\theta}z}-1\\
\]

The value \(\theta^* \to \theta = 2\pi \xi + it\), where the error is given as \(O(1/q^2)\). We will refer to \(q = q(\theta^*)\), where as \(\theta^* \to \theta\) we have \(q(\theta^*) \to \infty\). By which we have the convenient formula \(\theta = \theta^* + O(1/q(\theta^*)^2)\). To set the stage, we will observe the linear term first, for \(m \in \mathbb{N}\):

\[
f_{\theta^*}^{\circ m}(z) = e^{i\theta^*m} z + O(z^2)\\
\]
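This linear behaviour is easy to poke at numerically; a rough sketch (the values \(p=3\), \(q=7\), \(t=0.4\), and \(z = 10^{-6}\) are my own example choices):

```python
# Check that the m-fold iterate of f_theta is, to first order in z,
# just multiplication by e^{i theta m}.
import cmath

def f(theta, z):
    return cmath.exp(cmath.exp(1j * theta) * z) - 1

def iterate(theta, z, m):
    for _ in range(m):
        z = f(theta, z)
    return z

theta = 2 * cmath.pi * 3 / 7 + 0.4j   # my example values: p=3, q=7, t=0.4
z, m = 1e-6, 7
lin = cmath.exp(1j * theta * m) * z   # e^{i theta m} z
assert abs(iterate(theta, z, m) - lin) < 1e-10   # O(z^2) remainder
```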

The value:

\[
f_\theta^{\circ m}(z) - f_{\theta^*}^{\circ m}(z) = \left(e^{i\theta^* m} - e^{i\theta m}\right) z + O(z^2)\\
\]

The first term is precisely:

\[
e^{i\theta^* m} - e^{i\theta m} = mi\, e^{i\theta m} (\theta^* - \theta) + O\left(m(\theta^* - \theta)\right)^2
\]

Setting \(m = q(\theta^*)\) we get:

\[
iq(\theta^*)(\theta^* - \theta) = O(1/q(\theta^*))\\
\]

Thereby:

\[
f_{\theta^*}^{\circ q(\theta^*)}(z) = ze^{i\theta q(\theta^*)}  + zO(1/q(\theta^*)) + O(z^2)\\
\]

And by linear operations:

\[
f_{\theta^*}^{\circ q(\theta^*)n}(z) = e^{i\theta q(\theta^*)n}z + nzO(1/q(\theta^*)) + O(z^2)\\
\]


Thereby the function:

\[
\vartheta[f_{\theta^*}^{\circ q(\theta^*)}](x,z) = z \sum_{n=0}^\infty \left(e^{i\theta q(\theta^*)n} +nO(1/q(\theta^*))\right)\frac{x^n}{n!} + O(z^2)\\
\]

Bounding the error term in \(x\) on this O notation is very difficult; I'm just giving a rough idea at the moment. But it should be fine if we choose \(x\) belonging to a differintegrable sector. This whole nonsense reduces to:

\[
\vartheta[f_{\theta^*}^{\circ q(\theta^*)}](x,z) = z\left(e^{e^{i\theta q(\theta^*)}x} + O(1/q(\theta^*)) \right)+ O(z^2)\\
\]

Now, let's take the correct differintegral:

\[
f_{\theta^*}^{\circ s}(z) = \frac{d^{\frac{s}{q} + 2\pi i s \frac{p}{q}}}{dx^{\frac{s}{q} + 2\pi i s\frac{p}{q}}}\Big{|}_{x=0} \vartheta[f_{\theta^*}^{\circ q}](x,z)
\]

Inadvertently, we are performing an infinite sum of negligible terms--\(\vartheta \to 0\), just as all of its terms do, as \(q \to \infty\). But we are approaching infinity on the left side like \(q \to \infty\). We are literally getting an equation that looks like:

\[
\lim_{q\to\infty} \sum_{j=0}^{q} O (1/q) = O(1)\\
\]

..... Man, I thought compositional integration was hard. This shit has me scratching my head so much more, lmao. I'm getting closer. And I'm pissed off now, lmao. First-order expansion should work fine though:

\[
\frac{d}{dz}\Big{|}_{z=0} f_{\theta^*}^{\circ s}(z) \to \frac{d}{dz}\Big{|}_{z=0} f_{\theta}^{\circ s}(z)
\]

And this will be a weird double integral where you Mellin transform, then do some weird-ass Gauss limit, lmao. Which just equates to using exponential functions instead of varthetas, lol. Nothing more than the formula:

\[
\lim_{q\to\infty} \frac{1}{\Gamma(s/q)} \int_0^\infty e^{-e^{i\theta q}x}x^{\frac{s}{q}-1}\,dx = e^{-i\theta s}\\
\]

This limit converges in a very ugly way; it is a normal sequence, but letting \(q = m\) for arbitrary \(m\) causes this to converge to a different value.... We must choose the sequence \(q \to \infty\) where \(\theta = 2 \pi \frac{p}{q} + it + O(1/q^2)\)--it is the sole sequence to satisfy these bounds. Another sequence might change our value here.
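One concrete way to see the sequence dependence (with my own example values \(\xi = \sqrt{2}-1\), \(t = 0.5\)): the integral over \((0,\infty)\) only converges when \(\Re(e^{i\theta m}) > 0\), which is guaranteed along the convergent denominators of \(\xi\) but can fail for a generic power \(m\).

```python
# Re(e^{i theta m}) = e^{-t m} cos(2 pi xi m); its sign decides whether
# e^{-e^{i theta m} x} decays as x -> infinity on (0, inf).
import math

xi = math.sqrt(2) - 1        # irrational part of theta / 2pi (my example)
t = 0.5                      # theta = 2*pi*xi + i*t (my example)

def re_multiplier(m):
    return math.exp(-t * m) * math.cos(2 * math.pi * xi * m)

for q in (2, 5, 12, 29):     # convergent denominators of xi = [0; 2, 2, 2, ...]
    assert re_multiplier(q) > 0   # the integral converges along the sequence

assert re_multiplier(1) < 0  # a generic power can point the wrong way
```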

If \(e^{i\theta} = e^{-t}= \lambda \in (0,1)\), then this is easier, and any sequence \(m\to\infty\) satisfies:

\[
\lim_{m\to\infty} \frac{1}{\Gamma(s/m)} \int_0^\infty e^{-\lambda^{ m}x}x^{\frac{s}{m}-1}\,dx = \lambda^{-s}\\
\]
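For real \(\lambda\) this is exact at every \(m\), since the integral is \(\Gamma(s/m)(\lambda^m)^{-s/m} = \Gamma(s/m)\lambda^{-s}\)--note the minus sign in the exponent, matching the \(e^{-i\theta s}\) limit above. A stdlib-only numerical check (the \(x = e^v\) substitution, grid bounds, and tolerance are my own quadrature choices):

```python
# Verify (1/Gamma(s/m)) * \int_0^inf e^{-lam^m x} x^{s/m - 1} dx = lam^{-s}
# via the substitution x = e^v and a trapezoid rule on the real v-line.
import math

def mellin(a, u, v_lo=-400.0, v_hi=30.0, n=40000):
    """Integral of e^{-a x} x^{u-1} over (0, inf), via x = e^v."""
    h = (v_hi - v_lo) / n
    total = 0.0
    for i in range(n + 1):
        v = v_lo + i * h
        w = 0.5 if i in (0, n) else 1.0   # trapezoid endpoint weights
        total += w * math.exp(-a * math.exp(v) + u * v)
    return total * h

lam, m, s = 0.8, 5, 0.3                   # my example values
u = s / m
val = mellin(lam**m, u) / math.gamma(u)
assert abs(val - lam**(-s)) < 1e-6        # the minus sign is the point
```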

The trouble is, in the complex plane we are rotating domains and hitting boundaries, so it's not as straightforward... The sequence \(q \to \infty\) is unique in that it satisfies \(e^{-e^{i\theta q}x} = e^{-\lambda^q(1+O(\frac{1}{q}))x}\), which makes the limit feasible...... Additionally, there are linear substitutions which "turn \(\lambda^s\) into \(e^{i\theta s}\)"..... So we can actually expect \(e^{2 \pi i (\xi q - p)} = 1 + O(1/q)\) to be a reasonable complex approximation.

At this point we should move the integral from \(\int_0^\infty\) to the differintegral, which is much more general and is impartial to the direction of integration.... But we don't have to; I'm fixing everything on \(\int_0^\infty\) here. And every step of the limit converges, and the Mellin transformed limit converges, and the \(\vartheta\) converges the same. So we're fine....at least for the first terms in \(z\), lmfaooo!!!!!