Understanding \(f(z,\theta) = e^{e^{i\theta}z} - 1\)
Alright, let's try this ONE LAST TIME! I screwed up my elementary calculus (god, I hate linear substitutions). I'm going to go much slower this time... And I'm going to do the number theory correctly....




The value \(\theta\) is in the set \(\mathbb{H}^*\).

\[
\mathbb{H}^* = \{ \theta \in \mathbb{C}\,|\, \Im(\theta) > 0, \Re(\theta) \in 2\pi\mathbb{Q}\}\\
\]

This means \(\theta\) can be written as:

\[
\theta = 2 \pi \frac{p}{q} + it\\
\]

Naturally, \(p,q \in \mathbb{Z}\) with \(q > 0\) and \(t > 0\). Let's additionally assume that this is a reduced fraction, which means that \((p,q) = 1\)--these two numbers are coprime. So now, we know that:

\[
e^{i\theta q} = e^{2 \pi i p - qt} \in (0,1)\\
\]

Additionally, it is the smallest natural power of \(e^{i\theta}\) to land in this interval: for all \(0 < j < q\) the value \(e^{i\theta j} \not\in (0,1)\). We should also note the period in \(s\) of \(e^{i\theta q s}\), which is \(\ell = \frac{2\pi}{q\theta}\).

\[
e^{i\theta q (s+\ell)} = e^{i\theta q s} e^{i\theta q \ell} = e^{i\theta q s} e^{i\theta q \frac{2 \pi}{q\theta}} = e^{i\theta q s}\\
\]
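A quick numerical sanity check never hurts here. In the sketch below the sample point \(p=3\), \(q=5\), \(t=0.7\) is an arbitrary choice in \(\mathbb{H}^*\), nothing special; it confirms \(e^{i\theta q}\) is real and in \((0,1)\), and that \(\ell\) really is a period:

```python
import cmath

# Sample point in H* (p, q, t are arbitrary illustrative values, nothing special):
p, q, t = 3, 5, 0.7
theta = 2 * cmath.pi * p / q + 1j * t   # theta = 2*pi*p/q + i*t

# e^{i*theta*q} should equal e^{2*pi*i*p - q*t} = e^{-q*t}: real and in (0,1)
w = cmath.exp(1j * theta * q)
assert abs(w.imag) < 1e-9 and 0 < w.real < 1   # ~ e^{-3.5} ~ 0.0302

# ell = 2*pi/(q*theta) is a period of s -> e^{i*theta*q*s}
ell = 2 * cmath.pi / (q * theta)
s = 1.3 - 0.4j                                  # arbitrary test point
assert abs(cmath.exp(1j * theta * q * (s + ell)) - cmath.exp(1j * theta * q * s)) < 1e-9
```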

This gives us our set up...

How many exponentials \(E(s)\) are there that satisfy:

\[
\begin{align}
E(1) &= e^{i\theta q}\\
E(s_0 + s_1) &= E(s_0) E(s_1)\\
\end{align}
\]

?

They are given as \(E_k(s) = e^{i\theta q s} e^{2\pi i k s}\) for \(k \in \mathbb{Z}\). These are all the possible solutions--which we will rewrite in a more refined manner, in terms of the \(k=0\) case. This is pretty elementary:

\[
E_k(s) = E_0(s+ k\ell s) = e^{i\theta q (s+k\ell s)}\\
\]
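Easy to check numerically too (same illustrative sample point as before; just a sanity sketch):

```python
import cmath

p, q, t = 3, 5, 0.7                      # same illustrative sample point
theta = 2 * cmath.pi * p / q + 1j * t
ell = 2 * cmath.pi / (q * theta)         # the period from before

def E(k, s):
    # E_k(s) = e^{i*theta*q*s} * e^{2*pi*i*k*s}
    return cmath.exp(1j * theta * q * s) * cmath.exp(2j * cmath.pi * k * s)

k, s = 3, 0.6 + 0.2j                     # arbitrary branch and test point
assert abs(E(k, s) - E(0, s + k * ell * s)) < 1e-9
```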

This is nothing too enlightening. But, let's now switch gears for a moment. I think Mphlee will appreciate this.

Let's try to reconstruct \(e^{i\theta s}\) using \(E_k(s)\). Well, the naive way is to map \(s \to \frac{s}{q}\)... But this doesn't always work. For example, let \(k=1\), then:

\[
E_1(\frac{s}{q}) = e^{i\theta s} e^{2\pi i \frac{s}{q}}\\
\]
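To see the failure concretely (Python sketch, same sample point \(p=3\), \(q=5\), \(t=0.7\)): at \(s = 1\), the value \(E_1(1/q)\) misses \(e^{i\theta}\) by the factor \(e^{2\pi i/q}\), while \(E_q(1/q)\) hits it on the nose:

```python
import cmath

p, q, t = 3, 5, 0.7                      # illustrative sample point again
theta = 2 * cmath.pi * p / q + 1j * t

def E(k, s):
    # E_k(s) = e^{i*theta*q*s} * e^{2*pi*i*k*s}
    return cmath.exp(1j * theta * q * s) * cmath.exp(2j * cmath.pi * k * s)

root = cmath.exp(1j * theta)             # what a "correct" q'th root exponential gives at s = 1
assert abs(E(1, 1 / q) - root) > 1e-3    # k = 1: off by the factor e^{2*pi*i/q}
assert abs(E(q, 1 / q) - root) < 1e-9    # k in q*Z: the naive substitution works
```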

This naive operation only works when \(k \in q \mathbb{Z}\)... And so, we have now created \(q\) distinct root operations which satisfy the following equations. Let \(H_{jk}(s)\) be holomorphic, for \(0 \le j < q\)--where now, the equations are:

\[
\begin{align}
H_{jk}(s) &= E_{j+qk}(\frac{s}{q})\\
H_{jk}(1)^q &= e^{i\theta q}\\
H_{jk}(1) &= \sqrt[q]{e^{i\theta q}}\\
H_{jk}(s_0) H_{jk}(s_1) &= H_{jk}(s_0+s_1)\\
\end{align}
\]
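These properties are also directly checkable (sketch, same sample point; the branch \(j=2\), \(k=-1\) is an arbitrary pick):

```python
import cmath

p, q, t = 3, 5, 0.7
theta = 2 * cmath.pi * p / q + 1j * t

def H(j, k, s):
    # H_{jk}(s) = E_{j+qk}(s/q) = e^{i*theta*s} * e^{2*pi*i*s*(j+q*k)/q}
    return cmath.exp(1j * theta * s) * cmath.exp(2j * cmath.pi * s * (j + q * k) / q)

j, k = 2, -1                              # arbitrary choice of branch
assert abs(H(j, k, 1) ** q - cmath.exp(1j * theta * q)) < 1e-9

s0, s1 = 0.3 + 0.1j, -0.2 + 0.5j          # the exponential/group law
assert abs(H(j, k, s0) * H(j, k, s1) - H(j, k, s0 + s1)) < 1e-9
```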

Now, here is where things get interesting! Since \(e^{i\theta q} \in (0,1)\) it has a real \(q\)'th root \(0 < \lambda < 1\), and hence an exponential:

\[
L_0(s) = \lambda^s\\
\]

Additionally, there is a list of \(L_k(s) = L_0(s) e^{2 \pi i k s}\). But!!!!!

\[
\begin{align}
L_k(1)^q &= e^{i\theta q}\\
L_k(s_0)L_k(s_1) &= L_k(s_0 + s_1)\\
\end{align}
\]
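Same sanity sketch for the \(L_k\) list (note \(\lambda = e^{-t}\) at our sample point, since \(e^{i\theta q} = e^{-qt}\)):

```python
import math, cmath

p, q, t = 3, 5, 0.7
theta = 2 * cmath.pi * p / q + 1j * t
lam = math.exp(-t)                        # the real q'th root: lam^q = e^{-q*t} = e^{i*theta*q}
assert 0 < lam < 1

def L(k, s):
    # L_k(s) = lam^s * e^{2*pi*i*k*s}
    return lam ** s * cmath.exp(2j * cmath.pi * k * s)

s0, s1 = 0.3 + 0.1j, -0.2 + 0.5j
for k in range(-2, 3):
    assert abs(L(k, 1) ** q - cmath.exp(1j * theta * q)) < 1e-9
    assert abs(L(k, s0) * L(k, s1) - L(k, s0 + s1)) < 1e-9
```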

Since these two lists exhaust the solutions, there is some matching game going on between these functions. This is not as hard as you may guess. To begin, we want to find the values \(j\) and \(k\) of \(H_{jk}\) such that \(H_{jk} = L_0\). The trick to remember is that an exponential function is unique up to its period... So...

\[
\lambda^{s + \frac{2\pi i}{\log \lambda}} = \lambda^s\\
\]

The value \(\log \lambda\) is actually a little tricky to find. This is where I fucked up before; I had the general idea, but I screwed up some subtleties. So let's walk the reader through it.

The value \(i\theta\) is complex, and the value \(i \theta q\) is complex. But the value \(e^{i\theta q}\) is real and positive. Thereby, the value \(\log e^{i\theta q}\) is real; we can write this using the positive (principal) logarithm, so that \(\log |e^{i\theta q}|\) is a more correct form. The value \(i\theta q\) is complex, but there is some value \(2\pi i m\) for \(m \in \mathbb{Z}\) we can add to \(i\theta q\) to get \(\log |e^{i\theta q}|\) (namely \(m = -p\), which kills the \(2\pi i p\) term). Now we have to divide this nonsense by \(q\)... The trick to all of this is just writing \(\Re\, i \theta\).... So we have that \(\log \lambda = \Re\, i\theta = -t\)!
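A two-line check of this (sketch; sign conventions aside, the point is that the period of \(\lambda^s\) is purely imaginary):

```python
import math, cmath

t = 0.7                                   # sample value of t from before
lam = math.exp(-t)                        # log(lam) = Re(i*theta) = -t
period = 2j * cmath.pi / math.log(lam)    # = -2*pi*i/t : purely imaginary (a period up to sign)
assert abs(period.real) < 1e-12

s = 0.4 - 1.1j                            # arbitrary test point
assert abs(lam ** (s + period) - lam ** s) < 1e-9
```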

This means the period is precisely \(\frac{2\pi i}{\Re(i\theta)} = -\frac{2 \pi i}{t}\) (purely imaginary; the sign is immaterial, since the negative of a period is a period). Nothing new there really, just reconfirming it. And so, which \(H_{jk}\) has a period of \(\frac{2\pi i}{\Re(i\theta)}\)?

This isn't that hard when we recognize we are asking: what value of \(j,k\) produces a purely imaginary period? So let's remind ourselves of \(H_{jk}\):

\[
H_{jk} = E_{j+qk}(\frac{s}{q}) = e^{i \theta s} e^{2\pi i s\frac{j+qk}{q}}\\
\]

The period of which is \(\mu\), and all we have to do is solve the equation:

\[
\begin{align}
\mu &= \frac{2 \pi}{\theta +2 \pi \frac{j+qk}{q}}\\
\mu &\in i\mathbb{R}\\
\end{align}
\]

Since \(\theta \in \mathbb{H}^*\), its real part is \(\Re \theta = 2 \pi \frac{p}{q}\) for a rational \(\frac{p}{q}\); so we are solving the equation:

\[
\frac{p}{q} + \frac{j+kq}{q} = 0\\
\]

Which is just \( j+kq = -p\). Which gives us our solution...... The easiest way to state it: \(k\) absorbs the multiples of \(q\), so the condition on \(j\) is the congruence:

\[
j \equiv -p \pmod{q}\\
\]

Where \(-p\) (and hence \(j\)) is coprime to \(q\)!
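A brute-force sketch of this at the sample point \(p=3\), \(q=5\), \(t=0.7\): scan branches \((j,k)\), keep the ones whose period \(\mu\) is purely imaginary, and confirm they satisfy \(j + kq = -p\) as in the displayed equation:

```python
import cmath

p, q, t = 3, 5, 0.7
theta = 2 * cmath.pi * p / q + 1j * t

# Scan branches (j, k) and keep those whose period mu is purely imaginary.
# Per the equation j + k*q = -p, the hits should satisfy j = -p (mod q).
solutions = []
for j in range(q):
    for k in range(-3, 4):
        mu = 2 * cmath.pi / (theta + 2 * cmath.pi * (j + q * k) / q)
        if abs(mu.real) < 1e-9:
            solutions.append((j, k))

assert solutions == [(2, -1)]             # j + q*k = 2 - 5 = -3 = -p
assert all(j % q == (-p) % q for j, k in solutions)
```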

You can see much more clearly now, by what I mean when I say that Gaussian sums are going to play a large role in this.



NOW! Let's translate all of this into the discussion of the Mellin transform. We begin by writing:

\[
f_\theta(z) = e^{i\theta}z + O(z^2)\\
\]

The ONLY way we can take the Mellin transform is in the case:

\[
f_\theta^{\circ q}(z) = e^{i\theta q} z + O(z^2)\\
\]

If I use the Mellin transform/fractional calculus to iterate this, I am producing \(\lambda^{qs}z + O(z^2)\). I have just detailed how to turn \(\lambda^{qs}\) into \(e^{i\theta s}\)--which arises through linear substitutions. Which then tells us how to get:

\[
f_\theta^{\circ s}(z) = e^{i\theta s} z + O(z^2)\\
\]




[Enter Analytic Number Theory]

We have to figure out, for each \(\theta\), the value \(p\) and the value \(q\)--reduced to coprime factors. Then we have to manage all solutions \(j\) which are congruent to \(-p\) mod \(q\). This produces a function:

\[
f_\theta^{\circ s}(z) : \mathbb{H}^* \times \mathbb{C}_{\Re(s) > 0} \times \mathcal{N} \to \mathcal{N}\\
\]

The values are written:

\[
\begin{align}
\theta &\in \mathbb{H}^*\\
s &\in \mathbb{C}_{\Re(s) > 0}\\
z \in \mathcal{N} &= \{z \in \mathbb{C}\,|\,|z| < \delta,\,\delta=\delta(\theta,s)\}\\
\end{align}
\]

This can be easily extended, through normality theorems and all that bs, to a larger domain in \(\theta\).

Call the domain:

\[
\mathbb{H} = \{\theta \in \mathbb{C}\,|\, \Im(\theta) > 0\}\\
\]

And quite clearly \(\mathbb{H}^*\) is dense in \(\mathbb{H}\). It is the best kind of dense too...

Therefore, take a sequence of values \(\theta_n^* \in \mathbb{H}^*\) which is Cauchy:

\[
\text{For all}\, \epsilon >0\,\text{there exists}\,N\,\text{such that for}\,n,m>N,\,\,|\theta_n^* - \theta_m^*|< \epsilon\\
\]

By this we naturally have \(\theta_n^* \to \theta\); let's assume the limit lands in \(\mathbb{H}\) proper (and not on the boundary \(\Im(\theta) = 0\)). Now let's do the same thing:

\[
f_{\theta^*_n}(z) \to f_{\theta}(z)\\
\]
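For the base map itself this convergence is immediate, since \(f_\theta(z) = e^{e^{i\theta}z} - 1\) is jointly analytic. A trivial sketch (the perturbations \(\theta + 10^{-n}\) are generic stand-ins for the approximants \(\theta_n^* \in \mathbb{H}^*\); the specific \(\theta\) and \(z\) are arbitrary):

```python
import cmath

def f(theta, z):
    # f_theta(z) = e^{e^{i*theta} z} - 1
    return cmath.exp(cmath.exp(1j * theta) * z) - 1

theta = 0.5 + 0.8j                        # hypothetical limit point in H
z = 0.1 + 0.05j
# theta_n* = theta + 10^{-n} stands in for a sequence of rational-angle approximants
errs = [abs(f(theta + 10.0 ** -n, z) - f(theta, z)) for n in range(1, 7)]
assert all(errs[i + 1] < errs[i] for i in range(len(errs) - 1))
```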

And additionally--there is a dense set of points where:

\[
f_{\theta^*_n}^{\circ s}(z) \to f_{\theta}^{\circ s}(z)\\
\]

BUT!!!

To prove this we're going to have to invoke a distribution theorem, on how "tightly packed together" these solutions are... Can you say Prime Number Theorem...? We won't need to actually call upon it; much weaker results will suffice. But we're going to need a weak version of how densely the solutions \(j = j(\theta) \in \mathbb{Z}\) are packed together in the congruence:

\[
j \equiv -p \pmod{q}\\
\]

Where \(p\) and \(q\) are coprime. Luckily all of this has been done. This is some Gauss/Dirichlet bullshit. We don't talk about it here often, because I don't think anyone's noticed it.

But for instance, a famous result of Dirichlet is that, fixing coprime \(p\) and \(q\), there exist infinitely many primes \(\rho\) which satisfy:

\[
\rho \equiv p \pmod{q}\\
\]
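A sketch of Dirichlet's theorem in action (a plain sieve; the pair \(p=3\), \(q=5\) is our running sample, and of course finite counts only illustrate, not prove, the infinitude):

```python
def primes_up_to(n):
    # Simple sieve of Eratosthenes
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, flag in enumerate(sieve) if flag]

p, q = 3, 5                               # the coprime pair from our running example
hits = [r for r in primes_up_to(10_000) if r % q == p % q]
assert hits[:5] == [3, 13, 23, 43, 53]    # plenty of primes = 3 (mod 5)
assert len(hits) > 100                    # consistent with Dirichlet's theorem
```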

But proving this is one of the hardest results in mathematics. We're going to kinda need "density of primes" results like this in order to rigorously prove that we can limit the fractional iterations properly.

Working with \(\theta \in \mathbb{H}^*\) is always possible, and relatively easy. When \(\theta \in \mathbb{H}\)... we can't avoid the congruence \(j \equiv -p \pmod q\).... We need to know the asymptotics/densities, even if they're weaker estimates, to make sure the convergence \(f_{\theta^*_n} \to f_\theta\) holds.


RE: Understanding \(f(z,\theta) = e^{e^{i\theta}z} - 1\) - by JmsNxn - 01/06/2023, 06:20 AM
