Teaser
What is the general argument behind the "superfunction trick"?
It seems to me that it is possible to study the general machinery lurking behind this superfunction trick. As JmsNxn correctly notices, he is using a more complex "mutation" of the mentioned trick, but if I can make some progress in understanding the old trick, I believe I know how to expand the construction category-theoretically to include his iterated compositions. Let's begin by setting some notation and by stating a "fake theorem" that illustrates my sentiment about this matter better than words.
Disclaimer on notation.
From now on I will call "a function" only a binary relation that is everywhere defined and single-valued, i.e. every element of the domain is related to a unique element of the codomain, and so on. I'll denote functional composition by juxtaposition, \( gf=g\circ f \), and integer iteration by \( f^n \). By \( s \) I mean the successor endomap and by \( {\rm mul}_a \) the scaling endomap \( x\mapsto a\cdot x \).
For two given endofunctions \( f:X\to X \) and \( g:Y\to Y \) define the set \( [f,g]\subseteq Y^X \) as the solution set of the equation
\( \chi f=g\chi \)
and we define two sets of sequences of functions \( \phi_{-}\in (Y^X)^{\mathbb N} \) as follows
\( [f,g]_{\Delta}:=\{\phi_n|\phi_{n+1}f=g\phi_{n}\} \) and \( [f,g]_{\Delta}^{op}:=\{\phi_n|\phi_nf=g\phi_{n+1}\} \)
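For concreteness, here is a minimal numerical sketch of these definitions; the maps \( f=s \), \( g={\rm mul}_2 \) and \( \chi(x)=2^x \) are toy choices of mine, not taken from the references:

```python
# Toy check of the definitions (my own choice of f, g, chi): with
# f = successor s and g = mul_2, the map chi(x) = 2**x solves the
# conjugacy equation chi∘f = g∘chi, i.e. 2**(x+1) = 2 * 2**x.
f = lambda x: x + 1       # the successor endomap s
g = lambda y: 2 * y       # the scaling endomap mul_2
chi = lambda x: 2.0 ** x  # a candidate element of [f, g]

for x in [-2.0, 0.0, 0.5, 1.0]:
    assert abs(chi(f(x)) - g(chi(x))) < 1e-12

# Consequently the constant sequence phi_n = chi lies in both [f,g]_Delta
# and [f,g]_Delta^op, since both recurrences reduce to chi∘f = g∘chi.
```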
Ok! We are now ready for the...
SuperLazy Prototheorem. Given functions \( f:X\to X \) and \( g:Y\to Y \) and a sequence of functions \( \phi_{-}:{\mathbb N}\to Y^X \), if the conditions
1. for every natural number \( n \), \( \phi_nf=g\phi_{n+1} \) or \( \phi_{n+1}f=g\phi_{n} \);
2. \( \phi_0 \) is "appropriate";
hold, then the pointwise limit \( \phi:=\lim_{n\to\infty}\phi_n \) exists and satisfies
\( \phi f=g\phi \)
This "fake theorem" depends fundamentally on the existence of, and on our ability to build, sequences of maps satisfying (1). Luckily this is not a problem: such sequences exist, are definable by recursion, and are abundant. In fact we can prove the following
Easy Lemmas. Let \( f:X\to X \) and \( g:Y\to Y \) be functions. For every function \( \phi:X\to Y \) we can prove that:
- \( f \) is split-mono \( \Rightarrow \) there exists a sequence \( \alpha_n \) s.t. \( \alpha_0=\phi \) and \( \alpha_{n+1}f=g\alpha_{n} \);
\( f \) is split-epi \( \Rightarrow \) given \( \alpha_n,\alpha'_n\in[f,g]_{\Delta} \), if \( \alpha_0=\alpha'_0 \) then \( \alpha_n=\alpha'_n \);
if \( f \) is iso, then \( Y^X\simeq [f,g]_{\Delta} \);
- \( g \) is split-epi \( \Rightarrow \) there exists a sequence \( \beta_n \) s.t. \( \beta_0=\phi \) and \( \beta_nf=g\beta_{n+1} \);
\( g \) is split-mono \( \Rightarrow \) given \( \alpha_n,\alpha'_n\in[f,g]^{op}_{\Delta} \), if \( \alpha_0=\alpha'_0 \) then \( \alpha_n=\alpha'_n \);
if \( g \) is iso, then \( Y^X\simeq [f,g]^{op}_{\Delta} \);
- if \( \phi\in [f,g] \) the constant sequence \( \phi! \) is in both \( [f,g]_{\Delta} \) and \( [f,g]_{\Delta}^{op} \);
- for every \( \gamma\in [f,f]_\Delta \) and \( \delta\in [g,g]_\Delta \), if \( \phi_n\in[f,g]_\Delta \) then the termwise composite \( \delta_n\phi_n\gamma_n\in[f,g]_\Delta \);
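The recursions behind the first two lemmas can be sketched numerically. This is a hedged toy example of mine, under the stronger assumption that \( f \) and \( g \) are bijections (their inverses below stand in for the retractions and sections of the split-mono/split-epi cases):

```python
# Toy instance: f = successor, g = mul_2, both invertible.
f     = lambda x: x + 1    # successor s
f_inv = lambda x: x - 1    # its inverse
g     = lambda y: 2 * y    # mul_2
g_inv = lambda y: y / 2    # its inverse

def alpha(n, phi):
    """n-th term of a sequence in [f,g]_Delta with alpha_0 = phi,
    built by the recursion alpha_{n+1} := g ∘ alpha_n ∘ f^{-1}."""
    a = phi
    for _ in range(n):
        a = (lambda prev: lambda x: g(prev(f_inv(x))))(a)
    return a

def beta(n, phi):
    """n-th term of a sequence in [f,g]_Delta^op with beta_0 = phi,
    built by the recursion beta_{n+1} := g^{-1} ∘ beta_n ∘ f."""
    b = phi
    for _ in range(n):
        b = (lambda prev: lambda x: g_inv(prev(f(x))))(b)
    return b

phi0 = lambda t: t   # arbitrary seed, as in the lemmas
x = 0.7
for n in range(4):
    # alpha_{n+1} ∘ f = g ∘ alpha_n   and   beta_n ∘ f = g ∘ beta_{n+1}
    assert abs(alpha(n + 1, phi0)(f(x)) - g(alpha(n, phi0)(x))) < 1e-9
    assert abs(beta(n, phi0)(f(x)) - g(beta(n + 1, phi0)(x))) < 1e-9
```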
For the proof of the former (the prototheorem) I'll have to proceed by a sequence of, hopefully finite and convergent, approximation attempts: you can read one attempt in the attached pdf. The latter results (the lemmas) are pretty trivial to prove and to generalize, because they are pure algebra (see Appendix A of the same pdf).
Some context
But let's start from scratch and provide some context. I'm sure I'm not providing the oldest references on this site, but in
[TF] 2008 jul, Trappmann, Robbins - Tetration Reference
the principal Abel and Schroeder functions are defined as follows
\( \mathcal A[f]={\lim_{n\to\infty}}(s^{-1})^n\circ\log_a\circ f^n \)
\( \mathcal S[f]={\lim_{n\to\infty}}({\rm mul}_a^{-1})^n\circ f^n \)
where \( a=f'(0) \), \( s \) is the successor map and \( {\rm mul}_a \) is multiplication by \( a \). Meanwhile, in the thread
[KSuLog] 2008 nov, bo198214 - Kneser's Super Logarithm:
bo198214 writes that one way to construct the principal Schroeder function is given by the limit
\( \chi=\lim_{n\to\infty}({\rm mul}_c^{-1})^n\circ(s^{-1})^c\circ f^n \)
where \( c \) is a fixed point; Kneser (1949) then composes this \( \chi \) with an Abel function of multiplication, i.e. a \( \log \), to obtain an Abel function and solve for the half-iterate of \( \exp \).
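The two principal limits from [TF] can be checked numerically on a toy map. This is a hedged sketch with a map of my own choosing, not from the references: \( f(x)=x/(2-x) \) has fixed point \( 0 \) with multiplier \( a=f'(0)=1/2 \), and its Schroeder function is known in closed form, \( S(x)=x/(1-x) \), satisfying \( S(f(x))=a\,S(x) \).

```python
import math

# Toy map with known Schroeder function S(x) = x/(1-x).
f = lambda x: x / (2 - x)
a = 0.5  # multiplier f'(0)

def iterate(f, n, x):
    """Compute the n-th iterate f^n(x)."""
    for _ in range(n):
        x = f(x)
    return x

def schroeder(x, n=40):
    """Principal Schroeder limit: S(x) ~ (mul_a^{-1})^n(f^n(x)) = f^n(x) / a**n."""
    return iterate(f, n, x) / a ** n

def abel(x, n=40):
    """Principal Abel limit: A(x) ~ (s^{-1})^n(log_a(f^n(x))) = log_a(f^n(x)) - n."""
    return math.log(iterate(f, n, x), a) - n

x = 0.3
assert abs(schroeder(x) - x / (1 - x)) < 1e-9          # limit hits the closed form
assert abs(schroeder(f(x)) - a * schroeder(x)) < 1e-9  # Schroeder equation
assert abs(abel(f(x)) - (abel(x) + 1)) < 1e-6          # Abel equation
```

Note how both limits share the same shape: iterate \( f \) forward \( n \) times, then undo \( n \) steps of a simpler map (\( {\rm mul}_a \) or \( s \)); this is the pattern the question below is about.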
A few posts later Sheldonison defines an Abel function of \( \exp \) (an \( {\rm slog} \) developed at the fixed point \( c \)) via the limits
\( \psi^{-1}=\lim_{n\to\infty}\exp^n\circ s^c\circ \exp_c\circ (s^{-1})^n \)
\( \psi=\lim_{n\to\infty}s^n\circ\log_c\circ(s^{-1})^c\circ\ln^n \)
Question! What is the general pattern here?! Let's ignore the convergence issues of the limits for a moment; I'll try later to at least black-box them, and return to them once the underlying algebraic argument is clear to me.
The general scheme here seems to comprise the following:
- They select a pair of functions \( f:X\to X \) and \( g:Y\to Y \) with some properties (continuous, analytic, linear, or no property at all), e.g. \( (f,s) \) in the case of the principal Abel function; \( (f,{\rm mul}_a) \) in the case of the principal Schroeder function; \( (s,\exp) \) in the case of tetration; \( (\exp,s) \) in the case of the super-logarithm;
- they go on drawing from their magician's hat a function \( \phi:X\to Y \), a kind of first approximation chosen so as to obtain some desired properties (related to the fixed points, to the asymptotic behavior, or to the very success of the limiting construction), e.g. \( \log_{f'(0)} \) for the principal Abel function; the identity for the principal Schroeder function defined in [TF]; subtraction of the fixed point, \( s^{-c} \), in [KSuLog]; and, if I'm not mistaken, \( 2\sinh \) in the Tommy-method;
- in all the cases shown, by inverting \( f \) or \( g \), they recursively define from a "broken conjugation" a sequence of functions \( \phi_n \), with base case \( \phi_0=\phi \), such that \( \phi_n\in [f,g]_{\Delta} \) (or \( [f,g]_{\Delta}^{op} \)). In other words, if the definition of \( [f,g]_{\Delta} \) wasn't very attractive, this means that the sequence \( \phi_n \) behaves, imperfectly, almost like a solution of \( \chi f=g\chi \);
- they take the limit of the sequence \( \phi_n \) and get the desired function, \( \lim_{n\to\infty}\phi_n=\chi \). What they probably mean is that for every \( x\in X \) they evaluate the sequence \( \phi_n(x)\in Y \), obtaining for every \( x \) a sequence over \( Y \), and then they take its limit in \( Y \)
\( \phi_n(x_0)\to y_0 \)
\( \phi_n(x_1)\to y_1 \)
\( \phi_n(x_2)\to y_2 \)
\( \vdots \)
studying the subset \( R\subseteq X \) for which the sequence \( \phi_n(x) \) converges.
- what we obtain at the end should be one of the desired, otherwise inaccessible, elements of \( [f,g] \), e.g. \( [f,s] \) is the solution set of the Abel equation for \( f \); \( [f,{\rm mul}_{a}] \) is the solution set of the Schroeder equation for \( f \); \( [\exp, s] \) contains the slog; \( [s,\exp] \) contains tetration; in general, \( [s,f] \) is the set of superfunctions of \( f \).
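The whole scheme above can be packed into one generic helper. This is a hedged sketch of my own reading of the pattern, not code from any of the cited posts: build the broken-conjugation sequence \( \phi_{n+1}=g^{-1}\phi_n f \) (so \( \phi_n\in[f,g]_\Delta^{op} \)) and take the pointwise limit.

```python
def superfunction_trick(f, g_inv, phi, x, n=60):
    """Evaluate phi_n(x) = (g^{-1})^n( phi( f^n(x) ) ), the n-th term of the
    broken-conjugation sequence phi_{n+1} = g^{-1} ∘ phi_n ∘ f with seed phi."""
    y = x
    for _ in range(n):
        y = f(y)          # push x forward n times through f
    y = phi(y)            # apply the first approximation
    for _ in range(n):
        y = g_inv(y)      # pull back n times through g
    return y

# Toy instance (same map as before): f(x) = x/(2-x), g = mul_{1/2}, seed = id
# recovers the principal Schroeder function S(x) = x/(1-x) of f.
f = lambda x: x / (2 - x)
g_inv = lambda y: 2 * y   # inverse of mul_{1/2}
chi = lambda x: superfunction_trick(f, g_inv, lambda t: t, x)

x = 0.3
assert abs(chi(x) - x / (1 - x)) < 1e-9       # the pointwise limit exists here
assert abs(chi(f(x)) - 0.5 * chi(x)) < 1e-9   # and chi is in [f, mul_{1/2}]
```

The helper itself is pure algebra; all the analytic subtlety of the cited constructions is hidden in the choice of \( (f,g) \), the seed \( \phi \), and the set \( R\subseteq X \) on which the pointwise limit actually converges.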
(2021 01 16) Attachment: Generalized_superfunction_trick.pdf
Mother Law \(\sigma^+\circ 0=\sigma \circ \sigma^+ \)
\({\rm Grp}_{\rm pt} ({\rm RK}J,G)\cong \mathbb N{\rm Set}_{\rm pt} (J, \Sigma^G)\)
