Posts: 1,924
Threads: 415
Joined: Feb 2009
08/15/2015, 10:22 PM
(This post was last modified: 08/15/2015, 10:38 PM by tommy1729.)
I think the following holds. Let s = exp^[1/2].
For sufficiently large positive real x:
exp(x + DC) > s( [1 + C/(x - AC)] s(x) ) > exp(x + BC),
where A, B, C and D are real constants, and A, B, D depend on C.
"Sufficiently large" depends on C too; roughly
x >> (A^2 + B^2 + C^2)^4.
At least, I think so.
The inspiration and arguments came from fake function theory, hence no new thread.
Regards
Tommy1729
Update on previous post
I'm able to prove that, for sufficiently large x >> C:
s( [1 + C/x^2] s(x) ) = exp(x + o(1)),
where o is little-o notation.
However, this was done without fake function theory, and the proof is not so elegant, imho.
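A heuristic for why the C/x^2 perturbation only costs o(1) in the exponent (a sketch, not a proof, under the assumptions that s is differentiable and s(x)/s'(x) = o(x^2), which I have not verified for the half-iterate):

```latex
% Linearize s around s(x), then use s(s(x)) = e^x and the chain rule
% s'(s(x)) \, s'(x) = e^x, so s'(s(x)) = e^x / s'(x):
s\!\left( \left[ 1 + \frac{C}{x^2} \right] s(x) \right)
\approx s(s(x)) + \frac{C \, s(x)}{x^2} \, s'(s(x))
= e^x \left( 1 + \frac{C \, s(x)}{x^2 \, s'(x)} \right)
% If s(x)/s'(x) = o(x^2), the bracket is 1 + o(1), giving exp(x + o(1)).
```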
Regards
Tommy1729
First I considered fake function theory, then an elementary argument; those two led to the last two posts.
However, a simpler argument is
s( x [1 + 1/t(x)] ) > s(x) [1 + 1/t(x)].
So simple.
I probably wrote similar stuff before, so my apologies.
However,
s( x [1 + 1/u(x)] ) < s(x) [1 + 1/t(x)]
remains an issue, and the ideas from the last few posts remain interesting.
Regards
Tommy1729
In the context of TPID 17:
1) Does this conjecture even hold?
We have way too few supporting examples. We have the sqrt(n) factor for semi-exp and exp, but the double-exponential case, e.g. SUM x^n / n^(n^n), is not confirmed.
I'm hoping you guys can help.
--- assuming it's true ---
I noticed that
min( f^2 ) = min( f )^2,
yet (1 + f)^2 = 1 + 2f + f^2. Therefore
a_n = min( f(x)/x^n )
is improved by b_n = a_n + 2 sqrt(a_n).
That is only a minor improvement.
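The identity min(f^2) = (min f)^2 for positive f means the degree-2n fake coefficient of f^2 is exactly the square of the degree-n fake coefficient of f. A quick numerical sketch with f = exp (my own illustration; for exp, min_x e^x/x^n is attained at x = n):

```python
import numpy as np

# Sketch: for positive f, min(f^2) = (min f)^2, so the degree-2n fake
# coefficient of f^2 equals the square of the degree-n fake coefficient of f.
# Checked for f = exp, where min_x exp(x)/x^n is attained at x = n.
n = 4
xs = np.linspace(0.5, 20.0, 100001)           # grid covering the minimum at x = n
a_n = np.min(np.exp(xs) / xs**n)              # fake coefficient of exp at degree n
a2_2n = np.min(np.exp(2 * xs) / xs**(2 * n))  # fake coefficient of exp^2 at degree 2n
print(a_n**2, a2_2n)                          # the two agree
```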
I tried to use the same argument and, as you might have guessed, replaced ^2 and sqrt by, say, ^3 and cube root.
However, that seems to violate ...
I even hesitated to post this, because intuition can get you in trouble here.
Clearly something is missing here.
Yes, I know: we approximate the LHS in
P(x) < min f
with a polynomial of degree n, so if we take ^(n/2) we end up with the largest naively possible root, because O(x^2)^(n/2) = O(x^n).
We cannot allow growth smaller than O(x^2), since that violates the conditions f > 0, f' > 0, f'' > 0.
But that does not explain enough.
It does show the upper-bound factor
< O(n/2),
but that is far from O( ln(n) sqrt(n) ).
All intuitive logic fails.
However, I believe we can repeat the b_n argument and thus arrive at the improved
c_n = a_n + 2 sqrt(a_n) + 2 sqrt( a_n + 2 sqrt(a_n) ) + ...
~~ a_n + 2 ln(n) sqrt(a_n).
But that is still far from the desired
C( a_n ln(n+1) sqrt(n) ).
It's getting weird, I know.
Regards
Tommy1729
OK, trying to clarify the previous post.
g(x) ~ g(0) + f(x)^2 = g_0 + g_1 x + ...
(Taylor expansion)
f(x) = f_1 x + f_2 x^2 + ...
From this we can say g_n = f_1 f_{n-1} + f_2 f_{n-2} + ... + ( f_{n/2} )^2.
However, since ( min[ f / x^(n/2) ] )^2 = min[ f^2 / x^n ],
g_n is wrongly estimated as g_n ~ ( f_{n/2} )^2.
Assuming
(Ass 1): f_k f_{n-k} < ( f_{n/2} )^2
(notice this depends on the convergence speed of the f_n!),
we thus get the improved estimate
g_n ~< (n/2) ( f_{n/2} )^2.
Hence a correcting-factor upper bound: n/2.
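For a concrete f one can compare the full convolution coefficient g_n = sum_k f_k f_{n-k} with the naive middle-term estimate (f_{n/2})^2. A small sketch of mine with f = exp, where f_k = 1/k! and f^2 = exp(2x), so g_n = 2^n/n! exactly; the true ratio lands near sqrt(pi*n/2), well below the crude n/2 bound:

```python
from math import factorial, pi, sqrt

# Sketch: compare the full convolution coefficient g_n = sum_k f_k f_{n-k}
# with the naive middle-term estimate (f_{n/2})^2, for f = exp
# (f_k = 1/k!, so f^2 = exp(2x) and g_n = 2^n / n!).
n = 10                                              # an even degree
f = [1 / factorial(k) for k in range(n + 1)]
g_n = sum(f[k] * f[n - k] for k in range(n + 1))    # full convolution
naive = f[n // 2] ** 2                              # the (f_{n/2})^2 estimate

ratio = g_n / naive                                 # the correcting factor
print(ratio, n / 2, sqrt(pi * n / 2))               # ratio near sqrt(pi*n/2), below n/2
```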
---
Well, that is the idea.
Handwaving, informal and sketchy ... yes, I admit.
But still.
Issues?
1) n is not even.
2) Repeating the argument ... as in (q^2)^2, leading to arbitrary correcting factors?
3) Similar to 2): replacing ^2 with other powers such as ^5, also leading to arbitrary correcting factors.
Solutions to 1), 2) and 3) have been considered, but nothing formal yet.
I hope 1), 2) and 3) make clear what I meant by intuition failure.
Also, the correcting factor n/2 is far from sqrt( ln(n) n ).
---
Here I considered the n-th Taylor polynomial with sqrt.
Clearly I cannot meaningfully take anything beyond ^(2/n), since
( x^2 )^(n/2) = x^n.
---
Hope this clarifies a bit.
Sorry for the late reply, but this subject is tricky!
Since Ass 1 depends on convergence speed, I'm not even sure that TPID 17 is correct.
On the other hand, it holds for semi-exp and exp, and Sheldon believes in a very similar variant.
I must be missing something.
The importance of the descending chain condition for the derivatives, perhaps?
---
Maybe I know someone who could help us here.
---
Please inform me if it's much simpler than I think, but I guess it is not.
Regards
Tommy1729
About issues 2) and 3), and in general ...
Suppose we want the correcting factors c_n for f(x).
Since a_n depends on the truncated fake Taylor polynomial of degree n, at best we can take ^(2/n), as explained before.
But there is another upper bound.
F(x)^q needs to satisfy the fundamental conditions
D^a [ F(x)^q ] > 0 for a in {0, 1, 2},
in particular a = 2.
This places an upper bound on q, independent of n but dependent on the values and rate of descent of the a_n.
And this is the balance we look for: the faster the a_n descend, the larger q is, and vice versa.
Therefore we cannot "repeat the argument" as much as we want, nor choose any m-th root ( or other function ) we want.
This gives hope for proving results of the type
correcting factors ~< O( ( ln(n) n )^gamma ).
gamma ~ 1/2 would then be close to a proof of TPID 17.
I call Q = 1/q the power level of f(x).
I guess this clarifies a lot.
I'm not sure yet how this relates to Sheldon's integrals, Hadamard products and zeration [ the (min,-) algebra ], although I have some ideas ...
Regards
Tommy1729
09/09/2015, 12:17 PM
(This post was last modified: 09/09/2015, 12:19 PM by tommy1729.)
However, some functions have a power level of infinity, and this makes it nontrivial to even decide whether this strategy is helpful.
More investigation is necessary; unfortunately, I lack time.
For example, exp(x) has a power level of infinity:
exp(x)^a = exp(a x).
The analogous idea of using semi-logarithms instead of sqrt comes to mind.
Another idea is to write
F(x) = exp(a(x)) b(x)
with b(x) growing slower than exp. Then repeat with a(x) if necessary, until we get functions a*(x), b(x) that grow slower than exp, and then use the power-level tricks on them.
[ This assumes F(x) grows slower than some power tower exp^[k](x). ]
Regards
Tommy1729
09/12/2015, 01:14 AM
(This post was last modified: 09/12/2015, 01:25 AM by tommy1729.)
To give an example that should work:
F(x) = Sum a_n x^n
with a_n = exp(-n^2).
To compute the fake a_n we consider the truncation
F_n(x) = Sum_{i=0}^{n} a_i x^i.
Now we solve
( alpha x^2 )^[b] = a_n x^n.
Iterating gives ( alpha x^2 )^[b] = alpha^(2^b - 1) x^(2^b), so
b ~ ln(n)/ln(2),
and alpha is close to a_n^( 1/(n-1) ), i.e. alpha ~ exp( -n^2/(n-1) ) ~ exp(-n).
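The closed form for iterating alpha x^2 can be checked directly by composing it b times (a numerical sketch; the values of alpha, x and b are arbitrary choices of mine):

```python
# Sketch: verify the closed form for iterating g(x) = alpha * x^2:
#   g^[b](x) = alpha^(2^b - 1) * x^(2^b).
alpha, x, b = 0.37, 1.9, 5

y = x
for _ in range(b):
    y = alpha * y * y                           # one application of g

closed = alpha ** (2**b - 1) * x ** (2**b)      # the claimed closed form
print(y, closed)                                # the two agree
```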
--
Integral_0^x exp(-t^2) exp(-(x-t)^2) dt <= C exp(-x^2/2),
where C is a constant ( an upper-bound constant with respect to x ).
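Completing the square in the exponent gives exp(-t^2 - (x-t)^2) = exp(-x^2/2) exp(-2(t - x/2)^2), so the best constant works out to C = sqrt(pi/2); a small numerical sketch (my own check, using a hand-rolled trapezoid rule):

```python
import numpy as np

# Sketch: I(x) = ∫_0^x exp(-t^2) exp(-(x-t)^2) dt
#              = exp(-x^2/2) * ∫_{-x/2}^{x/2} exp(-2u^2) du,
# so I(x) / exp(-x^2/2) increases towards C = sqrt(pi/2) ≈ 1.2533.
def I(x, m=4001):
    t = np.linspace(0.0, x, m)
    v = np.exp(-t**2 - (x - t) ** 2)
    dt = t[1] - t[0]
    return dt * (v.sum() - 0.5 * (v[0] + v[-1]))   # trapezoid rule

xs = np.linspace(0.5, 6.0, 56)
ratios = np.array([I(x) * np.exp(x**2 / 2) for x in xs])
C = ratios.max()
print(C, np.sqrt(np.pi / 2))                       # C approaches sqrt(pi/2)
```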
Therefore the correcting factor for taking a sqrt is C, and thus the correcting factor for taking the r-th root is C^( log_2(r) ), or r^( ln(C)/ln(2) ).
From the computation of b we get max{r} ~ 2^b. Therefore
D_n = (n-1)^( ln(C)/ln(2) ),
where D_n is an upper bound on the (final) correcting factor for a_n.
--
So we should have
exp(-n^2) <= D_n min( f(x) / x^n ).
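One direction is automatic: since every term of f(x) = sum_k exp(-k^2) x^k is positive, f(x) >= a_n x^n for x > 0, so min_x f(x)/x^n >= exp(-n^2). A sketch (my own check, in log-space to avoid overflow) of how small the excess factor actually is for one n:

```python
import numpy as np

# Sketch: for f(x) = sum_k exp(-k^2) x^k, every term is positive, so
# min_x f(x)/x^n >= a_n = exp(-n^2) automatically; the question is how
# small the excess factor is.  Work with l = log(x) to avoid overflow.
n, kmax = 3, 12
ls = np.linspace(0.0, 12.0, 6001)                  # grid of log(x)
vals = [sum(np.exp(-k**2 + (k - n) * l) for k in range(kmax + 1)) for l in ls]
ratio = min(vals) / np.exp(-n**2)                  # excess over the true a_n
print(ratio)                                       # modest: ≈ 1.77 for n = 3
```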
--
Something like that.
It looks a bit like using convolution and Fourier analysis, but it's different.
Regards
Tommy1729
09/14/2015, 01:30 AM
(This post was last modified: 09/14/2015, 01:38 AM by tommy1729.)
Let f(x) be a real-entire function with all derivatives > 0 and f(0) >= 0.
Let C be the Cauchy constant, 1/(2 pi i).
The Taylor coefficients are given by the contour integral
[1]
C \( \oint x^{-n-1} f(x) \, dx \)
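Formula [1] is the standard Cauchy integral; taking the contour to be the unit circle, it discretizes to averaging f(z) z^(-n) over equally spaced sample points. A quick numerical sketch (my illustration) with f = exp, whose n-th coefficient is 1/n!:

```python
import numpy as np
from math import factorial

# Sketch of [1]: with the unit circle as contour, the n-th Taylor coefficient
# (1/2 pi i) ∮ x^(-n-1) f(x) dx becomes the average of f(z) z^(-n) over
# equally spaced points z on the circle.  Checked for f = exp (coefficient 1/n!).
n, m = 5, 4096
z = np.exp(2j * np.pi * np.arange(m) / m)    # samples on the unit circle
coeff = np.mean(np.exp(z) * z ** (-n)).real  # discretized Cauchy integral
print(coeff, 1 / factorial(n))               # the two agree
```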
The estimate from fake function theory, min( f(x) x^{-n} ), can also be given by a contour integral. Let g(x,n) = f(x) / x^n. Then
[2]
min( f(x) x^{-n} ) = C \( \oint g(x,n) \, g''(x,n) / g'(x,n) \, dx \)
So the correcting factors are given by
Cor(n) = [1]/[2] =
\( \frac{ \oint x^{-n-1} f(x) \, dx }{ \oint g(x,n) \, g''(x,n) / g'(x,n) \, dx } \)
So the question becomes how to estimate, bound or simplify [1]/[2].
I'm not sure how to proceed here, but now we have a reformulation in terms of more standard calculus: (contour) integration.
I call this the "ratio formulation", and TPID 17 can be expressed in it.
I'm aware I did not mention a lot of related things, such as the specification of the contours, numerical methods, Laplace, etc.
Certainly special cases can be solved, but a general idea is missing.
I was able to prove / disprove the expressibility in similar cases, but contour integration is a bit trickier than "ordinary" integrals.
Ideally we could express this ratio as a single contour integral, but I'm not sure whether that is possible.
While considering that, the idea of a
"contour derivative" [1]/[2]
comes to mind.
For some of you - or most - this was already clear, I assume, but for completeness I make this post.
Also, Sheldon has similar ideas, and I am not sure how exactly they relate ...
Regards
Tommy1729
I think Mick did a good job posting the problem on MSE. Here is the link:
http://math.stackexchange.com/questions/...r-dx-n-n-0
No votes or reactions yet.
Regards
Tommy1729