02/19/2023, 06:09 PM
(02/19/2023, 04:52 PM)Gottfried Wrote: Hello Ember, I don't know whether this matters for your intention: one of my first readings on tetration in 2007 was Lars Kindermann's doctoral thesis on a neural-network method for tetration. Maybe you know this, or maybe it doesn't matter for your intention... Anyway, the paper is online in case you're interested. I was a complete greenhorn in tetration and understood nearly nothing back then, so I cannot say whether there was any relation to Kneser's method considered at all.
Wow. That's a bit of a long history, lol. After all, there are new results on the Universal approximation theorem that have only been obtained in the last few years, like https://arxiv.org/abs/2006.00625.
I would like to continue to illustrate the relationship between neural networks and Kneser.
The Universal approximation theorem says that a neural network can approximate any continuous function to arbitrary accuracy, and the complex tetration we need is not just continuous but smooth. The next thing to do is just to hammer the nail with the newest hammer.
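To make the "newest hammer" concrete, here is a minimal sketch of the Universal approximation theorem in action: a one-hidden-layer tanh network trained by plain gradient descent in NumPy to approximate a smooth target. I use 2**x on [-1, 1] as a stand-in target (not tetration itself, and not the actual architecture anyone would use for Kneser's method); the network size, learning rate, and step count are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Smooth stand-in target on [-1, 1]; a real application would
# substitute samples of the tetration function instead.
X = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
Y = 2.0 ** X

# One hidden layer of 16 tanh units -- enough, by the universal
# approximation theorem, to fit any continuous target arbitrarily
# well as the width grows.
H = 16
W1 = rng.normal(0.0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 1.0, (H, 1)); b2 = np.zeros(1)

lr = 0.1
for step in range(20000):
    A = np.tanh(X @ W1 + b1)          # hidden activations
    P = A @ W2 + b2                   # network output
    E = P - Y
    # Backpropagation of the mean-squared error.
    dP = 2.0 * E / len(X)
    dW2 = A.T @ dP;        db2 = dP.sum(axis=0)
    dA = dP @ W2.T
    dZ = dA * (1.0 - A**2)            # tanh' = 1 - tanh^2
    dW1 = X.T @ dZ;        db1 = dZ.sum(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

pred = np.tanh(X @ W1 + b1) @ W2 + b2
print("max abs error:", float(np.max(np.abs(pred - Y))))
```

The point is only that nothing special about the target is used anywhere: the same loop would fit samples of a tetration curve, which is what makes the theorem a hammer rather than a tool built for one nail.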
I have already outlined the second route.
If Kneser's method contains intermediate variables that can be factored out, and cheap estimates of those variables would speed up the iteration or improve the accuracy of Kneser's method itself, then one can try to compute those estimates with a neural network.
This gives us a chance to replicate the success of AlphaTensor. It requires a deeper understanding of Kneser's method.
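A hedged toy illustration of that second route (not Kneser's method itself): a cheap learned surrogate supplies the starting value for an inner iteration, cutting the number of steps it needs. Here Newton's method for sqrt(a) stands in for the expensive iteration, and a least-squares polynomial fit stands in for the neural network; both substitutions are my own assumptions for the sake of a runnable sketch.

```python
import numpy as np

def newton_sqrt(a, x0, tol=1e-10):
    """Newton iteration for sqrt(a); returns (root, step count)."""
    x, steps = x0, 0
    while abs(x * x - a) > tol:
        x = 0.5 * (x + a / x)
        steps += 1
    return x, steps

# "Train" a surrogate for the intermediate quantity (here: the root
# itself). A cubic polynomial plays the role of the neural network.
a_train = np.linspace(1.0, 100.0, 50)
coeffs = np.polyfit(a_train, np.sqrt(a_train), 3)

a = 73.0
_, cold_steps = newton_sqrt(a, 1.0)                      # naive start
_, warm_steps = newton_sqrt(a, np.polyval(coeffs, a))    # surrogate start
print("cold:", cold_steps, "warm:", warm_steps)
```

The warm start converges in fewer Newton steps than the naive one, which is the whole proposed bargain: spend training compute once, then recover it on every subsequent evaluation of the iteration.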
Both routes will necessarily cost a lot of CPU time, some GPU time, and storage space; they will consume a lot of power and generate carbon emissions. And, most valuable of all, human energy.

