Tetration by a Non-Integer
Does anyone think that tetration by a non-integer will ever be defined ... really properly?
Great mathematicians struggled for a long time, to no avail, to find an implementation of the non-integer factorial ... and then eventually Leonhardt Euler devised a means of doing it, & by a sleight-of-mind that was just so slick & so simple in its essence ... and yet so radical! Is there any scope for another such sleight-of-mind, whereby someone might do similarly for tetration, or have they all been used up? It's impossible to conceive of how there might be any truly new conceptual resource of that kind left. But it's actually tautological that that is so, because the kind I am talking about is precisely the kind that is essentially radically new - the kind of which even the manner of thinking is hitherto unconceived!
But the attempts I have seen so far at defining tetration by a general real number look to me for all the world like mere interpolation - some of them indeed very thorough & cunning & ingenious (insofar as I can follow them at all) - but lacking that spark of essential innovation that is evinced in Euler's definition of the gamma function.
Just in case it seems I have gotten lost in philosophy, I'll repeat the question: will there ever be a definition of tetration by a general real number that resolves the matter as thoroughly as Euler's definition of the gamma function resolved the matter of the factorial of a general real number?
hyperoperation
asked Nov 21 '18 at 16:21
AmbretteOrrisey
I am no expert on history, but googling yields the name "Leonhard Euler." I am unsure where you get the "t"...
– Mohammad Zuhair Khan
Nov 21 '18 at 16:25
I once read the appendix to TE Lawrence's Seven Pillars of Wisdom ... I think it was a bad influence on me!
– AmbretteOrrisey
Nov 21 '18 at 16:35
add a comment |
I am no expert on history, but googling yields the name "Leonhard Euler." I am unsure where you get the "t"...
– Mohammad Zuhair Khan
Nov 21 '18 at 16:25
I once read the appendix to TE Lawrence's Seven Pillars of Wisdom ... I think it was a bad influence on me!
– AmbretteOrrisey
Nov 21 '18 at 16:35
I am no expert on history, but googling yields the name "Leonhard Euler." I am unsure where you get the "t"...
– Mohammad Zuhair Khan
Nov 21 '18 at 16:25
I am no expert on history, but googling yields the name "Leonhard Euler." I am unsure where you get the "t"...
– Mohammad Zuhair Khan
Nov 21 '18 at 16:25
I once read the appendix to TE Lawrence's Seven Pillars of Wisdom ... I think it was a bad influence on me!
– AmbretteOrrisey
Nov 21 '18 at 16:35
I once read the appendix to TE Lawrence's Seven Pillars of Wisdom ... I think it was a bad influence on me!
– AmbretteOrrisey
Nov 21 '18 at 16:35
add a comment |
4 Answers
We're trying, but it's hard.
A better analogy than the Gamma function would be the way you can now define $x^y$ for real $y$. Why does $x^{3/2}$ make sense? Because I can solve $y^2=x^3$. (Admittedly any solution $y=y_0$ implies $y=-y_0$ is a solution too, but we have a convention to get around that for $x>0$.)
So what would $^{3/2}x$ mean? Presumably, a solution of $y^y=x^{x^x}$. Unfortunately, the values of $y^y$ for $y\in(0,\,\frac{1}{e})$ are repeated again for some $y>\frac{1}{e}$ in a... not particularly simple way, so it's already getting confusing. There are similar headaches when trying to define $^{k/(2l)}x$ for $x>0,\ k,\,l\in\mathbb{N},\ 2\nmid k$.
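To make that concrete, here is a minimal numerical sketch (Python; an illustration added here, not taken from this answer) that produces a candidate value of $^{3/2}x$ by solving $y^y=x^{x^x}$ with bisection, restricted to $y>1$ where $y\mapsto y^y$ is strictly increasing and the root is therefore unique:

```python
import math

def tetrate_3(x: float) -> float:
    """The integer tetrate ^3 x = x^(x^x)."""
    return x ** (x ** x)

def half_step_tetrate(x: float, lo: float = 1.0, hi: float = 1e3, tol: float = 1e-12) -> float:
    """Candidate for ^(3/2) x: the y > 1 solving y^y = x^(x^x).

    On y > 1 the map y -> y^y is strictly increasing, so bisection finds
    the unique root there; for 0 < y < 1 further solutions can exist,
    which is exactly the ambiguity discussed above.
    """
    target = tetrate_3(x)
    f = lambda y: y ** y - target
    assert f(lo) < 0 < f(hi), "root not bracketed for this x"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    x = 1.5
    y = half_step_tetrate(x)
    print(f"candidate ^(3/2) {x} = {y:.10f}")
    print(f"check: y^y = {y ** y:.10f},  x^(x^x) = {tetrate_3(x):.10f}")
```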
I'm not sure whether you can even prove $^y x$ with $y$ irrational can be defined by continuity, i.e. whether we can prove any rational sequence $y_n$ with $\lim_{n\to\infty}y_n=y$ gives the same $n\to\infty$ limit of $^{y_n}x$.
Having said all that, I bet we'll have made a lot of progress within 200 years (even if only in proving what we can't do).
answered Nov 21 '18 at 16:35
J.G.
I can't answer this as thoroughly as I would like to at the present moment - but one little item stands out - to me, at least, the most fundamental definition of $x^a$ for general real $a$ is that it is the solution of the differential equation $dy/dx = ay/x$ with $1^a = 1$ for all $a$. I am very fond of that scheme of mathematics in which functions are essentially defined by differential equations - as being solutions of them - that being taken as the axiom of what they are.
– AmbretteOrrisey
Nov 21 '18 at 16:43
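As a small numerical aside on the ODE characterisation of $x^a$ in the comment above (an illustration added here, not part of the thread): integrating $dy/dx = a\,y/x$ from $y(1)=1$ does reproduce $x^a$.

```python
def power_via_ode(x_end: float, a: float, steps: int = 10_000) -> float:
    """Integrate dy/dx = a*y/x with y(1) = 1 up to x_end using classical RK4.

    The exact solution is y = x**a, so this is just a check of the
    ODE characterisation of the general real power.
    """
    f = lambda x, y: a * y / x
    x, y = 1.0, 1.0
    h = (x_end - 1.0) / steps
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

if __name__ == "__main__":
    a = 1.7
    print(power_via_ode(2.0, a))   # numerical solution at x = 2
    print(2.0 ** a)                # exact value 2**1.7 for comparison
```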
The idea of E. Schroeder in the 19th century for bases $b$ that allow two real fixpoints of the iteration (for instance $b=\sqrt{2}$), sometimes called "regular iteration", seems to me a really good one. It allows a meaningful expression for fractional iteration from some starting point $z_0$ towards some endpoint $z_h$, where $h$ is the (possibly fractional, even complex) iteration-(h)eight, such that $z_0$ is the initial value, $z_1 = b^{z_0}$ is the first (integer) iteration, and so on.
However, the method needs a conjugacy in order to be applicable to all real $z_0$: one has to choose the appropriate fixpoint towards which to shift the power series for exponentiation with base $b$, to get an evaluable analytic answer at all.
Unfortunately it has been observed that, for those $z_0$ where either fixpoint-conjugacy can be taken, the results of fractional iteration differ - even if only by something of the order of $10^{-25}$.
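To make "regular iteration" concrete, here is a minimal numerical sketch (Python; an illustration added here, not taken from this answer) of fractional iteration of $z\mapsto b^z$ for $b=\sqrt{2}$, developed at the attracting fixpoint $2$: the Koenigs/Schroeder function $\sigma(z)=\lim_{n\to\infty}(f^{\circ n}(z)-2)/\lambda^n$ with multiplier $\lambda=f'(2)=\ln 2$ linearizes the map, and then $f^{\circ h}(z)=\sigma^{-1}(\lambda^h\,\sigma(z))$ for any real $h$.

```python
import math

B = math.sqrt(2.0)            # base b = sqrt(2); b^z has real fixpoints 2 (attracting) and 4
P = 2.0                       # the fixpoint used for the conjugacy
LAM = math.log(B) * B ** P    # multiplier lambda = f'(P) = ln(b)*b^P = ln 2 ~ 0.693
N = 50                        # iterations used to approximate the Koenigs limit

def f(z: float) -> float:
    return B ** z

def f_inv(z: float) -> float:
    return math.log(z) / math.log(B)

def sigma(z: float) -> float:
    """Koenigs/Schroeder function at the attracting fixpoint:
    sigma(z) = lim (f^n(z) - P) / lambda^n,  with sigma(f(z)) = lambda*sigma(z)."""
    for _ in range(N):
        z = f(z)
    return (z - P) / LAM ** N

def sigma_inv(w: float) -> float:
    """Inverse of sigma: place w deep in the linearized region, then pull back with f_inv."""
    z = P + w * LAM ** N
    for _ in range(N):
        z = f_inv(z)
    return z

def frac_iter(z: float, h: float) -> float:
    """Regular h-fold iterate of z -> b^z (h may be fractional);
    accuracy here is limited by the finite N (roughly 8 digits)."""
    return sigma_inv(LAM ** h * sigma(z))

if __name__ == "__main__":
    z0 = 1.0
    half = frac_iter(z0, 0.5)
    print("half-iterate of 1:  ", half)
    print("half applied twice: ", frac_iter(half, 0.5))   # should equal b^1 = sqrt(2)
    print("b^1 directly:       ", f(z0))
```

The same construction carried out at the other fixpoint, $4$ (which is repelling, so one iterates $f^{-1}$ instead), gives values that differ from these by a tiny amount, which is the discrepancy mentioned above.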
A basic problem for general $b$ is the multivaluedness of complex exponentiation/logarithmization, or say, the "clock arithmetic" with respect to the $2\pi i$ term in the exponentiation - I once saw an article (R. Corless et al.) on the "winding number" which tries to make sense of introducing one more parameter for the complex numbers, so as to turn that "clock arithmetic" into a real arithmetic. But that was no real progress for the problem here.
So I think that, similarly to L. Euler's full workout of the multivaluedness of the logarithm and then of the representation of the gamma function as a limit of partial products, we need some further idea here - the E. Schroeder idea seems to me just a small insular solution, however nice it is...
(Just as a remark: you might consider the two common versions of the interpolation of the Fibonacci numbers $\operatorname{fib}(n)$ to continuous functions of the index: there is one ansatz providing real numbers for real indexes, and another ansatz (in analogy, perhaps, to the Schroeder method) which gives complex numbers for real indexes but seems smoother when seen overall in the complex plane. See a small essay about this at my homepage.)
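As an illustration of that remark (a sketch added here; I am guessing that these are the two ansätze meant): the real-valued interpolation replaces $(-1/\varphi)^n$ in Binet's formula by $\cos(\pi n)\,\varphi^{-n}$, while the other takes the principal value of $(-1/\varphi)^n$, which is complex for non-integer $n$ and spirals through the complex plane, agreeing with the first at the integers.

```python
import math

PHI = (1 + math.sqrt(5)) / 2   # golden ratio; Binet: fib(n) = (PHI**n - (-1/PHI)**n)/sqrt(5)

def fib_real(x: float) -> float:
    """Real-valued interpolation: replace (-1/PHI)**x by cos(pi*x) * PHI**(-x)."""
    return (PHI ** x - math.cos(math.pi * x) * PHI ** (-x)) / math.sqrt(5)

def fib_complex(x: float) -> complex:
    """Complex-valued interpolation: principal value of (-1/PHI)**x."""
    return (PHI ** x - (-1 / PHI) ** complex(x)) / math.sqrt(5)

if __name__ == "__main__":
    for n in range(8):                      # both agree with fib(n) at integer n
        print(n, round(fib_real(n)), fib_complex(n))
    print("x = 2.5:", fib_real(2.5), fib_complex(2.5))   # between integers they differ
```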
I presume that by "the first ansatz" you mean the φⁿ±φ⁻ⁿ thing. I have no idea about the other, but I am very curious about it now, especially as you say it is an instance of the Schroeder method ... which sounds like it could do with 'tapering' by means of a nice familiar & relatively simple example for someone coaching themself in it from the beginning! Anyway - thank you for the information ... but it dashed my hopes a bit when you began to speak of it being yet another insular effort. But I think (i) it does require fundamentally new thought (ii) it is there - yet to be found!
– AmbretteOrrisey
Dec 7 '18 at 20:36
@AmbretteOrrisey: thanks for your comment. Unfortunately I'm very busy today and over the next days and have no room to step in again. Let's see - end of next week...
– Gottfried Helms
Dec 8 '18 at 9:12
Of course I don't expect you to dispense me a course on the Schroeder method! On the contrary, thank you for taking the trouble to steer my attention in that direction. And I do often just throw ideas out without the expectation that those who catch them process or develop them for me!
– AmbretteOrrisey
Dec 8 '18 at 12:31
@AmbretteOrrisey: I've found my old discussion about the interpolation of the Fibonacci numbers, and I've added the link to the remark in my answer. At first I thought I'd posted it as a Q&A here on MSE, but I did not know this place at the time and communicated via the sci.math newsgroup on Usenet. I hope the essay is helpful/explanatory regarding my remark.
– Gottfried Helms
Dec 9 '18 at 2:47
Looks like you're giving me a course of instruction anyway! That's your work? I think I can discern your style of writing in it. I've only just looked at it - it'll take a while for that to mature. I see the 'other' form of the 'continuous-isation' of the Fibonacci numbers - a spiral in the complex plane. Thanks for your ongoing attention ... please don't be tempted to make inroads into your time ... it wasn't meant as a subtle goad or anything when I said I don't expect a course of instruction, and this is plenty to be going on with for now. ¶ Thanks for that direction - I shall assuredly enjoy delving into that.
– AmbretteOrrisey
Dec 9 '18 at 8:42
There is a remarkable expression I've found in this connection that might have some bearing on the matter: the Taylor series of the iterates of $$\operatorname{f}(x)\to x^{\operatorname{f}(x)}$$ when $x\equiv e^z$, such that $$\operatorname{f_0}(x)\equiv 1\,,\qquad\operatorname{f_1}(x)\equiv\exp(z)\,,\qquad\operatorname{f_2}(x)\equiv\exp(z\exp(z))\,,$$ etc. The coefficients for $k=0\dots n$ are those of the Lambert W-function; but thereafter, for $k>n$, the coefficients are given by the following recursion. Let $a_{n,0}=1$ for all $n$, $a_{0,k}=0$ for $k>0$, & thereafter $$a_{n,k}=\frac{1}{n}\sum_{j=1}^k j\,a_{n,k-j}\,a_{n-1,j-1}\,.$$ These coefficients are in a sense 'wasted' up through $k=n$, in that they do not actually appear in the Taylor series; and yet they still serve the function of being necessary for the generation of the coefficients that do appear. It is fascinating, to my mind, the way there is a kind of discontinuity in the series - the coefficients generated by this recursion 'peeling away' one at a time as $n$ is incremented, 'revealing' the coefficients of the Lambert W-function 'underneath'; and it is a well-known result that the limit of the iterates of $$\operatorname{f}(x)\to x^{\operatorname{f}(x)}$$ as the number of iterations tends to $\infty$ is indeed expressible in terms of the Lambert W-function.
Whether this is susceptible of treatment by Schroeder's method I would not venture definitely to say at the present time, as, though I see in outline how that method can be applied to something like the recursion that gives the Fibonacci numbers, I am rather daunted by that discontinuity in the generation of the coefficients in this case; and I cannot see at a glance how it would be encoded.
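For what it is worth, here is a literal transcription of that recursion into a short script (an illustration added here; it is not checked against the coefficient tables further down this page), using exact fractions so the behaviour of the $a_{n,k}$ can be inspected directly:

```python
from fractions import Fraction

def coeff_table(n_max: int, k_max: int):
    """a[n][k] from the recursion stated above:
    a[n][0] = 1,  a[0][k] = 0 for k > 0,
    a[n][k] = (1/n) * sum_{j=1..k} j * a[n][k-j] * a[n-1][j-1]."""
    a = [[Fraction(0)] * (k_max + 1) for _ in range(n_max + 1)]
    for n in range(n_max + 1):
        a[n][0] = Fraction(1)
    for n in range(1, n_max + 1):
        for k in range(1, k_max + 1):
            s = sum(j * a[n][k - j] * a[n - 1][j - 1] for j in range(1, k + 1))
            a[n][k] = s / n
    return a

if __name__ == "__main__":
    for n, row in enumerate(coeff_table(6, 6)):
        print(n, [str(c) for c in row])
```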
Ambrette- you might look at my older discussion cited in my new answer here math.stackexchange.com/a/3056786/1714
– Gottfried Helms
Dec 30 '18 at 15:20
I just found some older discussion of exactly the ansatz in your own answer, though without a successful route to, for instance, a half-iterate of the function. I had posted this in the "tetration forum" in about 2009.
The final conclusion of that study was (see the bottom of this answer): So for that approach: it looks as if we cannot express a half-iterate based on this type of power series. Pity.... Maybe we can find a workaround - change the order of summation or something else; I don't have an idea.
Now to my message itself (date of saving: 16 Mar 2009):
Here I present three postings in sci.math. It seems that the method is not well suited for the interpolation to fractional heights (as I had hoped it would be). But - perhaps we can find a workaround. On the other hand: it is not needed that many different methods exist, so...
Also Ioannis (Galidakis) reminded me of the entry "PowerTower" in MathWorld, where he already characterized this type of series. (http://mathworld.wolfram.com/PowerTower.html)
Here are the current messages from sci.math (some edits in double brackets [< >]):
*subject: tetration: another family of powerseries for fractional iteration*
Maybe this is all known; I didn't see it so far. The idea was triggered by
the comments of V Jovovic in the OEIS concerning the below generating functions.
Consider the sequence of functions
T0(x) = 1, T1(x) = exp(x*1), T2(x) = exp(x*exp(x)), T_h(x) = exp(x*T_{h-1}(x)),...
They are also the generating functions for the following sequence of power series:
T0: 1 + 0 + 0 + ....
T1: 1 + x + 1/2*x^2 + 1/6*x^3 + 1/24*x^4 + 1/120*x^5 + 1/720*x^6 + 1/5040*x^7 ...
T2: 1 + x + 3/2*x^2 + 10/6*x^3 + 41/24*x^4 + 196/120*x^5 + 1057/720*x^6 + 6322/5040*x^7 +...
T3: 1 + x + 3/2*x^2 + 16/6*x^3 + 101/24*x^4 + 756/120*x^5 + 6607/720*x^6 + 65794/5040*x^7 + ...
T4: 1 + x + 3/2*x^2 + 16/6*x^3 + 125/24*x^4 + 1176/120*x^5 + 12847/720*x^6 + 160504/5040*x^7 + ...
T5: 1 + x + 3/2*x^2 + 16/6*x^3 + 125/24*x^4 + 1296/120*x^5 + 16087/720*x^6 + 229384/5040*x^7 + ...
T6: 1 + x + 3/2*x^2 + 16/6*x^3 + 125/24*x^4 + 1296/120*x^5 + 16807/720*x^6 + 257104/5040*x^7 + ...
T7: 1 + x + 3/2*x^2 + 16/6*x^3 + 125/24*x^4 + 1296/120*x^5 + 16807/720*x^6 + 262144/5040*x^7 + ...
T8: 1 + x + 3/2*x^2 + 16/6*x^3 + 125/24*x^4 + 1296/120*x^5 + 16807/720*x^6 + 262144/5040*x^7 + ...
T9: 1 + x + 3/2*x^2 + 16/6*x^3 + 125/24*x^4 + 1296/120*x^5 + 16807/720*x^6 + 262144/5040*x^7 + ...
...
Too: 1 + x + 3/2*x^2 + 4^2/3!*x^3 + 5^3/4!*x^4 + 6^4/5!*x^5 + 7^5/6!*x^6 + 8^6/7!*x^7 + ... //limit h->inf
That means, if x = log(b), we have by this
T0(x) = 1
T1(x) = b = b^^1
T2(x) = b^b = b^^2
T3(x) = b^b^b = b^^3
...
Too(x) = ...^b^b = b^^oo
and for the limit h->inf we have with Too(x) the series for the h-function of b: Too(x) = h(b)
which is convergent for |x|<exp(-1)
[<...>]
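Stepping outside the quoted message for a moment: the rows above are easy to reproduce with a short truncated-power-series computation. The sketch below (an illustration added here, not part of the message) builds $T_h(x)=\exp(x\,T_{h-1}(x))$ term by term using the standard recurrence for the exponential of a power series, and prints $k!\cdot[x^k]T_h(x)$, i.e. the rescaled integer rows that reappear further down.

```python
from fractions import Fraction
from math import factorial

def exp_series(a, N):
    """Coefficients of exp(A(x)) modulo x^(N+1), given A's coefficients a with a[0] = 0.
    Uses the recurrence b_n = (1/n) * sum_{k=1..n} k * a_k * b_{n-k}."""
    b = [Fraction(0)] * (N + 1)
    b[0] = Fraction(1)
    for n in range(1, N + 1):
        b[n] = sum(Fraction(k) * a[k] * b[n - k] for k in range(1, n + 1)) / n
    return b

def tower_series(h, N):
    """Coefficients of T_h(x), where T_0 = 1 and T_h = exp(x * T_{h-1})."""
    t = [Fraction(1)] + [Fraction(0)] * N           # T_0(x) = 1
    for _ in range(h):
        a = [Fraction(0)] + t[:N]                   # A(x) = x * T_{h-1}(x), truncated
        t = exp_series(a, N)
    return t

if __name__ == "__main__":
    N = 7
    for h in range(8):
        row = (factorial(k) * c for k, c in enumerate(tower_series(h, N)))
        print(f"T{h}:", " ".join(str(v) for v in row))
    # e.g. T2: 1 1 3 10 41 196 1057 6322
```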
The 2.nd msg:
> > (Galidakis replies) :
> > However, the recursive expression for the coefficients
> > given in (6) [<in mathworld, G.H.>] does not seem to allow that.
> >
> > If you can find a way to interpolate between those coefficients for non-natural
> > heights using your matrix method AND at the same time you manage to preserve the
> > functional equation F(x + 1) = e^{x*F(x)}, then, by Jove, you've got a nice
> > analytic solution to tetration :-)
Ok, let's give a start. Recall:
T0: 1 + 0 + 0 + ....
T1: 1 + x + 1/2*x^2 + 1/6*x^3 + 1/24*x^4 + 1/120*x^5 + 1/720*x^6 + 1/5040*x^7 ...
T2: 1 + x + 3/2*x^2 + 10/6*x^3 + 41/24*x^4 + 196/120*x^5 + 1057/720*x^6 + 6322/5040*x^7 +...
T3: 1 + x + 3/2*x^2 + 16/6*x^3 + 101/24*x^4 + 756/120*x^5 + 6607/720*x^6 + 65794/5040*x^7 + ...
T4: 1 + x + 3/2*x^2 + 16/6*x^3 + 125/24*x^4 + 1176/120*x^5 + 12847/720*x^6 + 160504/5040*x^7 + ...
T5: 1 + x + 3/2*x^2 + 16/6*x^3 + 125/24*x^4 + 1296/120*x^5 + 16087/720*x^6 + 229384/5040*x^7 + ...
T6: 1 + x + 3/2*x^2 + 16/6*x^3 + 125/24*x^4 + 1296/120*x^5 + 16807/720*x^6 + 257104/5040*x^7 + ...
T7: 1 + x + 3/2*x^2 + 16/6*x^3 + 125/24*x^4 + 1296/120*x^5 + 16807/720*x^6 + 262144/5040*x^7 + ...
T8: 1 + x + 3/2*x^2 + 16/6*x^3 + 125/24*x^4 + 1296/120*x^5 + 16807/720*x^6 + 262144/5040*x^7 + ...
T9: 1 + x + 3/2*x^2 + 16/6*x^3 + 125/24*x^4 + 1296/120*x^5 + 16807/720*x^6 + 262144/5040*x^7 + ...
...
Too: 1 + x + 3/2*x^2 + 4^2/3!*x^3 + 5^3/4!*x^4 + 6^4/5!*x^5 + 7^5/6!*x^6 + 8^6/7!*x^7 + ... //limit h->inf
We want to interpolate for the coefficients of T0.5, that means between rows T0 and T1.
I'll rewrite the power series without the powers of x. And since we do
the binomial composition of coefficients at like powers of x, we compose
the coefficients down a column; so the common denominator (the factorial) of a
column can be omitted for the scheme.
Thus I get for the original coefficients, only rescaled:
T0: 1 0 0 0 0 0 0 0 ...
T1: 1 1 1 1 1 1 1 1 ...
T2: 1 1 3 10 41 196 1057 6322
T3: 1 1 3 16 101 756 6607 65794
T4: 1 1 3 16 125 1176 12847 160504
T5: 1 1 3 16 125 1296 16087 229384
T6: 1 1 3 16 125 1296 16807 257104
T7: 1 1 3 16 125 1296 16807 262144
T8: 1 1 3 16 125 1296 16807 262144
T9: 1 1 3 16 125 1296 16807 262144
...
The first binomial-composition along the columns gives
X0: 1 0 0 0 0 0 0 0 ...
X1: 0 1 1 1 1 1 1 1 ...
X2: 0 -1 1 8 39 194 1055 6320
X3: 0 1 -3 -11 -19 171 3439 46831
X4: 0 -1 5 8 -37 -676 -7243 -64744
X5: 0 1 -7 1 105 1021 7357 21589
X6: 0 -1 9 -16 -161 -1026 -3301 67304
X7: 0 1 -11 37 181 631 -3605 -168125
X8: 0 -1 13 -64 -141 104 10961 246224
X9: 0 1 -15 97 17 -999 -16007 -278711
... ...
The second binomial-composition (using h=0.5)
[< Table 5: this will be the reference-table for the composition of coefficients of T05 >]
Y0: 1 0 0 0 0 0 0 ...
Y1: 0 1/2 1/2 1/2 1/2 1/2 1/2 ...
Y2: 0 1/8 -1/8 -1 -39/8 -97/4 -1055/8
Y3: 0 1/16 -3/16 -11/16 -19/16 171/16 3439/16
Y4: 0 5/128 -25/128 -5/16 185/128 845/32 36215/128
Y5: 0 7/256 -49/256 7/256 735/256 7147/256 51499/256
Y6: 0 21/1024 -189/1024 21/64 3381/1024 10773/512 69321/1024
Y7: 0 33/2048 -363/2048 1221/2048 5973/2048 20823/2048 -118965/2048
Y8: 0 429/32768 -5577/32768 429/512 60489/32768 -5577/4096 -4702269/32768
Y9: 0 715/65536 -10725/65536 69355/65536 12155/65536 -714285/65536 -11445005/65536 ...
... ...
----------------------------------------------------------------------------------------------------
sum. s0 s1 s2 s3 ...
====================================================================================================
T0.5: c0 c1 c2 c3 ...
and T0.5(x) = c0 + c1*x + c2*x^2/2! + c3*x^3/3! + ...
the interpolated coefficients c0,c1,c2,... for h=0.5 should then be computed by the
column-sums (and finally the rescaling by the omitted factorials).
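Reading the two "binomial compositions" as Newton's forward-difference interpolation in the height - column by column one takes the finite differences $\Delta^n$ down the rows and then sums $\binom{h}{n}\Delta^n$ with $h=\tfrac12$, which is what reproduces the X- and Y-rows above - the procedure can be sketched in a few lines (an illustration added here, not part of the original message; the column sums, divided by $k!$, would be the coefficients $c_k$):

```python
from fractions import Fraction
from math import comb, factorial

# Rescaled coefficients k! * [x^k] T_h(x), rows T0..T9, columns k = 0..7 (the table above).
T = [
    [1, 0, 0,  0,   0,    0,     0,      0],
    [1, 1, 1,  1,   1,    1,     1,      1],
    [1, 1, 3, 10,  41,  196,  1057,   6322],
    [1, 1, 3, 16, 101,  756,  6607,  65794],
    [1, 1, 3, 16, 125, 1176, 12847, 160504],
    [1, 1, 3, 16, 125, 1296, 16087, 229384],
    [1, 1, 3, 16, 125, 1296, 16807, 257104],
    [1, 1, 3, 16, 125, 1296, 16807, 262144],
    [1, 1, 3, 16, 125, 1296, 16807, 262144],
    [1, 1, 3, 16, 125, 1296, 16807, 262144],
]

def binom_frac(h: Fraction, n: int) -> Fraction:
    """Generalized binomial coefficient C(h, n) for fractional h."""
    num = Fraction(1)
    for i in range(n):
        num *= h - i
    return num / factorial(n)

def newton_partial_sums(col: int, h: Fraction = Fraction(1, 2)):
    """Partial sums of sum_n C(h, n) * Delta^n down column `col` of T, i.e. the
    column-wise Newton series for height h; each summand is the Y-row entry."""
    column = [Fraction(T[r][col]) for r in range(len(T))]
    sums, total = [], Fraction(0)
    for n in range(len(column)):
        delta_n = sum((-1) ** (n - j) * comb(n, j) * column[j] for j in range(n + 1))
        total += binom_frac(h, n) * delta_n
        sums.append(total)
    return sums

if __name__ == "__main__":
    for col in range(4):
        print(f"column {col}:", [str(s) for s in newton_partial_sums(col)])
    # column 1 creeps towards 1 (so c1 = 1); the later columns show the bad
    # convergence remarked on in the message.
```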
The partial sums in the columns converge only badly, if at all, so let's look
whether we can find some analytic solution.
The denominators in the rows can be majorized by powers of 4, and all can then be divided by
2, so let's rewrite this
common scaling
Y0: 1/2 0 0 0 0 0 0 0 0 0 *2 /4^0
Y1: 0 1 1 1 1 1 1 1 1 1 *2 /4^1
Y2: 0 1 -1 -8 -39 -194 -1055 -6320 -41391 -293606 *2 /4^2
Y3: 0 2 -6 -22 -38 342 6878 93662 1219314 16331654 *2 /4^3
Y4: 0 5 -25 -40 185 3380 36215 323720 2128445 -5199340 *2 /4^4
Y5: 0 14 -98 14 1470 14294 102998 302246 -9722034 -332756410 *2 /4^5
Y6: 0 42 -378 672 6762 43092 138642 -2826768 -93176118 -1954258068 *2 /4^6
Y7: 0 132 -1452 4884 23892 83292 -475860 -22192500 -463551132 -7659247332 *2 /4^7
Y8: 0 429 -5577 27456 60489 -44616 -4702269 -105630096 -1778712507 -23047084632 *2 /4^8
Y9: 0 1430 -21450 138710 24310 -1428570 -22890010 -398556730 -5760084330 -51266562490 *2 /4^9
... ...
----------------------------------------------------------------------------------------------------
sum. s0 s1 s2 s3 ...
====================================================================================================
T0.5: c0 c1 c2 c3 ...
and T0.5(x) = c0 + c1*x + c2*x^2/2! + c3*x^3/3! + ...
Let's look at the column sums of the table; those sums, divided by the factorials, give the coefficients
c_k for the T0.5(x) power series.
First, s0 = 1 (remember the scaling extracted to the rhs),
so c0 = 1
Next, s1. Here we recognize that the numbers are the Catalan numbers and, with the
current scaling, have the generating function 1 - sqrt(1-z). Since we simply want to know
the sum, we set z=1 and get for the sum
s1 = 1 - sqrt(1-1) = 1
so c1 = 1
Next, s2. It becomes more difficult. We can add columns s2 and s1 to get a sequence
which can formally be expressed via the derivative of the sqrt(1-z) function, where
possibly we also need a scaling in z, so likely something like
1 - (sqrt(1 - a*z))'
It looks as if this series is divergent too, so we'll have to see whether this
operation (and the following ones, which are surely similar) can be justified / makes sense
at all.
-----------------
I proceeded for the first few terms s2,s3,s4,s5... Things seem to come out uneasy... :( Now follows msg 3:
(...)
Formally composed by derivatives of sqrt(1-z) I get for the series s1,s2,s3,... the following
generating functions
s0: 1
s1: 1 - 1*sqrt(1-z)
s2: 3 - 3*sqrt(1-z) + 2*z*(sqrt(1-z)')
s3: 16 - 16*sqrt(1-z) + 15*z*(sqrt(1-z)') - 3*z^2*(sqrt(1-z)'')
s4: 125 - 125*sqrt(1-z) + 124*z*(sqrt(1-z)') - 42*z^2*(sqrt(1-z)'') + 4*z^3*(sqrt(1-z)''')
s5: 1296 - 1296*sqrt(1-z) + 1295*z*(sqrt(1-z)') - 550*z^2*(sqrt(1-z)'') + 90*z^3*(sqrt(1-z)''') - 5*z^4*(sqrt(1-z)'''')
...
which have to be evaluated at z=1 to give the values of the sums. Now the derivatives have
a vertical asymptote at z=1, so there are infinities everywhere...
Even more obviously, if I expand the derivatives into terms of sqrt(1-z), I get the following
explicit generating functions for the series s0, s1, s2, ...:
s0: 1
s1: 1 - sqrt(1-z) * ( 1 )
s2: 3 - sqrt(1-z)/(1-z)^1* ( 3 - 4/2*z)
s3: 16 - sqrt(1-z)/(1-z)^2* ( 16 - 49/2*z + 31/4*z^2 )
s4: 125 - sqrt(1-z)/(1-z)^3* ( 125 - 626/2*z + 962/4*z^2 - 408/8*z^3 )
s5: 1296 - sqrt(1-z)/(1-z)^4* (1296 - 9073/2*z + 22784/4*z^2 - 23462/8*z^3 + 7561/16*z^4)
where all except the first two grow unboundedly as z -> 1
So for that approach: it looks as if we cannot express a half-iterate based on
this type of power series. Pity.... Maybe we can find a workaround - change the order
of summation or something else; I don't have an idea.
Another idea around?
(end of that msg to the tetration-forum)
Been a tad absent from here lately ... and I see you've been rather busy at my post in the meantime. I've actually been marshaling some thoughts on the inverse Ackermann function, particularly in connection with Davenport-Schinzel sequences, and have a post about it nearly ripe. I think it will chime with what you have contributed here!
– AmbretteOrrisey
yesterday
@AmbretteOrrisey: you're welcome. And happy new year! Unfortunately I shall likely have no new ideas on all this, but of course I would like it very much if some connections with my own older stuff came out of it. For the inverse Ackermann function it is perhaps useful to contact Mr. Daniel Geisler, who is a founding member of the tetration forum, also has an account here on MSE or MO, and is sporadically active on questions about tetration. Perhaps via email you might be able to establish a helpful connection.
– Gottfried Helms
yesterday
We're trying, but it's hard.
A better analogy than the Gamma function would be the way you can now define $x^y$ for real $y$. Why does $x^{3/2}$ make sense? Because I can solve $y^2=x^3$. (Admittedly any solution $y=y_0$ implies $y<y_0$ is a solution too, but we have a convention to get around that for $x>0$.)
So what would $^{3/2}x$ mean? Presumably, a solution of $y^y=x^{x^x}$. Unfortunately, the values of $y^y$ for $yin (0,,frac{1}{e})$ are repeated again for some $y>frac{1}{e}$ in a... not particularly simple way, so it's already getting confusing. There are similar headaches when trying to define $^{k/(2l)}x$ for $x>0,,k,,linmathbb{N},,2nmid k$.
I'm not sure whether you can even prove $^y x$ with $y$ irrational can be defined by continuity, i.e. whether we can prove any rational sequence $y_n$ with $lim_{ntoinfty}y_n=y$ gives the same $ntoinfty$ limit of $^{y_n}x$.
Having said all that, I bet we'll have made a lot of progress within 200 years (even if only in proving what we can't do).
I can't answer this as thoroughly as I would like to at the present moment - but one little item stands out - to me, at least, the most fundamental definition of $x^a$ for general real $a$ is that it is the solution of the differential equation $dy/dx =ay/x$ & $1^a = 1$ forall $a$. But I am very fond of that scheme of mathematics inwhich functions are essentially defined by differential equations - as being solutions of them - that as the axiom of what they are.
– AmbretteOrrisey
Nov 21 '18 at 16:43
add a comment |
We're trying, but it's hard.
A better analogy than the Gamma function would be the way you can now define $x^y$ for real $y$. Why does $x^{3/2}$ make sense? Because I can solve $y^2=x^3$. (Admittedly any solution $y=y_0$ implies $y<y_0$ is a solution too, but we have a convention to get around that for $x>0$.)
So what would $^{3/2}x$ mean? Presumably, a solution of $y^y=x^{x^x}$. Unfortunately, the values of $y^y$ for $yin (0,,frac{1}{e})$ are repeated again for some $y>frac{1}{e}$ in a... not particularly simple way, so it's already getting confusing. There are similar headaches when trying to define $^{k/(2l)}x$ for $x>0,,k,,linmathbb{N},,2nmid k$.
I'm not sure whether you can even prove $^y x$ with $y$ irrational can be defined by continuity, i.e. whether we can prove any rational sequence $y_n$ with $lim_{ntoinfty}y_n=y$ gives the same $ntoinfty$ limit of $^{y_n}x$.
Having said all that, I bet we'll have made a lot of progress within 200 years (even if only in proving what we can't do).
I can't answer this as thoroughly as I would like to at the present moment - but one little item stands out - to me, at least, the most fundamental definition of $x^a$ for general real $a$ is that it is the solution of the differential equation $dy/dx =ay/x$ & $1^a = 1$ forall $a$. But I am very fond of that scheme of mathematics inwhich functions are essentially defined by differential equations - as being solutions of them - that as the axiom of what they are.
– AmbretteOrrisey
Nov 21 '18 at 16:43
add a comment |
We're trying, but it's hard.
A better analogy than the Gamma function would be the way you can now define $x^y$ for real $y$. Why does $x^{3/2}$ make sense? Because I can solve $y^2=x^3$. (Admittedly any solution $y=y_0$ implies $y<y_0$ is a solution too, but we have a convention to get around that for $x>0$.)
So what would $^{3/2}x$ mean? Presumably, a solution of $y^y=x^{x^x}$. Unfortunately, the values of $y^y$ for $yin (0,,frac{1}{e})$ are repeated again for some $y>frac{1}{e}$ in a... not particularly simple way, so it's already getting confusing. There are similar headaches when trying to define $^{k/(2l)}x$ for $x>0,,k,,linmathbb{N},,2nmid k$.
I'm not sure whether you can even prove $^y x$ with $y$ irrational can be defined by continuity, i.e. whether we can prove any rational sequence $y_n$ with $lim_{ntoinfty}y_n=y$ gives the same $ntoinfty$ limit of $^{y_n}x$.
Having said all that, I bet we'll have made a lot of progress within 200 years (even if only in proving what we can't do).
We're trying, but it's hard.
A better analogy than the Gamma function would be the way you can now define $x^y$ for real $y$. Why does $x^{3/2}$ make sense? Because I can solve $y^2=x^3$. (Admittedly any solution $y=y_0$ implies $y<y_0$ is a solution too, but we have a convention to get around that for $x>0$.)
So what would $^{3/2}x$ mean? Presumably, a solution of $y^y=x^{x^x}$. Unfortunately, the values of $y^y$ for $yin (0,,frac{1}{e})$ are repeated again for some $y>frac{1}{e}$ in a... not particularly simple way, so it's already getting confusing. There are similar headaches when trying to define $^{k/(2l)}x$ for $x>0,,k,,linmathbb{N},,2nmid k$.
I'm not sure whether you can even prove $^y x$ with $y$ irrational can be defined by continuity, i.e. whether we can prove any rational sequence $y_n$ with $lim_{ntoinfty}y_n=y$ gives the same $ntoinfty$ limit of $^{y_n}x$.
Having said all that, I bet we'll have made a lot of progress within 200 years (even if only in proving what we can't do).
answered Nov 21 '18 at 16:35
J.G.
23k22137
23k22137
I can't answer this as thoroughly as I would like to at the present moment - but one little item stands out - to me, at least, the most fundamental definition of $x^a$ for general real $a$ is that it is the solution of the differential equation $dy/dx =ay/x$ & $1^a = 1$ forall $a$. But I am very fond of that scheme of mathematics inwhich functions are essentially defined by differential equations - as being solutions of them - that as the axiom of what they are.
– AmbretteOrrisey
Nov 21 '18 at 16:43
add a comment |
I can't answer this as thoroughly as I would like to at the present moment - but one little item stands out - to me, at least, the most fundamental definition of $x^a$ for general real $a$ is that it is the solution of the differential equation $dy/dx =ay/x$ & $1^a = 1$ forall $a$. But I am very fond of that scheme of mathematics inwhich functions are essentially defined by differential equations - as being solutions of them - that as the axiom of what they are.
– AmbretteOrrisey
Nov 21 '18 at 16:43
I can't answer this as thoroughly as I would like to at the present moment - but one little item stands out - to me, at least, the most fundamental definition of $x^a$ for general real $a$ is that it is the solution of the differential equation $dy/dx =ay/x$ & $1^a = 1$ forall $a$. But I am very fond of that scheme of mathematics inwhich functions are essentially defined by differential equations - as being solutions of them - that as the axiom of what they are.
– AmbretteOrrisey
Nov 21 '18 at 16:43
I can't answer this as thoroughly as I would like to at the present moment - but one little item stands out - to me, at least, the most fundamental definition of $x^a$ for general real $a$ is that it is the solution of the differential equation $dy/dx =ay/x$ & $1^a = 1$ forall $a$. But I am very fond of that scheme of mathematics inwhich functions are essentially defined by differential equations - as being solutions of them - that as the axiom of what they are.
– AmbretteOrrisey
Nov 21 '18 at 16:43
add a comment |
The idea of E.Schroeder in the 19'th century for bases $b$ (allowing two real fixpoints for iterations, for instance $b=sqrt{2}$) , sometimes called "regular iteration" seems to me a real good one. It allows a meaningful expression for fractional iteration from some starting point $z_0$ towards some endpoint $z_h$ where $h$ means the (possibly fractional, even complex) iteration-(h)eight, such that $z_0$ is the initial value, $z_1 = b^{z_0}$ is the first (integer) iteration and so on.
However, that method needs conjugacy to be able to be applied to all real $z_0$ : one has to choose the appropriate fixpoint for shifting the power-series for the exponentiation with base $b$ and get an evaluatable analytical answer at all.
Unfortunately it has been observed, that in the cases $z_0$ where each fixpoint-conjugacy can be taken, the results of fractional iteration are different - even if only by some 1e-25 or so.
A basic problem for the general $b$ is the multivaluedness of the complex exponentiation/logarithmization, or say, the "clock-arithmetic" with respect to the $2 pi î$-term in the exponentiation - I have once seen an article (R.Corless & al.) on the "winding number" which tries to make sense of introduction of one more parameter for the complex numbers to overcome that "clock-arithmetic" to a real-arithmetic. But that was no real progress for the problem here.
So I think, similarly to the full workout of L. Euler about the multivaluedness of the logarithm and then the representation of the gamma-function as an infinite sum of partial products, we need some more idea here - the E. Schroeder-idea seems to me just like a small insular solution, however nice ever...
(just as a remark: you might consider the two common versions of the interpolation of the Fibonacci numbers $fib(n)$ to continuous functions of the index: there is one ansatz providing real numbers for real index, and another ansatz (in analogy perhaps to the Schroeder method) which gives complex numbers for real indexes but seems more smooth when seen overall in the complex plane. See a small essay about this at my homepage)
I presume that by "the first ansatz" you mean the φⁿ±φ⁻ⁿ thing. I have nod idea about the other, but I am very curious about it now, especially as you say it is an instance of the Schroeder method ... which sounds like it could do with 'tapering' by means of a nice familiar & relatively simple example for someone coaching themself in it from the beginning! Anyway - thank-you for the information ... but it dashed my hopes a bit when you began to speak of it being yet another insular effort. But I think (i) it does require fundamentally new thought (ii) it is there - yet to be found!
– AmbretteOrrisey
Dec 7 '18 at 20:36
@AmbretteOrrisey: thanks for your comment. Unfortunately I'm much busy this and next days and have no space to step in again. Let's see end of next week...
– Gottfried Helms
Dec 8 '18 at 9:12
Of course I don't expect you to dispense me a course on the Schroeder method! On the contrary, thankyou for taking the trouble to steer my attention in that direction. And I do often just throw ideas out without the expectation that those who catch them process or develop them for me!
– AmbretteOrrisey
Dec 8 '18 at 12:31
@AmbretteOrrisey: I've found my old discussion about the interpolation of the fibonacci numbers, I've added the link to my remark in my answer. First I thought I'd posted that as Q&A here in MSE but I did not know this place here and communicated via the sci.math-newsgroup in the usenet. Hope the essay is helpful/explanative about my remark.
– Gottfried Helms
Dec 9 '18 at 2:47
Looks ike you're giving me a course of instruction anyway! That's your work? I think I can discern your style of writing in it. I've only just looked at it - it'll take a while for that to mature. I see the 'other' form of the 'continuous-isation' of the fibonacci numbers - a spiral in the complex plane. Thanks for your ongoing attention ... please don't be tempted to make inroads into your time ... it wasn't meant as a subtle goad or anything when I said I don't expect a course of instruction, and plenty to be fornow.¶ Thanks for that direction - I shall assuredly enjoy delving into that.
– AmbretteOrrisey
Dec 9 '18 at 8:42
|
show 9 more comments
The idea of E.Schroeder in the 19'th century for bases $b$ (allowing two real fixpoints for iterations, for instance $b=sqrt{2}$) , sometimes called "regular iteration" seems to me a real good one. It allows a meaningful expression for fractional iteration from some starting point $z_0$ towards some endpoint $z_h$ where $h$ means the (possibly fractional, even complex) iteration-(h)eight, such that $z_0$ is the initial value, $z_1 = b^{z_0}$ is the first (integer) iteration and so on.
However, that method needs conjugacy to be able to be applied to all real $z_0$ : one has to choose the appropriate fixpoint for shifting the power-series for the exponentiation with base $b$ and get an evaluatable analytical answer at all.
Unfortunately it has been observed, that in the cases $z_0$ where each fixpoint-conjugacy can be taken, the results of fractional iteration are different - even if only by some 1e-25 or so.
A basic problem for the general $b$ is the multivaluedness of the complex exponentiation/logarithmization, or say, the "clock-arithmetic" with respect to the $2 pi î$-term in the exponentiation - I have once seen an article (R.Corless & al.) on the "winding number" which tries to make sense of introduction of one more parameter for the complex numbers to overcome that "clock-arithmetic" to a real-arithmetic. But that was no real progress for the problem here.
So I think, similarly to the full workout of L. Euler about the multivaluedness of the logarithm and then the representation of the gamma-function as an infinite sum of partial products, we need some more idea here - the E. Schroeder-idea seems to me just like a small insular solution, however nice ever...
(just as a remark: you might consider the two common versions of the interpolation of the Fibonacci numbers $fib(n)$ to continuous functions of the index: there is one ansatz providing real numbers for real index, and another ansatz (in analogy perhaps to the Schroeder method) which gives complex numbers for real indexes but seems more smooth when seen overall in the complex plane. See a small essay about this at my homepage)
I presume that by "the first ansatz" you mean the φⁿ±φ⁻ⁿ thing. I have nod idea about the other, but I am very curious about it now, especially as you say it is an instance of the Schroeder method ... which sounds like it could do with 'tapering' by means of a nice familiar & relatively simple example for someone coaching themself in it from the beginning! Anyway - thank-you for the information ... but it dashed my hopes a bit when you began to speak of it being yet another insular effort. But I think (i) it does require fundamentally new thought (ii) it is there - yet to be found!
– AmbretteOrrisey
Dec 7 '18 at 20:36
@AmbretteOrrisey: thanks for your comment. Unfortunately I'm much busy this and next days and have no space to step in again. Let's see end of next week...
– Gottfried Helms
Dec 8 '18 at 9:12
Of course I don't expect you to dispense me a course on the Schroeder method! On the contrary, thankyou for taking the trouble to steer my attention in that direction. And I do often just throw ideas out without the expectation that those who catch them process or develop them for me!
– AmbretteOrrisey
Dec 8 '18 at 12:31
@AmbretteOrrisey: I've found my old discussion about the interpolation of the fibonacci numbers, I've added the link to my remark in my answer. First I thought I'd posted that as Q&A here in MSE but I did not know this place here and communicated via the sci.math-newsgroup in the usenet. Hope the essay is helpful/explanative about my remark.
– Gottfried Helms
Dec 9 '18 at 2:47
Looks ike you're giving me a course of instruction anyway! That's your work? I think I can discern your style of writing in it. I've only just looked at it - it'll take a while for that to mature. I see the 'other' form of the 'continuous-isation' of the fibonacci numbers - a spiral in the complex plane. Thanks for your ongoing attention ... please don't be tempted to make inroads into your time ... it wasn't meant as a subtle goad or anything when I said I don't expect a course of instruction, and plenty to be fornow.¶ Thanks for that direction - I shall assuredly enjoy delving into that.
– AmbretteOrrisey
Dec 9 '18 at 8:42
|
show 9 more comments
The idea of E.Schroeder in the 19'th century for bases $b$ (allowing two real fixpoints for iterations, for instance $b=sqrt{2}$) , sometimes called "regular iteration" seems to me a real good one. It allows a meaningful expression for fractional iteration from some starting point $z_0$ towards some endpoint $z_h$ where $h$ means the (possibly fractional, even complex) iteration-(h)eight, such that $z_0$ is the initial value, $z_1 = b^{z_0}$ is the first (integer) iteration and so on.
However, that method needs conjugacy to be able to be applied to all real $z_0$ : one has to choose the appropriate fixpoint for shifting the power-series for the exponentiation with base $b$ and get an evaluatable analytical answer at all.
Unfortunately it has been observed, that in the cases $z_0$ where each fixpoint-conjugacy can be taken, the results of fractional iteration are different - even if only by some 1e-25 or so.
A basic problem for the general $b$ is the multivaluedness of the complex exponentiation/logarithmization, or say, the "clock-arithmetic" with respect to the $2 pi î$-term in the exponentiation - I have once seen an article (R.Corless & al.) on the "winding number" which tries to make sense of introduction of one more parameter for the complex numbers to overcome that "clock-arithmetic" to a real-arithmetic. But that was no real progress for the problem here.
So I think, similarly to the full workout of L. Euler about the multivaluedness of the logarithm and then the representation of the gamma-function as an infinite sum of partial products, we need some more idea here - the E. Schroeder-idea seems to me just like a small insular solution, however nice ever...
(just as a remark: you might consider the two common versions of the interpolation of the Fibonacci numbers $fib(n)$ to continuous functions of the index: there is one ansatz providing real numbers for real index, and another ansatz (in analogy perhaps to the Schroeder method) which gives complex numbers for real indexes but seems more smooth when seen overall in the complex plane. See a small essay about this at my homepage)
The idea of E.Schroeder in the 19'th century for bases $b$ (allowing two real fixpoints for iterations, for instance $b=sqrt{2}$) , sometimes called "regular iteration" seems to me a real good one. It allows a meaningful expression for fractional iteration from some starting point $z_0$ towards some endpoint $z_h$ where $h$ means the (possibly fractional, even complex) iteration-(h)eight, such that $z_0$ is the initial value, $z_1 = b^{z_0}$ is the first (integer) iteration and so on.
However, that method needs conjugacy to be able to be applied to all real $z_0$ : one has to choose the appropriate fixpoint for shifting the power-series for the exponentiation with base $b$ and get an evaluatable analytical answer at all.
Unfortunately it has been observed, that in the cases $z_0$ where each fixpoint-conjugacy can be taken, the results of fractional iteration are different - even if only by some 1e-25 or so.
A basic problem for the general $b$ is the multivaluedness of the complex exponentiation/logarithmization, or say, the "clock-arithmetic" with respect to the $2 pi î$-term in the exponentiation - I have once seen an article (R.Corless & al.) on the "winding number" which tries to make sense of introduction of one more parameter for the complex numbers to overcome that "clock-arithmetic" to a real-arithmetic. But that was no real progress for the problem here.
So I think, similarly to the full workout of L. Euler about the multivaluedness of the logarithm and then the representation of the gamma-function as an infinite sum of partial products, we need some more idea here - the E. Schroeder-idea seems to me just like a small insular solution, however nice ever...
(just as a remark: you might consider the two common versions of the interpolation of the Fibonacci numbers $fib(n)$ to continuous functions of the index: there is one ansatz providing real numbers for real index, and another ansatz (in analogy perhaps to the Schroeder method) which gives complex numbers for real indexes but seems more smooth when seen overall in the complex plane. See a small essay about this at my homepage)
edited Dec 9 '18 at 2:49
answered Dec 6 '18 at 12:28
Gottfried Helms
23.2k24398
23.2k24398
I presume that by "the first ansatz" you mean the φⁿ±φ⁻ⁿ thing. I have nod idea about the other, but I am very curious about it now, especially as you say it is an instance of the Schroeder method ... which sounds like it could do with 'tapering' by means of a nice familiar & relatively simple example for someone coaching themself in it from the beginning! Anyway - thank-you for the information ... but it dashed my hopes a bit when you began to speak of it being yet another insular effort. But I think (i) it does require fundamentally new thought (ii) it is there - yet to be found!
– AmbretteOrrisey
Dec 7 '18 at 20:36
@AmbretteOrrisey: thanks for your comment. Unfortunately I'm much busy this and next days and have no space to step in again. Let's see end of next week...
– Gottfried Helms
Dec 8 '18 at 9:12
Of course I don't expect you to dispense me a course on the Schroeder method! On the contrary, thankyou for taking the trouble to steer my attention in that direction. And I do often just throw ideas out without the expectation that those who catch them process or develop them for me!
– AmbretteOrrisey
Dec 8 '18 at 12:31
@AmbretteOrrisey: I've found my old discussion about the interpolation of the fibonacci numbers, I've added the link to my remark in my answer. First I thought I'd posted that as Q&A here in MSE but I did not know this place here and communicated via the sci.math-newsgroup in the usenet. Hope the essay is helpful/explanative about my remark.
– Gottfried Helms
Dec 9 '18 at 2:47
Looks ike you're giving me a course of instruction anyway! That's your work? I think I can discern your style of writing in it. I've only just looked at it - it'll take a while for that to mature. I see the 'other' form of the 'continuous-isation' of the fibonacci numbers - a spiral in the complex plane. Thanks for your ongoing attention ... please don't be tempted to make inroads into your time ... it wasn't meant as a subtle goad or anything when I said I don't expect a course of instruction, and plenty to be fornow.¶ Thanks for that direction - I shall assuredly enjoy delving into that.
– AmbretteOrrisey
Dec 9 '18 at 8:42
|
show 9 more comments
I presume that by "the first ansatz" you mean the φⁿ±φ⁻ⁿ thing. I have nod idea about the other, but I am very curious about it now, especially as you say it is an instance of the Schroeder method ... which sounds like it could do with 'tapering' by means of a nice familiar & relatively simple example for someone coaching themself in it from the beginning! Anyway - thank-you for the information ... but it dashed my hopes a bit when you began to speak of it being yet another insular effort. But I think (i) it does require fundamentally new thought (ii) it is there - yet to be found!
– AmbretteOrrisey
Dec 7 '18 at 20:36
@AmbretteOrrisey: thanks for your comment. Unfortunately I'm much busy this and next days and have no space to step in again. Let's see end of next week...
– Gottfried Helms
Dec 8 '18 at 9:12
Of course I don't expect you to dispense me a course on the Schroeder method! On the contrary, thankyou for taking the trouble to steer my attention in that direction. And I do often just throw ideas out without the expectation that those who catch them process or develop them for me!
– AmbretteOrrisey
Dec 8 '18 at 12:31
@AmbretteOrrisey: I've found my old discussion about the interpolation of the fibonacci numbers, I've added the link to my remark in my answer. First I thought I'd posted that as Q&A here in MSE but I did not know this place here and communicated via the sci.math-newsgroup in the usenet. Hope the essay is helpful/explanative about my remark.
– Gottfried Helms
Dec 9 '18 at 2:47
Looks ike you're giving me a course of instruction anyway! That's your work? I think I can discern your style of writing in it. I've only just looked at it - it'll take a while for that to mature. I see the 'other' form of the 'continuous-isation' of the fibonacci numbers - a spiral in the complex plane. Thanks for your ongoing attention ... please don't be tempted to make inroads into your time ... it wasn't meant as a subtle goad or anything when I said I don't expect a course of instruction, and plenty to be fornow.¶ Thanks for that direction - I shall assuredly enjoy delving into that.
– AmbretteOrrisey
Dec 9 '18 at 8:42
I presume that by "the first ansatz" you mean the φⁿ±φ⁻ⁿ thing. I have nod idea about the other, but I am very curious about it now, especially as you say it is an instance of the Schroeder method ... which sounds like it could do with 'tapering' by means of a nice familiar & relatively simple example for someone coaching themself in it from the beginning! Anyway - thank-you for the information ... but it dashed my hopes a bit when you began to speak of it being yet another insular effort. But I think (i) it does require fundamentally new thought (ii) it is there - yet to be found!
– AmbretteOrrisey
Dec 7 '18 at 20:36
I presume that by "the first ansatz" you mean the φⁿ±φ⁻ⁿ thing. I have nod idea about the other, but I am very curious about it now, especially as you say it is an instance of the Schroeder method ... which sounds like it could do with 'tapering' by means of a nice familiar & relatively simple example for someone coaching themself in it from the beginning! Anyway - thank-you for the information ... but it dashed my hopes a bit when you began to speak of it being yet another insular effort. But I think (i) it does require fundamentally new thought (ii) it is there - yet to be found!
– AmbretteOrrisey
Dec 7 '18 at 20:36
@AmbretteOrrisey: thanks for your comment. Unfortunately I'm much busy this and next days and have no space to step in again. Let's see end of next week...
– Gottfried Helms
Dec 8 '18 at 9:12
@AmbretteOrrisey: thanks for your comment. Unfortunately I'm much busy this and next days and have no space to step in again. Let's see end of next week...
– Gottfried Helms
Dec 8 '18 at 9:12
Of course I don't expect you to dispense me a course on the Schroeder method! On the contrary, thankyou for taking the trouble to steer my attention in that direction. And I do often just throw ideas out without the expectation that those who catch them process or develop them for me!
– AmbretteOrrisey
Dec 8 '18 at 12:31
Of course I don't expect you to dispense me a course on the Schroeder method! On the contrary, thankyou for taking the trouble to steer my attention in that direction. And I do often just throw ideas out without the expectation that those who catch them process or develop them for me!
– AmbretteOrrisey
Dec 8 '18 at 12:31
@AmbretteOrrisey: I've found my old discussion about the interpolation of the fibonacci numbers, I've added the link to my remark in my answer. First I thought I'd posted that as Q&A here in MSE but I did not know this place here and communicated via the sci.math-newsgroup in the usenet. Hope the essay is helpful/explanative about my remark.
– Gottfried Helms
Dec 9 '18 at 2:47
@AmbretteOrrisey: I've found my old discussion about the interpolation of the fibonacci numbers, I've added the link to my remark in my answer. First I thought I'd posted that as Q&A here in MSE but I did not know this place here and communicated via the sci.math-newsgroup in the usenet. Hope the essay is helpful/explanative about my remark.
– Gottfried Helms
Dec 9 '18 at 2:47
Looks ike you're giving me a course of instruction anyway! That's your work? I think I can discern your style of writing in it. I've only just looked at it - it'll take a while for that to mature. I see the 'other' form of the 'continuous-isation' of the fibonacci numbers - a spiral in the complex plane. Thanks for your ongoing attention ... please don't be tempted to make inroads into your time ... it wasn't meant as a subtle goad or anything when I said I don't expect a course of instruction, and plenty to be fornow.¶ Thanks for that direction - I shall assuredly enjoy delving into that.
– AmbretteOrrisey
Dec 9 '18 at 8:42
There is a remarkable expression I've found in this connection that might have some bearing on the matter, for the Taylor series of the iterates of $$\operatorname{f}(x)\to x^{\operatorname{f}(x)}$$ when $x\equiv e^z$, such that $$\operatorname{f_0}(x)\equiv1 ,$$ $$\operatorname{f_1}(x)\equiv \exp(z) ,$$ $$\operatorname{f_2}(x)\equiv \exp(z\exp(z)) ,$$ etc. The coefficients for $k=0\dots n$ are those of the Lambert W-function; but thereafter, for $k>n$, the coefficients are given by the following recursion. Let $a_{n,0}=1\;\forall n$, $a_{0,k}=0$ for $k>0$, & thereafter $$a_{n,k}={1\over k}\sum_{j=1}^k j\,a_{n,k-j}a_{n-1,j-1} .$$ These coefficients are in a sense 'wasted' up through $k=n$, in that they do not actually appear in the Taylor series; and yet they still serve the function of being necessary for the generation of the coefficients that do appear. It is fascinating to my mind, the way there is a kind of discontinuity in the series - the coefficients generated by this recursion 'peeling away' one at a time as $n$ is incremented, 'revealing' the coefficients of the Lambert W-function 'underneath'; and it is a well-known result that the limit of the iteration $\operatorname{f}(x)\to x^{\operatorname{f}(x)}$, as the number of iterations tends to $\infty$, is indeed expressible in terms of the Lambert W-function.
Whether this is susceptible of treatment by Schroeder's method I would not venture definitely to say at the present time, as, though I see in outline how that method can be applied to something like the recursion that gives the Fibonacci numbers, I am rather daunted by that discontinuity in the generation of the coefficients in this case; and I cannot see at a glance how it would be encoded.
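As a quick check of this recursion, here is a minimal sketch in Python (my own notation, exact rationals via the fractions module); it confirms that the coefficients of each iterate agree with those of the limit series $\sum_k (k+1)^{k-1}x^k/k!$ - the power-tower series expressible via the Lambert W-function - precisely through order $k=n$, which is the 'peeling away' described above:

from fractions import Fraction
from math import factorial

N, K = 8, 8                                   # number of iterates, series order
a = [[Fraction(0)] * (K + 1) for _ in range(N + 1)]
for n in range(N + 1):
    a[n][0] = Fraction(1)                     # a_{n,0} = 1 for all n;  a_{0,k} = 0 for k > 0
for n in range(1, N + 1):
    for k in range(1, K + 1):
        a[n][k] = Fraction(1, k) * sum(j * a[n][k - j] * a[n - 1][j - 1]
                                       for j in range(1, k + 1))

# limit series of the infinite power tower: sum_k (k+1)^(k-1) x^k / k!
limit = [Fraction(1)] + [Fraction((k + 1) ** (k - 1), factorial(k)) for k in range(1, K + 1)]
for n in range(N + 1):
    m = 0
    while m < K and a[n][m + 1] == limit[m + 1]:
        m += 1
    print(f"f_{n}: coefficients agree with the limit series through k = {m}")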
answered Dec 9 '18 at 10:04
AmbretteOrrisey
Ambrette- you might look at my older discussion cited in my new answer here math.stackexchange.com/a/3056786/1714
– Gottfried Helms
Dec 30 '18 at 15:20
I just found some old discussion of exactly the ansatz in your own answer, but without a successful route to, for instance, a half-iterate of the function. I'd posted this in the "tetration-forum" in about 2009.
The final conclusion of that study was (see bottom of this answer): So for that approach: it looks as if we cannot express a half-iterate based on this type of powerseries. Pity.... Maybe we can find a workaround - change order of summation or something else, don't have an idea.
Now to my message itself (date-of-saving:16 Mar 2009):
Here I present three postings from sci.math. It seems that the method is not well suited for interpolation to fractional heights (as I had hoped it would be). But perhaps we can find a workaround. On the other hand, it is not necessary that many different methods exist, so...
Also Ioannis (Galidakis) reminded me of the entry in MathWorld, "PowerTower", where he has already characterized this type of series. (http://mathworld.wolfram.com/PowerTower.html)
Here are the messages from sci.math (some edits in double brackets [< >]):
*subject: tetration: another family of powerseries for fractional iteration*
Maybe this is all known; I didn't see it so far. The idea was triggered by
the comments of V Jovovic in the OEIS concerning the below generating functions.
Consider the sequence of functions
T0(x) = 1, T1(x) = exp(x*1), T2(x) = exp(x*exp(x)), T_h(x) = exp(x*T_{h-1}(x)),...
They are also the generation-functions for the following sequence of powerseries:
T0: 1 + 0 + 0 + ....
T1: 1 + x + 1/2*x^2 + 1/6*x^3 + 1/24*x^4 + 1/120*x^5 + 1/720*x^6 + 1/5040*x^7 ...
T2: 1 + x + 3/2*x^2 + 10/6*x^3 + 41/24*x^4 + 196/120*x^5 + 1057/720*x^6 + 6322/5040*x^7 +...
T3: 1 + x + 3/2*x^2 + 16/6*x^3 + 101/24*x^4 + 756/120*x^5 + 6607/720*x^6 + 160504/5040*x^7 + ...
T4: 1 + x + 3/2*x^2 + 16/6*x^3 + 125/24*x^4 + 1176/120*x^5 + 12847/720*x^6 + 229384/5040*x^7 + ...
T5: 1 + x + 3/2*x^2 + 16/6*x^3 + 125/24*x^4 + 1296/120*x^5 + 16087/720*x^6 + 257104/5040*x^7 + ...
T6: 1 + x + 3/2*x^2 + 16/6*x^3 + 125/24*x^4 + 1296/120*x^5 + 16807/720*x^6 + 262144/5040*x^7 + ...
T7: 1 + x + 3/2*x^2 + 16/6*x^3 + 125/24*x^4 + 1296/120*x^5 + 16807/720*x^6 + 262144/5040*x^7 + ...
T8: 1 + x + 3/2*x^2 + 16/6*x^3 + 125/24*x^4 + 1296/120*x^5 + 16807/720*x^6 + 262144/5040*x^7 + ...
T9: 1 + x + 3/2*x^2 + 16/6*x^3 + 125/24*x^4 + 1296/120*x^5 + 16807/720*x^6 + 262144/5040*x^7 + ...
...
Too: 1 + x + 3/2*x^2 + 4^2/3!*x^3 + 5^3/4!*x^4 + 6^4/5!*x^5 + 7^5/6!*x^6 + 8^6/7!*x^7 + ... //limit h->inf
That means, if x = log(b), we have by this
T0(x) = 1
T1(x) = b = b^^1
T2(x) = b^b = b^^2
T3(x) = b^b^b = b^^3
...
Too(x) = ...^b^b = b^^oo
and for the limit h->inf we have with Too(x) the series for the h-function of b: Too(x) = h(b)
which is convergent for |x|<exp(-1)
[<...>]
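[< Added note, not in the original posting: the convergence claim is easy to check numerically - iterating T_h(x) = exp(x*T_{h-1}(x)) at a fixed x with |x| < exp(-1) approaches the closed form of the infinite power tower, -W(-x)/x, via the Lambert W-function. A small Python sketch, assuming scipy is available for the Lambert W: >]

import numpy as np
from scipy.special import lambertw

x = 0.2                    # = log(b), inside the radius of convergence |x| < exp(-1) ~ 0.3679
T = 1.0                    # T_0(x) = 1
for h in range(1, 41):
    T = np.exp(x * T)      # T_h(x) = exp(x * T_{h-1}(x))
print("T_40(x)  =", T)
print("-W(-x)/x =", float(np.real(-lambertw(-x) / x)))   # limit h -> oo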
The 2.nd msg:
> > (Galidakis replies) :
> > However, the recursive expression for the coefficients
> > given in (6) [<in mathworld, G.H.>] does not seem to allow that.
> >
> > If you can find a way to interpolate between those coefficients for non-natural
> > heights using your matrix method AND at the same time you manage to preserve the
> > functional equation F(x + 1) = e^{x*F(x)}, then, by Jove, you've got a nice
> > analytic solution to tetration :-)
Ok, let's give a start. Recall:
T0: 1 + 0 + 0 + ....
T1: 1 + x + 1/2*x^2 + 1/6*x^3 + 1/24*x^4 + 1/120*x^5 + 1/720*x^6 + 1/5040*x^7 ...
T2: 1 + x + 3/2*x^2 + 10/6*x^3 + 41/24*x^4 + 196/120*x^5 + 1057/720*x^6 + 6322/5040*x^7 +...
T3: 1 + x + 3/2*x^2 + 16/6*x^3 + 101/24*x^4 + 756/120*x^5 + 6607/720*x^6 + 160504/5040*x^7 + ...
T4: 1 + x + 3/2*x^2 + 16/6*x^3 + 125/24*x^4 + 1176/120*x^5 + 12847/720*x^6 + 229384/5040*x^7 + ...
T5: 1 + x + 3/2*x^2 + 16/6*x^3 + 125/24*x^4 + 1296/120*x^5 + 16087/720*x^6 + 257104/5040*x^7 + ...
T6: 1 + x + 3/2*x^2 + 16/6*x^3 + 125/24*x^4 + 1296/120*x^5 + 16807/720*x^6 + 262144/5040*x^7 + ...
T7: 1 + x + 3/2*x^2 + 16/6*x^3 + 125/24*x^4 + 1296/120*x^5 + 16807/720*x^6 + 262144/5040*x^7 + ...
T8: 1 + x + 3/2*x^2 + 16/6*x^3 + 125/24*x^4 + 1296/120*x^5 + 16807/720*x^6 + 262144/5040*x^7 + ...
T9: 1 + x + 3/2*x^2 + 16/6*x^3 + 125/24*x^4 + 1296/120*x^5 + 16807/720*x^6 + 262144/5040*x^7 + ...
...
Too: 1 + x + 3/2*x^2 + 4^2/3!*x^3 + 5^3/4!*x^4 + 6^4/5!*x^5 + 7^5/6!*x^6 + 8^6/7!*x^7 + ... //limit h->inf
We want to interpolate for the coefficients of T0.5, that means between rows T0 and T1.
I'll rewrite the powerseries without the powers of x. And since we do
the binomial composition of coefficients at like powers of x, we compose
the coefficients down a column; so the common denominator(the factorial) of a
column can be omitted for the scheme.
Thus I get for the original coefficients, only rescaled
T0: 1 0 0 0 0 0 0 0 ...
T1: 1 1 1 1 1 1 1 1 ...
T2: 1 1 3 10 41 196 1057 6322
T3: 1 1 3 16 101 756 6607 65794
T4: 1 1 3 16 125 1176 12847 160504
T5: 1 1 3 16 125 1296 16087 229384
T6: 1 1 3 16 125 1296 16807 257104
T7: 1 1 3 16 125 1296 16807 262144
T8: 1 1 3 16 125 1296 16807 262144
T9: 1 1 3 16 125 1296 16807 262144
...
The first binomial-composition along the columns gives
X0: 1 0 0 0 0 0 0 0 ...
X1: 0 1 1 1 1 1 1 1 ...
X2: 0 -1 1 8 39 194 1055 6320
X3: 0 1 -3 -11 -19 171 3439 46831
X4: 0 -1 5 8 -37 -676 -7243 -64744
X5: 0 1 -7 1 105 1021 7357 21589
X6: 0 -1 9 -16 -161 -1026 -3301 67304
X7: 0 1 -11 37 181 631 -3605 -168125
X8: 0 -1 13 -64 -141 104 10961 246224
X9: 0 1 -15 97 17 -999 -16007 -278711
... ...
The second binomial-composition (using h=0.5)
[< Table 5: this will be the reference-table for the composition of coefficients of T05 >]
Y0: 1 0 0 0 0 0 0 ...
Y1: 0 1/2 1/2 1/2 1/2 1/2 1/2 ...
Y2: 0 1/8 -1/8 -1 -39/8 -97/4 -1055/8
Y3: 0 1/16 -3/16 -11/16 -19/16 171/16 3439/16
Y4: 0 5/128 -25/128 -5/16 185/128 845/32 36215/128
Y5: 0 7/256 -49/256 7/256 735/256 7147/256 51499/256
Y6: 0 21/1024 -189/1024 21/64 3381/1024 10773/512 69321/1024
Y7: 0 33/2048 -363/2048 1221/2048 5973/2048 20823/2048 -118965/2048
Y8: 0 429/32768 -5577/32768 429/512 60489/32768 -5577/4096 -4702269/32768
Y9: 0 715/65536 -10725/65536 69355/65536 12155/65536 -714285/65536 -11445005/65536 ...
... ...
----------------------------------------------------------------------------------------------------
sum. s0 s1 s2 s3 ...
====================================================================================================
T0.5: c0 c1 c2 c3 ...
and T0.5(x) = c0 + c1*x + c2*x^2/2! + c3*x^3/3! + ...
the interpolated coefficients c0,c1,c2,... for h=0.5 should then be computed by the
column-sums (and finally the rescaling by the omitted factorials).
The partial sums in the columns converge only badly if at all, so let's look,
whether we can find some analytic solution.
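[< Added note, not in the original posting: the scheme just described - forward differences down each column, weights binomial(1/2,m), then column sums - can be reproduced in a few lines of Python (exact rationals via the fractions module); printing the partial column sums shows directly that only the first two columns settle: >]

from fractions import Fraction
from math import comb, factorial

N, K = 12, 6                     # rows T_0..T_N, columns k = 0..K
a = [[Fraction(0)] * (K + 1) for _ in range(N + 1)]
for n in range(N + 1):
    a[n][0] = Fraction(1)
for n in range(1, N + 1):
    for k in range(1, K + 1):    # Taylor coefficients of T_n(x) = exp(x*T_{n-1}(x))
        a[n][k] = Fraction(1, k) * sum(j * a[n][k - j] * a[n - 1][j - 1]
                                       for j in range(1, k + 1))
T = [[a[n][k] * factorial(k) for k in range(K + 1)] for n in range(N + 1)]   # rescaled table

# X-table: m-th forward difference down each column
X = [[sum((-1) ** (m - j) * comb(m, j) * T[j][k] for j in range(m + 1))
      for k in range(K + 1)] for m in range(N + 1)]

def binom_half(m):               # binomial(1/2, m), exactly
    r = Fraction(1)
    for i in range(m):
        r *= (Fraction(1, 2) - i) / (i + 1)
    return r

for k in range(K + 1):           # partial column sums = Newton-series partial sums for T_{1/2}
    partial = [float(sum(binom_half(m) * X[m][k] for m in range(M + 1))) for M in range(N + 1)]
    print(f"k={k}:", [round(p, 3) for p in partial])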
The denominators in the rows can be majorized by powers of 4, and all can then be divided by
2, so let's rewrite this
common scaling
Y0: 1/2 0 0 0 0 0 0 0 0 0 *2 /4^0
Y1: 0 1 1 1 1 1 1 1 1 1 *2 /4^1
Y2: 0 1 -1 -8 -39 -194 -1055 -6320 -41391 -293606 *2 /4^2
Y3: 0 2 -6 -22 -38 342 6878 93662 1219314 16331654 *2 /4^3
Y4: 0 5 -25 -40 185 3380 36215 323720 2128445 -5199340 *2 /4^4
Y5: 0 14 -98 14 1470 14294 102998 302246 -9722034 -332756410 *2 /4^5
Y6: 0 42 -378 672 6762 43092 138642 -2826768 -93176118 -1954258068 *2 /4^6
Y7: 0 132 -1452 4884 23892 83292 -475860 -22192500 -463551132 -7659247332 *2 /4^7
Y8: 0 429 -5577 27456 60489 -44616 -4702269 -105630096 -1778712507 -23047084632 *2 /4^8
Y9: 0 1430 -21450 138710 24310 -1428570 -22890010 -398556730 -5760084330 -51266562490 *2 /4^9
... ...
----------------------------------------------------------------------------------------------------
sum. s0 s1 s2 s3 ...
====================================================================================================
T0.5: c0 c1 c2 c3 ...
and T0.5(x) = c0 + c1*x + c2*x^2/2! + c3*x^3/3! + ...
Let's look at the column sums of the table; those sums, divided by the factorials, give the coefficients
c_k of the T0.5(x)-powerseries.
First, s0 = 1, (remember the scaling extracted to the rhs) ,
so c0 = 1
Next, s1. Here we recognize that the numbers are the Catalan numbers and, with the
current scaling, have the generating function 1 - sqrt(1-z). Since we want to know
simply the sum, we set z=1 and get for the sum
s1 = 1- sqrt(1-1) = 1
so c1 =1
Next, s2. It becomes more difficult. We can add columns s2 and s1 to get a sequence,
which can formally be expressed as the derivative of the sqrt(1 - z)-function, where
possibly we need also a scaling at z, so likely something like
1 - sqrt(1 - a z)'
It looks as if the series is divergent too, so we'll have to see whether this
operation (and the following ones, which are surely similar) can be justified / makes sense
at all.
-----------------
I proceeded for the first few terms s2,s3,s4,s5... Things seem to come out uneasy... :( Now follows msg 3:
(...)
Formally composed by derivatives of sqrt(1-z) I get for the series s1,s2,s3,... the following
generating functions
s0: 1
s1: 1 - 1*sqrt(1-z)
s2: 3 - 3*sqrt(1-z) + 2*z*(sqrt(1-z)')
s3: 16 - 16*sqrt(1-z) + 15*z*(sqrt(1-z)') - 3*z^2*(sqrt(1-z)'')
s4: 125 - 125*sqrt(1-z) + 124*z*(sqrt(1-z)') - 42*z^2*(sqrt(1-z)'') + 4*z^3*(sqrt(1-z)''')
s5: 1296 - 1296*sqrt(1-z) + 1295*z*(sqrt(1-z)') - 550*z^2*(sqrt(1-z)'') + 90*z^3*(sqrt(1-z)''') - 5*z^4*(sqrt(1-z)'''')
...
which have to be evaluated at z=1 to give the value for the sums. Now the derivatives have
a vertical asymptote at z=1, so here are infinities everywhere...
Even more obviously, if I expand the derivatives into terms of sqrt(1-z) I get the following
explicit generating functions for the series of s0,s1,s2,...:
s0: 1
s1: 1 - sqrt(1-z) * ( 1 )
s2: 3 - sqrt(1-z)/(1-z)^1* ( 3 - 4/2*z)
s3: 16 - sqrt(1-z)/(1-z)^2* ( 16 - 49/2*z + 31/4*z^2 )
s4: 125 - sqrt(1-z)/(1-z)^3* ( 125 - 626/2*z + 962/4*z^2 - 408/8*z^3 )
s5: 1296 - sqrt(1-z)/(1-z)^4* (1296 - 9073/2*z + 22784/4*z^2 - 23462/8*z^3 + 7561/16*z^4)
where all except the first two grow unboundedly as z->1
So for that approach: it looks as if we cannot express a half-iterate based on
this type of powerseries. Pity.... Maybe we can find a workaround - change order
of summation or something else, don't have an idea.
Another idea around?
(end of that msg to the tetration-forum)
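[< Added check, not in the original posting: the generating functions quoted in the third message can be verified symbolically. Assuming sympy is available, the following confirms for s2 that its Taylor coefficients reproduce the column entries of Table 5 and that it diverges as z -> 1, so no finite column sum can be read off directly: >]

import sympy as sp

z = sp.symbols('z')
s2 = 3 - 3*sp.sqrt(1 - z) + 2*z*sp.diff(sp.sqrt(1 - z), z)   # generating function quoted for s2
print(sp.series(s2, z, 0, 6))        # z/2 - z**2/8 - 3*z**3/16 - 25*z**4/128 - 49*z**5/256 + O(z**6)
print(sp.limit(s2, z, 1, dir='-'))   # -oo : the formal column sum has no finite value at z = 1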
edited Dec 30 '18 at 15:19
answered Dec 30 '18 at 12:28
Gottfried Helms
Been a tad absent from here lately ... and I see you've been rather busy at my post in the meantime. I've actually been marshaling some thoughts on the inverse Ackermann function, particularly in connection with Davenport-Schinzel sequences, and have a post about it nearly ripe. I think it will chime with what you have contributed here!
– AmbretteOrrisey
yesterday
@AmbretteOrrisey: you're welcome. And happy new year! Unfortunately I shall likely have no new ideas in all this, but of course I would like it very much if some connections with my own older stuff come out of it. Regarding the inverse of the Ackermann function, it is perhaps useful to contact Mr. Daniel Geisler, who is a founding member of the tetration-forum, also has an account here on MSE or MO, and is sporadically active on questions about tetration. Perhaps via email you might be able to establish some helpful connection.
– Gottfried Helms
yesterday