Tensor power series
I think a nice way to discuss the power-series idea where x is a vector is within the context of tensor calculus, or multilinear algebra, as it's now called. Roughly speaking, tensors are just multi-dimensional arrays with some additional transformation properties. Here is a brief overview of my understanding of tensors.

\( X = x^i \) is a contravariant 1-tensor (a vector, or column vector), which means that it transforms contravariantly (whatever that means), and it has one index. As a mixed tensor, this is also called a (1,0)-tensor or a \( \left(\begin{smallmatrix}1 \\ 0\end{smallmatrix}\right) \)-tensor.

\( f_i \) is a covariant 1-tensor (a covector, or row vector), which means that it is like a function that takes a vector and gives a scalar (0-tensor) as output. As a mixed tensor, this is also called a (0,1)-tensor or a \( \left(\begin{smallmatrix}0 \\ 1\end{smallmatrix}\right) \)-tensor.
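The "function that takes a vector and gives a scalar" view can be sketched in a few lines of Python. This is only an illustration (the function name and the sample values are mine, not from the discussion): a covector applied to a vector is the contraction \( \sum_i f_i x^i \).

```python
# Sketch: a covariant 1-tensor f_i as a function that eats a
# contravariant 1-tensor x^i and returns a scalar (0-tensor)
# via the contraction sum_i f_i x^i. Names/values are illustrative.
def apply_covector(f, x):
    return sum(f_i * x_i for f_i, x_i in zip(f, x))

f = [1, 2, 3]   # f_i, a row vector
x = [4, 5, 6]   # x^i, a column vector
print(apply_covector(f, x))  # 1*4 + 2*5 + 3*6 = 32
```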

\( F = f^j_i \) is a mixed-variance 2-tensor, or a vector-valued function (a matrix), meaning it is both covariant (in i) and contravariant (in j). This combines the two descriptions above, because (like a matrix) it is a function that sends a vector (i.e. covariance) to a vector (i.e. contravariance). Although I have not seen anything to support this view of covariance/contravariance, it doesn't seem to contradict any description either. So one way to think of it is that "contravariant" means "gives a vector" and "covariant" means "takes a vector". This is called a 2-tensor, but to distinguish between the two kinds of index, it can also be called a (1,1)-tensor.

\( {\nabla}_i \) is a covariant 1-tensor (known as the gradient), but it's different in that it is also an operator, which takes a function (covector) and gives a covector. In order to replace the concept of a derivative (or the Jacobian matrix), we will not be using it as an operator directly; instead we will combine it with the tensor product \( \otimes \), which means it will behave very much like the Jacobian when used on a (1,1)-tensor.
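To make the "behaves like the Jacobian" point concrete, here is a minimal numerical sketch: the (1,1)-tensor \( {\nabla}_i \otimes F^j \) has components \( J^j_i = \partial F^j / \partial x^i \). The finite-difference scheme, step size, and test function below are my own illustrative choices, not part of the original discussion.

```python
# Sketch: approximate the Jacobian J^j_i = dF^j/dx^i of a vector-valued
# function F by central finite differences. Illustrative, not exact.
def jacobian(F, x, h=1e-6):
    n = len(x)
    m = len(F(x))
    J = [[0.0] * n for _ in range(m)]
    for i in range(n):                 # i is the covariant ("takes a vector") index
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        Fp, Fm = F(xp), F(xm)
        for j in range(m):             # j is the contravariant ("gives a vector") index
            J[j][i] = (Fp[j] - Fm[j]) / (2 * h)
    return J

# F(x) = (x0^2, x0*x1) has exact Jacobian [[2*x0, 0], [x1, x0]].
F = lambda x: [x[0] ** 2, x[0] * x[1]]
print(jacobian(F, [1.0, 2.0]))  # approximately [[2.0, 0.0], [2.0, 1.0]]
```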

\( 0^i \) is a contravariant 1-tensor, because it is like a vector that holds all zeros.

\( F(X)^j = \sum_i f^j_i x^i \) is the application of a 2-tensor function to a 1-tensor, which is actually a form of tensor contraction, or matrix multiplication in this case (but this is only true for linear functions; non-linear functions cannot be written this way). Notice that I don't use the Einstein summation convention (also known as index notation), because I think it is confusing when you can't see the summation \( \Sigma \). For linear functions, this is the power series of that function. What we want to do, however, is generalize this so that it works for non-linear functions as well.
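The contraction above can be sketched directly, showing that for a linear map it is nothing but matrix-vector multiplication. The function name and sample tensors are illustrative choices of mine.

```python
# Sketch: contract the covariant index i of a (1,1)-tensor f^j_i
# (stored as a nested list f[j][i]) with a vector x^i:
#     F(X)^j = sum_i f^j_i x^i
# For a linear map this is exactly matrix-vector multiplication.
def contract(f, x):
    return [sum(row[i] * x[i] for i in range(len(x))) for row in f]

f = [[1, 2],
     [3, 4]]          # f^j_i: j indexes rows (output), i indexes columns (input)
x = [5, 6]            # x^i, a contravariant 1-tensor
print(contract(f, x))  # [1*5 + 2*6, 3*5 + 4*6] = [17, 39]
```

Non-linear functions cannot be captured by a single such contraction, which is exactly why the generalization to higher-order terms is needed.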


Tensor power series - by andydude - 05/13/2008, 07:58 AM
