
Re: Tensor Contraction

  • To: mathgroup at smc.vnet.net
  • Subject: [mg84554] Re: Tensor Contraction
  • From: "David Park" <djmpark at comcast.net>
  • Date: Thu, 3 Jan 2008 20:28:46 -0500 (EST)
  • References: <flie0h$fhm$1@smc.vnet.net>

The following works through both of your examples using the Tensorial package 
from my website below. (Package cost: $100.) Tensorial has the capability to 
convert index expressions to 'dot mode' array expressions. Evaluation of 
indexed expressions with summations and array expansions is fast enough in 
many common cases, but for a high number of dimensions, high-order tensors 
and many contractions the 'dot mode' will definitely be faster. The Tensorial 
package has an extensive tutorial on converting between indexed expressions 
and array expressions. Doing array evaluations does involve a certain amount 
of bookkeeping; we just have to remain 'level' headed! I will show how each 
of your examples is evaluated by each method. Unfortunately, I can't paste 
the tensor output into this message, so I will just show the input statements 
to indicate how the calculations are performed.
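
A quick aside on why the 'dot mode' wins, independent of Tensorial itself: 
Dot runs on packed numeric arrays in optimized compiled code, while 
Table[Sum[...]] pushes every multiply-add through the general evaluator one 
term at a time. Here is a throwaway comparison (the arrays u and v and the 
size n are invented for illustration, not part of anything below):

n = 200;
u = RandomReal[1, {n, n}];
v = RandomReal[1, {n, n}];

(* matrix product written as an explicit Einstein sum: slow, term by term *)
Timing[t1 = Table[Sum[u[[i, k]] v[[k, j]], {k, n}], {i, n}, {j, n}];]

(* the same product through Dot: faster by orders of magnitude *)
Timing[t2 = u.v;]

(* the two agree up to machine-precision round-off *)
Max[Abs[t1 - t2]]

This, essentially, is also why the Table[Sum[...]] contraction in your code 
takes so long.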

Needs["TensorCalculus4V6`Tensorial`"]

The following defines tensor shortcuts for the A and B tensors, of orders 4 
and 2 respectively.

DefineTensorShortcuts[{A, 4}, {B, 2}]

The first case: A^{\mu\nu}_{\sigma\rho} B^{\rho}_{\mu}, contracting \mu and \rho.
To enter tensors with shortcuts we append 'u's and 'd's to the tensor label 
to indicate up and down indices. The following writes the tensor product and 
then expands it to an array, doing all the Einstein summations along the 
way. The index method is always slower, but in this case it is fast enough.

Auudd[\[Mu],\[Nu],\[Sigma],\[Rho]] Bud[\[Rho],\[Mu]]
result1=%//ToArrayValues[]

The following uses the Tensorial dot mode routines.
1) We write the product as a dot product in some given order; the order is 
arbitrary, but some orders are more convenient than others.
2) We expand each tensor to a Mathematica array. This is always done in the 
sort order of the indices, so in this case we specify permutations to get 
the order we want for the dot product.
3) We perform the dot product, contracting \[Mu].
4) We use the Tensorial ContractArray routine to contract \[Rho], which sits 
at the 3rd and 4th levels.

Auudd[\[Mu], \[Nu], \[Sigma], \[Rho]]  Bud[\[Rho], \[Mu]]
% // DotTensorFactors[{1, 2}]
% // ExpandDotArray[Tensor[A, __], {4, 1, 3, 2}] //
 ExpandDotArray[Tensor[B, __], {1, 2}]
% // DotOperate[1, Dot]
result2 = ContractArray[%, {3, 4}]

We check the results.

result1==result2//Simplify
True
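
If you would rather stay entirely with built-in functions, the first 
contraction can also be written with Dot, Transpose and Tr alone. The 
following is a minimal sketch; the arrays a and b, the dimension n and the 
names viaDot and viaSum are placeholders of my own, not Tensorial objects.

n = 4;
a = RandomInteger[{-2, 2}, {n, n, n, n}]; (* stands in for A^\[Mu]\[Nu]_\[Sigma]\[Rho], slots [\[Mu],\[Nu],\[Sigma],\[Rho]] *)
b = RandomInteger[{-2, 2}, {n, n}]; (* stands in for B^\[Rho]_\[Mu], slots [\[Rho],\[Mu]] *)

(* Dot contracts a's last level (\[Rho]) with b's first level *)
c = a.b; (* levels are now \[Mu], \[Nu], \[Sigma], \[Mu]' *)

(* bring \[Mu] and \[Mu]' to the front, then Tr at level 2 traces them out *)
viaDot = Tr[Transpose[c, {1, 3, 4, 2}], Plus, 2];

(* the same contraction as an explicit sum *)
viaSum = Table[
   Sum[a[[\[Mu], \[Nu], \[Sigma], \[Rho]]] b[[\[Rho], \[Mu]]], {\[Mu], n}, {\[Rho], n}],
   {\[Nu], n}, {\[Sigma], n}];

viaDot == viaSum
True

The Tr[t, Plus, 2] idiom sums the diagonal of the first two levels and 
returns the remaining levels as an array, which is exactly a contraction of 
levels 1 and 2.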

The second case: A^{\mu\nu}_{\beta\alpha} A^{\rho\sigma}_{\nu\gamma} B_{\mu\sigma}, 
contracting \mu, \nu and \sigma.

We perform the evaluation using index summation. It takes 8.4 seconds on my 
computer.

Auudd[\[Mu], \[Nu], \[Beta], \[Alpha]]
 Auudd[\[Rho], \[Sigma], \[Nu], \[Gamma]] Bdd[\[Mu], \[Sigma]]
(result3 = % // ToArrayValues[]; // Timing)

We perform the same procedure as with the first example, except that now the 
transpositions and the array contraction are done as part of the DotOperate 
commands.

Auudd[\[Mu], \[Nu], \[Beta], \[Alpha]]
 Auudd[\[Rho], \[Sigma], \[Nu], \[Gamma]] Bdd[\[Mu], \[Sigma]]
% // DotTensorFactors[{3, 1, 2}]
% // ExpandDotArray[Tensor[__]]
% // DotOperate[1,
   Function[{B, A},
    Transpose[B, {2, 1}].Transpose[A, {2, 3, 1, 4}]]];
result4 = % //
    DotOperate[1,
     Function[{X, A},
      ContractArray[X.Transpose[A, {2, 1, 3, 4}], {1, 6}]]] // First;

Again, the results check.

ExpandAll[result3] == ExpandAll[result4]
True

If we were going to do this quite often, we could consolidate the commands, 
precomputing the arrays once.

matA = Auudd[\[Mu], \[Nu], \[Beta], \[Alpha]] // ToArrayValues[];
matB = Bdd[\[Mu], \[Sigma]] // ToArrayValues[];

The consolidated contraction now takes essentially zero seconds:

(result5 =
    ContractArray[
     Transpose[matB, {2, 1}].Transpose[matA, {2, 3, 1, 4}].Transpose[
       matA, {2, 1, 3, 4}], {1, 6}];) // Timing

result3 == result5 // Simplify
True
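
ContractArray is a Tensorial routine, but the same level contraction can be 
rolled from built-ins: transpose the two levels to be contracted to the 
front and apply Tr at level 2. The helper below is a sketch of my own (the 
name contractLevels is not from any package); assuming ContractArray[t, {i, j}] 
contracts levels i and j, as it is used above, this should reproduce result5.

contractLevels[t_, {i_, j_}] :=
 Module[{r = ArrayDepth[t], perm},
  perm = ConstantArray[0, r];
  perm[[i]] = 1; perm[[j]] = 2; (* send levels i and j to the front *)
  perm[[Delete[Range[r], {{i}, {j}}]]] = Range[3, r]; (* rest keep their order *)
  Tr[Transpose[t, perm], Plus, 2]] (* trace out the first two levels *)

result6 = contractLevels[
   Transpose[matB, {2, 1}].Transpose[matA, {2, 3, 1, 4}].Transpose[
     matA, {2, 1, 3, 4}], {1, 6}];
result5 == result6 // Simplify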


-- 
David Park
djmpark at comcast.net
http://home.comcast.net/~djmpark/


"Josh Burkart" <jburkart at ucdavis.edu> wrote in message 
news:flie0h$fhm$1 at smc.vnet.net...
> Hello,
>
> I am having problems doing tensor contractions. I don't know of a standard 
> way to do them in Mathematica, so I've been resorting to two methods. The 
> first is clumsy and difficult for me to work with, and the second takes 
> forever for some reason that escapes me and is also somewhat clumsy.
>
> The first is simply to use Dot[], which automatically contracts the last 
> index of one tensor with the first index of another. This then involves 
> judicious application of Transpose[], to get the right indices into place 
> and then to put them back where they're supposed to be after applying 
> Dot[]. It can also involve several applications of Dot[], depending on the 
> number of contractions. I dislike this method since I can't really 
> visualize what's happening; my (and I think the standard, in physics 
> anyway) notation for tensor contraction is using the Einstein convention, 
> for example something to the effect of (in latex)
>
> A^{\mu\nu}_{\sigma\rho} B^{\rho}_{\mu},
>
> meaning contraction over \mu and \rho, leaving a two-index tensor with the 
> ^\nu and _\sigma indices. This notation doesn't really "gel with" the 
> Dot[] Transpose[] method.
>
> The other method I've been using is to nest Sum[] inside of Table[], where 
> the arguments of Sum[] are the indices to be summed over and the arguments 
> of Table[] are the indices meant to remain in the resultant tensor. This 
> tends to take a very long time, even when all I'm dealing with are 
> (+/-)1's and 0's inside the tensor. It seems to me like it should be doing 
> the same amount of calculation as the first method, but maybe I'm wrong 
> and it's doing something superfluous a million times or something? For 
> example, say I have two tensors which I name dtt and osp, 4-index and 
> 2-index, respectively, and I want to perform the following contraction:
>
> dtt^{\mu\nu}_{\beta\alpha} dtt^{\rho\sigma}_{\nu\gamma} osp_{\mu\sigma}.
>
> This involves contraction/summation over the \mu, \nu, and \sigma indices, 
> and results in a 4-index tensor with just the \rho, \gamma, \alpha, and 
> \beta indices. Below is some code I've been using to do this contraction. 
> The preliminary three functions which generate the tensors need not be 
> scrutinized; each index goes from 1 to M inclusive; the last input, "F = 
> ...", is where the contraction is being performed. On my system the 
> contraction takes about 35 seconds, which is extremely excessive, given 
> that the Dot[] Transpose[] method takes only a fraction of a second.
>
> ------
> OrthosymplecticForm[p_Integer, Q_Integer?EvenQ] :=
>  Join[PadRight[
>    Join[PadLeft[-IdentityMatrix[p], {p, 2 p}],
>     PadRight[IdentityMatrix[p], {p, 2 p}]], {2 p, 2 p + Q}],
>   PadLeft[Join[PadLeft[IdentityMatrix[Q/2], {Q/2, Q}],
>     PadRight[IdentityMatrix[Q/2], {Q/2, Q}]], {Q, 2 p + Q}]];
>
> DoubleTauFunction[a_, b_, c_, d_, p_, Q_] :=
>  If[a != b || c != d, 0, If[a > 2 p && c > 2 p, -1, 1]];
>
> DoubleTauTensor[p_, Q_] :=
>  Table[DoubleTauFunction[a, b, c, d, p, Q], {a, 1, 2 p + Q}, {b, 1,
>    2 p + Q}, {c, 1, 2 p + Q}, {d, 1, 2 p + Q}];
>
> pp = 3; QQ = 4;
> M = 2 pp + QQ;
> dtt = DoubleTauTensor[pp, QQ];
> osp = OrthosymplecticForm[pp, QQ];
>
> F = Table[
>   Sum[dtt[[\[Mu], \[Beta], \[Nu], \[Alpha]]]*
>     dtt[[\[Rho], \[Nu], \[Sigma], \[Gamma]]]*
>     osp[[\[Mu], \[Sigma]]], {\[Mu], 1, M}, {\[Nu], 1, M}, {\[Sigma],
>     1, M}], {\[Rho], 1, M}, {\[Gamma], 1, M}, {\[Alpha], 1,
>    M}, {\[Beta], 1, M}];
> ------
>
> So again, my questions are: 1) why does the Table[Sum[]] method take so 
> long, and 2) are there better ways to do this? I've checked out some 
> tensor calculus packages but haven't been too impressed. Also, more 
> generally, when calculating using Mathematica, especially when doing 
> numerics, I often notice that certain ways of doing things take absolutely 
> inordinate amounts of time, while others take less time. I have little 
> sense of which functions in Mathematica are most efficient for particular 
> applications--anything I should know? Any advice would be appreciated.
>
> Thanks, Josh B.
> 


