I have been reading a few books over the last two years in order to try to understand Einstein’s Relativity. I will talk about these books elsewhere, but Pavel Grinfeld’s book on Tensor Analysis is useful for building a strong foundation in the underlying Mathematics for Relativity. I came across Prof Grinfeld’s work when I was looking on YouTube for **Tensor Calculus**. (Click on the link to see a list of Pavel’s YouTube videos on Tensors.)

**Please click on the big picture of Pavel Grinfeld’s book to go to the AMAZON website. Thank You.**

There are one or two third-party published solutions and lists of errata to Prof Grinfeld’s book. David Sulon provides many worked solutions. Alex J. Yuffa lists several errata in the book. He also provides a link to other errata by George Katanics.

My purpose in writing these posts is to guide the reader through some parts of the book and to help the reader (who might be a student or a self-study person) gain an understanding in an intuitive way. I also want to explain how the book is structured and suggest how best to read it, in terms of what can be learnt in each chapter and what ought to be deferred until later chapters.

Formally, the book does not enter into Tensor Calculus proper until Chapter 6. However, I found myself having to work fairly hard in Chapter 4, and it is at that point that I decided to write these posts. 😉

#### SECOND ORDER PARTIAL DIFFERENTIATION OF A FUNCTION OF A FUNCTION

I begin with Exercise 34 and, starting from equation (4.22), I want to find \(\frac{\partial^2 f}{\partial A^2}\). Here \(f\) is a function of the three sub-functions \(a\), \(b\) and \(c\), which are in turn functions of the variables \(A\), \(B\) and \(C\), so that (4.22) is the chain rule for the first derivative:

$$\frac{\partial f}{\partial A} = \frac{\partial f}{\partial a}\frac{\partial a}{\partial A} + \frac{\partial f}{\partial b}\frac{\partial b}{\partial A} + \frac{\partial f}{\partial c}\frac{\partial c}{\partial A}$$

Later in this post, I will compare the results of this second-order partial differentiation with its tensor form, given in Equation (4.44).

There are several things to bear in mind before we actually carry out this second order partial differentiation of the function f :~

- each of the three terms in (4.22) is a product of two factors, so the product rule applies to each term;
- \(\frac{\partial f}{\partial a}\), \(\frac{\partial f}{\partial b}\) and \(\frac{\partial f}{\partial c}\) are themselves functions of \(a\), \(b\) and \(c\), and hence of \(A\), \(B\) and \(C\), so differentiating any of them with respect to \(A\) needs the chain rule all over again;
- the product rule also brings in the second derivatives of the sub-functions themselves, such as \(\frac{\partial^2 a}{\partial A^2}\).

Therefore, just looking at the first term, and working with the sub-function a, I get

$$\frac{\partial}{\partial A}\left(\frac{\partial f}{\partial a}\frac{\partial a}{\partial A}\right) = \left[\frac{\partial^2 f}{\partial a^2}\frac{\partial a}{\partial A} + \frac{\partial^2 f}{\partial b\,\partial a}\frac{\partial b}{\partial A} + \frac{\partial^2 f}{\partial c\,\partial a}\frac{\partial c}{\partial A}\right]\frac{\partial a}{\partial A} + \frac{\partial f}{\partial a}\frac{\partial^2 a}{\partial A^2}$$

Now I will do all three terms of (4.22), w.r.t. a, and w.r.t. b and w.r.t. c, to get, on breaking the equation down into three parts, in a, b and c respectively,

$$\frac{\partial}{\partial A}\left(\frac{\partial f}{\partial a}\frac{\partial a}{\partial A}\right) = \left[\frac{\partial^2 f}{\partial a^2}\frac{\partial a}{\partial A} + \frac{\partial^2 f}{\partial b\,\partial a}\frac{\partial b}{\partial A} + \frac{\partial^2 f}{\partial c\,\partial a}\frac{\partial c}{\partial A}\right]\frac{\partial a}{\partial A} + \frac{\partial f}{\partial a}\frac{\partial^2 a}{\partial A^2},$$

$$\frac{\partial}{\partial A}\left(\frac{\partial f}{\partial b}\frac{\partial b}{\partial A}\right) = \left[\frac{\partial^2 f}{\partial a\,\partial b}\frac{\partial a}{\partial A} + \frac{\partial^2 f}{\partial b^2}\frac{\partial b}{\partial A} + \frac{\partial^2 f}{\partial c\,\partial b}\frac{\partial c}{\partial A}\right]\frac{\partial b}{\partial A} + \frac{\partial f}{\partial b}\frac{\partial^2 b}{\partial A^2}$$

and

$$\frac{\partial}{\partial A}\left(\frac{\partial f}{\partial c}\frac{\partial c}{\partial A}\right) = \left[\frac{\partial^2 f}{\partial a\,\partial c}\frac{\partial a}{\partial A} + \frac{\partial^2 f}{\partial b\,\partial c}\frac{\partial b}{\partial A} + \frac{\partial^2 f}{\partial c^2}\frac{\partial c}{\partial A}\right]\frac{\partial c}{\partial A} + \frac{\partial f}{\partial c}\frac{\partial^2 c}{\partial A^2}.$$

So,

$$\begin{aligned}\frac{\partial^2 f}{\partial A^2} = {}& \left[\frac{\partial^2 f}{\partial a^2}\frac{\partial a}{\partial A} + \frac{\partial^2 f}{\partial b\,\partial a}\frac{\partial b}{\partial A} + \frac{\partial^2 f}{\partial c\,\partial a}\frac{\partial c}{\partial A}\right]\frac{\partial a}{\partial A} + \frac{\partial f}{\partial a}\frac{\partial^2 a}{\partial A^2}\\
&+ \left[\frac{\partial^2 f}{\partial a\,\partial b}\frac{\partial a}{\partial A} + \frac{\partial^2 f}{\partial b^2}\frac{\partial b}{\partial A} + \frac{\partial^2 f}{\partial c\,\partial b}\frac{\partial c}{\partial A}\right]\frac{\partial b}{\partial A} + \frac{\partial f}{\partial b}\frac{\partial^2 b}{\partial A^2}\\
&+ \left[\frac{\partial^2 f}{\partial a\,\partial c}\frac{\partial a}{\partial A} + \frac{\partial^2 f}{\partial b\,\partial c}\frac{\partial b}{\partial A} + \frac{\partial^2 f}{\partial c^2}\frac{\partial c}{\partial A}\right]\frac{\partial c}{\partial A} + \frac{\partial f}{\partial c}\frac{\partial^2 c}{\partial A^2}.\end{aligned}$$
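If you do not trust your own algebra, a computer-algebra system can check a result like this. The sketch below is my own check, not from Pavel's book: it picks arbitrary sub-functions \(a\), \(b\), \(c\) of \(A\), \(B\), \(C\) (the particular choices of \(f\), \(a\), \(b\) and \(c\) are assumptions, made up purely for the test) and confirms that the bracketed product-rule/chain-rule expansion agrees with direct differentiation.

```python
import sympy as sp

A, B, C = sp.symbols('A B C')
u, v, w = sp.symbols('u v w')          # stand-ins for the slots a, b, c of f

# Hypothetical choices, just for the check: f(a, b, c) and the sub-functions.
f = u**2 * v + sp.sin(w)               # f as a function of a, b, c
a = A * B + C
b = sp.exp(A) * C
c = A + B**2

pairs = [(u, a), (v, b), (w, c)]       # substitute a, b, c into the slots
F = f.subs(pairs)                      # f as an explicit function of A, B, C

# Left-hand side: differentiate the composed function twice w.r.t. A.
lhs = sp.diff(F, A, 2)

# Right-hand side: the longhand expansion -- for each sub-function s_i,
# a bracket of mixed second partials of f times ds_j/dA, all times ds_i/dA,
# plus (df/ds_i) * d2(s_i)/dA2 from the product rule.
slots = [u, v, w]
subs_fns = [a, b, c]
rhs = 0
for si, gi in zip(slots, subs_fns):
    bracket = sum(sp.diff(f, sj, si).subs(pairs) * sp.diff(gj, A)
                  for sj, gj in zip(slots, subs_fns))
    rhs += bracket * sp.diff(gi, A) + sp.diff(f, si).subs(pairs) * sp.diff(gi, A, 2)

print(sp.simplify(lhs - rhs))          # 0 if the expansion is correct
```

Swapping in your own sub-functions (or using `sp.Function` for fully abstract ones) is a good way to convince yourself that no term has been dropped.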

Exercise 34 also asks us to evaluate \(\frac{\partial^2 f}{\partial B\,\partial A}\)

and \(\frac{\partial^2 f}{\partial B^2}\).

The second of these is straightforward: you just replace \(A\) with \(B\) in the working above. The double differentiation of the function \(f\) with respect to the two different variables, \(A\) and \(B\), I will leave until after I have talked about the Tensor Form. For now, I will just write out the results for \(\frac{\partial^2 f}{\partial B\,\partial A}\) and \(\frac{\partial^2 f}{\partial B^2}\):

$$\begin{aligned}\frac{\partial^2 f}{\partial B\,\partial A} = {}& \left[\frac{\partial^2 f}{\partial a^2}\frac{\partial a}{\partial B} + \frac{\partial^2 f}{\partial b\,\partial a}\frac{\partial b}{\partial B} + \frac{\partial^2 f}{\partial c\,\partial a}\frac{\partial c}{\partial B}\right]\frac{\partial a}{\partial A} + \frac{\partial f}{\partial a}\frac{\partial^2 a}{\partial B\,\partial A}\\
&+ \left[\frac{\partial^2 f}{\partial a\,\partial b}\frac{\partial a}{\partial B} + \frac{\partial^2 f}{\partial b^2}\frac{\partial b}{\partial B} + \frac{\partial^2 f}{\partial c\,\partial b}\frac{\partial c}{\partial B}\right]\frac{\partial b}{\partial A} + \frac{\partial f}{\partial b}\frac{\partial^2 b}{\partial B\,\partial A}\\
&+ \left[\frac{\partial^2 f}{\partial a\,\partial c}\frac{\partial a}{\partial B} + \frac{\partial^2 f}{\partial b\,\partial c}\frac{\partial b}{\partial B} + \frac{\partial^2 f}{\partial c^2}\frac{\partial c}{\partial B}\right]\frac{\partial c}{\partial A} + \frac{\partial f}{\partial c}\frac{\partial^2 c}{\partial B\,\partial A}\end{aligned}$$

and

$$\begin{aligned}\frac{\partial^2 f}{\partial B^2} = {}& \left[\frac{\partial^2 f}{\partial a^2}\frac{\partial a}{\partial B} + \frac{\partial^2 f}{\partial b\,\partial a}\frac{\partial b}{\partial B} + \frac{\partial^2 f}{\partial c\,\partial a}\frac{\partial c}{\partial B}\right]\frac{\partial a}{\partial B} + \frac{\partial f}{\partial a}\frac{\partial^2 a}{\partial B^2}\\
&+ \left[\frac{\partial^2 f}{\partial a\,\partial b}\frac{\partial a}{\partial B} + \frac{\partial^2 f}{\partial b^2}\frac{\partial b}{\partial B} + \frac{\partial^2 f}{\partial c\,\partial b}\frac{\partial c}{\partial B}\right]\frac{\partial b}{\partial B} + \frac{\partial f}{\partial b}\frac{\partial^2 b}{\partial B^2}\\
&+ \left[\frac{\partial^2 f}{\partial a\,\partial c}\frac{\partial a}{\partial B} + \frac{\partial^2 f}{\partial b\,\partial c}\frac{\partial b}{\partial B} + \frac{\partial^2 f}{\partial c^2}\frac{\partial c}{\partial B}\right]\frac{\partial c}{\partial B} + \frac{\partial f}{\partial c}\frac{\partial^2 c}{\partial B^2}\end{aligned}$$

Note the order of the denominators in \(\frac{\partial^2 f}{\partial B\,\partial A}\).

This is important when we come to look at the Tensor Form. Look what happens if we do the differentiation the other way around:

$$\begin{aligned}\frac{\partial^2 f}{\partial A\,\partial B} = {}& \left[\frac{\partial^2 f}{\partial a^2}\frac{\partial a}{\partial A} + \frac{\partial^2 f}{\partial b\,\partial a}\frac{\partial b}{\partial A} + \frac{\partial^2 f}{\partial c\,\partial a}\frac{\partial c}{\partial A}\right]\frac{\partial a}{\partial B} + \frac{\partial f}{\partial a}\frac{\partial^2 a}{\partial A\,\partial B}\\
&+ \left[\frac{\partial^2 f}{\partial a\,\partial b}\frac{\partial a}{\partial A} + \frac{\partial^2 f}{\partial b^2}\frac{\partial b}{\partial A} + \frac{\partial^2 f}{\partial c\,\partial b}\frac{\partial c}{\partial A}\right]\frac{\partial b}{\partial B} + \frac{\partial f}{\partial b}\frac{\partial^2 b}{\partial A\,\partial B}\\
&+ \left[\frac{\partial^2 f}{\partial a\,\partial c}\frac{\partial a}{\partial A} + \frac{\partial^2 f}{\partial b\,\partial c}\frac{\partial b}{\partial A} + \frac{\partial^2 f}{\partial c^2}\frac{\partial c}{\partial A}\right]\frac{\partial c}{\partial B} + \frac{\partial f}{\partial c}\frac{\partial^2 c}{\partial A\,\partial B}\end{aligned}$$

Overall, if you compare **all** the terms between \(\frac{\partial^2 f}{\partial B\,\partial A}\) and \(\frac{\partial^2 f}{\partial A\,\partial B}\), you will find that the two expressions are indeed equal. However, their order is different. In fact, the nine terms in square brackets (each multiplied by the term outside the square bracket to its right), if you now consider them as a sort of 3 × 3 matrix, are the transpose of one another. The point is, if you do it the wrong way around, it kind of messes things up when you then try to find the Tensor form.
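That the two orders of differentiation give equal results is just the symmetry of mixed second partials (Clairaut's theorem) surviving the composition. A quick check, again with sub-functions of my own choosing (a made-up example, not from the book):

```python
import sympy as sp

A, B, C = sp.symbols('A B C')
# Hypothetical composed function: f(a, b, c) with a, b, c functions of A, B, C.
a = A * B + C
b = sp.exp(A) * C
c = A + B**2
F = a**2 * b + sp.sin(c)

# The two mixed derivatives differ only in the order of differentiation.
mixed_1 = sp.diff(F, A, B)
mixed_2 = sp.diff(F, B, A)
print(sp.simplify(mixed_1 - mixed_2))  # 0: the two expressions are equal
```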

#### COMPARISON WITH THE TENSOR FORM

On page 42 Pavel presents the tensor form as, Equation (4.44),

$$\frac{\partial^2 f}{\partial A^r\,\partial A^s} = \frac{\partial^2 f}{\partial a^i\,\partial a^j}\frac{\partial a^i}{\partial A^r}\frac{\partial a^j}{\partial A^s} + \frac{\partial f}{\partial a^i}\frac{\partial^2 a^i}{\partial A^r\,\partial A^s}$$

(I have written the \(f\)-derivatives here with lower-case \(a\)'s in the denominators, per the errata quoted later in this post.)

How does he get this? Well, let us begin by having A^{1} = A, A^{2} = B and A^{3} = C, and also \(a^1 = a\), \(a^2 = b\) and \(a^3 = c\). Then we re-write the equations for \(\frac{\partial^2 f}{\partial A^2}\), \(\frac{\partial^2 f}{\partial B^2}\) and \(\frac{\partial^2 f}{\partial B\,\partial A}\), using these “indexed” terms:

$$\frac{\partial^2 f}{\partial A^1\,\partial A^1} = \sum_{i=1}^{3}\left(\left[\sum_{j=1}^{3}\frac{\partial^2 f}{\partial a^j\,\partial a^i}\frac{\partial a^j}{\partial A^1}\right]\frac{\partial a^i}{\partial A^1} + \frac{\partial f}{\partial a^i}\frac{\partial^2 a^i}{\partial A^1\,\partial A^1}\right),$$

$$\frac{\partial^2 f}{\partial A^2\,\partial A^2} = \sum_{i=1}^{3}\left(\left[\sum_{j=1}^{3}\frac{\partial^2 f}{\partial a^j\,\partial a^i}\frac{\partial a^j}{\partial A^2}\right]\frac{\partial a^i}{\partial A^2} + \frac{\partial f}{\partial a^i}\frac{\partial^2 a^i}{\partial A^2\,\partial A^2}\right)$$

and

$$\frac{\partial^2 f}{\partial A^2\,\partial A^1} = \sum_{i=1}^{3}\left(\left[\sum_{j=1}^{3}\frac{\partial^2 f}{\partial a^j\,\partial a^i}\frac{\partial a^j}{\partial A^2}\right]\frac{\partial a^i}{\partial A^1} + \frac{\partial f}{\partial a^i}\frac{\partial^2 a^i}{\partial A^2\,\partial A^1}\right)$$

What we need to do is compare these three equations with Pavel’s equation (4.44), which contains the summation convention for repeated indices.

Firstly, note that the indices labelling the variables of differentiation on the left of the equal sign are not repeated in any of the terms, i.e. they occur once in each term. So whatever they are given as on the left of the equal sign, that's what they are on the right.

What are repeated are the i and j indices. The way to look at this is to think of the i^{th} term, for i = 1 to 3, and then sum over j, for j = 1 to 3, for that term. So for the first line of the first of the equations, we have \(\frac{\partial^2 f}{\partial A^1\,\partial A^1}\), and i = 1. So we sum over j.

Then, for the second line, we have i = 2 and again sum over j, all over again for that term. Finally, for i = 3, again sum over j. For the next equation we have \(\frac{\partial^2 f}{\partial A^2\,\partial A^2}\), and we carry out the same process all over again from the start, i.e. hold i = 1 and sum the three j-terms, then hold i = 2 and sum those three j-terms, then hold i = 3 and sum those three j-terms. For the third equation, we have \(\frac{\partial^2 f}{\partial A^2\,\partial A^1}\), and go through the whole thing again.

If you look at Equation (4.44) you see that both terms contain i but only the first term contains j. So to get the i^{th} term you have to sum over j, each time, i.e. for each i-term.
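This bookkeeping ("hold each free index, then sum the repeated indices") is exactly what numpy's `einsum` automates. Here is a small numerical sketch of the first, doubly-summed term, with made-up random arrays standing in for the derivatives (calling the two free indices r and s here is my own labelling):

```python
import numpy as np

rng = np.random.default_rng(0)
d2f = rng.normal(size=(3, 3))   # stands in for the second partials of f w.r.t. a^i, a^j
J = rng.normal(size=(3, 3))     # J[i, r] stands in for da^i/dA^r

# Explicit loops: for each free pair (r, s), sum the repeated indices i and j.
term = np.zeros((3, 3))
for r in range(3):
    for s in range(3):
        for i in range(3):
            for j in range(3):
                term[r, s] += d2f[i, j] * J[i, r] * J[j, s]

# einsum expresses the same contraction in one line: repeated letters are summed,
# letters after '->' are the free indices.
term_einsum = np.einsum('ij,ir,js->rs', d2f, J, J)
print(np.allclose(term, term_einsum))  # True
```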

Just an aside:~

I will come back to this idea of the order of summation in a *future post*, when I look at the third-order differential of \(f\).

Basically, if a repeated index does not occur in all terms, such as j in Eq (4.44), then the terms containing it must be summed, over all values of j, for each value of i, i.e. for each value of the index that does occur in all terms.

For the third-order differential I will introduce A^{k} in the tensor expression. Only some of the terms in that tensor expression contain j, but they all contain both i and k. So, again, the terms containing j will need to be summed for each i- and k-term: nine terms in all.

As all terms contain both i and k, it doesn’t matter too much which order they are done in but, as I said earlier, it is best to stick to convention and do the i-terms first and then the k-terms.

Anyway, the point is that all the information in the three equations above is contained in Pavel’s equation (4.44). So if you can get your head around working with tensors, then you can expedite your workings-out in mathematical problems.
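As a sanity check that the compact tensor equation really does carry all the information of the three longhand equations, the sketch below (my own construction, with arbitrarily chosen functions, not from the book) builds the right-hand side by brute-force summation over i and j for every choice of the two free indices, and compares it with direct differentiation:

```python
import sympy as sp

Avars = sp.symbols('A1 A2 A3')         # the indexed variables A^1, A^2, A^3
slots = sp.symbols('u v w')            # slots standing in for a^1, a^2, a^3 inside f

f = slots[0]**2 * slots[1] + sp.sin(slots[2])     # hypothetical f(a^1, a^2, a^3)
a = [Avars[0] * Avars[1],                         # hypothetical sub-functions a^i
     Avars[1] + Avars[2]**2,
     Avars[0] * sp.cos(Avars[2])]
pairs = list(zip(slots, a))
F = f.subs(pairs)                      # f composed with the a^i

for r in range(3):                     # the two free indices of the second derivative
    for s in range(3):
        lhs = sp.diff(F, Avars[r], Avars[s])
        # First term: second partials of f w.r.t. a^i, a^j, contracted with
        # first derivatives of the sub-functions (sum over repeated i and j).
        rhs = sum(sp.diff(f, slots[i], slots[j]).subs(pairs)
                  * sp.diff(a[i], Avars[r]) * sp.diff(a[j], Avars[s])
                  for i in range(3) for j in range(3))
        # Second term: first partials of f times second derivatives of the a^i
        # (sum over the repeated i only).
        rhs += sum(sp.diff(f, slots[i]).subs(pairs)
                   * sp.diff(a[i], Avars[r], Avars[s]) for i in range(3))
        assert sp.simplify(lhs - rhs) == 0

print('tensor-form identity verified for all nine pairs of free indices')
```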

Just a few points here:~

- In general you will use indexed variables, such as \(A^i\), rather than \(A\), \(B\), \(C\). In cartesian coordinates you will use \(x^1\), \(x^2\) and \(x^3\), rather than \(x\), \(y\) and \(z\). If you study relativity you will find that you are working in \(x^i\), for i = 1, 2 and 3, for the three space dimensions, and i = 4 for the time dimension.
- Note that, in Equation (4.44), Pavel has capital \(A^i\) in the denominators, whilst in Equation (4.22) he has lower case a, b and c. In my opinion, since f is a function of a, b and c, and a, b and c are all functions of A, B and C, I find it a little confusing trying to remember whether to use capitals or lower case, and would be quite happy to choose one or the other and then stick to it. In George Katanics’ list of errata, he says

*“Page 42, equation (4.44): All “A”s in denominator should be “a”.”*

#### DIFFERENTIATION USING THE TENSOR-FORM ONLY

Just updating this post: it occurred to me that we can differentiate

$$\frac{\partial f}{\partial A^r} = \frac{\partial f}{\partial a^i}\frac{\partial a^i}{\partial A^r}$$

(which is essentially Pavel’s Eqns (4.35) and (4.36)) again, this time with respect to a second indexed variable, to give

$$\frac{\partial^2 f}{\partial A^s\,\partial A^r} = \frac{\partial^2 f}{\partial a^j\,\partial a^i}\frac{\partial a^j}{\partial A^s}\frac{\partial a^i}{\partial A^r} + \frac{\partial f}{\partial a^i}\frac{\partial^2 a^i}{\partial A^s\,\partial A^r}$$

where I have blatantly put the capital \(A^s\) in the denominator, by way of introducing it. We see then that, just by introducing \(A^s\) and using the standard rules of differentiation, including differentiation of a product and the chain rule for the differentiation of a function of a function, we arrive at Equation (4.44) far more easily than by working the whole thing out longhand, as I have done above.

Good luck with it all. I will try to look at the tensor expression for the third-order derivative of \(f\)

in the next post (*Tensor Analysis – Grinfeld – Chapter 4 – Triple Derivative*). This is exercise 38 in Pavel’s book. Best wishes. John 🙂

*This post contains affiliate links, meaning, if you click through an affiliate link and choose to make a purchase, I may make a commission, at no additional cost to you. Thank you.*
