One can only suffer tensor index manipulation for so long. A team at the Harvard NLP group has proposed what they call Named Tensors, which use string names instead of tensor dimension indices. An implementation proposal is available in their GitHub repository - https://github.com/harvardnlp/namedtensor. (See the links there for examples.)

The phrase "named tensor", I think, is a misnomer - it is not the tensor that is being named, but rather its dimensions, and so "named tensor dimensions" is perhaps more accurate. The article does call it "named dimensions" too, but the API is "NamedTensor". I suppose we can grant an interpretation as "Tensor with names".

Using names instead of integer indices better captures a fundamental symmetry in tensor representations of data and calculations - the symmetry that interchanging indices end-to-end does not alter the model. "End-to-end" here means that every use of the affected indices is reordered consistently throughout the model. The point is that you can choose any index ordering and arrive at the same model behaviour and performance. The further point is that you really shouldn't need to bother choosing at all. Explicitly recognizing this symmetry also gives us clarity when expressing "dot products", for example, where name identity makes clear which dimensions are being reduced.
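To make that concrete, here is a minimal NumPy sketch (not the namedtensor API) of the symmetry: the same contraction computed under two different dimension orderings gives the same numbers, as long as every use of the tensor is reordered consistently.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 3, 4))      # dimensions: (batch, height, width)
w = rng.normal(size=(4, 5))         # dimensions: (width, hidden)

# Ordering A: batch, height, width
out_a = np.einsum("bhw,wk->bhk", x, w)

# Ordering B: the same data with width moved first, everywhere it appears
x_b = np.transpose(x, (2, 0, 1))    # (width, batch, height)
out_b = np.einsum("wbh,wk->bhk", x_b, w)

assert np.allclose(out_a, out_b)    # same model, different index bookkeeping
```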

Perhaps the most telling line in the namedtensor repo's README is the following -

The following functions are removed from the stdlib:
view, expand, squeeze, unsqueeze, transpose

If you can address tensor dimensions using names, none of those concepts are needed. They are all unified into a single "named indexing" primitive. Such a reduction in required operations is usually a good indicator that some actual simplification has occurred.
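As an illustration, PyTorch's later experimental named-tensor API (distinct from the Harvard library, but in the same spirit) lets name-based alignment and reduction stand in for transpose-and-remember-the-axis bookkeeping. A hedged sketch:

```python
import torch

t = torch.randn(2, 3, 4, names=("batch", "height", "width"))

# Instead of transpose(1, 2): ask for the order you want, by name.
t2 = t.align_to("batch", "width", "height")

# Instead of remembering which integer axis to reduce: reduce by name.
col_sums = t.sum("height")

print(t2.names)        # ('batch', 'width', 'height')
print(col_sums.names)  # ('batch', 'width')
```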

Quoting from "Machine Learning Systems are Stuck in a Rut" by Barham and Isard (emphasis mine),

Named dimensions improve readability by making it easier to determine how dimensions in the code correspond to the semantic dimensions described in, e.g., a research paper. We believe their impact could be even greater in improving code modularity, as named dimensions would enable a language to move away from fixing an order on the dimensions of a given tensor, which in turn would make function lifting more convenient…

The improvement in modularity they're talking about is that when you want to abstract an operation such as "dot product of two tensors along two compatible dimensions", you can do so without committing early to the shapes of the tensors you'll operate on. Using indices, you might have to express it as inner_product(i,j)(T1,T2), where i and j pick out positions in each tensor's particular index ordering. Using names, you can enforce compatibility with a single name, as in inner_product("width")(T1,T2). Furthermore, the latter expression captures the symmetry of the inner product: you can supply the tensors in either order and get the same result, independent of index ordering - which wouldn't work in the integer-indexed case.
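Here is a hypothetical sketch of that inner_product("width") idea, with tensors represented as (array, names) pairs. The representation and helper names are mine, for illustration, not the namedtensor API:

```python
import numpy as np

def inner_product(name):
    """Return a function that contracts two named tensors along `name`."""
    def contract(t1, t2):
        data1, names1 = t1
        data2, names2 = t2
        # Look up the axis to reduce by name, in each tensor independently.
        axes = (names1.index(name), names2.index(name))
        out = np.tensordot(data1, data2, axes=axes)
        out_names = tuple(n for n in names1 if n != name) + \
                    tuple(n for n in names2 if n != name)
        return (out, out_names)
    return contract

T1 = (np.ones((2, 4)), ("batch", "width"))
T2 = (np.ones((4, 3)), ("width", "hidden"))

dot = inner_product("width")
out, names = dot(T1, T2)
print(out.shape, names)    # (2, 3) ('batch', 'hidden')

# Supplying the tensors in the other order also works; only the ordering of
# the surviving named dimensions differs, and the names track that for you.
out2, names2 = dot(T2, T1)
print(out2.shape, names2)  # (3, 2) ('hidden', 'batch')
```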

So ... adopt named dimensions and make your tensor code more readable, possibly less buggy, and maybe more generalizable too.

Work in PyTorch is also moving to adopt this. Issue 4164 (raised in 2017) has the following comment by Soumith Chintala (PyTorch creator) -

Aside

Interestingly, this is related to a choice I made in my toy quantum simulator, where I address qubits using names to get more reusable higher-order operators than if the operators had to be defined using indices. Quantum states of multi-particle systems are also often worked out using tensors, and the state of two independent, non-interacting systems is the tensor outer product of the states of the two separate systems. Similar to the "named tensor" proposal, in this case I exploit the symmetry that the physics and your predictions do not change depending on the order in which you take your tensor outer products. It is a bit tiresome to move away from the index approach for some of the maths, but there is no reason for your code to be subject to the same grind.
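For flavour, a toy sketch (not the actual simulator code) of the named-qubit idea: states carry qubit names, independent systems combine by outer product, and the order in which you combine them only permutes axes, which the names let you undo.

```python
import numpy as np

def qubit(name, amplitudes=(1.0, 0.0)):
    """A single-qubit state tagged with a qubit name."""
    return (np.array(amplitudes, dtype=complex), (name,))

def combine(s1, s2):
    """Tensor (outer) product of two independent named systems."""
    a1, names1 = s1
    a2, names2 = s2
    return (np.tensordot(a1, a2, axes=0), names1 + names2)

q_a = qubit("alice")              # |0>
q_b = qubit("bob", (0.0, 1.0))    # |1>

ab = combine(q_a, q_b)
ba = combine(q_b, q_a)

# Same physics either way: the amplitudes agree once axes are matched by name.
perm = [ba[1].index(n) for n in ab[1]]
assert np.allclose(ab[0], np.transpose(ba[0], perm))
```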