Saturday, August 06, 2016

Dimensionful Matrices

Introduction

Programming languages and libraries for numerical work tend not to place a lot of emphasis on the types of their data. For example, Matlab, R, Octave, Fortran and Numpy (but not the now defunct Fortress) all tend to treat their data as plain numbers, meaning that any time you have a temperature and a mass, say, there is nothing to prevent you from adding them.


I've been wondering how much dimensions (in the sense of dimensional analysis) and units could help with numerical programming. As I pointed out on G+ recently (which is where I post shorter stuff these days), you don't have to limit dimensions to the standard ones of length, mass, time, dollars and so on. Any scale invariance in the equations you're working with can be exploited as a dimension, giving you a property that can be statically checked by a compiler.


There are quite a few libraries to statically check dimensions and units now. For example Boost.Units for C++, units for Haskell and even quantities for Idris.


A matrix that breaks things

Even if a language supports dimensions, it's typical to define objects like vectors and matrices as homogeneous containers of quantities. But have a look at the Wikipedia page on the metric tensor. There is a matrix



$$g = \begin{pmatrix}
-c^2 & 0 & 0 & 0 \\
0    & 1 & 0 & 0 \\
0    & 0 & 1 & 0 \\
0    & 0 & 0 & 1
\end{pmatrix}$$

which has the curious property that 3 entries on the diagonal seem to be dimensionless while the first entry is a squared velocity with dimension $L^2T^{-2}$. This will break many libraries that support units. An obvious workaround is to switch to using natural units, which is much the same as abandoning the usefulness of dimensions. But there's another way, even if it may be tricky to set up with existing languages.
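To make the breakage concrete, here's a toy Haskell sketch (not any real units library; the names Dim, Qty and Metric are invented for illustration). Once the corner entry carries dimension $L^2T^{-2}$, the metric no longer fits in a homogeneous container of a single quantity type, and you're pushed towards something with per-entry dimensions, which is where the next section goes.

```haskell
{-# LANGUAGE DataKinds, KindSignatures #-}

-- A toy sketch, not any real units library: a couple of dimensions as a
-- promoted kind, and quantities tagged with them at the type level.
data Dim = One     -- dimensionless
         | L2T2    -- length^2 / time^2, i.e. a squared velocity

newtype Qty (d :: Dim) = Qty Double deriving Show

-- The metric can't live in a homogeneous container like [[Qty d]] for a
-- single d, because the corner entry has a different dimension from the
-- rest. One workaround is a bespoke record whose fields carry their own
-- dimension types.
data Metric = Metric
  { g00 :: Qty 'L2T2   -- the -c^2 entry, a squared velocity
  , g11 :: Qty 'One
  , g22 :: Qty 'One
  , g33 :: Qty 'One
  }

c :: Double
c = 299792458          -- speed of light, m/s

minkowski :: Metric
minkowski = Metric (Qty (negate (c * c))) (Qty 1) (Qty 1) (Qty 1)

main :: IO ()
main = print (g00 minkowski)
```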


Heterogeneous vectors and matrices

According to a common convention in physics, a 4-vector $dx^\mu = (dt, dx, dy, dz)$ has dimensions $(T, L, L, L)$, where I'm using the convention that we can represent the units of a vector or matrix simply as a vector or matrix of dimensions, and here $T$ is time and $L$ is length. The metric tensor is used like this: $ds^2 = g_{\mu\nu}\,dx^\mu dx^\nu$ (where I'm using the Einstein summation convention so the $\mu$'s and $\nu$'s are summed over). If we think of $ds^2$ as having units of length squared (it is a pseudo-Riemannian metric after all) then it makes sense to think of $g_{\mu\nu}$ as having dimensions given by

$$\begin{pmatrix}
L^2T^{-2} & LT^{-1} & LT^{-1} & LT^{-1} \\
LT^{-1}   & 1       & 1       & 1       \\
LT^{-1}   & 1       & 1       & 1       \\
LT^{-1}   & 1       & 1       & 1
\end{pmatrix}$$

We can write this more succinctly as

$$L^2\,(T, L, L, L)^{-1} \otimes (T, L, L, L)^{-1}$$

where $\otimes$ is the usual outer product.


I'll use the notation $x : A$ to mean $x$ is of type $A$. So, for example, $dx^\mu : (T, L, L, L)$. I'll also use pointwise notation for types, such as $AB$ and $A^{-1}$ for the pointwise product and pointwise inverse of vectors of dimensions.


Now I can give some general rules. If $m : M$ is a matrix, $u : U$ and $v : V$ are vectors, and $s : S$ is a scalar, then $v = mu$ only makes sense if $M = V \otimes U^{-1}$. Similarly the "inner product" $s = u^\top v$ only makes sense if every entry of the pointwise product $UV$ is $S$.
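Here's a rough value-level sketch of these rules in Haskell. A real implementation would push the checks into the type system; this one just tags each entry with its dimension (as exponents of $T$ and $L$) and checks the rules at run time. All the names (Dim, DimVec, DimMat, mv, dot) are invented for this example.

```haskell
-- Dimensions as exponents of (time, length); e.g. T = (1,0), L = (0,1).
type Dim = (Int, Int)

mulD, divD :: Dim -> Dim -> Dim
mulD (a, b) (c, d) = (a + c, b + d)
divD (a, b) (c, d) = (a - c, b - d)

-- A vector together with the dimension of each entry.
data DimVec = DimVec [Dim] [Double] deriving Show
-- A matrix of type A ⊗ B: entry (i,j) has dimension A_i B_j.
data DimMat = DimMat [Dim] [Dim] [[Double]] deriving Show

-- v = m u only makes sense if u's dimensions cancel the second factor of
-- m's type; the result then has the dimensions of the first factor.
mv :: DimMat -> DimVec -> DimVec
mv (DimMat as bs m) (DimVec us u)
  | zipWith mulD bs us /= replicate (length us) (0, 0) =
      error "dimension mismatch in matrix-vector product"
  | otherwise = DimVec as [sum (zipWith (*) row u) | row <- m]

-- u^T v only makes sense if every pointwise product U_i V_i is the same.
dot :: DimVec -> DimVec -> (Dim, Double)
dot (DimVec us u) (DimVec vs v)
  | all (== head ds) ds = (head ds, sum (zipWith (*) u v))
  | otherwise           = error "inner product entries have mixed dimensions"
  where ds = zipWith mulD us vs

main :: IO ()
main = do
  let t   = (1, 0) :: Dim
      l   = (0, 1) :: Dim
      lsq = (0, 2) :: Dim
      dxDims = [t, l, l, l]
      dx = DimVec dxDims [1, 2, 3, 4]          -- a displacement dx^mu
      c  = 3.0e8
      -- The metric, of type L^2 (T,L,L,L)^-1 ⊗ (T,L,L,L)^-1.
      g  = DimMat (map (divD lsq) dxDims) (map (divD (0, 0)) dxDims)
             [ [-c * c, 0, 0, 0]
             , [0,      1, 0, 0]
             , [0,      0, 1, 0]
             , [0,      0, 0, 1] ]
  -- ds^2 = g dx dx comes out with dimension L^2, i.e. (0,2).
  print (dot dx (mv g dx))
```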


Generic vectors and matrices

Although these kinds of types might be useful if you're dealing with the kind of heterogeneous matrices that appear in relativity, there's another reason they might be useful. If you write code (in the imaginary language that supports these structures and understands dimensions and units) to be as generic as possible in the types of the vector and matrix entries, failures to type check will point out parts of the code where there are hidden assumptions, or even errors, about scaling. For example, consider a routine to find the inverse of a 3 by 3 matrix. Writing this as generically as possible means we should write it to operate on a matrix of type $A \otimes B$, say. The result should have type $B^{-1} \otimes A^{-1}$.

If this type checks when used with a suitably powerful type checker then it means that if we replace the units for type $A$, say, with units twice as large, it should have no effect on the result, taking into account those units. In this case, it means that if we multiply the numbers in the first row of the input by 0.5 then the numbers in the first column of the output should get multiplied by 2. In fact this is a basic property of matrix inverses. In other words, this mathematical property of matrix inverses is guaranteed by a type system that can handle units and heterogeneous matrices. It would be impossible to write a matrix inverter that type checks and fails to have this property. Unfortunately it's still possible to write a matrix inverter that type checks and is incorrect in some other way. Nonetheless, this kind of type system would put a very big constraint on the code and is likely to eliminate many sources of error.
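As a quick numerical sanity check of that scaling property, here's a small hand-rolled 3 by 3 inverse (via the adjugate), with no dimension checking at all. Halving the first row of the input does indeed double the first column of the output and leaves everything else alone. None of this is from a real library; it's just a throwaway sketch.

```haskell
type M3 = [[Double]]

det3 :: M3 -> Double
det3 [[a, b, c], [d, e, f], [g, h, i]] =
  a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
det3 _ = error "expected a 3x3 matrix"

-- Inverse via the adjugate: transpose of the cofactor matrix over the determinant.
inv3 :: M3 -> M3
inv3 m@[[a, b, c], [d, e, f], [g, h, i]] =
  [ [ (e * i - f * h) / k, (c * h - b * i) / k, (b * f - c * e) / k ]
  , [ (f * g - d * i) / k, (a * i - c * g) / k, (c * d - a * f) / k ]
  , [ (d * h - e * g) / k, (b * g - a * h) / k, (a * e - b * d) / k ] ]
  where k = det3 m
inv3 _ = error "expected a 3x3 matrix"

main :: IO ()
main = do
  let m  = [[2, 1, 1], [1, 3, 2], [1, 0, 0]] :: M3
      m' = map (* 0.5) (head m) : tail m   -- halve the first row
  mapM_ print (inv3 m)
  putStrLn "--"
  mapM_ print (inv3 m')   -- first column doubled, other entries unchanged
```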


An example, briefly sketched

I thought I'd look at an actual example of a matrix inverter to see what would happen if I used a type checker like the one I've described. I looked at the conjugate gradient method. At the Wikipedia page, note the line



$$\alpha_k = \frac{\mathbf{r}_k^\mathsf{T} \mathbf{r}_k}{\mathbf{p}_k^\mathsf{T} \mathbf{A} \mathbf{p}_k}$$

This would immediately fail to type check because if $\mathbf{r}_k$ is of generic vector type $R$ then $r_{k,1} r_{k,1}$ isn't the same type as $r_{k,2} r_{k,2}$, so they can't be added. I won't go into any of the details but the easiest way to patch up this code to make it type check is to introduce a new matrix $P$ of type $R^{-1} \otimes R^{-1}$ and, besides using it to make this inner product work (replacing the numerator by $\mathbf{r}_k^\mathsf{T} P \mathbf{r}_k$), we also use $P$ anywhere in the code we need to convert a vector of type $R$ to a vector of type $R^{-1}$. If you try to do this as sparingly as possible you'll end up with a modified algorithm. But at first this seems weird. Why should this matrix inverse routine rely on someone passing in a second matrix to make it type check? And what is this new algorithm anyway? Well, scroll down the Wikipedia page and you get to the preconditioned conjugate gradient algorithm. The extra matrix we need to pass in is the preconditioner. This second algorithm would type check.

Preconditioned conjugate gradient, with a suitable preconditioner, generally performs better than pure conjugate gradient. So in this case we're getting slightly more than a check on our code's correctness: the type checker for our imaginary language would give a hint on how to make the code perform better. There's a reason for this. The original conjugate gradient algorithm is implicitly making a choice of units that sets scales along the axes, and these determine the course taken by the algorithm. It's not at all clear that picking these scalings randomly (which is in effect what you're doing if you throw a random problem at the algorithm) is any good. It's better to pick a preconditioner adapted to the scale of the problem, and the type checker is hinting (or would be if it existed) that you need to do this. Compare with the gradient descent algorithm, whose scaling problems are better known.
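For concreteness, here's a compact sketch of preconditioned conjugate gradient, with comments marking where the extra matrix (passed in here as a function applyP) does the conversion from type $R$ to type $R^{-1}$. The Jacobi (diagonal) preconditioner in the example is just a convenient stand-in, not a recommendation, and nothing here is taken from a particular library.

```haskell
type Vec = [Double]
type Mat = [[Double]]

dot :: Vec -> Vec -> Double
dot u v = sum (zipWith (*) u v)

mv :: Mat -> Vec -> Vec
mv m v = [dot row v | row <- m]

-- axpy a x y computes a*x + y componentwise.
axpy :: Double -> Vec -> Vec -> Vec
axpy a x y = zipWith (\xi yi -> a * xi + yi) x y

-- Solve A x = b for symmetric positive definite A, starting from x0.
pcg :: Mat -> (Vec -> Vec) -> Vec -> Vec -> Int -> Vec
pcg a applyP b x0 maxIter = go x0 r0 z0 z0 maxIter
  where
    r0 = zipWith (-) b (mv a x0)     -- residual: "type R"
    z0 = applyP r0                   -- preconditioned residual: "type R^-1"
    go x _ _ _ 0 = x
    go x r z p k
      | dot r r < 1e-20 = x          -- dimensionally suspect stopping test,
                                     -- ignored for this sketch
      | otherwise       = go x' r' z' p' (k - 1)
      where
        ap    = mv a p
        alpha = dot r z / dot p ap   -- the inner product that needed P
        x'    = axpy alpha p x
        r'    = axpy (negate alpha) ap r
        z'    = applyP r'            -- again converting R to R^-1
        beta  = dot r' z' / dot r z
        p'    = axpy beta p z'       -- new search direction z' + beta*p

main :: IO ()
main = do
  let a = [[4, 1, 0], [1, 3, 1], [0, 1, 2]] :: Mat
      b = [1, 2, 3] :: Vec
      jacobi r = zipWith (/) r [4, 3, 2]   -- a diagonal preconditioner
      x = pcg a jacobi b [0, 0, 0] 50
  print x
  print (mv a x)                           -- should be close to b
```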


But which language?

I guess both Agda and Idris could be made to implement what I've described. However, I've a hunch it might not be easy to use in practice.

7 comments:

  1. George Hart has written a book on this topic called "Multidimensional Analysis".
    You can find a summary and an article on his website.

    http://www.georgehart.com/research/multanal.html
    http://www.georgehart.com/research/tdm.ps

  2. The outer product view also corresponds nicely to the view of matrices as linear transformations between two vector spaces.

    I love the analysis of Conjugate Gradient – when working on Fortress we loved to trot this one out (internally, too). If we'd realized that getting the dimensions right corresponded to preconditioning, it would have made discussing dimensioned matrices with the supercomputing crowd so much easier. It's still a challenge to turn this into a sparse matrix representation without doing horrifying things with dependent types, though.

  3. We've tried a few times to implement this in Haskell on top of the dimensional library. It actually works really well in the "forward" direction (determining the dimensions of the results when all input dimensions are known), but can't check any complicated uses that are polymorphic in dimension. I am working on a type checker plugin for GHC to rectify that, but there are a few complications and so my attention got diverted to lower-hanging fruit.

  4. I once tried, and failed, to work out how to do dimensioned clifford algebra in the particular case of momentum. I've left the incorrect blog post [with prepended mea culpa] at http://grampsgrumps.blogspot.com.au/2014/11/multivector-momentum.html in the hope of inspiring someone. I have a feeling that understanding how to do that correctly would be a useful step.

  5. if we were to represent the vectors as hlists in haskell (or in scala via shapeless) we would be able to express these multidimensional arrays. we could then have interpreters which may "compile" them down to some target (for the ones where the types conformed).

  6. If you don't know already, you might want to see what F# does: https://blogs.msdn.microsoft.com/andrewkennedy/2008/08/29/units-of-measure-in-f-part-one-introducing-units/

    Though, no matrices, AFAIK.

  7. R effort here: https://cran.r-project.org/web/packages/units/index.html
