
Writing tests

I have been conceptualizing and writing a test file for the three internal matrices. Doing that involves thinking through which functions the matrices should support. My approach is to write the tests before making them pass: I will write a complete test file, get it reviewed and edited by the community, and then edit the class files to make the tests pass.
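
A sketch of the kind of test I have in mind (hypothetical: DenseMatrix, DOKMatrix and LILMatrix are the project's working names, and pinning down this exact API is what the community review is for):

def check_common_interface(cls):
    # every internal representation should pass the same basic checks
    A = cls(2, 2, [1, 2, 3, 4])
    assert A[1, 0] == 3
    assert (A + A)[1, 1] == 8
    assert (A * A)[0, 0] == 7
    assert A.det() == -2

def test_matrices():
    for cls in (DenseMatrix, DOKMatrix, LILMatrix):
        check_common_interface(cls)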

I’m currently on a week-long vacation with my family, so I’m not getting much work done. I plan to pick up speed when I return on the 24th.

 

Posted on July 18, 2011 in Sympy

 

Structuring the module and commit history

This past week, I finished writing the LU decomposition for LILMatrix and started writing a Matrix_ class. Matrix_ is a high-level class which uses the three matrices DOKMatrix, LILMatrix, and DenseMatrix as internal representations.

While refactoring the Matrix class to make it more modular and to remove high-level functionality, I ran into some problems that I’d rather discuss with the list first.

I will be writing to the list to discuss what the public API of the matrix module should be. Only when that is done can we proceed with the structuring.

I’m proud to say that the algorithmic part of my project is over. I’m now cleaning up my commit history, writing docstrings and tests, and tidying the code.

 

Posted on July 11, 2011 in Sympy

 

Some more benchmarks

I wrote LU factorization for LILMatrix. I’m able to factorize matrices of size up to 1000 * 1000 with sparsity < 0.1, and since the LU factorization exits early for singular matrices, this should solve the problems Aaron faced in the integration module. Sparse LU performs about 50 times better than dense LU for matrices of size 100 * 100 with sparsity around 0.2.
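
The early exit works like this in outline (a toy dense version for clarity; the real LU_sparse operates on the LIL rows and returns L, U, P):

from fractions import Fraction

def lu_with_early_exit(rows):
    n = len(rows)
    rows = [[Fraction(e) for e in row] for row in rows]
    for k in range(n):
        # find a pivot; if the whole column below is zero, the matrix
        # is singular and we bail out instead of finishing the job
        piv = next((i for i in range(k, n) if rows[i][k] != 0), None)
        if piv is None:
            raise ValueError("singular matrix")
        rows[k], rows[piv] = rows[piv], rows[k]
        for i in range(k + 1, n):
            m = rows[i][k] / rows[k][k]
            for j in range(k, n):
                rows[i][j] -= m * rows[k][j]
    return rows  # upper triangle is U; the multipliers are not stored here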

In [1]: A = randInvLILMatrix(100,5)

In [2]: A.sparsity()
Out[2]: 0.0819

In [3]: A = randInvLILMatrix(100,10)

In [4]: A.sparsity()
Out[4]: 0.1705

In [5]: %timeit L, U, P = A.LU_sparse()
10 loops, best of 3: 138 ms per loop

In [6]: B = A.toMatrix()

In [7]: %timeit L, U, P = B.LUdecomposition()
1 loops, best of 3: 5.02 s per loop

In [8]: A = randInvLILMatrix(100,5)

In [9]: A.sparsity()
Out[9]: 0.0831

In [10]: %timeit L, U, P = A.LU_sparse()
10 loops, best of 3: 86.5 ms per loop

In [11]: B = A.toMatrix()

In [12]: %timeit L, U, P = B.LUdecomposition()
1 loops, best of 3: 4.6 s per loop

In [13]: A = randInvLILMatrix(1000,10)

In [14]: %timeit L, U, P = A.LU_sparse()
1 loops, best of 3: 8.86 s per loop

In [15]: A.sparsity()
Out[15]: 0.017837

In [16]: B = A.toMatrix()

In [17]: %time L, U, P = B.LUdecomposition()

The last command has been running for more than 5 minutes. I grew impatient and started writing this blog post. :)

Almost all of the algorithmic part of my project is finished, except for nullspaces. But LILMatrix’s RREF works nicely, so I expect nullspaces to be straightforward.

As discussed on IRC, I will now start work on a Matrix_ class (a transitional name). Matrix_ will have a smart constructor which decides which of the three matrices (dense, dok, lil) to use for its internal representation. The current Matrix class and my DOKMatrix and LILMatrix classes will be made low-level, that is, not directly usable by the user, and many error checks will be removed from them. Matrix_ will be the user-level class containing all the methods the user needs; these will basically perform error checks and then call the required method on the internal representation. Matrix_ will also handle seamless interconversion between the representations. For example, if a matrix object has a LILMatrix representation but the user calls .cholesky, which is a method of DOKMatrix, the object will be converted to a DOKMatrix implicitly and .cholesky called.
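
A rough sketch of that implicit interconversion (hypothetical code using the working names from this post; the real design will follow the list discussion):

class Matrix_(object):
    def __init__(self, rep):
        self._rep = rep   # a DenseMatrix, DOKMatrix or LILMatrix

    def cholesky(self):
        # cholesky lives on DOKMatrix; if the current representation
        # doesn't implement it, convert first, then delegate
        if not hasattr(self._rep, 'cholesky'):
            self._rep = self._rep.to_dokmatrix()
        return Matrix_(self._rep.cholesky())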

In a few days, I will add a blog post detailing more about how I plan to write this interface.
I welcome any design ideas you might have.

 

Posted on July 2, 2011 in Sympy

 

Strategy to set up the structure of the new Matrix module

Here is my strategy for setting up the structure of the new Matrix module.

1. [New code] Add code that I have written to SparseMatrix.

2. [Removing code duplication] Identify which functions can be used by both Matrix and SparseMatrix and put them in a common superclass, call it _Matrix for now. _Matrix will derive from object. Remove those functions from Matrix and SparseMatrix (if they’re there).

3. [SparseMatrix and Matrix derive from a common superclass _Matrix] Make SparseMatrix derive from _Matrix rather than Matrix. This will probably make a lot of tests fail. Fix those tests. If a test was written for a SparseMatrix object using a dense algorithm, then that test is not good and should be removed; the user should use the dense matrix for dense algorithms. If a test is for functionality that the dense matrix shares, then that function can be put into the super _Matrix class. The _Matrix class can be thought of as a collection of functions that do not depend on the representation of the matrix. It can also define some interfaces that a subclass should have.

4. [Structuring] Put the three classes in separate files, and also split the test file. Change imports accordingly.

5. [Renaming] Rename Matrix to DenseMatrix, _Matrix to Matrix, and SparseMatrix to DOKMatrix. This might break tests all over sympy, but the fixes are most likely trivial. Fix those tests. Because of the renaming, the Matrix(…) constructor won’t work. A small stopgap for now would be to write a __new__ method which diverts construction to the DenseMatrix constructor. (This will be changed later.)

6. [More tests] Now that SparseMatrix is a proper, working class in its own right, write tests for old and new functionality.

At this point, we have DOKMatrix(Matrix), DenseMatrix(Matrix), and Matrix(object) in three separate files, with tests written in separate files for each. Matrix(…) works and returns a DenseMatrix object; isinstance(A, Matrix) also works. Only after this is complete will I add another file, lilmatrix.py, containing the LILMatrix class deriving from Matrix.

7. [LILMatrix] Add LILMatrix code that I have written. Write tests for it.

8. [Misc] Write miscellaneous functions and ensure all operations can be performed on both sparse matrices, even if the algorithm has not been written for that representation.

(Note: One major example is that LILMatrix does not support an efficient matrix multiplication algorithm. If a user computes LILMatrix * LILMatrix, both matrices will be converted to DOKMatrix, multiplied, and the product returned as a DOKMatrix; that is, LILMatrix * LILMatrix -> DOKMatrix. I don’t think that is a problem: even if the user assumes the product is a LILMatrix and carries out a LILMatrix operation on it, if DOKMatrix does not have that operation, the matrix will be converted back to LILMatrix and the operation performed. In short, matrix interconversions will be implicit. Of course, the user will also be given explicit conversion functions like .to_dokmatrix.)

At this point, all three matrix representations will have been implemented, along with tests.

9. [Domainifying] Add the `domain` kwarg to relevant constructors and functions. Write some more tests to check matrix functionality over some domains from polys like QQ, FF(q), etc. Write a random matrix generator over such domains. A small experiment I did indicates that this is reasonably easy, but it surfaces some bugs; fixing them depends on changing Polys code and for that reason might be slow.

10. [Putting it all together] Write the Matrix constructor, which will take into account 1. the domain specified and 2. the sparsity of the data passed to it; see the sketch after this list. If the domain is not explicitly specified, I will use construct_domain to set it. The user can also state explicitly that the matrix should have no domain, i.e. domain=object. If the given data is sparse enough, it will be passed to one of the sparse matrices.

11+. Uncharted.
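
To make step 10 concrete, here is the dispatch I have in mind, as a sketch (hypothetical: make_matrix, the 0.1 threshold and the class names are placeholders; construct_domain is the existing polys helper that picks the smallest domain containing the entries):

from sympy.polys.constructor import construct_domain

def make_matrix(rows, cols, entries, domain=None):
    if domain is None:
        # returns the domain and the entries converted into it
        domain, entries = construct_domain(entries)
    nnz = sum(1 for e in entries if e != 0)
    if nnz < 0.1 * rows * cols:           # "sparse enough"
        return LILMatrix(rows, cols, entries, domain=domain)
    return DenseMatrix(rows, cols, entries, domain=domain)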

Many of these steps can be carried out in parallel, but those that cannot rely on the completion of the previous steps. I will need the community’s help here to review each step and tell me when, in their view, a step is ‘completed’, so that I can confidently move on to the next one. The lack of this has left me slightly confused about what I should work on next.

Things have also been a little slow lately because of the above problem and because of my lack of experience with git. I have never contributed to open source before and have no expertise in handling large collections of commits. I’m learning fast, though, with Vinzent’s help.

I hope to get much of this done by the mid-term evaluation, but I think (I’m not sure, though) steps 1–10 will take some 10–15 days beyond July 15th.

Suggestions are most welcome.

 

 

Posted on June 29, 2011 in Sympy

 

Week 5

For the last few days, I have been copying work from my working branches into a single branch to make a pull request. I encountered a few bugs, one of which was due to the permissiveness of Python’s truth testing: I used `if j` somewhere to check that j is not None, but j could also take integer values, and j == 0 made the check evaluate to False. Other than that, I’ve not done much work this week besides figuring out how to manage a large number of commits in git.
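
The bug in miniature (a toy reconstruction, not the original code):

j = 0
if j:              # wrong: 0 is falsy, so a valid index 0 is skipped
    print("j is set")
if j is not None:  # right: only None is treated as unset
    print("j is set")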

Once I add all of the new code to SparseMatrix, and before adding LILMatrix, I will have to create a few abstract classes. My plan is to make Matrix the superclass and have DenseMatrix (currently Matrix), DOKMatrix (currently SparseMatrix), and LILMatrix derive from it. The Matrix class will contain non-algorithmic utility code which doesn’t depend on the representation of the matrix. This is also so that one can use isinstance(A, Matrix) to match any of the matrices.
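
In outline, the hierarchy would look like this (a sketch with the bodies elided):

class Matrix(object):          # representation-independent utility code
    pass

class DenseMatrix(Matrix):     # what is currently called Matrix
    pass

class DOKMatrix(Matrix):       # what is currently called SparseMatrix
    pass

class LILMatrix(Matrix):
    pass

# isinstance(A, Matrix) is now True for any of the three representations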

For backward compatibility, the Matrix(…) constructor will keep working as it does now. In my view, it should make one pass over the data to see whether it is sparse enough, and then delegate construction to DenseMatrix or one of the sparse matrices. So all matrices now in use will be called DenseMatrix instead of Matrix, but that is hardly a change, since isinstance(A, Matrix) will still work. We also need this preprocessing by the Matrix constructor because of the domain of the matrix: if the user does not explicitly specify a domain, the Matrix constructor will run a modified construct_domain to determine which domain the elements belong to.

There is also Matrix.domain, which needs some work, but I think I will only be able to do that properly once the class hierarchy is set.

LILMatrix currently employs only partial pivoting in Gaussian elimination and RREF. Vinzent is not too happy about that, as partial pivoting does not take the sparsity of the matrix into account, and for particular matrices the sparse matrix might fill in completely and become dense. I’m searching for pivoting strategies that preserve sparsity and can also work for symbolic matrices.

This paper [1] deals with sparse symbolic structure prediction of the LU factors of a matrix, but only under partial pivoting.

This paper [2] contains some literature on complete pivoting, i.e. using column ordering to minimize fill-in of the LU factors. I will search for a more explicit algorithm implementing Gaussian elimination with complete pivoting.

My thoughts on this: if a user wants to solve Ax = b for a non-singular matrix A, he should use the Cholesky decomposition methods of DOKMatrix. For RREF and nullspaces of a singular matrix, he should use LILMatrix’s Gaussian elimination methods with partial pivoting. For fast multiplication, he should use DOKMatrix. LILMatrix currently has no multiplication algorithm and probably never will, since column operations on a LILMatrix are not efficient at all; it will, however, divert multiplication to DOKMatrix.
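
A sketch of that diversion (hypothetical; to_dokmatrix is the explicit conversion function mentioned earlier):

class LILMatrixSketch(object):
    def __mul__(self, other):
        # LIL has no efficient column access, so multiply as DOKMatrix;
        # the product therefore comes back as a DOKMatrix
        A = self.to_dokmatrix()
        B = other.to_dokmatrix() if hasattr(other, 'to_dokmatrix') else other
        return A * B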

I will write another blog in 1-2 days detailing more about my design ideas for the matrix module.

[1] gauss.cs.ucsb.edu/publication/GrigoriGilbertCosnardStructHall.pdf

[2] http://www.cs.uwaterloo.ca/research/tr/1984/CS-84-43.pdf

 

Posted on June 28, 2011 in Sympy

 

The Frac class

Polys work marvelously. They are speedy, cover a vast range of polynomials, and can take functions like cos(x) and 1/x as generators, so Polys extend easily to domains which are not strictly polynomial in the mathematical sense. Sympy is sympy in large part thanks to polys. But it is not complete.

Due to the vast powers the Expr/Poly classes provide, large expressions are easy to form and common in calculations, resulting in expression blowup. Factorizing a matrix with simple elements like x, x+1, x**2 in it gives factor matrices with very large expressions, all of which simplify to small, simple expressions. It is evident that sympy is lacking somewhere.

What it lacks is expressions under division. The polys are smart enough to handle addition, subtraction and multiplication, but for division they don’t use any of their computational power. Thus the four fundamental operations are not complete in the true sense.

Many algorithms, especially in linear algebra, assume that the elements they operate on belong to a field, that is, that division is possible. Sympy Exprs will perform brilliantly if division is made clean.

Look here.

In [13]: a
Out[13]: x

In [14]: a=(a+1)**-1

In [15]: a
Out[15]:
  1
─────
x + 1

In [16]: a=(a+1)**-1

In [17]: a
Out[17]:
    1
─────────
      1
1 + ─────
    x + 1

In [18]: a=(a+1)**-1

In [19]: a
Out[19]:
      1
─────────────
        1
1 + ─────────
          1
    1 + ─────
        x + 1

In [20]: a=(a+1)**-1

In [21]: a
Out[21]:
        1
─────────────────
          1
1 + ─────────────
            1
    1 + ─────────
              1
        1 + ─────
            x + 1

In [22]: a=(a+1)**-1

In [23]: a
Out[23]:
          1
─────────────────────
            1
1 + ─────────────────
              1
    1 + ─────────────
                1
        1 + ─────────
                  1
            1 + ─────
                x + 1

In [24]: a.simplify()
Out[24]:
3⋅x + 5
───────
5⋅x + 8

The big expression in Out[23] simplified to the simplest rational function. Why wasn’t it simplified automatically?

In my opinion, a should have been simplified automatically at the very first opportunity, in Out[17]. Out[17] should be

x + 1
─────
x + 2

This would be easy with a Frac class.
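
For concreteness, here is a minimal sketch of the kind of Frac class I have in mind (hypothetical, not existing sympy API): the numerator and denominator are kept as Polys, and their GCD is cancelled on every construction, so results stay in lowest terms automatically.

from sympy import Poly, symbols

class Frac(object):
    def __init__(self, num, den):
        g = num.gcd(den)
        self.num = num.quo(g)   # exact division by the common factor
        self.den = den.quo(g)

    def __add__(self, other):
        return Frac(self.num*other.den + other.num*self.den,
                    self.den*other.den)

    def inv(self):
        return Frac(self.den, self.num)

    def __repr__(self):
        return "(%s)/(%s)" % (self.num.as_expr(), self.den.as_expr())

x = symbols('x')
one = Frac(Poly(1, x), Poly(1, x))
a = Frac(Poly(x, x), Poly(1, x))
a = (a + one).inv()   # 1/(x + 1)
a = (a + one).inv()   # cancels immediately to (x + 1)/(x + 2)
print(a)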

The reason I’m pitching so hard for a Frac class is that, once I learned that sympy polys can take cos(x) and 1/x as gens, I realized that if we implement the Frac class, almost all symbolic needs are covered. The four fundamental operators, +, -, *, /, will be implemented in the true sense in sympy. Of course, things like sqrt will still not be supported by the Frac class directly, but it is common to see things like

(sqrt(x) + x)/(x**(3/2) + 1)

which can be treated as a Frac with generator x**(1/2).

Sympy will be able to operate on common expressions like

 4⋅sin(x) + 5⋅cos(x) + 4
─────────────────────────
11⋅sin(x) + 2⋅cos(x) + 12

Currently, if the above expression is a, then

In [49]: (a+1)**-1
Out[49]:
              1
─────────────────────────────
 4⋅sin(x) + 5⋅cos(x) + 4
───────────────────────── + 1
11⋅sin(x) + 2⋅cos(x) + 12

which is just sad.

Hence, IMHO, the Frac class would be a great step forward for sympy, and a great asset to it.

 

Posted on June 23, 2011 in Sympy

 

A review of phase 1

Phase 1 of my project involved writing sparse matrix algorithms. For that purpose, I have written two classes, DOKMatrix and LILMatrix. DOKMatrix is good for addition, subtraction, multiplication, and easy, fast editing of matrices. It currently has sparse Cholesky and LDL decomposition algorithms implemented, plus a fast matrix multiplication algorithm. But these decompositions only work for symmetric matrices, so I was multiplying by A.T and solving the equation A.T * A * x = A.T * B. There is an overhead of two matrix multiplications here. Further, this method can only give the absolute value of the determinant of a general matrix.
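
In outline, the workaround looks like this (shown with sympy’s dense Matrix for concreteness; the project applies the same idea with DOKMatrix’s sparse solver):

from sympy import Matrix

A = Matrix([[1, 2], [3, 4], [5, 6]])
b = Matrix([1, 1, 1])
# A.T*A is symmetric (and positive definite when A has full column rank),
# so an LDL-based solve applies: this solves A.T * A * x = A.T * b.
x = (A.T * A).LDLsolve(A.T * b)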

Enter LIL. LIL is a list of lists: a LILMatrix has N lists which store the N rows of the matrix, and each row is a list storing only the non-zero elements of that row as (j, value) pairs. Row operations in LILMatrix are very fast and intuitive. This enabled me to write sparsified Gaussian elimination and reduced row echelon form of a matrix. This completes the bare essentials of sparse matrix algorithms!
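
A sketch of that layout (illustrative only, not the actual class):

class LILSketch(object):
    def __init__(self, n):
        self.rows = [[] for _ in range(n)]   # row i -> list of (j, value)

    def row_add(self, i, k, factor):
        # rows[i] += factor * rows[k], merging the two sparse pair lists
        acc = dict(self.rows[i])
        for j, v in self.rows[k]:
            acc[j] = acc.get(j, 0) + factor * v
        self.rows[i] = sorted((j, v) for j, v in acc.items() if v != 0)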

Some benchmarks:

LDL decomposition on DOKMatrix

In [5]: A = randInvDOKMatrix(10,2); A = A.T * A
# creates a banded 10*10 matrix, with all diagonals up to the 2nd full.

In [6]: %timeit A._LDL_sparse() # On python floats
1000 loops, best of 3: 894 us per loop

In [7]: A.applyfunc(S)

In [8]: %timeit A._LDL_sparse() # On sympy Integers and Rationals.
100 loops, best of 3: 2.34 ms per loop

In [9]: A.sparsity()
Out[9]: 0.42
In [11]: A = randInvDOKMatrix(100, 10); A = A.T * A 
# For a 100 * 100 Matrix, with a band of 2*10 full diagonals

In [12]: %timeit A._LDL_sparse()
1 loops, best of 3: 221 ms per loop

In [13]: A.applyfunc(S)

In [14]: %timeit A._LDL_sparse()
1 loops, best of 3: 712 ms per loop

In [15]: A.sparsity()
Out[15]: 0.3316

Reduction to Upper Triangular Matrix using Gaussian Elimination on LILMatrix

In [20]: %timeit A.gauss_col()
1000 loops, best of 3: 280 us per loop

In [21]: A = randInvLILMatrix(10,2)

In [22]: %timeit A.gauss_col()
1000 loops, best of 3: 259 us per loop

In [23]: A.applyfunc(S)

In [24]: %timeit A.gauss_col()
1000 loops, best of 3: 744 us per loop

In [25]: A.sparsity()
Out[25]: 0.26

In [26]: A = randInvLILMatrix(100,10)

In [27]: %timeit A.gauss_col()
10 loops, best of 3: 77.2 ms per loop

In [28]: A.applyfunc(S)

In [29]: %timeit A.gauss_col()
1 loops, best of 3: 241 ms per loop

In [30]: A.sparsity()
Out[30]: 0.1691

Finding inverse of a LILMatrix using RREF

In [8]: A = randInvLILMatrix(10,2)

In [9]: %timeit A.inv_rref()
1000 loops, best of 3: 1.75 ms per loop

In [10]: A.applyfunc(S)

In [11]: %timeit A.inv_rref()
100 loops, best of 3: 9.04 ms per loop

In [12]: A.sparsity()
Out[12]: 0.26

In [13]: A = randInvLILMatrix(100,10)

In [14]: %timeit A.inv_rref()
1 loops, best of 3: 851 ms per loop

In [15]: A.applyfunc(S)

In [16]: %timeit A.inv_rref()
1 loops, best of 3: 10.6 s per loop

In [17]: A.sparsity()
Out[17]: 0.1704

Dense Matrix speed (for comparison)

In [18]: B = A.toMatrix()

In [19]: %time C = B.inv()
CPU times: user 175.02 s, sys: 3.73 s, total: 178.75 s
Wall time: 186.35 s

Solving Ax = B by RREF on an augmented matrix.

In [5]: A = randLILMatrix(10,11, sparsity=0.6)

In [6]: %timeit A.rref()
1000 loops, best of 3: 1.01 ms per loop

In [7]: A.applyfunc(S)

In [8]: %timeit A.rref()
100 loops, best of 3: 2.67 ms per loop

In [9]: A = randLILMatrix(100,101, sparsity=0.6)

In [10]: %timeit A.rref()
1 loops, best of 3: 999 ms per loop

In [11]: A = randLILMatrix(100,101, sparsity=0.3)

In [12]: %timeit A.rref()
1 loops, best of 3: 787 ms per loop

In [13]: A.applyfunc(S)

In [14]: %timeit A.rref()
1 loops, best of 3: 9.04 s per loop

In [18]: A.sparsity()
Out[18]: 0.304158415842

 

 

Posted on June 22, 2011 in Sympy

 
 