Re: A function to do incomplete LU decomposition with a drop tolerance?
*To*: mathgroup at smc.vnet.net
*Subject*: [mg123353] Re: A function to do incomplete LU decomposition with a drop tolerance?
*From*: Oliver Ruebenkoenig <ruebenko at wolfram.com>
*Date*: Fri, 2 Dec 2011 07:23:23 -0500 (EST)
*Delivered-to*: l-mathgroup@mail-archive0.wolfram.com
On Wed, 30 Nov 2011, Nasser M. Abbasi wrote:
> On 11/30/2011 5:51 AM, Oliver Ruebenkoenig wrote:
>>
>>
>> On Wed, 30 Nov 2011, Nasser M. Abbasi wrote:
>>
>>>
>>> Version 8.0.4
>>>
>>> I need to use ILU as a preconditioner for a conjugate gradient solver
>>> I am writing in Mathematica, but looking around, I am not able to
>>> find one. The only reference I found is here:
>>>
>>> http://reference.wolfram.com/mathematica/tutorial/SomeNotesOnInternalImplementation.html
>>>
>>> where it says the following:
>>>
>>> "For sparse arrays, LinearSolve uses UMFPACK multifrontal direct solver
>>> methods and with Method uses Krylov iterative methods preconditioned
>>> by an incomplete LU factorization"
>>>
>>> But how does a user access this ILU from Mathematica?
>>>
>>> I do have the A matrix (sparse).
>>
>
>
>> Here are some ideas:
>>
>> AbsoluteTiming[
>> solutionGMRES =
>> LinearSolve[stiffness, load,
>> Method -> {"Krylov", Method -> "GMRES", "Preconditioner" -> None,
>> Tolerance -> tol, MaxIterations -> 100}];]
>
> <SNIP more nice examples>
>
> Thanks Oliver for the examples, they are useful.
>
> I have two follow-up questions:
>
> Q1. Are these documented somewhere? Looking in the Documentation Center, I am
> not able to find more information. The only mention of "Krylov" is one line
> in the help for LinearSolve.
>
> http://reference.wolfram.com/mathematica/ref/LinearSolve.html
>
> For example, how does one find out what ILU0, ILU1, ILUT, ILUTP, etc. mean
> and learn more about the API? I am sure there is a document somewhere
> that goes into the details, but I am not finding it now.
>
>>
>> What you will need, however, is something different. If you want to go for
>> iterative solves, you'll need bandwidth reordering.
>>
>> Needs["GraphUtilities`"]
>> {r, c} = MinimumBandwidthOrdering[mat["PatternArray"]];
>>
>> MatrixPlot[ mat[[r, c]] ]
>>
>
> I am not sure I understand this last part. How does the above fit
> into the examples you showed before? How/when do I use it? When I tried:
>
> {row, col} = MinimumBandwidthOrdering[stiffnessMatrix["PatternArray"]];
> sol = LinearSolve[stiffnessMatrix[[row, col]], loadVector,
> Method -> {"Krylov", Method -> "GMRES", "Preconditioner" -> None,
> Tolerance -> 0.001, MaxIterations -> 100}];
>
> Then the solution did not look right, compared to just writing
>
> sol = LinearSolve[stiffnessMatrix, loadVector,
> Method -> {"Krylov", Method -> "GMRES", "Preconditioner" -> None,
> Tolerance -> 0.001, MaxIterations -> 100}];
Right, there are two options. One way is to reorder the solution once
more: sol[[ Ordering[r] ]] should do this. Note also that the right-hand
side needs the same row permutation as the matrix, i.e. loadVector[[row]];
without it the solution will indeed look wrong.
I am going to talk about the second option in a moment. This reordering is
important because iterative solvers can then potentially find a solution
more quickly and with less memory. How much it helps depends largely on
how your matrix was generated. For example, mesh-based methods (FVM, FEM)
assemble the stiffness matrix according to the incidences they get from
the meshing algorithm. Quite often those incidences are not optimal - they
produce sparse arrays whose nonzero entries are quite unstructured.
Bandwidth reordering, however, yields a thinner sparse array, which can be
beneficial for the iterative solver.
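A minimal sketch of the full round trip, assuming a symmetric problem where the row and column orderings can be applied as below; `stiffness` and `load` stand in for your own matrix and right-hand side, and the option values follow the earlier example:

```mathematica
Needs["GraphUtilities`"]

(* reorder rows and columns to reduce the bandwidth of the nonzero pattern *)
{r, c} = MinimumBandwidthOrdering[stiffness["PatternArray"]];

(* the right-hand side must receive the same row permutation as the matrix *)
permSol = LinearSolve[stiffness[[r, c]], load[[r]],
   Method -> {"Krylov", Method -> "GMRES", "Preconditioner" -> None,
     Tolerance -> 0.001, MaxIterations -> 100}];

(* Ordering[c] is the inverse permutation of c; this maps the solution
   back to the original unknown numbering *)
sol = permSol[[Ordering[c]]];
```

This sketch undoes the column permutation with `Ordering[c]`; for a symmetric reordering where r and c coincide this agrees with the `sol[[ Ordering[r] ]]` mentioned above.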
Now, the second option is the following. Once you have the incidences
from the mesh generation, you assemble a pattern SparseArray (no values),
e.g.
sa = SparseArray[{{1, 2}, {2, 3}, {3, 2}} -> _, {5, 5}]
This will cost some time to construct, but memory should be OK since no
values are stored. You give this matrix to the bandwidth reordering. From
that you get, so to speak, a mapping from the original incidences to the
optimal ones. Say you have n incidences; then the mapping is Range[n] -> r,
where r is the result of the bandwidth reordering. Now you replace the old
values in the original incidences with the new ones, and then use these
incidences for the matrix assembly. This will result in a thin
sparse matrix.
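A small sketch of that renumbering step, with a hypothetical incidence list; note that it uses Ordering[r] (the inverse permutation) to send an old index to its new position:

```mathematica
Needs["GraphUtilities`"]

(* hypothetical incidence pairs as a mesher might produce them *)
incidences = {{1, 2}, {2, 3}, {3, 2}, {4, 5}, {5, 1}};

(* pattern-only sparse array: positions are stored, but no values *)
sa = SparseArray[incidences -> _, {5, 5}];

{r, c} = MinimumBandwidthOrdering[sa];

(* old row index k moves to position Ordering[r][[k]] in the reordered matrix *)
newIncidences =
  Transpose[{Ordering[r][[incidences[[All, 1]]]],
    Ordering[c][[incidences[[All, 2]]]]}];

(* assemble the actual matrix from newIncidences; its bandwidth is now reduced *)
```

The assembly itself then proceeds exactly as before, just with the renumbered pairs.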
Depending on where your matrix comes from (whether or not you have control
over its creation), either option 1 or option 2 will be more useful.
Concerning the options for LinearSolve: these are a collection from
various talks - unfortunately I cannot find a link to them anymore.
Should I get hold of it again, I'll send a message. But you can also find
a lot here:
tutorial/LinearAlgebraInMathematicaOverview
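For reference, based on the preconditioner names mentioned in this thread (ILU0, ILUT, ILUTP), selecting one of them would look roughly like this; treat the exact option values as assumptions taken from the snipped examples, not from official documentation:

```mathematica
(* hypothetical usage of a built-in ILU preconditioner with the Krylov method *)
sol = LinearSolve[stiffness, load,
   Method -> {"Krylov", Method -> "BiCGSTAB",
     "Preconditioner" -> "ILU0",
     Tolerance -> 0.001, MaxIterations -> 100}];
```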
>
> Q2)
> But what I really was looking for is to do incomplete LU decomposition
> on my A matrix as a stand alone operation, as I wrote the preconditioned
> conjugate gradient solver in Mathematica, described here:
>
> http://en.wikipedia.org/wiki/Conjugate_gradient_method#The_preconditioned_conjugate_gradient_method
>
> And you see in the above algorithm, what is called the 'M' matrix, is the
> preconditioner matrix. This is what I wanted to generate in Mathematica.
>
> One choice for a preconditioner matrix is to use ILU on A, then save
> result in what is called M above, and then use it in the iterations
> as described above.
You could start with ref/LUDecomposition to get your code working, and
then implement your own.
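For example, a sketch that builds M from a complete LUDecomposition first, just to get the PCG loop running; the factor splitting carries over unchanged once incomplete factors are swapped in. The matrix a here is a small stand-in:

```mathematica
a = N[{{4, 1, 0}, {1, 3, 1}, {0, 1, 2}}];
{lu, perm, condNum} = LUDecomposition[a];
n = Length[a];

(* LUDecomposition packs L and U into one matrix: the strict lower
   triangle is L (unit diagonal implied), the upper triangle is U *)
l = lu SparseArray[{i_, j_} /; j < i -> 1, {n, n}] + IdentityMatrix[n];
u = lu SparseArray[{i_, j_} /; j >= i -> 1, {n, n}];

(* apply M^-1 to a vector via the two triangular solves; with the
   complete factors this is just A^-1 z, so PCG converges in one step *)
minv[z_] := LinearSolve[u, LinearSolve[l, z[[perm]]]];
```

Once that works, replacing l and u with incomplete factors leaves the rest of the CG iteration untouched.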
>
> I implemented the above conjugate gradient method myself, not using
> LinearSolve. (This is for learning, and it will be in a demo later.)
> Actually, I did not know one could use "GMRES" in Mathematica before now.
>
> So my point is: it would be nice if Mathematica had a direct API
> that lets one do this ILU decomposition on a matrix as a stand-alone
> function.
>
> I can try to implement ILU myself, the algorithm is described here
>
If you do that, I'd be interested to hear about it.
> http://www.cfd-online.com/Wiki/Incomplete_LU_factorization_-_ILU
>
> and in various Fortran codes on the net: http://www.netlib.org/linalg/
>
> But thought to ask first here before I do that.
>
Yes, this is always a good thing to do.
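In case it helps, here is a naive ILU(0) sketch in Mathematica following the textbook IKJ formulation: elimination updates are kept only where A already has a nonzero, and all other fill-in is dropped. It is written for clarity on a dense copy, assumes no pivoting is needed (nonzero diagonal), and is not meant to be fast:

```mathematica
(* ILU(0): incomplete LU that preserves the sparsity pattern of a.
   Returns the packed factor m: the strict lower triangle holds L
   (unit diagonal implied), the upper triangle holds U - the same
   packing that LUDecomposition uses. *)
ilu0[a_?MatrixQ] :=
 Module[{m = N[Normal[a]], p = Unitize[Normal[a]], n = Length[a]},
  Do[
   If[p[[i, k]] == 1,
    m[[i, k]] = m[[i, k]]/m[[k, k]];
    Do[
     If[p[[i, j]] == 1,  (* drop updates outside the pattern of a *)
      m[[i, j]] -= m[[i, k]] m[[k, j]]],
     {j, k + 1, n}]],
   {i, 2, n}, {k, 1, i - 1}];
  m]
```

For a fully dense a this reduces to an ordinary LU without pivoting; for a sparse a the factors are only approximate, which is exactly what makes M cheap to apply in PCG.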
Hope this helps,
Oliver
>>
>>>
>>> Also, I see still that the Incomplete Cholesky decomposition
>>> does not have a drop tolerance parameter. I hope in version 9
>>> it will.
>>
>> Hm, while I agree that this would be useful - there are bigger fish to
>> fry. This is a personal opinion, and I do not know whether this will be
>> in V9.
>>
>> Oliver
>>
>
> Ok, but just so you know, your market competitor in this area does have
> this option in its chol() implementation, and has had ilu() for many
> years now :)
>
> thank you,
>
> --Nasser
>