PETSc is a software package that provides lots of functionality for linear algebra, among other things. For example, it includes implementations of a variety of linear solvers, as well as various different sparse and dense matrix and vector formats. Of particular interest to deal.II is their ability to provide this functionality both on sequential and parallel (using MPI) computers.
deal.II has wrapper classes to the linear algebra parts of PETSc that provide almost the same interfaces as the built-in deal.II linear algebra classes. We use these interfaces for parallel computations based on MPI since the native deal.II linear algebra classes lack this ability. They are used, among other programs, in step-17, step-18 and step-40.
Note: The latest version of PETSc tested is 3.4.2. Major releases after this version may cause problems, so we recommend sticking to this version if at all possible.
PETSc usually requires you to set the environment variables PETSC_DIR and PETSC_ARCH: the former points to the path of your PETSc installation, the latter denotes the architecture for which PETSc was compiled (a string you can choose however you like; it is simply intended to distinguish between possibly several different PETSc installations). If these environment variables are set, then deal.II will pick them up during configuration and store them. It will then also recognize that PETSc shall be used, and enable the wrapper classes.
Alternatively, the -DPETSC_DIR=DIR and -DPETSC_ARCH=ARCH options for cmake can be used to override the values of PETSC_DIR and PETSC_ARCH, or to specify them if these environment variables are not set at all. If you do have a PETSc installation and have set the PETSC_DIR and PETSC_ARCH environment variables but do not wish deal.II to be configured for PETSc use, you should specify -DDEAL_II_WITH_PETSC=OFF as a flag during configuration.
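As a sketch, assuming PETSc is installed in /path/to/petsc-x-y-z with the architecture name x86_64, and that cmake is invoked from a build directory next to the deal.II source directory (all of these are placeholders you will need to adjust), the two approaches might look like this:

export PETSC_DIR=/path/to/petsc-x-y-z
export PETSC_ARCH=x86_64
cmake ../deal.II

or, passing the same information on the command line instead of via environment variables:

cmake -DPETSC_DIR=/path/to/petsc-x-y-z -DPETSC_ARCH=x86_64 ../deal.II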
Note: deal.II can be installed with both PETSc and Trilinos, and they do not usually get in each other's way. There are, however, occasions where this is not true, and this fundamentally comes from the fact that both of these packages are built from sub-packages that are developed by independent groups. Unfortunately, some of these sub-packages can be configured to be part of both PETSc and Trilinos, and if you try to use deal.II with versions of PETSc and Trilinos that both contain a particular sub-package, little good will come of it in general. In particular, we have experienced this with the ML package that can serve as an algebraic multigrid method for both PETSc and Trilinos. If both of these packages are configured to use ML, then difficult-to-understand error messages at compile or link time are almost inevitable, and there is little the deal.II build system can do to prevent this. Thus, don't try to do that!
Installing PETSc correctly can be a bit of a challenge. To start, take a look at the PETSc installation instructions. We have found that the following steps generally appear to work where we simply unpack and build PETSc in its final location (i.e., we do not first build and then install it into a separate directory):
tar xvzf petsc-x-y-z.tar.gz
cd petsc-x-y-z
export PETSC_DIR=`pwd`
export PETSC_ARCH=x86_64   # or any other identifying text for your machine
./config/configure.py --with-shared=1 --with-x=0 --with-mpi=1 --download-hypre=1
make
This automatically builds PETSc with both MPI and the algebraic
multigrid preconditioner package Hypre (which we use in step-40).
Now let PETSc check its own sanity:
make test
will self-check the serial (and MPI) implementation of PETSc.
You may want to put the export commands above into your ~/.bashrc or ~/.cshrc files, with the first one replaced by something of the kind
export PETSC_DIR=/path/to/petsc-x-y-z
By default, PETSc is compiled in "debug mode". You can switch this to "optimized mode" by adding the command line parameter
--with-debugging=0
to the call of ./config/configure.py above. In some cases,
this has made linear solvers run up to 30% faster. As with choosing
between deal.II's debug and optimized modes, you
should only use optimized PETSc builds once you have tested that your
program runs well in debug mode.
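As an illustration, with the same placeholder architecture name as above, the optimized configure call would then read:

./config/configure.py --with-shared=1 --with-x=0 --with-mpi=1 --download-hypre=1 --with-debugging=0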