Revision 3 as of 2010-09-24 19:45:47


Back to the HomePage.

Obtaining and running Fast Downward

This page is intended for people who are not core developers of the planner but want to use it in their own research, for example to run experiments with it or to base their own developments on it.

Since we don't currently consider the code to be in releasable shape, there is no "official" code distribution of the planner available at the moment. We can still give you access to the planner, but it's important that you be aware of some issues with the current codebase.

/!\ Note on running LAMA: As of this writing, most parts of LAMA have been (re-)integrated into the Fast Downward codebase, but some important features and optimizations are still missing. If you want access to the Fast Downward code in order to run experiments that compare against the LAMA planner, we strongly recommend downloading the older, but more stable LAMA distribution from Silvia Richter's homepage instead.

Getting access to the planner

In the following I will assume Ubuntu Linux or a similar system. If you use another Linux distribution or another Unix-like system, you should be able to adapt the following steps to your setting. If you do not use a Unix-style system, you will probably not be able to compile the planner.

The code is currently hosted in a Subversion repository. (A switch to Mercurial is planned for the near future.) If you haven't used Subversion before, you may want to have a look at the excellent documentation available online. However, you don't really have to understand Subversion to use the planner, unless you also want to use our repository for your development efforts (see below).

Dependencies

To obtain and build the planner, you will need a non-ancient version of Subversion, a reasonably modern version of the GNU C++ compiler and the usual build tools such as GNU make. To run the planner, you will also need a reasonably modern version of Python 2.x (not Python 3.x!). These dependencies are typically satisfied by default on systems used to develop code. If not,

sudo apt-get install subversion g++ make python

should be sufficient.

Getting the code

To get access to the planner, send me an email <helmert@informatik.uni-freiburg.de>. I will then add you to the list of people with access to the Subversion repository that hosts the code, and you will receive an automated email with instructions on how to set up your system to allow checking out the code.

The email will tell you to use the command

svn checkout svn+ssh://downward DIRNAME

for the actual check-out, which is not a good command to use since it would check out all the (many) branches of the code, with multiple copies of the benchmark suite etc. Instead, I recommend you use the command

svn checkout svn+ssh://downward/trunk DIRNAME

to only check out the trunk version of the code (i.e., the main line of current development).

Compiling the planner

The following assumes that you have checked out the planner trunk into directory WORKING-COPY. To first build the planner, run:

{{{#!highlight bash
cd WORKING-COPY
cd downward
./build_all
}}}

If you later make changes to the planner, it is preferable to use make in the different subdirectories of downward instead of rerunning build_all since you'll usually only work on one part of the planner at a time.
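For example, if you have only changed the search component, an incremental rebuild might look like this (a sketch; `search` is the subdirectory of downward that contains the search component):

{{{#!highlight bash
# Rebuild only the search component after local changes,
# instead of rerunning build_all for everything.
cd WORKING-COPY/downward/search
make
}}}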

Running the planner

For basic instructions on how to run the planner, see PlannerUsage. The search component of the planner accepts a host of different options with widely differing behaviour. At the very least, you will want to choose a SearchEngine with one or more HeuristicSpecifications.
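For example, an invocation of the search component with an A* search engine and the landmark-cut heuristic might look roughly like this (the option names are illustrative assumptions; check PlannerUsage for the exact syntax in your revision):

{{{#!highlight bash
# Hypothetical example: A* search with the landmark-cut heuristic,
# reading the preprocessed task from the file "output".
search/downward --search "astar(lmcut())" < output
}}}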

Caveats

Please be aware of the following issues when working with the planner, especially if you want to use it for conducting scientific experiments:

  1. You are working with the most recent version of the development branch of the code, which is of course a moving target. Things can break or degrade with every commit. Typically they don't, but if they do, don't be surprised. We are working on a proper release, but we are not there yet. (For details, see the list of outstanding issues for release 1.0 of the planner.)

  2. There are known bugs, especially with the translator component. To find out more, check out our issue tracker. The planner has only really been tested with IPC domains, and even for those it does not work properly with all formulations of all domains. See the list of "known good domains" below.

  3. We have recently integrated various branches of the code, and this has led to a performance degradation in some cases. For example, we know that there are issues with the landmark-cut heuristic (see http://issues.fast-downward.org/issue69). Other cases have not yet been properly tested for such performance degradations. Hence, before you report any performance results for the planner, please check them for plausibility by comparing them to our published results, and contact us if anything looks suspicious.

  4. Action costs as introduced for IPC-2008 are accepted by the planner, but currently most heuristics completely ignore them, which is often not what you want. For now, we recommend not conducting experiments with the IPC-2008 domains. (If you really want to, please get in touch so that we can tell you how your desired planner configuration handles action costs at the moment.)

  5. The search options are built with flexibility in mind, not ease of use. It is very easy to use option settings that look plausible, yet introduce significant inefficiencies. For example, an invocation like

    search/downward --search "lazy_greedy(ff(), preferred=(ff()))" < output

    looks plausible, yet is hugely inefficient since it will compute the FF heuristic twice per state. See the examples on the PlannerUsage page to see how to call the planner properly. If in doubt, ask.
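One way to avoid the duplicate computation, assuming your revision supports predefining a heuristic with a --heuristic option (an assumption on our part; see PlannerUsage for the exact syntax), is to evaluate the heuristic once and refer to it by name:

{{{#!highlight bash
# Define the FF heuristic once and reuse it for both the search
# and the preferred operators (option syntax is an assumption;
# consult PlannerUsage before relying on it).
search/downward --heuristic "hff=ff()" --search "lazy_greedy(hff, preferred=(hff))" < output
}}}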

Known good domains

There is a large collection of planning competition benchmarks in trunk/benchmarks, which includes all IPC domains before 2008 (but not all formulations of all domains). As of this writing, the domain collection does not include the IPC 2008 benchmarks. The planner is somewhat sensitive to non-STRIPS language constructs and will choke on some valid PDDL inputs. Moreover, many of the heuristics do not support axioms or conditional effects. Even worse, sometimes the translator will introduce conditional effects even though the original PDDL input did not contain them. (We are working on this.)

We recommend that you use the following "known working" sets of domains for your experiments (names of domains given as they exist in directory trunk/benchmarks):

(The second list is a subset of the first in terms of domains but not domain formulations, as it uses the formulations pathways-noneg and trucks-strips instead of pathways and trucks.)

Scripts

The repository contains a set of scripts to automate experiments and generate reports, but we don't currently consider them to be in a shape that merits proper documentation. You may still want to play around with the scripts in the scripts and new-scripts directories to see if they do anything useful for you; however, no support is provided for them at the moment. Most scripts support a --help option.

One script that is very useful if you want to compare various configurations of the search component of the planner is do_preprocess.py, which runs the translator and preprocessor on a suite of benchmarks. Here are four ways of running it:

{{{#!highlight bash
cd scripts
./do_preprocess.py grid:prob04.pddl
./do_preprocess.py pathways-noneg
./do_preprocess.py ALL
./do_preprocess.py LMCUT_DOMAINS
}}}

The first of these will run the translator and preprocessor only on problem file prob04.pddl of the grid domain. The second will run them on all problem files of the pathways-noneg domain. The third and fourth will run them on all tasks from the ALL suite and the LMCUT_DOMAINS suite, respectively (see the section on known good domains above).

In all cases, the result files will be stored in a directory called results that is a sibling directory of benchmarks, downward and scripts.

In some domains, you will need a lot of memory to translate and preprocess the largest problem files (especially psr-large and satellite). We recommend 4 GB. If you have more memory and several cores, you can prepare several tasks in parallel with the option -j, which works the same way as in make:

{{{#!highlight bash
cd scripts
./do_preprocess.py -j 8 ALL
}}}

This will quickly preprocess all tasks in the ALL suite on a machine with 8 cores and 32 GB of RAM.
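For illustration, -j parallelism follows the same pattern as in make: at most N tasks are in flight at once. Here is a minimal, self-contained Python sketch of that pattern (illustration only; the task names are made up, and this is not the actual do_preprocess.py code):

{{{#!highlight python
# Sketch of make-style "-j N" parallelism (not the actual script).
from multiprocessing import Pool

def preprocess(task):
    # Stand-in for running the translator + preprocessor on one task.
    return "done: %s" % task

if __name__ == "__main__":
    tasks = ["grid:prob01.pddl", "grid:prob02.pddl", "grid:prob03.pddl"]
    pool = Pool(4)  # at most four tasks run concurrently, like -j 4
    results = pool.map(preprocess, tasks)
    pool.close()
    pool.join()
    print(results)
}}}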

Using our repository to develop your own branch of the code

TODO