Obtaining and running Fast Downward

If you want to obtain, compile, or run the Fast Downward planner, then this page is for you.

Since we don't currently consider the code to be in releasable shape, there is no "official" code distribution of the planner available at the moment. You can still get access to the planner, but it's important that you are aware of some issues with the current codebase.

/!\ Important note: Make sure to completely read the #Caveats section below.

/!\ Note on running LAMA: As of 2011, LAMA has been merged back into the Fast Downward codebase. "LAMA 2011", the version of LAMA that participated in IPC 2011, is Fast Downward with a particular set of command-line arguments. Since LAMA 2011 greatly outperformed LAMA 2008 in the competition, we strongly encourage you to use the current Fast Downward code when evaluating your planner against LAMA. See the following note.

/!\ Note on running IPC 2011 planners based on Fast Downward: After familiarizing yourself with the basics on this page, please check out IpcPlanners.

/!\ License: While our repository does not currently contain licensing information, the planner is made available under the GNU General Public License (GPL). If you want to use the planner in any way that is not compatible with the GPL, you will have to get permission from us.

Getting access to the planner

In the following, we assume Ubuntu Linux or a similar environment. If you use another Linux distribution or another Unix-ish system, you should be able to adapt the following steps to your setting. If you do not use a Unix-style system, you will probably not be able to compile the planner.

Since October 2010, the main development of the code takes place in a Mercurial repository. Before then, the code was hosted in a Subversion repository, which is still available. However, the trunk of the Subversion repository has been frozen (set to read-only) and will no longer receive updates or bug fixes. See LegacySubversionRepository if you need to access the old repository for some reason.

If you haven't used Mercurial before, you may want to have a look at the excellent documentation available online (also available as a printed book published by O'Reilly Media). However, you don't really have to understand Mercurial to use the planner. For those experienced with Mercurial or similar distributed version control systems: we currently use a task-based workflow with a single pusher for the Fast Downward repository. If you want to contribute to Fast Downward, the recommended way is to set up a clone of the master repository (e.g. on Bitbucket) and provide a link to it in our issue tracker so that we can pull from it.

Dependencies

To obtain and build the planner, you need any version of Mercurial, a reasonably modern version of the GNU C++ compiler, and the usual build tools such as GNU make. For the validator, VAL, you will need flex and bison. To run the planner, you also need a reasonably modern version of Python 2.x (not Python 3.x!). These dependencies are typically satisfied by default on development machines. If not,

sudo apt-get install mercurial g++ make python flex bison

should be sufficient.

Additional dependencies on 64-bit systems

If you are using an x64 system, you will probably also need to run

sudo apt-get install g++-multilib

Obtaining the code

The command

hg clone http://hg.fast-downward.org DIRNAME

will create a clone of the Fast Downward master repository in directory DIRNAME. The directory is created if it does not yet exist. In the following, we assume that you used downward as the DIRNAME.
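
To later update an existing clone to the latest development version, the usual Mercurial commands are sufficient; for example:

cd downward
hg pull -u

This fetches new changesets from the master repository and updates your working copy to the newest revision.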

Compiling the planner

To build the planner for the first time, run:

cd downward
cd src
./build_all

If you later make changes to the planner, it is preferable to use make in the different subdirectories of downward/src instead of rerunning build_all since you'll usually only work on one part of the planner at a time.
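
For example, if you have only changed the search component, the following should be enough to rebuild it (a sketch assuming the usual layout, with the search code in the search subdirectory of downward/src):

cd downward/src/search
make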

Running the planner

For basic instructions on how to run the planner including examples, see PlannerUsage. The search component of the planner accepts a host of different options with widely differing behaviour. At the very least, you will want to choose a SearchEngine with one or more HeuristicSpecifications.
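
For illustration, a basic invocation of the search component might look like the following, which runs A* search with the landmark-cut heuristic on a previously translated and preprocessed task (the file output); see PlannerUsage for authoritative examples:

search/downward --search "astar(lmcut())" < output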

Caveats

Please be aware of the following issues when working with the planner, especially if you want to use it for conducting scientific experiments:

  1. You are working with the most recent version of the development branch of the code, which is usually not thoroughly tested. Things can break or degrade with every commit. Typically they don't, but if they do, don't be surprised. We are working on a proper release, but we are not there yet. For more information, see the list of outstanding issues for release 1.0 of the planner.

  2. There are known bugs, especially with the translator component. To find out more, check out our issue tracker. The planner has only really been tested with IPC domains, and even for those it does not work properly with all formulations of all domains. For more information, see the section on #Known_good_domains below.

  3. We have recently integrated various branches of the code, and this has led to a performance degradation in some cases. For example, we know that there are issues with the landmark-cut heuristic (see http://issues.fast-downward.org/issue69). Many other configurations have not yet been properly tested for such performance degradations. Hence, before you report any performance results for the planner, please check them for plausibility by comparing them to our published results, and contact us if anything looks suspicious.

  4. The search options are built with flexibility in mind, not ease of use. It is very easy to use option settings that look plausible, yet introduce significant inefficiencies. For example, an invocation like

    search/downward --search "lazy_greedy(ff(), preferred=ff())" < output

    looks plausible, yet is hugely inefficient since it will compute the FF heuristic twice per state. See the examples on the PlannerUsage page to see how to call the planner properly. If in doubt, ask.
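
    A more efficient variant defines the heuristic once and refers to it by name in the search specification, so the FF heuristic is only computed once per state. A sketch following the option syntax above (again, see PlannerUsage for authoritative examples):

    search/downward --heuristic "hff=ff()" --search "lazy_greedy(hff, preferred=hff)" < output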

Known good domains

There is a large collection of planning competition benchmarks in downward/benchmarks, which includes all IPC domains before 2008 (but not all formulations of all domains). As of this writing, the domain collection does not include the IPC 2008 benchmarks. The planner is somewhat sensitive to non-STRIPS language constructs and will choke on some valid PDDL inputs. Moreover, many of the heuristics do not support axioms or conditional effects. Even worse, sometimes the translator will introduce conditional effects even though the original PDDL input did not contain them. (We are working on this.)

We recommend that you use one of the following two "known working" suites of domains for your experiments, the All suite and the LM-cut suite (names of domains as in downward/benchmarks). The LM-cut suite is a subset of the All suite in terms of domains but not domain formulations, as it uses the formulations pathways-noneg and trucks-strips instead of pathways and trucks.

Scripts

The repository contains a set of scripts to automate experiments and generate reports, but we don't currently consider them to be in good enough shape to document for the general public. You may want to play around with the scripts in the downward/scripts and downward/new-scripts directories to see if they do anything useful for you. However, no support is provided for them at the moment. Most scripts support a --help option.
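
For example, the following should print a summary of the available options of the preprocessing script discussed below (assuming it is one of the scripts that accepts --help):

cd downward/scripts
./do_preprocess.py --help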

One script that is very useful if you want to compare various configurations of the search component of the planner is do_preprocess.py, which runs the translator and preprocessor on a suite of benchmarks. Here are four possible invocations:

cd downward
cd scripts
./do_preprocess.py grid:prob04.pddl
./do_preprocess.py pathways-noneg
./do_preprocess.py ALL
./do_preprocess.py LMCUT_DOMAINS

The first of these will run the translator and preprocessor only on problem file prob04.pddl of the grid domain. The second will run them on all problem files for the pathways-noneg domain. The third and fourth will run them on all tasks from the All suite and LM-cut suite, respectively (see section #Known_good_domains above).

In all cases, the result files will be stored in directory downward/results.

In some domains, you will need a lot of memory to translate and preprocess the largest problem files (especially psr-large and satellite). We recommend 4 GB to be on the safe side. If you have more memory and several cores, you can prepare several tasks in parallel with the option -j, which has the same meaning as in make. For example,

cd downward
cd scripts
./do_preprocess.py -j 8 ALL

will quickly preprocess all tasks in the All suite on a machine with 8 cores and 32 GB of RAM.