/!\ The scripts have been rewritten as a python package. You can find the code at https://bitbucket.org/jendrikseipp/lab. The documentation is available at http://readthedocs.org/docs/lab. /!\
= Experiment scripts =

In the directory "new-scripts" you will find some scripts that facilitate conducting experiments.
An experiment is conducted in three stages: Generation of experiments, fetching of results and production of reports. Each stage has its own generic main module: `experiments.py`, `resultfetcher.py` and `reports.py`. These modules provide useful classes and methods and can be imported by scripts that actually define concrete actions. For the Fast Downward planning system the example scripts that use these modules are `downward_experiments.py`, `downward-resultfetcher.py` and `downward-reports.py`. Together they can be used to conduct Fast Downward experiments. '''Passing `-h` on the command line gives you an overview of each script's commands. Be sure to check out the available options as they may be more up to date than the instructions here.'''

An even easier, but more restricted way is to use `experiment.sh` that is a wrapper around the above-mentioned python scripts. In the following we will give instructions for both options.

'''Note for gkigrid users:''' If you submit an experiment to the gkigrid from a shared account, please let the experiment name begin with your initials. In the examples below e.g. Jane Doe would write `downward_experiments.py --path jd-expname --suite ALL --configs lama gkigrid`

= Running experiments with `experiment.sh` =
For your convenience the shell script `experiment.sh` has been created. It facilitates conducting experiments in the following way: you write your own little shell script, e.g. `lama.sh`, set up some options like the problem suite and the desired configuration and invoke `experiment.sh` from your script. Then you only have to call `lama.sh 1`, `lama.sh 2` etc. to iterate over the experiment steps. Let's have a look at the annotated `lama.sh`:

{{{#!highlight bash
#! /bin/bash

# Use any config name from downward_configs.py or from your own module (mymodule.py:myconfig)
CONFIGS=lama

# Select a cluster queue (Only applicable for experiments run on University of Freiburg's gkigrid cluster)
QUEUE=athlon_core.q

# Select a problem suite from downward_suites.py
SUITE=IPC08_SAT_STRIPS

# Available options are local and gkigrid
EXPTYPE=gkigrid

# Lets experiment.sh prepare and run the actual experiment
source experiment.sh
}}}

== Running a local experiment directly with the Python scripts ==

{{{#!highlight bash
# Preprocessing
./downward_experiments.py --path expname --suite ALL --preprocess local
./expname-p/run
./resultfetcher.py expname-p

# Build and run the experiment
./downward_experiments.py --path expname --configs downward_configs.py:lama --suite ALL local
./expname/run

# Make reports
./downward-resultfetcher.py expname
./downward-reports.py expname-eval
}}}

Below you find a detailed description of the above steps.

== Generate an experiment ==

{{{
./downward_experiments.py --path expname --configs downward_configs.py:lama --suite ALL local
}}}

Generates a planning experiment for the suite ALL in the directory "expname". The planner will use the configuration string `lama` found in the file `downward_configs.py`. When you invoke the script you can specify on the command line whether you want the experiment to be run locally (`local`) or on the grid (`gkigrid`). You can also directly set the timeout, memory limit, number of processes, etc.

Before you can execute that command, however, you have to run it once with the `--preprocess` parameter. This will generate a preprocessing experiment. After you have run this preprocessing experiment and fetched the results with `./resultfetcher.py`, you can generate the search experiment. If you want to do the preprocessing and search in a single experiment, you can pass the `--complete` parameter.

Local experiments can be started by running

{{{
./expname/run
}}}

Gkigrid experiments are submitted to the queue by running

{{{
qsub expname/expname.q
}}}

== Fetch and parse results ==

{{{
./downward-resultfetcher.py expname
}}}

Traverses the directory tree under "expname" and parses each run's experiment files. The results are written into a dictionary-like file at "expname-eval/properties". This file maps each run's id to its properties (search_time, problem_file, etc.). If you additionally want each run's files to be copied into a new directory structure organized by the ids, you can pass the "--copy-all" option.
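To make the structure of this file concrete, here is a sketch of what the stored mapping conceptually looks like. The run id format and the concrete values are hypothetical; only the attribute names are taken from the description above.

```python
# Illustrative sketch of the mapping stored in "expname-eval/properties":
# run ids map to dictionaries of that run's attributes. The id format and
# the values below are made up for this example.
properties = {
    'lama-gripper-prob01': {
        'config': 'lama',
        'problem_file': 'prob01.pddl',
        'search_time': 0.42,
    },
}

run = properties['lama-gripper-prob01']
print(run['search_time'])  # 0.42
```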

=== Partial fetching ===
If you don't need the results of the preprocessing or search phase, you can pass the options `--no-preprocess` and `--no-search` to `downward-resultfetcher.py`. This obviously speeds up parsing, too.

=== Combine results of multiple experiments ===
It is possible to combine the results of multiple experiments by running the above command on all the experiment directories while specifying the target evaluation directory. If you don't specify the evaluation directory it defaults to "expname-eval". An example would be

{{{
./downward-resultfetcher.py exp1 --dest my-eval-dir
./downward-resultfetcher.py exp2 --dest my-eval-dir
}}}

== Make reports ==

{{{
./downward-reports.py expname-eval
}}}

Reads all properties files found under "expname-eval" and generates a report based on the given commandline parameters. By default this report contains absolute numbers, writes an HTML file and analyzes all numeric attributes found in the dataset. You can however choose only a subset of attributes and filter by configurations or suites, too. A detailed description of the available parameters can be obtained by invoking `downward-reports.py -h`.

=== Making a problem suite from results ===
The `downward-reports.py` script also gives you the possibility to create a new problem suite based on the results of an experiment. To select a subset of problems, you can specify filters for the set of runs. E.g. to get a list of problems for which more than 1000 states were expanded in the `lama` config, you could issue the following command:

{{{#!highlight bash
./downward-reports.py expname-eval --filter config:eq:lama expanded:gt:1000 --report suite
}}}

(Remember to pass the correct name for the config; it might not be just its nickname.)
As you can see, the format of a filter is '''<attribute_name>:<operator from the [[http://docs.python.org/library/operator.html|operator]] module>:<value>'''. If the expression '''operator(run[attribute], value)''' evaluates to True, the run's planning problem is '''not''' removed from the result list.
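The filter mechanics can be sketched in a few lines of Python. This is an illustrative reimplementation built on the `operator` module, not the actual code of `downward-reports.py`, and its type handling is an assumption.

```python
import operator

def parse_filter(spec):
    """Turn a string like 'expanded:gt:1000' into a predicate over a run.

    Illustrative sketch only; the real downward-reports.py may differ in
    details such as how values are cast before comparison.
    """
    attribute, op_name, value = spec.split(':')
    op = getattr(operator, op_name)  # e.g. operator.gt, operator.eq
    def predicate(run):
        stored = run[attribute]
        # Compare numbers numerically, everything else as strings.
        cast = type(stored)(value) if isinstance(stored, (int, float)) else value
        return op(stored, cast)
    return predicate

keep = parse_filter('expanded:gt:1000')
print(keep({'expanded': 1500}), keep({'expanded': 500}))  # True False
```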

=== Reports for iterated searches ===
If you want to see the individual values e.g. for the attribute expansions and not only its cumulative value, you can make an iterative report by setting `--report iter`. The values for each attribute are stored in a list named "<attribute>_all", so the corresponding call would be:

{{{#!highlight bash
./downward-reports.py expname-eval --report iter --attributes expansions_all
}}}

If no attributes are set on the commandline, all attribute names that end with "_all" are included in the report.
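As a sketch of the relationship between an attribute and its `_all` counterpart (the numbers are made up, and treating the plain attribute as the sum over all iterations is an assumption for illustration):

```python
# Hypothetical parsed run of an iterated (anytime) search: "expansions_all"
# holds one value per search iteration, while the plain attribute holds the
# cumulative value (assumed here to be the sum over the iterations).
run = {'expansions_all': [1200, 3400, 800]}
run['expansions'] = sum(run['expansions_all'])
print(run['expansions'])  # 5400
```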

== Comparing different revisions ==
If you want to compare different revisions of Fast Downward, you can write your own script that imports the Python module `downward_experiments.py`. This module provides an easy way to select and compare specific revisions of the three subsystems (translate, preprocess and search). It does so by using the `experiments.py` module. You can find some example code in the `compare-revisions-example.py` script:

{{{#!highlight python
"""
This script shows the usage of the checkouts module together with the
downward_experiments module. Together those modules can be used to create
experiments that compare the performance of Fast Downward components. You can
easily compare different revisions of the translate, preprocess and search
component.
"""

import downward_experiments
from checkouts import Translator, Preprocessor, Planner

# This combination shows the available parameters and their possible values
combinations = [
    (Translator(repo='../', rev='e9845528763d'),
     Preprocessor(repo='http://hg.fast-downward.org', rev='tip', dest='repo1'),
     Planner(rev='WORK')),
]

# These combinations show how to have multiple copies of the same revision
# This can be useful e.g. if you want to check the impact of Makefile options
combinations = [
    (Translator(), Preprocessor(), Planner(rev=1600, dest='copy1')),
    (Translator(), Preprocessor(), Planner(rev=1600, dest='copy2')),
]

# These combinations show how to check the impact of your uncommitted changes
combinations = [
    (Translator(rev='tip'), Preprocessor(rev='tip'), Planner(rev='tip')),
    (Translator(rev='WORK'), Preprocessor(rev='WORK'), Planner(rev='WORK')),
]

downward_experiments.build_experiment(combinations)
}}}

If you call the `build_experiment` method, your script inherits all the commandline parameters from `downward_experiments.py`, so you can run it e.g. in the following way: `./compare-revisions-example.py --configs ou --suite STRIPS --path exp-revisions local`

=== Example: issue69 ===

As a more realistic example we will look at the code that has been used to get some information about [[http://issues.fast-downward.org/issue69|issue69]] from the issue tracker. The code that resides in `issue69.py` makes use of the SVN checkout mechanism rather than the above-mentioned HG checkouts, but the syntax is very similar, as you can see below:

{{{#!highlight python
#! /usr/bin/env python

import downward_experiments
from checkouts import TranslatorSvn, PreprocessorSvn, PlannerSvn

translator = TranslatorSvn(rev=3613)
preprocessor = PreprocessorSvn(rev='HEAD')

combinations = [
    (translator, preprocessor, PlannerSvn(rev=3612)),
    (translator, preprocessor, PlannerSvn(rev=3613)),
    (translator, preprocessor, PlannerSvn(rev='HEAD')),
]

downward_experiments.build_experiment(combinations)
}}}

This code builds an experiment that compares three revisions of the search component: rev 3612, rev 3613 and the latest (HEAD) revision of the deprecated SVN repository. The different checkout classes also have another keyword parameter called `repo` that can be used when you want to check out a revision from a different repository.

One combination of three checkouts results in one run of the Fast Downward system (translate -> preprocess -> search) for each problem and configuration. Obviously you should check out different revisions of the subsystems you want to compare and let the other subsystems have the same revision in all runs.
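A quick back-of-the-envelope sketch of the resulting experiment size (all numbers below are hypothetical):

```python
# Every (translator, preprocessor, planner) combination is run once per
# problem and configuration, so the total number of runs is the product.
num_combinations = 3   # e.g. planner revisions 3612, 3613 and HEAD
num_problems = 30      # size of the chosen problem suite
num_configs = 1        # e.g. only the "ou" configuration
total_runs = num_combinations * num_problems * num_configs
print(total_runs)  # 90
```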

The script `issue69.py` inherits all the commandline parameters from `downward_experiments.py`, so you run it e.g. in the following way: `./issue69.py --configs ou --suite STRIPS --path exp-issue69 local`

=== Example: issue7 ===
You don't have to supply new Checkout instances for each combination. See the comparison experiment for [[http://issues.fast-downward.org/issue7|issue7]] as an example. This code compares three different revisions of the translator in an external branch with the checked-in translator from trunk. The revisions for the preprocessor and the search component remain the same for all combinations.

{{{#!highlight python
#! /usr/bin/env python

import downward_experiments
from checkouts import Translator, Preprocessor, Planner
from checkouts import TranslatorSvn, PreprocessorSvn, PlannerSvn

branch = 'svn+ssh://downward-svn/branches/translate-andrew/'

preprocessor = PreprocessorSvn(rev='HEAD')
planner = PlannerSvn(rev=3842)

combinations = [
    (TranslatorSvn(repo=branch, rev=3827), preprocessor, planner),
    (TranslatorSvn(repo=branch, rev=3829), preprocessor, planner),
    (TranslatorSvn(repo=branch, rev=3840), preprocessor, planner),
    (TranslatorSvn(rev=4283), preprocessor, planner),
]

downward_experiments.build_experiment(combinations)
}}}

== Scatter plots ==
Scatter plots can be generated by passing the `--report scatter` option to `downward-reports.py`. Here is an example call:

{{{#!highlight bash
./downward-reports.py exp-issue69-eval/ --report scatter --res problem -a expansions --configs 3613-ou,5272-ou --suite gripper
}}}

You will have to specify exactly one attribute on the commandline. If there are more than two configurations in your experiment, you will have to narrow those down to two for the report. If you want to make a plot for a single domain or a subset of domains, list those with the `--suite` parameter.
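Conceptually, a scatter plot pairs the chosen attribute's value under the two configurations for each problem. The following sketch shows how such point pairs could be assembled from parsed results; it is illustrative rather than the code used by `downward-reports.py`, and the run data is made up.

```python
def scatter_points(runs, attribute, config_x, config_y):
    # Group attribute values by problem, then pair the two configurations.
    by_problem = {}
    for run in runs:
        by_problem.setdefault(run['problem'], {})[run['config']] = run[attribute]
    return [(vals[config_x], vals[config_y])
            for vals in by_problem.values()
            if config_x in vals and config_y in vals]

runs = [
    {'config': '3613-ou', 'problem': 'gripper-01', 'expansions': 120},
    {'config': '5272-ou', 'problem': 'gripper-01', 'expansions': 95},
    {'config': '3613-ou', 'problem': 'gripper-02', 'expansions': 460},
    {'config': '5272-ou', 'problem': 'gripper-02', 'expansions': 310},
]
print(scatter_points(runs, 'expansions', '3613-ou', '5272-ou'))
# [(120, 95), (460, 310)]
```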
We recommend using the {{{downward}}} package for running Fast Downward experiments. It is part of {{{lab}}}, a python library for running code on large benchmark sets. Experiments can be run either locally or on a computer cluster. You can find the code at https://bitbucket.org/jendrikseipp/lab. The documentation is available at http://lab.rtfd.org.
