= Usage =

Running the planner is a three-step process as explained in Section 3 (pp. 202-203) of the [[http://www.jair.org/papers/paper1705.html|JAIR paper on Fast Downward]]. The following instructions show how to run these three steps, in sequence, assuming that the preprocessor and search component have been compiled and that you are currently located in the {{{src}}} directory.
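
For orientation, the complete pipeline might look like the following sketch. The PDDL file names are placeholders for your own input files, and the search configuration is just one of the example configurations discussed below.

{{{#!highlight bash
# 1. Translate the PDDL input (writes output.sas, among other files).
translate/translate.py path/to/domain.pddl path/to/problem.pddl

# 2. Preprocess the translator output (writes a file called "output").
preprocess/preprocess < output.sas

# 3. Run the search component on the preprocessed task.
search/downward --search "astar(lmcut())" < output
}}}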

== Translator ==

{{{
translate/translate.py [DOMAIN] PROBLEM
}}}
 * `DOMAIN` (filename): PDDL domain file
 * `PROBLEM` (filename): PDDL problem file

If the domain file is not given, the planner tries to infer a likely name from the problem file name, following the naming conventions used at the various IPCs. (If in doubt whether this works for you, just try it out.)

Note: This step creates a file called {{{output.sas}}} (as well as {{{test.groups}}}, {{{all.groups}}}, ...).
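
For example, with hypothetical file names, the translator might be invoked as follows; the second call relies on the domain-file guessing described above.

{{{#!highlight bash
# Explicit domain and problem file (placeholder paths).
translate/translate.py benchmarks/gripper/domain.pddl benchmarks/gripper/prob01.pddl

# Problem file only; the translator tries to guess the matching domain file.
translate/translate.py benchmarks/gripper/prob01.pddl
}}}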

== Preprocessor ==

{{{
preprocess/preprocess < OUTPUT.SAS
}}}

 * `OUTPUT.SAS` (filename): translator output

Note: This step creates a file called {{{output}}}.
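
Continuing the example, the preprocessor simply reads the translator output from standard input:

{{{#!highlight bash
# Reads output.sas from the translator step and writes the file "output".
preprocess/preprocess < output.sas
}}}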

== Search component ==

{{{
search/downward [OPTIONS] --search SEARCH < OUTPUT
}}}

 * `SEARCH` (SearchEngine): configuration of the search algorithm
 * `OUTPUT` (filename): preprocessor output

Options:
 * `--heuristic` [[ReusingHeuristics#predefinition|HEURISTIC_PREDEFINITION]]
  * Predefines a heuristic that can afterwards be referenced by the name that is specified in the definition.
 * `--random-seed` SEED
  * Use random seed SEED.
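
For instance, general options are placed before the {{{--search}}} specification, as in the usage line above. The following sketch fixes the random seed for a blind A* run; the seed value is arbitrary and only meant as an illustration.

{{{#!highlight bash
# Fix the random seed; 2010 is an arbitrary illustrative value.
search/downward --random-seed 2010 --search "astar(blind())" < output
}}}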


=== Examples ===

A* search:

{{{#!highlight bash
# landmark-cut heuristic (previously configuration "ou")
 ./downward --search "astar(lmcut())" < output

# merge-and-shrink heuristic with default settings (previously configuration "oa50000")
 ./downward --search "astar(mas())" < output

# blind heuristic (previously configuration "ob")
 ./downward --search "astar(blind())" < output
}}}

Lazy greedy best-first search with preferred operators and the queue alternation method:

{{{#!highlight bash
## using FF heuristic and context-enhanced additive heuristic (previously: "fFyY")
 ./downward --heuristic "hff=ff()" --heuristic "hcea=cea()" \
            --search "lazy_greedy(hff, hcea, preferred=(hff, hcea))" \
            < output

## using FF heuristic (previously: "fF")
 ./downward --heuristic "hff=ff()" \
            --search "lazy_greedy(hff, preferred=(hff))" \
            < output

## using context-enhanced additive heuristic (previously: "yY")
 ./downward --heuristic "hcea=cea()" \
            --search "lazy_greedy(hcea, preferred=(hcea))" \
            < output
}}}

The above examples use the new best-first search implementation.
For comparison, the old best-first search implementation is still available:
{{{
 ./downward --heuristic "hff=ff()" --heuristic "hcea=cea()" \
            --search "old_greedy(hff, hcea, preferred=(hff, hcea))" \
            < output
}}}


Q: I would like to see an example that uses the Lama-FF Synergy feature. I'd be most interested in a configuration that is as close to LAMA as currently possible (i.e., using the LAMA and FF heuristics with synergy in a lazy alternation search with preferred operators for both heuristics, using iterated search with the appropriate set of options).

A: This is as close to LAMA as we have right now (same search algorithms and heuristics as in LAMA, iterated search, but no +1 to operator costs). The first search iteration uses the lazy greedy alternation search with LAMA and FF heuristics with synergy, using both heuristics for preferred operators. Later search iterations are conducted with the same heuristics and preferred operators, but using weighted A* search with a weight schedule rather than greedy search.

{{{
./downward --heuristic "hlm,hff=lm_ff_syn(lm_rhw())" \
           --search "iterated(
                              lazy_greedy(hff,hlm,preferred=(hff,hlm)),
                              lazy_wastar(hff,hlm,preferred=(hff,hlm),w=5),
                              lazy_wastar(hff,hlm,preferred=(hff,hlm),w=3),
                              lazy_wastar(hff,hlm,preferred=(hff,hlm),w=2),
                              lazy_wastar(hff,hlm,preferred=(hff,hlm),w=1),
                              repeat_last=true)" < output
}}}

The following is the corresponding call for just finding a first solution (i.e., without iterated search):

{{{
 ./downward --heuristic "hlm,hff=lm_ff_syn(lm_rhw())" \
            --search "lazy_greedy(hlm, hff, preferred=(hlm, hff))" \
            < output
}}}

If you would like to see another translation from an old-style configuration to the new call syntax,
please add it here as a TODO.

Back to HomePage.
