A heuristic specification is either a newly created heuristic instance or a heuristic that has been defined previously. This page describes how one can specify a new heuristic instance. For re-using heuristics, see Heuristic Predefinitions.

Definitions of properties in the descriptions below:

  • admissible: h(s) <= h*(s) for all states s

  • consistent: h(s) <= c(s, s') + h(s') for all states s connected to a state s' by an action with cost c(s, s')

  • safe: h(s) = infinity is only true for states with h*(s) = infinity

  • preferred operators: this heuristic identifies preferred operators
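
For illustration, a heuristic specification is passed to a search engine configuration, e.g. via the planner's --search option, and it can be predefined with --heuristic and then reused by name (the astar and lazy_greedy search engines shown here are assumed to be available and are documented elsewhere):

--search "astar(lmcut())"
--heuristic "hff=ff()" --search "lazy_greedy([hff], preferred=[hff])"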

Additive heuristic

add(cost_type=NORMAL)
  • cost_type ({NORMAL, ONE, PLUSONE}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.

    • NORMAL: all actions are accounted for with their real cost

    • ONE: all actions are accounted for as unit cost

    • PLUSONE: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search engines, but is supported for both.

Language features supported:

  • action costs: supported

  • conditional effects: supported

  • axioms: supported (in the sense that the planner won't complain -- handling of axioms might be very stupid and even render the heuristic unsafe)

Properties:

  • admissible: no

  • consistent: no

  • safe: yes for tasks without axioms

  • preferred operators: yes
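
As an illustrative (not prescriptive) configuration, the additive heuristic is typically paired with a greedy search and its preferred operators, assuming the lazy_greedy search engine:

lazy_greedy([add()], preferred=[add()])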

Blind heuristic

Returns cost of cheapest action for non-goal states, 0 for goal states

blind(cost_type=NORMAL)
  • cost_type ({NORMAL, ONE, PLUSONE}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.

    • NORMAL: all actions are accounted for with their real cost

    • ONE: all actions are accounted for as unit cost

    • PLUSONE: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search engines, but is supported for both.

Language features supported:

  • action costs: supported

  • conditional effects: supported

  • axioms: supported

Properties:

  • admissible: yes

  • consistent: yes

  • safe: yes

  • preferred operators: no
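
Since the heuristic is admissible and consistent, a minimal sketch of its use is with A* search (assuming the astar search engine):

astar(blind())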

Context-enhanced additive heuristic

cea(cost_type=NORMAL)
  • cost_type ({NORMAL, ONE, PLUSONE}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.

    • NORMAL: all actions are accounted for with their real cost

    • ONE: all actions are accounted for as unit cost

    • PLUSONE: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search engines, but is supported for both.

Language features supported:

  • action costs: supported

  • conditional effects: supported

  • axioms: supported (in the sense that the planner won't complain -- handling of axioms might be very stupid and even render the heuristic unsafe)

Properties:

  • admissible: no

  • consistent: no

  • safe: no

  • preferred operators: yes
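
A sketch of a satisficing configuration that uses the heuristic's preferred operators, assuming the lazy_greedy search engine:

lazy_greedy([cea()], preferred=[cea()])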

Causal graph heuristic

cg(cost_type=NORMAL)
  • cost_type ({NORMAL, ONE, PLUSONE}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.

    • NORMAL: all actions are accounted for with their real cost

    • ONE: all actions are accounted for as unit cost

    • PLUSONE: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search engines, but is supported for both.

Language features supported:

  • action costs: supported

  • conditional effects: supported

  • axioms: supported (in the sense that the planner won't complain -- handling of axioms might be very stupid and even render the heuristic unsafe)

Properties:

  • admissible: no

  • consistent: no

  • safe: no

  • preferred operators: yes
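
An illustrative satisficing configuration, assuming the eager_greedy search engine:

eager_greedy([cg()], preferred=[cg()])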

Canonical PDB

The canonical pattern database heuristic is calculated as follows. For a given pattern collection C, the value of the canonical heuristic function is the maximum over all maximal additive subsets S of C, where the value of a subset S for a given state is the sum of the heuristic values of all patterns in S.

cpdbs(cost_type=NORMAL, patterns=None, combo=false, max_states=1000000)
  • cost_type ({NORMAL, ONE, PLUSONE}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.

    • NORMAL: all actions are accounted for with their real cost

    • ONE: all actions are accounted for as unit cost

    • PLUSONE: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search engines, but is supported for both.

  • patterns (list of list of int): list of patterns (which are lists of variable numbers of the planning task). Default: each goal variable is used as a single-variable pattern in the collection.

  • combo (bool): use the combo strategy

  • max_states (int): maximum abstraction size for combo strategy

Language features supported:

  • action costs: supported

  • conditional effects: not supported

  • axioms: not supported

Properties:

  • admissible: yes

  • consistent: yes

  • safe: yes

  • preferred operators: no
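
A minimal optimal-search sketch using the default pattern collection (one single-variable pattern per goal variable), assuming the astar search engine; an explicit pattern list would have to use variable numbers of the concrete task:

astar(cpdbs())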

FF heuristic

See also Synergy.

ff(cost_type=NORMAL)
  • cost_type ({NORMAL, ONE, PLUSONE}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.

    • NORMAL: all actions are accounted for with their real cost

    • ONE: all actions are accounted for as unit cost

    • PLUSONE: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search engines, but is supported for both.

Language features supported:

  • action costs: supported

  • conditional effects: supported

  • axioms: supported (in the sense that the planner won't complain -- handling of axioms might be very stupid and even render the heuristic unsafe)

Properties:

  • admissible: no

  • consistent: no

  • safe: yes for tasks without axioms

  • preferred operators: yes
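
An illustrative greedy configuration with the FF heuristic's preferred operators, assuming the eager_greedy search engine:

eager_greedy([ff()], preferred=[ff()])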

Genetic Algorithm PDB

The following paper describes the automated creation of pattern databases with a genetic algorithm. Pattern collections are initially created with a bin-packing algorithm. The genetic algorithm is used to optimize the pattern collections with an objective function that estimates the mean heuristic value of the pattern collections. Pattern collections with higher mean heuristic estimates are more likely to be selected for the next generation.

gapdb(pdb_max_size=50000, num_collections=5, num_episodes=30, mutation_probability=0.01, disjoint=false, cost_type=NORMAL)
  • pdb_max_size (int): maximal number of states per pattern database

  • num_collections (int): number of pattern collections to maintain in the genetic algorithm (population size)

  • num_episodes (int): number of episodes for the genetic algorithm

  • mutation_probability (double): probability between 0 and 1 for flipping a bit in the genetic algorithm

  • disjoint (bool): consider a pattern collection invalid (giving it very low fitness) if its patterns are not disjoint

  • cost_type ({NORMAL, ONE, PLUSONE}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.

    • NORMAL: all actions are accounted for with their real cost

    • ONE: all actions are accounted for as unit cost

    • PLUSONE: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search engines, but is supported for both.

Note: This pattern generation method uses the zero/one pattern database heuristic.

Implementation Notes

The standard genetic algorithm procedure as described in the paper is implemented in Fast Downward. The implementation is close to the paper.

  • Initialization
    In Fast Downward, bin packing with the next-fit strategy is used. A bin corresponds to a pattern, which is filled with variables as long as it stays within pdb_max_size. With this method, each variable occurs in exactly one pattern of a collection. num_collections collections are created.

  • Mutation
    With probability mutation_probability, a bit is flipped, meaning that a variable is either added to or removed from a pattern.

  • Recombination
    Recombination is not implemented in Fast Downward. The paper describes recombination but does not use it.

  • Evaluation
    For each pattern collection, the mean heuristic value is computed. For a single pattern database, the mean heuristic value is the sum of all pattern database entries divided by the number of entries; entries with infinite heuristic values are ignored in this calculation. The sum of these individual mean heuristic values yields the mean heuristic value of the collection.

  • Selection
    The higher the mean heuristic value of a pattern collection, the more likely it is to be selected for the next generation. Therefore, the mean heuristic values are normalized and converted into probabilities, and roulette wheel selection is used.

Language features supported:

  • action costs: supported

  • conditional effects: not supported

  • axioms: not supported

Properties:

  • admissible: yes

  • consistent: yes

  • safe: yes

  • preferred operators: no
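
A hypothetical configuration sketch that invests more effort than the defaults (the parameter values are illustrative only), assuming the astar search engine:

astar(gapdb(num_collections=10, num_episodes=50, mutation_probability=0.05, disjoint=true))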

Goal count heuristic

goalcount(cost_type=NORMAL)
  • cost_type ({NORMAL, ONE, PLUSONE}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.

    • NORMAL: all actions are accounted for with their real cost

    • ONE: all actions are accounted for as unit cost

    • PLUSONE: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search engines, but is supported for both.

Language features supported:

  • action costs: ignored by design

  • conditional effects: supported

  • axioms: supported

Properties:

  • admissible: no

  • consistent: no

  • safe: yes

  • preferred operators: no
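
Because the heuristic is cheap but inadmissible, a plausible sketch is a greedy search, assuming the eager_greedy search engine:

eager_greedy([goalcount()])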

h^m heuristic

hm(m=2, cost_type=NORMAL)
  • m (int): subset size

  • cost_type ({NORMAL, ONE, PLUSONE}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.

    • NORMAL: all actions are accounted for with their real cost

    • ONE: all actions are accounted for as unit cost

    • PLUSONE: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search engines, but is supported for both.

Language features supported:

  • action costs: supported

  • conditional effects: ignored

  • axioms: ignored

Properties:

  • admissible: yes for tasks without conditional effects or axioms

  • consistent: yes for tasks without conditional effects or axioms

  • safe: yes for tasks without conditional effects or axioms

  • preferred operators: no
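
An optimal-search sketch with the default subset size, assuming the astar search engine (note that h^m becomes expensive to compute for larger m):

astar(hm(m=2))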

Max heuristic

hmax(cost_type=NORMAL)
  • cost_type ({NORMAL, ONE, PLUSONE}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.

    • NORMAL: all actions are accounted for with their real cost

    • ONE: all actions are accounted for as unit cost

    • PLUSONE: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search engines, but is supported for both.

Language features supported:

  • action costs: supported

  • conditional effects: supported

  • axioms: supported (in the sense that the planner won't complain -- handling of axioms might be very stupid and even render the heuristic unsafe)

Properties:

  • admissible: yes for tasks without axioms

  • consistent: yes for tasks without axioms

  • safe: yes for tasks without axioms

  • preferred operators: no
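
A minimal optimal-search sketch, assuming the astar search engine:

astar(hmax())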

iPDB

This pattern generation method is an adaptation of the algorithm described in the following paper:

For implementation notes, see also this paper:

ipdb(pdb_max_size=2000000, collection_max_size=20000000, num_samples=1000, min_improvement=10, max_time=infinity, cost_type=NORMAL)
  • pdb_max_size (int): maximal number of states per pattern database

  • collection_max_size (int): maximal number of states in the pattern collection

  • num_samples (int): number of samples (random states) on which to evaluate each candidate pattern collection

  • min_improvement (int): minimum number of samples on which a candidate pattern collection must improve on the current one to be considered as the next pattern collection

  • max_time (int): maximum time in seconds for improving the initial pattern collection via hill climbing. If set to 0, no hill climbing is performed at all.

  • cost_type ({NORMAL, ONE, PLUSONE}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.

    • NORMAL: all actions are accounted for with their real cost

    • ONE: all actions are accounted for as unit cost

    • PLUSONE: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search engines, but is supported for both.

Note: The pattern collection created by the algorithm will always contain all patterns consisting of a single goal variable, even if this violates the pdb_max_size or collection_max_size limits.

Note: This pattern generation method uses the canonical pattern collection heuristic.

Implementation Notes

The following briefly describes the algorithm and explains the differences between the original implementation from 2007 and the implementation in Fast Downward.

The aim of the algorithm is to output a pattern collection for which the Canonical PDB yields the best heuristic estimates.

The algorithm is basically a local search (hill climbing) that searches the "pattern neighbourhood" (starting from a pattern for each goal variable) for improvements to the pattern collection. This is done exactly as described in the section "pattern construction as search" in the paper. For evaluating the neighbourhood, the "counting approximation" introduced in the paper is implemented. An important difference, however, is that this implementation computes all pattern databases for each candidate pattern rather than using A* search to compute the heuristic values only for the sample states for each pattern.

The logic for sampling the search space also differs slightly from the original implementation. The original implementation uses a random walk whose length is binomially distributed with its mean at the estimated solution depth (estimated with the current pattern collection heuristic). The Fast Downward implementation also uses a random walk, but its length is based on the estimated number of solution steps, calculated by dividing the current heuristic estimate for the initial state by the average operator cost of the planning task (calculated only once and not updated during sampling!) to take non-unit-cost problems into account. This yields a random walk with an expected length of np = 2 * (estimated number of solution steps). If the random walk gets stuck, it is restarted from the initial state, exactly as described in the original paper.

The section "avoiding redundant evaluations" describes how the search neighbourhood of patterns can be restricted to variables that are somewhat relevant to the variables already included in the pattern by analyzing causal graphs. This is also implemented in Fast Downward. The second approach described in the paper (statistical confidence interval) is not applicable to this implementation, as it doesn't use A* search but constructs the entire pattern databases for all candidate patterns anyway. The search is ended if there is no more improvement (or the improvement is smaller than the minimal improvement which can be set as an option), how ever there is no limit of iterations of the local search. This is similar to the techniques used in the original implementation as described in the paper.

Language features supported:

  • action costs: supported

  • conditional effects: not supported

  • axioms: not supported

Properties:

  • admissible: yes

  • consistent: yes

  • safe: yes

  • preferred operators: no
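
An illustrative optimal-search configuration that bounds the hill-climbing phase via the max_time option described above (the 900-second limit is an arbitrary example), assuming the astar search engine:

astar(ipdb(max_time=900))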

Landmark-count heuristic

See also Synergy.

lmcount(lm_graph, admissible=false, optimal=false, pref=false, alm=true, lpsolver=CPLEX, cost_type=NORMAL)
  • lm_graph (LandmarkGraph): the set of landmarks to use for this heuristic. The set of landmarks can be specified here, or predefined (see LandmarkGraph).

  • admissible (bool): get admissible estimate

  • optimal (bool): use optimal (LP-based) cost sharing (only makes sense with admissible=true)

  • pref (bool): identify preferred operators (see Using preferred operators with the lmcount heuristic)

  • alm (bool): use action landmarks

  • lpsolver ({CLP, CPLEX, GUROBI}): external solver that should be used to solve linear programs

    • CLP: default LP solver shipped with the COIN library

    • CPLEX: commercial solver by IBM

    • GUROBI: commercial solver

  • cost_type ({NORMAL, ONE, PLUSONE}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.

    • NORMAL: all actions are accounted for with their real cost

    • ONE: all actions are accounted for as unit cost

    • PLUSONE: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search engines, but is supported for both.

Note: to use optimal=true, you must build the planner with USE_LP=1. See LPBuildInstructions.

Optimal search: when using landmarks for optimal search (admissible=true), you probably also want to enable the mpd option of the A* algorithm to improve heuristic estimates.

cost_type parameter: only used when admissible=true (see LandmarkGraph).

Note: to use an LP solver, you must build the planner with USE_LP=1. See LPBuildInstructions.

Language features supported:

  • action costs: supported

  • conditional effects: supported if the LandmarkGraph supports them; otherwise ignored with admissible=false and not allowed with admissible=true

  • axioms: ignored with admissible=false; not allowed with admissible=true

Properties:

  • admissible: yes if admissible=true

  • consistent: complicated; needs further thought

  • safe: yes except on tasks with axioms or on tasks with conditional effects when using a LandmarkGraph not supporting them

  • preferred operators: yes (if enabled; see pref option)
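
An illustrative optimal-search configuration following the notes above (the lm_rhw landmark graph generator and the astar engine's mpd option are assumptions, documented elsewhere, not on this page):

astar(lmcount(lm_rhw(), admissible=true), mpd=true)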

Landmark-cut heuristic

lmcut(cost_type=NORMAL)
  • cost_type ({NORMAL, ONE, PLUSONE}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.

    • NORMAL: all actions are accounted for with their real cost

    • ONE: all actions are accounted for as unit cost

    • PLUSONE: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search engines, but is supported for both.

Language features supported:

  • action costs: supported

  • conditional effects: not supported

  • axioms: not supported

Properties:

  • admissible: yes

  • consistent: no

  • safe: yes

  • preferred operators: no
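
The standard way to use this heuristic is with A* search (assuming the astar search engine):

astar(lmcut())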

Merge-and-shrink heuristic

merge_and_shrink(merge_strategy=merge_linear, shrink_strategy=shrink_fh(max_states=50000, max_states_before_merge=50000, shrink_f=high, shrink_h=low), label_reduction_method=ALL_ABSTRACTIONS_WITH_FIXPOINT, label_reduction_system_order=RANDOM, expensive_statistics=false, cost_type=NORMAL)
  • merge_strategy (MergeStrategy): merge strategy; choose between merge_linear and merge_dfp

  • shrink_strategy (ShrinkStrategy): shrink strategy; try one of the following:

shrink_fh(max_states=N)
  • f-preserving abstractions from the Helmert/Haslum/Hoffmann ICAPS 2007 paper (called HHH in the IJCAI 2011 paper by Nissim, Hoffmann and Helmert). Here, N is a numerical parameter for which sensible values include 1000, 10000, 50000, 100000 and 200000. Combine this with the default linear merge strategy CG_GOAL_LEVEL to match the heuristic in the paper.

shrink_bisimulation(max_states=infinity, threshold=1, greedy=true, initialize_by_h=false, group_by_h=false)
  • Greedy bisimulation without size bound (called M&S-gop in the IJCAI 2011 paper by Nissim, Hoffmann and Helmert). Combine this with the linear merge strategy REVERSE_LEVEL to match the heuristic in the paper.

shrink_bisimulation(max_states=N, greedy=false, initialize_by_h=true, group_by_h=true)
  • Exact bisimulation with a size limit (called DFP-bop in the IJCAI 2011 paper by Nissim, Hoffmann and Helmert), where N is a numerical parameter for which sensible values include 1000, 10000, 50000, 100000 and 200000. Combine this with the linear merge strategy REVERSE_LEVEL to match the heuristic in the paper.
  • label_reduction_method ({NONE, OLD, TWO_ABSTRACTIONS, ALL_ABSTRACTIONS, ALL_ABSTRACTIONS_WITH_FIXPOINT}): label reduction method

    • NONE: no label reduction is performed

    • OLD: emulate the label reduction as described in the IJCAI 2011 paper by Nissim, Hoffmann and Helmert

    • TWO_ABSTRACTIONS: compute the 'combinable relation' for labels only for the two abstractions that will be merged next and reduce labels

    • ALL_ABSTRACTIONS: compute the 'combinable relation' for labels once for every abstraction and reduce labels

    • ALL_ABSTRACTIONS_WITH_FIXPOINT: keep computing the 'combinable relation' for labels iteratively for all abstractions until no more labels can be reduced

  • label_reduction_system_order ({REGULAR, REVERSE, RANDOM}): order of transition systems for the label reduction methods that iterate over the set of all abstractions. Only useful for the choices ALL_ABSTRACTIONS and ALL_ABSTRACTIONS_WITH_FIXPOINT of the option label_reduction_method.

  • expensive_statistics (bool): show statistics on "unique unlabeled edges" (WARNING: these are *very* slow, i.e. too expensive to show by default (in terms of time and memory). When this is used, the planner prints a big warning on stderr with information on the performance impact. Don't use when benchmarking!)

  • cost_type ({NORMAL, ONE, PLUSONE}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.

    • NORMAL: all actions are accounted for with their real cost

    • ONE: all actions are accounted for as unit cost

    • PLUSONE: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search engines, but is supported for both.

Note: Conditional effects are supported directly. Note, however, that for tasks that are not factored (in the sense of the JACM 2014 merge-and-shrink paper), the atomic abstractions on which merge-and-shrink heuristics are based are nondeterministic, which can lead to poor heuristics even when no shrinking is performed.

Language features supported:

  • action costs: supported

  • conditional effects: supported (but see note)

  • axioms: not supported

Properties:

  • admissible: yes

  • consistent: yes

  • safe: yes

  • preferred operators: no
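
An illustrative optimal-search configuration reusing the bisimulation-based shrink strategy shown above (the astar search engine is assumed; the size limit 100000 is just one of the sensible values listed):

astar(merge_and_shrink(shrink_strategy=shrink_bisimulation(max_states=100000, greedy=false, initialize_by_h=true, group_by_h=true)))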

Pattern database heuristic

TODO

pdb(cost_type=NORMAL, max_states=1000000, pattern=None)
  • cost_type ({NORMAL, ONE, PLUSONE}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.

    • NORMAL: all actions are accounted for with their real cost

    • ONE: all actions are accounted for as unit cost

    • PLUSONE: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search engines, but is supported for both.

  • max_states (int): maximal number of abstract states in the pattern database

  • pattern (list of int): list of variable numbers of the planning task that should be used as pattern. Default: the variables are selected automatically based on a simple greedy strategy.

Language features supported:

  • action costs: supported

  • conditional effects: not supported

  • axioms: not supported

Properties:

  • admissible: yes

  • consistent: yes

  • safe: yes

  • preferred operators: no
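
A sketch of an optimal-search configuration relying on the automatic greedy pattern selection (astar assumed); an explicit pattern would instead list task-specific variable numbers:

astar(pdb(max_states=1000000))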

Zero-One PDB

The zero/one pattern database heuristic is simply the sum of the heuristic values of all patterns in the pattern collection. In contrast to the canonical pattern database heuristic, there is no need to check for additive subsets, because the additivity of the patterns is guaranteed by action cost partitioning. This heuristic uses the simplest form of action cost partitioning: if an operator affects more than one pattern in the collection, its cost is counted in full for one pattern (the first one it affects) and set to zero for all other affected patterns.

zopdbs(cost_type=NORMAL, patterns=None, combo=false, max_states=1000000)
  • cost_type ({NORMAL, ONE, PLUSONE}): Operator cost adjustment type. No matter what this setting is, axioms will always be considered as actions of cost 0 by the heuristics that treat axioms as actions.

    • NORMAL: all actions are accounted for with their real cost

    • ONE: all actions are accounted for as unit cost

    • PLUSONE: all actions are accounted for as their real cost + 1 (except if all actions have original cost 1, in which case cost 1 is used). This is the behaviour known for the heuristics of the LAMA planner. This is intended to be used by the heuristics, not search engines, but is supported for both.

  • patterns (list of list of int): list of patterns (which are lists of variable numbers of the planning task). Default: each goal variable is used as a single-variable pattern in the collection.

  • combo (bool): use the combo strategy

  • max_states (int): maximum abstraction size for combo strategy

Language features supported:

  • action costs: supported

  • conditional effects: not supported

  • axioms: not supported

Properties:

  • admissible: yes

  • consistent: yes

  • safe: yes

  • preferred operators: no
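
An illustrative optimal-search configuration with the default goal-variable patterns, assuming the astar search engine:

astar(zopdbs())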