A heuristic specification is either a newly created heuristic instance or a heuristic that has been defined previously. This page describes how one can specify a new heuristic instance. For re-using heuristics, see Heuristic Predefinitions.

Definitions of properties in the descriptions below:

• admissible: h(s) <= h*(s) for all states s

• consistent: h(s) <= c(s, s') + h(s') for all states s connected to states s' by an action with cost c(s, s')

• safe: h(s) = infinity is only true for states with h*(s) = infinity

• preferred operators: this heuristic identifies preferred operators
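These definitions can be made concrete with a small sketch (hypothetical toy state space, not part of the planner; h* is computed exactly by a backward Dijkstra search from the goal states):

```python
import heapq

def true_costs(graph, goals):
    """Compute h*(s), the exact cost-to-goal for every state, via backward Dijkstra.
    graph maps each state to a list of (successor, action_cost) pairs."""
    # Build the reversed graph so we can search backward from the goals.
    reverse = {s: [] for s in graph}
    for s, edges in graph.items():
        for t, cost in edges:
            reverse[t].append((s, cost))
    dist = {s: float("inf") for s in graph}
    queue = [(0, g) for g in goals]
    for g in goals:
        dist[g] = 0
    while queue:
        d, s = heapq.heappop(queue)
        if d > dist[s]:
            continue
        for t, cost in reverse[s]:
            if d + cost < dist[t]:
                dist[t] = d + cost
                heapq.heappush(queue, (d + cost, t))
    return dist

def is_admissible(h, h_star):
    # h(s) <= h*(s) for all states s
    return all(h[s] <= h_star[s] for s in h)

def is_consistent(h, graph):
    # h(s) <= c(s, s') + h(s') for every transition s -> s'
    return all(h[s] <= cost + h[t]
               for s, edges in graph.items() for t, cost in edges)

# Toy state space: A --2--> B --3--> G, plus a shortcut A --4--> G.
graph = {"A": [("B", 2), ("G", 4)], "B": [("G", 3)], "G": []}
h_star = true_costs(graph, goals={"G"})      # A: 4, B: 3, G: 0
h = {"A": 3, "B": 3, "G": 0}                 # an admissible, consistent estimate
print(is_admissible(h, h_star), is_consistent(h, graph))  # True True
```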

## Additive heuristic

`add(transform=no_transform(), cache_estimates=true)`

• cache_estimates (bool): cache heuristic estimates

Language features supported:

• action costs: supported

• conditional effects: supported

• axioms: supported (in the sense that the planner won't complain -- handling of axioms might be very stupid and even render the heuristic unsafe)

Properties:

• consistent: no

• safe: yes for tasks without axioms

• preferred operators: yes

## Potential heuristic optimized for all states

The algorithm is based on

`all_states_potential(max_potential=1e8, lpsolver=CPLEX, transform=no_transform(), cache_estimates=true)`
• max_potential (double [0.0, infinity]): Bound potentials by this number

• lpsolver ({CLP, CPLEX, GUROBI}): external solver that should be used to solve linear programs

• CLP: default LP solver shipped with the COIN library

• CPLEX: commercial solver by IBM

• GUROBI: commercial solver

• cache_estimates (bool): cache heuristic estimates

Note: to use an LP solver, you must build the planner with LP support. See LPBuildInstructions.

Language features supported:

• action costs: supported

• conditional effects: not supported

• axioms: not supported

Properties:

• consistent: yes

• safe: yes

• preferred operators: no

## Blind heuristic

Returns the cost of the cheapest action for non-goal states and 0 for goal states.

`blind(transform=no_transform(), cache_estimates=true)`

• cache_estimates (bool): cache heuristic estimates

Language features supported:

• action costs: supported

• conditional effects: supported

• axioms: supported

Properties:

• consistent: yes

• safe: yes

• preferred operators: no
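The behavior described above can be sketched in a few lines (a minimal illustration with hypothetical state and goal representations, not the planner's internals):

```python
def blind(state, goal, min_action_cost):
    """Minimal sketch of the blind heuristic: the cheapest action cost for
    non-goal states, 0 for goal states. `state` and `goal` are dicts mapping
    variables to values; the goal may mention only a subset of the variables."""
    if all(state.get(var) == val for var, val in goal.items()):
        return 0
    return min_action_cost

state = {"at": "home", "has_key": False}
goal = {"at": "office"}
print(blind(state, goal, min_action_cost=2))  # 2: not a goal state
```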

## Context-enhanced additive heuristic

`cea(transform=no_transform(), cache_estimates=true)`

• cache_estimates (bool): cache heuristic estimates

Language features supported:

• action costs: supported

• conditional effects: supported

• axioms: supported (in the sense that the planner won't complain -- handling of axioms might be very stupid and even render the heuristic unsafe)

Properties:

• consistent: no

• safe: no

• preferred operators: yes

## Additive CEGAR heuristic

See the paper introducing Counterexample-guided Abstraction Refinement (CEGAR) for classical planning:

and the paper showing how to make the abstractions additive:

`cegar(subtasks=[landmarks(),goals()], max_states=infinity, max_transitions=1000000, max_time=infinity, pick=MAX_REFINED, use_general_costs=true, transform=no_transform(), cache_estimates=true, random_seed=-1)`

• max_states (int [1, infinity]): maximum sum of abstract states over all abstractions

• max_transitions (int [0, infinity]): maximum sum of real transitions (excluding self-loops) over all abstractions

• max_time (double [0.0, infinity]): maximum time in seconds for building abstractions

• use_general_costs (bool): allow negative costs in cost partitioning

• cache_estimates (bool): cache heuristic estimates

• random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.

Language features supported:

• action costs: supported

• conditional effects: not supported

• axioms: not supported

Properties:

• consistent: yes

• safe: yes

• preferred operators: no

## Causal graph heuristic

`cg(transform=no_transform(), cache_estimates=true)`

• cache_estimates (bool): cache heuristic estimates

Language features supported:

• action costs: supported

• conditional effects: supported

• axioms: supported (in the sense that the planner won't complain -- handling of axioms might be very stupid and even render the heuristic unsafe)

Properties:

• consistent: no

• safe: no

• preferred operators: yes

## Constant evaluator

Returns a constant value.

`const(value=1, transform=no_transform(), cache_estimates=true)`
• value (int [0, infinity]): the constant value

• cache_estimates (bool): cache heuristic estimates

Properties:

• consistent: yes

• safe: no

• preferred operators: no

## Canonical PDB

The canonical pattern database heuristic is calculated as follows. For a given pattern collection C, the value of the canonical heuristic function is the maximum over all maximal additive subsets of C, where the value of one such subset S is the sum of the heuristic values of all patterns in S for the given state.

`cpdbs(patterns=systematic(1), max_time_dominance_pruning=infinity, transform=no_transform(), cache_estimates=true)`
• patterns (PatternCollectionGenerator): pattern generation method

• max_time_dominance_pruning (double [0.0, infinity]): The maximum time in seconds spent on dominance pruning. Using 0.0 turns off dominance pruning. Dominance pruning excludes patterns and additive subsets that will never contribute to the heuristic value because there are dominating subsets in the collection.

• cache_estimates (bool): cache heuristic estimates

Language features supported:

• action costs: supported

• conditional effects: not supported

• axioms: not supported

Properties:

• consistent: yes

• safe: yes

• preferred operators: no

## Diverse potential heuristics

The algorithm is based on

`diverse_potentials(num_samples=1000, max_num_heuristics=infinity, max_potential=1e8, lpsolver=CPLEX, transform=no_transform(), cache_estimates=true, random_seed=-1)`
• num_samples (int [0, infinity]): Number of states to sample

• max_num_heuristics (int [0, infinity]): maximum number of potential heuristics

• max_potential (double [0.0, infinity]): Bound potentials by this number

• lpsolver ({CLP, CPLEX, GUROBI}): external solver that should be used to solve linear programs

• CLP: default LP solver shipped with the COIN library

• CPLEX: commercial solver by IBM

• GUROBI: commercial solver

• cache_estimates (bool): cache heuristic estimates

• random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.

Note: to use an LP solver, you must build the planner with LP support. See LPBuildInstructions.

Language features supported:

• action costs: supported

• conditional effects: not supported

• axioms: not supported

Properties:

• consistent: yes

• safe: yes

• preferred operators: no

## FF heuristic

`ff(transform=no_transform(), cache_estimates=true)`

• cache_estimates (bool): cache heuristic estimates

Language features supported:

• action costs: supported

• conditional effects: supported

• axioms: supported (in the sense that the planner won't complain -- handling of axioms might be very stupid and even render the heuristic unsafe)

Properties:

• consistent: no

• safe: yes for tasks without axioms

• preferred operators: yes

## LAMA-FF synergy slave

This heuristic is the slave component of the LAMA-FF synergy; see LAMA-FF synergy master for a description of the synergy.

`ff_synergy(lama_synergy_heuristic)`
• lama_synergy_heuristic (Heuristic): The heuristic used here has to be an instance of the LAMA-FF synergy master, which can be achieved using the option name lama_synergy. See LAMA-FF synergy master for details.

## Goal count heuristic

`goalcount(transform=no_transform(), cache_estimates=true)`

• cache_estimates (bool): cache heuristic estimates

Language features supported:

• action costs: ignored by design

• conditional effects: supported

• axioms: supported

Properties:

• consistent: no

• safe: yes

• preferred operators: no
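The heuristic simply counts unsatisfied goal conditions, which is why action costs are ignored by design. A minimal sketch (hypothetical state and goal representations):

```python
def goalcount(state, goal):
    """Number of goal conditions not satisfied in `state` (action costs ignored)."""
    return sum(1 for var, val in goal.items() if state.get(var) != val)

state = {"at": "home", "door": "locked", "light": "on"}
goal = {"at": "office", "door": "open", "light": "on"}
print(goalcount(state, goal))  # 2 unsatisfied goal conditions
```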

## h^m heuristic

`hm(m=2, transform=no_transform(), cache_estimates=true)`
• m (int [1, infinity]): subset size

• cache_estimates (bool): cache heuristic estimates

Language features supported:

• action costs: supported

• conditional effects: ignored

• axioms: ignored

Properties:

• consistent: yes for tasks without conditional effects or axioms

• safe: yes for tasks without conditional effects or axioms

• preferred operators: no

## Max heuristic

`hmax(transform=no_transform(), cache_estimates=true)`

• cache_estimates (bool): cache heuristic estimates

Language features supported:

• action costs: supported

• conditional effects: supported

• axioms: supported (in the sense that the planner won't complain -- handling of axioms might be very stupid and even render the heuristic unsafe)

Properties:

• consistent: yes for tasks without axioms

• safe: yes for tasks without axioms

• preferred operators: no

## Potential heuristic optimized for initial state

The algorithm is based on

`initial_state_potential(max_potential=1e8, lpsolver=CPLEX, transform=no_transform(), cache_estimates=true)`
• max_potential (double [0.0, infinity]): Bound potentials by this number

• lpsolver ({CLP, CPLEX, GUROBI}): external solver that should be used to solve linear programs

• CLP: default LP solver shipped with the COIN library

• CPLEX: commercial solver by IBM

• GUROBI: commercial solver

• cache_estimates (bool): cache heuristic estimates

Note: to use an LP solver, you must build the planner with LP support. See LPBuildInstructions.

Language features supported:

• action costs: supported

• conditional effects: not supported

• axioms: not supported

Properties:

• consistent: yes

• safe: yes

• preferred operators: no

## iPDB

This pattern generation method is an adaption of the algorithm described in the following paper:

For implementation notes, see:

`ipdb(pdb_max_size=2000000, collection_max_size=20000000, num_samples=1000, min_improvement=10, max_time=infinity, random_seed=-1, max_time_dominance_pruning=infinity, transform=no_transform(), cache_estimates=true)`
• pdb_max_size (int [1, infinity]): maximal number of states per pattern database

• collection_max_size (int [1, infinity]): maximal number of states in the pattern collection

• num_samples (int [1, infinity]): number of samples (random states) on which to evaluate each candidate pattern collection

• min_improvement (int [1, infinity]): minimum number of samples on which a candidate pattern collection must improve on the current one to be considered as the next pattern collection

• max_time (double [0.0, infinity]): maximum time in seconds for improving the initial pattern collection via hill climbing. If set to 0, no hill climbing is performed at all. Note that this limit only affects hill climbing. Use max_time_dominance_pruning to limit the time spent for pruning dominated patterns.

• random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.

• max_time_dominance_pruning (double [0.0, infinity]): The maximum time in seconds spent on dominance pruning. Using 0.0 turns off dominance pruning. Dominance pruning excludes patterns and additive subsets that will never contribute to the heuristic value because there are dominating subsets in the collection.

• cache_estimates (bool): cache heuristic estimates

Note: The pattern collection created by the algorithm will always contain all patterns consisting of a single goal variable, even if this violates the pdb_max_size or collection_max_size limits.

Note: This pattern generation method uses the canonical pattern collection heuristic.

### Implementation Notes

The following will very briefly describe the algorithm and explain the differences between the original implementation from 2007 and the new one in Fast Downward.

The aim of the algorithm is to output a pattern collection for which the Canonical PDB yields the best heuristic estimates.

The algorithm is basically a local search (hill climbing) that searches the "pattern neighbourhood" (starting with a pattern for each goal variable) for improvements to the pattern collection. This is done as described in the section "pattern construction as search" in the paper, except for the corrected search neighbourhood discussed below. The neighbourhood is evaluated using the "counting approximation" introduced in the paper. An important difference, however, is that this implementation computes all pattern databases for each candidate pattern rather than using A* search to compute the heuristic values only for the sample states of each pattern.

The logic for sampling the search space also differs slightly from the original implementation. The original implementation uses a random walk whose length is binomially distributed with its mean at the estimated solution depth (the estimate is obtained from the current pattern collection heuristic). The Fast Downward implementation also uses a random walk, where the length is based on an estimate of the number of solution steps, calculated by dividing the current heuristic estimate for the initial state by the average operator cost of the planning task (calculated only once and not updated during sampling!) to take non-unit-cost problems into account. This yields a random walk with an expected length of np = 2 * estimated number of solution steps. If the random walk gets stuck, it is restarted from the initial state, exactly as described in the original paper.
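One plausible reading of this sampling scheme can be sketched as follows (a simplified illustration, not the actual Fast Downward code; the binomial parameters here are an assumption: Binomial(n, 0.5) with n = 2 * estimated steps gives a mean equal to the estimate, and the exact parameters in the planner may differ):

```python
import random

def sample_state(initial_state, successors, est_solution_steps, rng):
    """Sketch of random-walk sampling: walk from the initial state for a
    binomially distributed number of steps; `successors(s)` returns the
    states reachable from s in one step. If the walk gets stuck in a dead
    end, restart from the initial state."""
    n = 2 * est_solution_steps
    length = sum(rng.random() < 0.5 for _ in range(n))  # draw from Binomial(n, 0.5)
    state = initial_state
    for _ in range(length):
        succ = successors(state)
        if not succ:                  # dead end: restart the walk
            state = initial_state
            continue
        state = rng.choice(succ)
    return state

# Toy example: a chain 0 -> 1 -> ... -> 5 with no successors at 5.
rng = random.Random(0)
sampled = sample_state(0, lambda s: [s + 1] if s < 5 else [], 3, rng)
print(sampled)
```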

The section "avoiding redundant evaluations" describes how the search neighbourhood of patterns can be restricted to variables that are relevant to the variables already included in the pattern by analyzing causal graphs. A mistake in the paper leads to some relevant neighbouring patterns being ignored; see the errata for details. This mistake has been addressed in this implementation. The second approach described in the paper (statistical confidence intervals) is not applicable to this implementation, since it does not use A* search but constructs the entire pattern database for every candidate pattern anyway. The search ends when there is no more improvement (or when the improvement is smaller than the minimal improvement, which can be set as an option); however, there is no limit on the number of iterations of the local search. This is similar to the techniques used in the original implementation as described in the paper.

Language features supported:

• action costs: supported

• conditional effects: not supported

• axioms: not supported

Properties:

• consistent: yes

• safe: yes

• preferred operators: no

## LAMA-FF synergy master

This class implements the LAMA-FF synergy. The synergy combines the FF heuristic (e.g. using its estimates or its preferred operators) and the landmark count heuristic with preferred operators. See below for an example of how to combine the synergy master with the slave class.

`lama_synergy(lm_factory, admissible=false, optimal=false, alm=true, lpsolver=CPLEX, transform=no_transform(), cache_estimates=true)`
• lm_factory (LandmarkFactory):

• optimal (bool): optimal cost sharing

• alm (bool): use action landmarks

• lpsolver ({CLP, CPLEX, GUROBI}): external solver that should be used to solve linear programs

• CLP: default LP solver shipped with the COIN library

• CPLEX: commercial solver by IBM

• GUROBI: commercial solver

• cache_estimates (bool): cache heuristic estimates

Using the synergy: To use the synergy, combine the master with the slave (see LAMA-FF synergy slave) using predefinitions (see Predefinitions), for example:

```
--heuristic "lama_master=lama_synergy(lm_factory=lm_rhw())"
--heuristic "lama_slave=ff_synergy(lama_master)"
```

Note: Regarding using different cost transformations, there are a few caveats to be considered, see OptionCaveats.

Note: to use an LP solver, you must build the planner with LP support. See LPBuildInstructions.

## Landmark-count heuristic

`lmcount(lm_factory, admissible=false, optimal=false, pref=false, alm=true, lpsolver=CPLEX, transform=no_transform(), cache_estimates=true)`
• lm_factory (LandmarkFactory): the set of landmarks to use for this heuristic. The set of landmarks can be specified here, or predefined (see LandmarkFactory).

• optimal (bool): use optimal (LP-based) cost sharing (only makes sense with admissible=true)

• pref (bool): identify preferred operators (see Using preferred operators with the lmcount heuristic)

• alm (bool): use action landmarks

• lpsolver ({CLP, CPLEX, GUROBI}): external solver that should be used to solve linear programs

• CLP: default LP solver shipped with the COIN library

• CPLEX: commercial solver by IBM

• GUROBI: commercial solver

• cache_estimates (bool): cache heuristic estimates

Note: Regarding using different cost transformations, there are a few caveats to be considered, see OptionCaveats.

Optimal search: when using landmarks for optimal search (admissible=true), you probably also want to enable the mpd option of the A* algorithm to improve heuristic estimates.

Note: to use optimal=true, you must build the planner with LP support. See LPBuildInstructions.

Note: to use an LP solver, you must build the planner with LP support. See LPBuildInstructions.
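Putting these notes together, a hypothetical optimal configuration might look like this (a sketch only: the option names follow this documentation, and `mpd` is assumed to be an option of the `astar` search engine in your version of the planner):

```
--search "astar(lmcount(lm_factory=lm_rhw(), admissible=true, optimal=true), mpd=true)"
```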

Language features supported:

• action costs: supported

• conditional effects: supported if the LandmarkFactory supports them; otherwise ignored with admissible=false and not allowed with admissible=true

Properties:

• consistent: complicated; needs further thought

• safe: yes except on tasks with axioms or on tasks with conditional effects when using a LandmarkFactory not supporting them

• preferred operators: yes (if enabled; see pref option)

## Landmark-cut heuristic

`lmcut(transform=no_transform(), cache_estimates=true)`

• cache_estimates (bool): cache heuristic estimates

Language features supported:

• action costs: supported

• conditional effects: not supported

• axioms: not supported

Properties:

• consistent: no

• safe: yes

• preferred operators: no

## Merge-and-shrink heuristic

This heuristic implements the algorithm described in the following paper:

For a more exhaustive description of merge-and-shrink, see the journal paper

Please note that the journal paper describes the "old" theory of label reduction, which has been superseded by the above conference paper and is no longer implemented in Fast Downward.

The following paper describes how to improve the DFP merge strategy with tie-breaking, and presents two new merge strategies (dyn-MIASM and SCC-DFP):

`merge_and_shrink(merge_strategy, shrink_strategy, label_reduction=<none>, prune_unreachable_states=true, prune_irrelevant_states=true, max_states=-1, max_states_before_merge=-1, threshold_before_merge=-1, transform=no_transform(), cache_estimates=true, verbosity=verbose)`
• merge_strategy (MergeStrategy): See detailed documentation for merge strategies. We currently recommend SCC-DFP, which can be achieved using merge_strategy=merge_sccs(order_of_sccs=topological,merge_selector=score_based_filtering(scoring_functions=[goal_relevance,dfp,total_order]))

• shrink_strategy (ShrinkStrategy): See detailed documentation for shrink strategies. We currently recommend non-greedy shrink_bisimulation, which can be achieved using shrink_strategy=shrink_bisimulation(greedy=false)

• label_reduction (LabelReduction): See detailed documentation for labels. There is currently only one 'option' to use label_reduction, which is label_reduction=exact Also note the interaction with shrink strategies.

• prune_unreachable_states (bool): If true, prune abstract states unreachable from the initial state.

• prune_irrelevant_states (bool): If true, prune abstract states from which no goal state can be reached.

• max_states (int [-1, infinity]): maximum transition system size allowed at any time point.

• max_states_before_merge (int [-1, infinity]): maximum transition system size allowed for two transition systems before being merged to form the synchronized product.

• threshold_before_merge (int [-1, infinity]): If a transition system, before being merged, surpasses this soft transition system size limit, the shrink strategy is called to possibly shrink the transition system.

• cache_estimates (bool): cache heuristic estimates

• verbosity ({silent, normal, verbose}): Option to specify the level of verbosity.

• silent: silent: no output during construction, only starting and final statistics

• normal: normal: basic output during construction, starting and final statistics

• verbose: verbose: full output during construction, starting and final statistics

Note: Conditional effects are supported directly. Note, however, that for tasks that are not factored (in the sense of the JACM 2014 merge-and-shrink paper), the atomic transition systems on which merge-and-shrink heuristics are based are nondeterministic, which can lead to poor heuristics even when only perfect shrinking is performed.

Note: A currently recommended good configuration uses bisimulation based shrinking, the merge strategy SCC-DFP, and the appropriate label reduction setting (max_states has been altered to be between 10000 and 200000 in the literature):

`merge_and_shrink(shrink_strategy=shrink_bisimulation(greedy=false),merge_strategy=merge_sccs(order_of_sccs=topological,merge_selector=score_based_filtering(scoring_functions=[goal_relevance,dfp,total_order])),label_reduction=exact(before_shrinking=true,before_merging=false),max_states=50000,threshold_before_merge=1)`

Note that for versions of Fast Downward prior to 2016-08-19, the syntax differs. See the recommendation in the file merge_and_shrink_heuristic.cc for an example configuration.

Language features supported:

• action costs: supported

• conditional effects: supported (but see note)

• axioms: not supported

Properties:

• consistent: yes

• safe: yes

• preferred operators: no

## Operator counting heuristic

An operator counting heuristic solves a linear program (LP) in each state. The LP has one variable Count_o for each operator o, representing how often that operator is used in a plan. Operator counting constraints are linear constraints over these variables that are guaranteed to have a solution with Count_o = occurrences(o, pi) for every plan pi. Minimizing the total cost of operators subject to some operator counting constraints yields an admissible heuristic. For details, see

• Florian Pommerening, Gabriele Roeger, Malte Helmert and Blai Bonet.
LP-based Heuristics for Cost-optimal Planning.
In Proceedings of the Twenty-Fourth International Conference on Automated Planning and Scheduling (ICAPS 2014), pp. 226-234. AAAI Press 2014.

`operatorcounting(constraint_generators, lpsolver=CPLEX, transform=no_transform(), cache_estimates=true)`
• constraint_generators (list of ConstraintGenerator): methods that generate constraints over operator counting variables

• lpsolver ({CLP, CPLEX, GUROBI}): external solver that should be used to solve linear programs

• CLP: default LP solver shipped with the COIN library

• CPLEX: commercial solver by IBM

• GUROBI: commercial solver

• cache_estimates (bool): cache heuristic estimates

Note: to use an LP solver, you must build the planner with LP support. See LPBuildInstructions.

Language features supported:

• action costs: supported

• conditional effects: not supported (the heuristic supports them in theory, but none of the currently implemented constraint generators do)

• axioms: not supported (the heuristic supports them in theory, but none of the currently implemented constraint generators do)

Properties:

• consistent: yes, if all constraint generators represent consistent heuristics

• safe: yes

• preferred operators: no
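As a usage sketch (hedged: `lmcut_constraints()` and `state_equation_constraints()` are assumed to be among the constraint generators available in your build):

```
--search "astar(operatorcounting([lmcut_constraints(), state_equation_constraints()]))"
```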

## Pattern database heuristic

TODO

`pdb(pattern=greedy(), transform=no_transform(), cache_estimates=true)`
• pattern (PatternGenerator): pattern generation method

• cache_estimates (bool): cache heuristic estimates

Language features supported:

• action costs: supported

• conditional effects: not supported

• axioms: not supported

Properties:

• consistent: yes

• safe: yes

• preferred operators: no

## Sample-based potential heuristics

Maximum over multiple potential heuristics optimized for samples. The algorithm is based on

`sample_based_potentials(num_heuristics=1, num_samples=1000, max_potential=1e8, lpsolver=CPLEX, transform=no_transform(), cache_estimates=true, random_seed=-1)`
• num_heuristics (int [0, infinity]): number of potential heuristics

• num_samples (int [0, infinity]): Number of states to sample

• max_potential (double [0.0, infinity]): Bound potentials by this number

• lpsolver ({CLP, CPLEX, GUROBI}): external solver that should be used to solve linear programs

• CLP: default LP solver shipped with the COIN library

• CPLEX: commercial solver by IBM

• GUROBI: commercial solver

• cache_estimates (bool): cache heuristic estimates

• random_seed (int [-1, infinity]): Set to -1 (default) to use the global random number generator. Set to any other value to use a local random number generator with the given seed.

Note: to use an LP solver, you must build the planner with LP support. See LPBuildInstructions.

Language features supported:

• action costs: supported

• conditional effects: not supported

• axioms: not supported

Properties:

• consistent: yes

• safe: yes

• preferred operators: no

## Zero-One PDB

The zero/one pattern database heuristic is simply the sum of the heuristic values of all patterns in the pattern collection. In contrast to the canonical pattern database heuristic, there is no need to check for additive subsets, because the additivity of the patterns is guaranteed by action cost partitioning. This heuristic uses the simplest form of action cost partitioning: if an operator affects more than one pattern in the collection, its cost is counted in full for one pattern (the first one it affects) and set to zero for all other affected patterns.
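The cost partitioning step can be sketched as follows (a toy illustration with hypothetical operator and pattern representations; "first" here means first in the iteration order over the collection):

```python
def zero_one_partition(operators, patterns):
    """Sketch of zero-one action cost partitioning: each operator's full cost
    is assigned to the first pattern it affects; every other affected pattern
    sees cost 0. `operators` maps an operator name to (cost, affected_variables);
    a pattern is a tuple of variables. Returns cost[pattern][operator]."""
    partitioned = {p: {} for p in patterns}
    for op, (cost, affected) in operators.items():
        first = True
        for pattern in patterns:
            if any(v in pattern for v in affected):
                partitioned[pattern][op] = cost if first else 0
                first = False
    return partitioned

operators = {"o1": (5, {0, 1}), "o2": (3, {1})}
patterns = [(0,), (1,)]
print(zero_one_partition(operators, patterns))
# o1 affects both patterns: its full cost 5 goes to (0,), and (1,) sees cost 0.
```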

`zopdbs(patterns=systematic(1), transform=no_transform(), cache_estimates=true)`
• patterns (PatternCollectionGenerator): pattern generation method

• cache_estimates (bool): cache heuristic estimates

Language features supported:

• action costs: supported

• conditional effects: not supported

• axioms: not supported

Properties: