Constraint satisfaction problem

Constraint satisfaction problems (CSPs) are mathematical questions defined as a set of objects whose state must satisfy a number of constraints or limitations. CSPs represent the entities in a problem as a homogeneous collection of finite constraints over variables, which is solved by constraint satisfaction methods. CSPs are the subject of intense research in both artificial intelligence and operations research, since the regularity in their formulation provides a common basis to analyze and solve problems of many seemingly unrelated families. CSPs often exhibit high complexity, requiring a combination of heuristics and combinatorial search methods to be solved in a reasonable time. Constraint programming (CP) is the field of research that specifically focuses on tackling these kinds of problems.[1][2] Additionally, the Boolean satisfiability problem (SAT), satisfiability modulo theories (SMT), mixed integer programming (MIP) and answer set programming (ASP) are all fields of research focusing on the resolution of particular forms of the constraint satisfaction problem.

Examples of problems that can be modeled as a constraint satisfaction problem include:

  • Type inference[3][4]
  • Eight queens puzzle
  • Map coloring problem
  • Sudoku, crosswords, futoshiki, Kakuro (Cross Sums) and many other logic puzzles

These are often provided with tutorials for CP, ASP, Boolean SAT and SMT solvers. In the general case, constraint problems can be much harder, and may not be expressible in some of these simpler systems. "Real life" examples include automated planning,[5][6] lexical disambiguation,[7][8] musicology[9] and resource allocation.[10]

The existence of a solution to a CSP can be viewed as a decision problem. This can be decided by finding a solution, or failing to find a solution after exhaustive search (stochastic algorithms typically never reach an exhaustive conclusion, while directed searches often do, on sufficiently small problems). In some cases the CSP might be known to have solutions beforehand, through some other mathematical inference process.

Formal definition

Formally, a constraint satisfaction problem is defined as a triple ⟨X, D, C⟩, where[11]

  • X = {X₁, …, Xₙ} is a set of variables,
  • D = {D₁, …, Dₙ} is a set of their respective domains of values, and
  • C = {C₁, …, Cₘ} is a set of constraints.

Each variable Xᵢ can take on the values in the nonempty domain Dᵢ. Every constraint Cⱼ ∈ C is in turn a pair ⟨tⱼ, Rⱼ⟩, where tⱼ ⊆ X is a subset of k variables and Rⱼ is a k-ary relation on the corresponding subset of domains. An evaluation of the variables is a function from a subset of variables to a particular set of values in the corresponding subset of domains. An evaluation v satisfies a constraint ⟨tⱼ, Rⱼ⟩ if the values assigned to the variables tⱼ satisfy the relation Rⱼ.

An evaluation is consistent if it does not violate any of the constraints. An evaluation is complete if it includes all variables. An evaluation is a solution if it is consistent and complete; such an evaluation is said to solve the constraint satisfaction problem.
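
As an illustration, the following Python fragment is a minimal sketch (not taken from any particular library) of how the triple ⟨X, D, C⟩ and the notions of consistent and complete evaluations might be represented; the variable names and the tiny example problem are illustrative assumptions.

```python
# A minimal sketch of a CSP as a triple <X, D, C>; names are illustrative only.

X = ["A", "B", "C"]                              # variables
D = {"A": {1, 2}, "B": {1, 2}, "C": {1, 2}}      # domains
# Each constraint is a pair (scope, relation); here the relation is a predicate.
C = [(("A", "B"), lambda a, b: a != b),
     (("B", "C"), lambda b, c: b != c)]

def consistent(evaluation):
    """True if no constraint whose scope is fully assigned is violated."""
    for scope, rel in C:
        if all(v in evaluation for v in scope):
            if not rel(*(evaluation[v] for v in scope)):
                return False
    return True

def is_solution(evaluation):
    """A solution is an evaluation that is consistent and complete."""
    return set(evaluation) == set(X) and consistent(evaluation)

print(consistent({"A": 1, "B": 2}))            # True: partial, no violated constraint
print(is_solution({"A": 1, "B": 2, "C": 1}))   # True: complete and consistent
```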

Resolution

Constraint satisfaction problems on finite domains are typically solved using a form of search. The most commonly used techniques are variants of backtracking, constraint propagation, and local search. These techniques are also often combined, as in the VLNS (very large neighborhood search) method, and current research involves other technologies such as linear programming.[12]

Backtracking is a recursive algorithm. It maintains a partial assignment of the variables. Initially, all variables are unassigned. At each step, a variable is chosen, and all possible values are assigned to it in turn. For each value, the consistency of the partial assignment with the constraints is checked; in case of consistency, a recursive call is performed. When all values have been tried, the algorithm backtracks. In this basic backtracking algorithm, consistency is defined as the satisfaction of all constraints whose variables are all assigned. Several variants of backtracking exist. Backmarking improves the efficiency of checking consistency. Backjumping allows saving part of the search by backtracking "more than one variable" in some cases. Constraint learning infers and saves new constraints that can be later used to avoid part of the search. Look-ahead is also often used in backtracking to attempt to foresee the effects of choosing a variable or a value, thus sometimes determining in advance when a subproblem is satisfiable or unsatisfiable.
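
The following is a minimal sketch of this basic backtracking scheme, reusing the X, D, C and consistent() names from the previous sketch; it deliberately omits backmarking, backjumping, constraint learning and look-ahead, and its variable and value ordering are arbitrary.

```python
# Minimal recursive backtracking sketch (illustrative only).

def backtrack(assignment):
    if len(assignment) == len(X):                        # complete and consistent: a solution
        return assignment
    var = next(v for v in X if v not in assignment)      # pick an unassigned variable
    for value in D[var]:                                 # try every value in turn
        assignment[var] = value
        if consistent(assignment):                       # only recurse on consistent partials
            result = backtrack(assignment)
            if result is not None:
                return result
        del assignment[var]                              # undo the assignment
    return None                                          # every value failed: backtrack further

print(backtrack({}))   # e.g. {'A': 1, 'B': 2, 'C': 1}
```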

Constraint propagation techniques are methods used to modify a constraint satisfaction problem. More precisely, they are methods that enforce a form of local consistency, i.e., conditions related to the consistency of a group of variables and/or constraints. Constraint propagation has various uses. First, it turns a problem into one that is equivalent but is usually simpler to solve. Second, it may prove satisfiability or unsatisfiability of a problem. This is not guaranteed to happen in general; however, it always happens for some forms of constraint propagation and/or for certain kinds of problems. The best-known and most used forms of local consistency are arc consistency, hyper-arc consistency, and path consistency. The most popular constraint propagation method is the AC-3 algorithm, which enforces arc consistency.
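
A compact sketch of AC-3 restricted to binary constraints is shown below; the domains/constraints data layout is an assumption made for the example, not a fixed interface.

```python
from collections import deque

# Sketch of AC-3 over binary constraints. `constraints[(x, y)]` is a predicate
# saying whether a pair of values is allowed; `domains` maps variables to sets.

def revise(domains, constraints, x, y):
    """Remove values of x that have no support in y; return True if any were removed."""
    removed = False
    for vx in set(domains[x]):
        if not any(constraints[(x, y)](vx, vy) for vy in domains[y]):
            domains[x].discard(vx)
            removed = True
    return removed

def ac3(domains, constraints):
    """Enforce arc consistency; return False if some domain becomes empty."""
    queue = deque(constraints.keys())                    # all arcs (x, y)
    while queue:
        x, y = queue.popleft()
        if revise(domains, constraints, x, y):
            if not domains[x]:
                return False                             # unsatisfiability detected
            # re-examine arcs pointing into x
            queue.extend((z, x) for (z, w) in constraints if w == x and z != y)
    return True

# Example layout: arcs in both directions for the constraint A != B.
domains = {"A": {1, 2}, "B": {2}}
constraints = {("A", "B"): lambda a, b: a != b,
               ("B", "A"): lambda b, a: b != a}
print(ac3(domains, constraints), domains)    # True {'A': {1}, 'B': {2}}
```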

Local search methods are incomplete satisfiability algorithms. They may find a solution to a problem, but they may fail even if the problem is satisfiable. They work by iteratively improving a complete assignment of the variables. At each step, a small number of variables are changed in value, with the overall aim of increasing the number of constraints satisfied by this assignment. The min-conflicts algorithm is a local search algorithm specific to CSPs and based on that principle. In practice, local search appears to work well when these changes are also affected by random choices. An integration of systematic (backtracking) search with local search has been developed, leading to hybrid algorithms.
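
Below is a minimal min-conflicts sketch in the same illustrative style as the earlier fragments (reusing X, D and C); the step limit and tie-breaking are arbitrary choices, and like any local search it may fail even on satisfiable instances.

```python
import random

# Minimal min-conflicts sketch (an incomplete method).

def conflicts(assignment, var, value):
    """Number of constraints involving var that are violated if var takes value."""
    trial = dict(assignment, **{var: value})
    return sum(1 for scope, rel in C
               if var in scope and all(v in trial for v in scope)
               and not rel(*(trial[v] for v in scope)))

def min_conflicts(max_steps=1000):
    assignment = {v: random.choice(list(D[v])) for v in X}     # random complete start
    for _ in range(max_steps):
        conflicted = [v for v in X if conflicts(assignment, v, assignment[v]) > 0]
        if not conflicted:
            return assignment                                   # all constraints satisfied
        var = random.choice(conflicted)                         # pick a conflicted variable
        # move it to the value that minimizes its conflicts
        assignment[var] = min(D[var], key=lambda val: conflicts(assignment, var, val))
    return None                                                 # gave up (incomplete method)
```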

Theoretical aspects

Decision problems

CSPs are also studied in computational complexity theory and finite model theory. An important question is whether for each set of relations, the set of all CSPs that can be represented using only relations chosen from that set is either in P or NP-complete. If such a dichotomy theorem is true, then CSPs provide one of the largest known subsets of NP which avoids NP-intermediate problems, whose existence was demonstrated by Ladner's theorem under the assumption that P ≠ NP. Schaefer's dichotomy theorem handles the case when all the available relations are Boolean operators, that is, for domain size 2. Schaefer's dichotomy theorem was recently generalized to a larger class of relations.[13]

Most classes of CSPs that are known to be tractable are those where the hypergraph of constraints has bounded treewidth (and there are no restrictions on the set of constraint relations), or where the constraints have arbitrary form but there exist essentially non-unary polymorphisms of the set of constraint relations.

Every CSP can also be considered as a conjunctive query containment problem.[14]

Function problems

A similar situation exists between the functional classes FP and #P. By a generalization of Ladner's theorem, there are also problems that are neither in FP nor #P-complete as long as FP ≠ #P. As in the decision case, a problem in #CSP is defined by a set of relations. Each problem takes a Boolean formula as input, and the task is to compute the number of satisfying assignments. This can be further generalized by using larger domain sizes, attaching a weight to each satisfying assignment, and computing the sum of these weights. It is known that any complex-weighted #CSP problem is either in FP or #P-hard.[15]
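
As a concrete illustration of weighted counting, the brute-force sketch below enumerates all assignments of a small Boolean formula, counts the satisfying ones, and also sums an arbitrarily chosen weight over them; the formula and the weights are made up for the example.

```python
from itertools import product

# Brute-force weighted counting for the formula (x or y) and (not x or z).
variables = ["x", "y", "z"]
formula = lambda a: (a["x"] or a["y"]) and (not a["x"] or a["z"])
weight = lambda a: 2.0 if a["z"] else 1.0      # arbitrary per-assignment weight

count, total = 0, 0.0
for values in product([False, True], repeat=len(variables)):
    a = dict(zip(variables, values))
    if formula(a):
        count += 1                             # ordinary #SAT counting
        total += weight(a)                     # weighted #CSP-style sum
print(count, total)                            # 4 satisfying assignments, weighted sum 7.0
```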

Variants

The classic model of the constraint satisfaction problem defines a model of static, inflexible constraints. This rigid model is a shortcoming that makes it difficult to represent problems easily.[16] Several modifications of the basic CSP definition have been proposed to adapt the model to a wide variety of problems.

Dynamic CSPs

Dynamic CSPs[17] (DCSPs) are useful when the original formulation of a problem is altered in some way, typically because the set of constraints to consider evolves because of the environment.[18] DCSPs are viewed as a sequence of static CSPs, each one a transformation of the previous one in which variables and constraints can be added (restriction) or removed (relaxation). Information found in the initial formulations of the problem can be used to refine the next ones. Solving methods can be classified according to the way in which information is transferred:

  • Oracles: the solutions found to previous CSPs in the sequence are used as heuristics to guide the resolution of the current CSP from scratch.
  • Local repair: each CSP is calculated starting from the partial solution of the previous one and repairing the inconsistent constraints with local search (a short sketch of this idea follows the list).
  • Constraint recording: new constraints are defined in each stage of the search to represent the learning of inconsistent groups of decisions. Those constraints are carried over to the new CSP problems.
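
The local-repair idea can be sketched as follows; solve is a hypothetical local-search routine (for example a min-conflicts style solver) that accepts an optional initial assignment, and both its name and signature are assumptions made for the illustration.

```python
# Sketch of solving a sequence of related CSPs with local repair.
# `solve(csp, start=None)` is a hypothetical solver passed in by the caller.

def solve_sequence(csps, solve):
    solutions = []
    previous = None
    for csp in csps:
        if previous is None:
            solution = solve(csp)                      # first CSP: solve from scratch
        else:
            solution = solve(csp, start=previous)      # local repair: reuse the old solution
        solutions.append(solution)
        previous = solution or previous                # keep the last usable solution as a heuristic
    return solutions
```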

Flexible CSPs

Classic CSPs treat constraints as hard, meaning that they are imperative (each solution must satisfy all of them) and inflexible (in the sense that they must be completely satisfied or else they are completely violated). Flexible CSPs relax those assumptions, partially relaxing the constraints and allowing the solution to not comply with all of them. This is similar to preferences in preference-based planning. Some types of flexible CSPs include:

  • MAX-CSP, where a number of constraints are allowed to be violated, and the quality of a solution is measured by the number of satisfied constraints.
  • Weighted CSP, a MAX-CSP in which each violation of a constraint is weighted according to a predefined preference. Thus satisfying a constraint with higher weight is preferred (a short sketch of both scoring schemes follows this list).
  • Fuzzy CSPs model constraints as fuzzy relations in which the satisfaction of a constraint is a continuous function of its variables' values, going from fully satisfied to fully violated.
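
The difference between the MAX-CSP and weighted CSP scoring schemes can be illustrated with the following sketch; the constraint set, weights and helper names are all made up for the example.

```python
# Constraints as (scope, relation, weight) triples; values are illustrative.
weighted_constraints = [
    (("A", "B"), lambda a, b: a != b, 5.0),
    (("B", "C"), lambda b, c: b != c, 1.0),
]

def satisfied(assignment, scope, rel):
    return rel(*(assignment[v] for v in scope))

def max_csp_score(assignment):
    """MAX-CSP: quality = number of satisfied constraints."""
    return sum(1 for scope, rel, _ in weighted_constraints
               if satisfied(assignment, scope, rel))

def weighted_cost(assignment):
    """Weighted CSP: cost = total weight of violated constraints."""
    return sum(w for scope, rel, w in weighted_constraints
               if not satisfied(assignment, scope, rel))

a = {"A": 1, "B": 1, "C": 2}
print(max_csp_score(a), weighted_cost(a))   # 1 constraint satisfied, cost 5.0
```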

Decentralized CSPs

In decentralized CSPs,[19] each constraint variable is thought of as having a separate geographic location. Strong constraints are placed on information exchange between variables, requiring the use of fully distributed algorithms to solve the constraint satisfaction problem.

References

  1. Lecoutre, Christophe (2013). Constraint Networks: Techniques and Algorithms. Wiley. p. 26. ISBN 978-1-118-61791-5.
  2. "Constraints – incl. option to publish open access". springer.com. Retrieved 2019-10-03.
  3. Chandra, Satish, et al. "Type inference for static compilation of JavaScript." ACM SIGPLAN Notices 51.10 (2016): 410-429.
  4. Jim, Trevor, and Jens Palsberg. "Type inference in systems of recursive types with subtyping." Available on authors’ web page (1999).
  5. Malik Ghallab; Dana Nau; Paolo Traverso (21 May 2004). Automated Planning: Theory and Practice. Elsevier. pp. 1–. ISBN 978-0-08-049051-9.
  6. Dynamic Flexible Constraint Satisfaction and Its Application to AI Planning, Archived 2009-02-06 at the Wayback Machine Ian Miguel – slides.
  7. Demetriou, George C. "Lexical disambiguation using constraint handling in Prolog (CHIP)." Proceedings of the sixth conference on European chapter of the Association for Computational Linguistics. Association for Computational Linguistics, 1993.
  8. MacDonald, Maryellen C., and Mark S. Seidenberg. "Constraint satisfaction accounts of lexical and sentence comprehension." Handbook of Psycholinguistics (Second Edition). 2006. 581–611.
  9. Mauricio Toro, Carlos Agon, Camilo Rueda, Gerard Assayag. "GELISP: A FRAMEWORK TO REPRESENT MUSICAL CONSTRAINT SATISFACTION PROBLEMS AND SEARCH STRATEGIES." Journal of Theoretical and Applied Information Technology 86 (2). 2016. 327–331.
  10. Modi, Pragnesh Jay, et al. "A dynamic distributed constraint satisfaction approach to resource allocation." International Conference on Principles and Practice of Constraint Programming. Springer, Berlin, Heidelberg, 2001.
  11. Stuart Jonathan Russell; Peter Norvig (2010). Artificial Intelligence: A Modern Approach. Prentice Hall. p. Chapter 6. ISBN 9780136042594.
  12. Milano, Michela; Van Hentenryck, Pascal, eds. (2011). Hybrid Optimization: The Ten Years of CPAIOR. International Conference on Integration of AI and OR Techniques in Constraint Programming for Combinatorial Optimisation Problems. New York: Springer. ISBN 9781441916440. OCLC 695387020.
  13. Bodirsky, Manuel; Pinsker, Michael (2011). "Schaefer's theorem for graphs". Proceedings of the 43rd Annual Symposium on Theory of Computing (STOC '11). Association for Computing Machinery. pp. 655–664. arXiv:1011.2894. Bibcode:2010arXiv1011.2894B. doi:10.1145/1993636.1993724. ISBN 978-1-4503-0691-1. S2CID 47097319.
  14. Kolaitis, Phokion G.; Vardi, Moshe Y. (2000). "Conjunctive-Query Containment and Constraint Satisfaction". Journal of Computer and System Sciences. 61 (2): 302–332. doi:10.1006/jcss.2000.1713.
  15. Cai, Jin-Yi; Chen, Xi (2012). Complexity of counting CSP with complex weights. pp. 909–920. arXiv:1111.2384. doi:10.1145/2213977.2214059. ISBN 978-1-4503-1245-5. S2CID 53245129.
  16. Miguel, Ian (July 2001). Dynamic Flexible Constraint Satisfaction and its Application to AI Planning (Ph.D. thesis). University of Edinburgh School of Informatics. CiteSeerX 10.1.1.9.6733. hdl:1842/326.
  17. Dechter, R. and Dechter, A., Belief Maintenance in Dynamic Constraint Networks Archived 2012-11-17 at the Wayback Machine In Proc. of AAAI-88, 37–42.
  18. Solution reuse in dynamic constraint satisfaction problems, Thomas Schiex
  19. Duffy, K.R.; Leith, D.J. (August 2013), "Decentralized Constraint Satisfaction", IEEE/ACM Transactions on Networking, 21 (4): 1298–1308, arXiv:1103.3240, doi:10.1109/TNET.2012.2222923, S2CID 11504393.
