Subset sum problem
The subset sum problem is a decision problem in computer science. In its most general formulation, there is a multiset S of integers and a target sum T, and the question is to decide whether any subset of the integers sums to precisely T.[1] The problem is known to be NP-complete. Moreover, some restricted variants of it are NP-complete too, for example:[1]
- The variant in which all input integers are positive.
- The variant in which input integers may be positive or negative, and T = 0. For example, given the set {−7, −3, −2, 9000, 5, 8}, the answer is yes because the subset {−3, −2, 5} sums to zero.
- The variant in which all input integers are positive, and the target sum is exactly half the sum of all input numbers, i.e., T = sum(S)/2. This variant is known as the partition problem.
Subset sum can also be regarded as an optimization problem: find a subset whose sum is as close as possible to T. It is NP-hard, but there are several algorithms that can solve it reasonably fast in practice.
The knapsack problem is a generalization of subset-sum.[2]
Computational hardness
The run-time complexity of the subset sum problem depends on two parameters: n - the number of input integers, and L - the precision of the problem, stated as the number of binary place values that it takes to state the problem.
- If n (the number of integers) is a small fixed number, then an exhaustive search for the solution is practical.
- If L (the number of binary digits) is a small fixed number, then there are dynamic programming algorithms that can solve it exactly.
When both n and L are large, the problem is NP-hard. The complexity of the best known algorithms is exponential in the smaller of the two parameters n and L. The following variants are known to be NP-hard:
- All input integers are positive (and the target-sum T is a part of the input). This can be proved by a direct reduction from 3SAT.[3][4]
- The input integers can be both positive and negative, and the target-sum T = 0. This can be proved by reduction from the variant with positive integers. Denote that variant by SubsetSumPositive and the current variant by SubsetSumZero. Given an instance (S, T) of SubsetSumPositive, construct an instance of SubsetSumZero by adding a single element with value -T. Given a solution to the SubsetSumPositive instance, adding the -T yields a solution to the SubsetSumZero instance. Conversely, given a solution to the SubsetSumZero instance, it must contain the -T (since all integers in S are positive), so to get a sum of zero, it must also contain a subset of S with a sum of +T, which is a solution of the SubsetSumPositive instance. (A small code sketch of this reduction appears after this list.)
- The input integers are positive, and T = sum(S)/2. This can also be proved by reduction from the general variant; see partition problem.
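To make the reduction from SubsetSumPositive to SubsetSumZero described above concrete, the following Python sketch builds the transformed instance and checks both instances by brute force on a toy example. The function names and the brute-force checker are illustrative only and are not part of the cited proofs.

from itertools import combinations

def to_zero_instance(S, T):
    # Reduction described above: append a single element with value -T.
    return list(S) + [-T]

def has_subset_with_sum(nums, target):
    # Naive exponential check over all nonempty subsets (illustration only).
    return any(sum(c) == target
               for r in range(1, len(nums) + 1)
               for c in combinations(nums, r))

# Toy example: {1, 4, 9} with target 13 maps to {1, 4, 9, -13} with target 0.
S, T = [1, 4, 9], 13
assert has_subset_with_sum(S, T) == has_subset_with_sum(to_zero_instance(S, T), 0)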
Exponential time algorithms
There are several ways to solve subset sum in time exponential in n.[5]
Inclusion-Exclusion
The most naïve algorithm would be to cycle through all subsets of n numbers and, for every one of them, check if the subset sums to the right number. The running time is of order O(2^n · n), since there are 2^n subsets and, to check each subset, we need to sum at most n elements.
The algorithm can be implemented by depth-first search of a binary tree: each level in the tree corresponds to an input number; the left branch corresponds to excluding the number from the set, and the right branch corresponds to including the number (hence the name Inclusion-Exclusion). The memory required is O(n). The run-time can be improved by several heuristics (a sketch combining them appears after the list):[5]
- Process the input numbers in descending order.
- If the sum of the integers included in a given node exceeds the sum of the best subset found so far, the node is pruned.
- If the sum of the integers included in a given node, plus the sum of all remaining integers, is less than the sum of the best subset found so far, the node is pruned.
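Below is a minimal Python sketch of this depth-first search with the heuristics above, for the optimization variant (largest subset sum not exceeding T) with positive inputs. It is a sketch under those assumptions, not the exact procedure of [5]; in particular, it prunes branches whose partial sum already exceeds T, a common strengthening of the second heuristic.

def best_subset_sum(nums, T):
    """Depth-first inclusion/exclusion search for the largest subset sum <= T.
    Assumes all inputs are positive."""
    nums = sorted(nums, reverse=True)          # heuristic: descending order
    suffix = [0] * (len(nums) + 1)
    for i in range(len(nums) - 1, -1, -1):     # suffix[i] = sum of nums[i:]
        suffix[i] = suffix[i + 1] + nums[i]

    best = 0
    def dfs(i, current):
        nonlocal best
        if current > best:
            best = current
        if i == len(nums) or best == T:
            return
        if current + suffix[i] <= best:        # cannot beat the best subset found so far
            return
        if current + nums[i] <= T:             # include nums[i]; skip sums exceeding T
            dfs(i + 1, current + nums[i])
        dfs(i + 1, current)                    # exclude nums[i]

    dfs(0, 0)
    return best

# Example: best_subset_sum([7, 9, 13], 17) returns 16 (the subset {7, 9}).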
Horowitz and Sahni
Horowitz and Sahni[6] published a faster exponential-time algorithm, which runs in time O(2^(n/2) · (n/2)), but requires much more space - O(2^(n/2)). The algorithm splits arbitrarily the n elements into two sets of n/2 each. For each of these two sets, it stores a list of the sums of all 2^(n/2) possible subsets of its elements. Each of these two lists is then sorted. Using a standard comparison sorting algorithm for this step would take time O(2^(n/2) · n). However, given a sorted list of subset sums for k elements, the list can be expanded to two sorted lists with the introduction of a (k + 1)st element, and these two sorted lists can be merged in time O(2^k). Thus, each list can be generated in sorted form in time O(2^(n/2)). Given the two sorted lists, the algorithm can check if an element of the first array and an element of the second array sum up to T in time O(2^(n/2)). To do that, the algorithm passes through the first array in decreasing order (starting at the largest element) and the second array in increasing order (starting at the smallest element). Whenever the sum of the current element in the first array and the current element in the second array is more than T, the algorithm moves to the next element in the first array. If it is less than T, the algorithm moves to the next element in the second array. If two elements that sum to T are found, it stops.
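The following is a minimal Python sketch of this meet-in-the-middle scan for the decision variant with target T. For brevity it builds and sorts each half's subset sums directly (taking O(2^(n/2) · n) time) rather than using the incremental merge described above.

def horowitz_sahni(nums, T):
    """Meet-in-the-middle decision procedure: does some subset of nums sum to T?"""
    half = len(nums) // 2
    left, right = nums[:half], nums[half:]

    def subset_sums(arr):
        sums = [0]                      # includes the empty subset (only matters for T = 0)
        for x in arr:                   # double the list: exclude or include x
            sums += [s + x for s in sums]
        return sums

    A = sorted(subset_sums(left))       # scanned in decreasing order
    B = sorted(subset_sums(right))      # scanned in increasing order
    i, j = len(A) - 1, 0
    while i >= 0 and j < len(B):
        s = A[i] + B[j]
        if s == T:
            return True
        if s > T:
            i -= 1                      # move to the next element of the first array
        else:
            j += 1                      # move to the next element of the second array
    return False

# Example: horowitz_sahni([3, 34, 4, 12, 5, 2], 9) returns True (4 + 5 = 9).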
Schroeppel and Shamir
Schroeppel and Shamir[7] presented an algorithm based on Horowitz and Sahni that requires similar runtime - O(2^(n/2) · (n/4)) - but much less space - O(2^(n/4)). Rather than generating all subsets of n/2 elements in advance, they partition the elements into 4 sets of n/4 elements each, and generate subsets of n/2 elements dynamically using a min heap.
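A compact Python sketch of this idea follows: the four quarter lists are precomputed, and two heaps stream the sums of the two halves in increasing and decreasing order respectively, so only quarter-sized lists and heaps are ever stored. Names and structure are illustrative, not taken from [7].

import heapq

def schroeppel_shamir(nums, T):
    """Decision variant: does some subset of nums sum to T?
    Space stays near O(2^(n/4)) because only quarter subset-sum lists are stored."""
    def subset_sums(arr):
        sums = [0]                               # includes the empty subset (matters only for T = 0)
        for x in arr:
            sums += [s + x for s in sums]
        return sums

    q = len(nums) // 4
    A, B = subset_sums(nums[:q]), sorted(subset_sums(nums[q:2 * q]))
    C, D = subset_sums(nums[2 * q:3 * q]), sorted(subset_sums(nums[3 * q:]))

    # Min-heap: sums a + b of the first half, produced in increasing order.
    lo = [(a + B[0], i, 0) for i, a in enumerate(A)]
    heapq.heapify(lo)
    # Max-heap (negated): sums c + d of the second half, produced in decreasing order.
    hi = [(-(c + D[-1]), i, len(D) - 1) for i, c in enumerate(C)]
    heapq.heapify(hi)

    while lo and hi:
        s = lo[0][0] + (-hi[0][0])
        if s == T:
            return True
        if s < T:
            _, i, j = heapq.heappop(lo)          # discard the smallest first-half sum
            if j + 1 < len(B):
                heapq.heappush(lo, (A[i] + B[j + 1], i, j + 1))
        else:
            _, i, j = heapq.heappop(hi)          # discard the largest second-half sum
            if j - 1 >= 0:
                heapq.heappush(hi, (-(C[i] + D[j - 1]), i, j - 1))
    return False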
Due to space requirements, the HS algorithm is practical for up to about 50 integers, and the SS algorithm is practical for up to 100 integers.[5]
Howgrave-Graham and Joux
Howgrave-Graham and Joux[8] presented a probabilistic algorithm that runs faster than all previous ones - in time O(2^(0.337n)). It solves only the decision problem, cannot prove there is no solution for a given sum, and does not return the subset whose sum is closest to T.
Pseudo-polynomial time dynamic programming solution
The problem can be solved in pseudo-polynomial time using dynamic programming. Suppose the sequence is
x1, x2, ..., xN
sorted in increasing order, and we wish to determine if there is a nonempty subset which sums to zero. Define the boolean-valued function Q(i, s) to be the value (true or false) of
- "there is a nonempty subset of x1, ..., xi which sums to s."
Thus, the solution to the problem "Given a set of integers, is there a non-empty subset whose sum is zero?" is the value of Q(N, 0).
Let A be the sum of the negative values and B the sum of the positive values. Clearly, Q(i, s) = false if s < A or s > B. So these values do not need to be stored or computed.
Create an array to hold the values Q(i, s) for 1 ≤ i ≤ N and A ≤ s ≤ B.
The array can now be filled in using a simple recursion. Initially, for A ≤ s ≤ B, set
- Q(1, s) := (x1 == s)
where (x1 == s) is a boolean function that returns true if x1 is equal to s, false otherwise.
Then, for i = 2, ..., N, set
- Q(i, s) := Q(i − 1, s) or (xi == s) or Q(i − 1, s − xi), for A ≤ s ≤ B.
For each assignment, the values of Q on the right side are already known, either because they were stored in the table for the previous value of i or because Q(i − 1, s − xi) = false if s − xi < A or s − xi > B. Therefore, the total number of arithmetic operations is O(N(B − A)). For example, if all the values are O(N^k) for some k, then the time required is O(N^(k+2)).
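A direct Python transcription of this recursion is sketched below; it keeps a single rolling row of Q indexed by s − A, which preserves the O(N(B − A)) operation count.

def has_zero_subset(xs):
    """Pseudo-polynomial DP from the text: Q(i, s) is true iff some nonempty
    subset of x1..xi sums to s, for A <= s <= B. Returns Q(N, 0)."""
    A = sum(x for x in xs if x < 0)            # most negative reachable sum
    B = sum(x for x in xs if x > 0)            # most positive reachable sum
    width = B - A + 1
    Q = [False] * width                        # Q[s - A] holds Q(i, s) for the current i
    for i, x in enumerate(xs):
        prev = Q[:] if i else [False] * width  # Q(i - 1, .), all false when i = 1
        for s in range(A, B + 1):
            reach = prev[s - A] or (x == s)
            if A <= s - x <= B:                # Q(i - 1, s - x) is false outside [A, B]
                reach = reach or prev[s - x - A]
            Q[s - A] = reach
    return Q[-A] if xs else False

# Example: has_zero_subset([-3, -2, 5, 8]) returns True, since -3 - 2 + 5 = 0.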
This algorithm is easily modified to return the subset with sum 0 if there is one.
The dynamic programming solution has runtime of O(sN) where s is the sum we want to find in a set of N numbers. This solution does not count as polynomial time in complexity theory because s is not polynomial in the size of the problem, which is the number of bits used to represent it. This algorithm is polynomial in the values of A and B, which are exponential in their numbers of bits.
For the case that each xi is positive and bounded by a fixed constant C, Pisinger found a linear time algorithm having time complexity O(NC) (note that this is for the version of the problem where the target sum is not necessarily zero, otherwise the problem would be trivial).[9][10] In 2015, Koiliaris and Xu found a deterministic Õ(T√N) algorithm for the subset sum problem where T is the sum we need to find.[11] In 2017, Bringmann found a randomized Õ(T + N) time algorithm.[12]
In 2014, Curtis and Sanches found a simple recursion highly scalable in SIMD machines having O(N(m − xmin)/p) time and O(N + m − xmin) space, where p is the number of processing elements, m = min(s, x1 + ... + xN − s) and xmin is the smallest input integer.[13] This is the best theoretical parallel complexity known so far.
A comparison of practical results and the solution of hard instances of the SSP is discussed in [14].
Polynomial time approximate algorithm
An approximate version of the subset sum would be: given a set of N numbers x1, x2, ..., xN and a number s, output:
- Yes, if there is a subset that sums up to s.
- No, if there is no subset summing up to a number between (1 − c)s and s for some small c > 0.
- Any answer, if there is a subset summing up to a number between (1 − c)s and s but no subset summing up to s.
If all numbers are non-negative, the approximate subset sum is solvable in time polynomial in N and 1/c.
The solution for approximate subset sum also provides the solution for the original subset sum problem in the case where the numbers are small (again, for non-negative numbers). If any sum of the numbers can be specified with at most P bits, then solving the problem approximately with c = 2^−P is equivalent to solving it exactly. Then, the polynomial time algorithm for approximate subset sum becomes an exact algorithm with running time polynomial in N and 2^P (i.e., exponential in P).
The algorithm for the approximate subset sum problem is as follows:
initialize a list S to contain one element 0.
for each i from 1 to N do
    let T be a list consisting of xi + y, for all y in S
    let U be the union of T and S
    sort U
    make S empty
    let y be the smallest element of U
    add y to S
    for each element z of U in increasing order do
        // Trim the list by eliminating numbers close to one another
        // and throw out elements greater than s.
        if y + cs/N < z ≤ s then
            y = z
            add z to S
if S contains a number between (1 − c)s and s then
    return yes
else
    return no
The algorithm is polynomial time because the lists S, T and U always remain of size polynomial in N and 1/c and, as long as they are of polynomial size, all operations on them can be done in polynomial time. The size of the lists is kept polynomial by the trimming step, in which we only include a number into S if it is greater than the previous one by at least cs/N and not greater than s.
This step ensures that each element in S is smaller than the next one by at least cs/N and that S does not contain elements greater than s. Any list with that property consists of no more than N/c elements.
The algorithm is correct because each step introduces an additive error of at most cs/N and N steps together introduce the error of at most cs.
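The pseudocode above translates into Python roughly as follows; the names xs, s and c mirror the text, and using a set to merge T and S is just one convenient way to form the union.

def approx_subset_sum(xs, s, c):
    """Approximate decision for subset sum with non-negative inputs: returns True
    if the trimmed list of sums contains a value in [(1 - c)*s, s]."""
    n = len(xs)
    S = [0]
    for x in xs:
        T = [x + y for y in S]                   # all previous sums shifted by x
        U = sorted(set(T) | set(S))              # union of T and S, sorted
        trimmed = []
        last = None
        for z in U:
            if z > s:
                break                            # throw out elements greater than s
            # keep z only if it is more than c*s/n above the last kept value
            if last is None or z > last + c * s / n:
                trimmed.append(z)
                last = z
        S = trimmed
    return any((1 - c) * s <= y <= s for y in S)

# Example: approx_subset_sum([1, 2, 3, 101], 5, 0.1) returns True (2 + 3 = 5).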
References
- Kleinberg, Jon; Tardos, Éva (2006). Algorithm Design (2nd ed.). p. 491. ISBN 0-321-37291-3.
- Martello, Silvano; Toth, Paolo (1990). "4 Subset-sum problem". Knapsack problems: Algorithms and computer interpretations. Wiley-Interscience. pp. 105–136. ISBN 0-471-92420-2. MR 1086874.
- Goodrich, Michael. "More NP complete and NP hard problems" (PDF).
- Informal-CS (2018). "Reduction: 3-CNF SAT to Subset Sum". YouTube.
- Korf, Richard E.; Schreiber, Ethan L.; Moffitt, Michael D. (2014). "Optimal Sequential Multi-Way Number Partitioning" (PDF).
- Horowitz, Ellis; Sahni, Sartaj (1974), "Computing partitions with applications to the knapsack problem" (PDF), Journal of the Association for Computing Machinery, 21 (2): 277–292, doi:10.1145/321812.321823, hdl:1813/5989, MR 0354006
- Schroeppel, Richard; Shamir, Adi (1981-08-01). "A T = O(2^(n/2)), S = O(2^(n/4)) Algorithm for Certain NP-Complete Problems". SIAM Journal on Computing. 10 (3): 456–464. doi:10.1137/0210033. ISSN 0097-5397.
- Howgrave-Graham, Nick; Joux, Antoine (2010). Gilbert, Henri (ed.). "New Generic Algorithms for Hard Knapsacks". Advances in Cryptology – EUROCRYPT 2010. Lecture Notes in Computer Science. Berlin, Heidelberg: Springer: 235–256. doi:10.1007/978-3-642-13190-5_12. ISBN 978-3-642-13190-5.
- http://hjemmesider.diku.dk/~pisinger/codes.html
- Pisinger, David (1999). "Linear Time Algorithms for Knapsack Problems with Bounded Weights". Journal of Algorithms. 33 (1): 1–14.
- Koiliaris, Konstantinos; Xu, Chao (2015-07-08). "A Faster Pseudopolynomial Time Algorithm for Subset Sum". arXiv:1507.02318 [cs.DS].
- Bringmann, Karl (2017). "A near-linear pseudopolynomial time algorithm for subset sum". Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA). Society for Industrial and Applied Mathematics. pp. 1073–1084.
- Curtis, V. V.; Sanches, C. A. A. (January 2016). "An efficient solution to the subset-sum problem on GPU". Concurrency and Computation: Practice and Experience. 28 (1): 95–113. doi:10.1002/cpe.3636.
- Curtis, V. V.; Sanches, C. A. A. (July 2017). "A low-space algorithm for the subset-sum problem on GPU". Computers & Operations Research. 83: 120–124. doi:10.1016/j.cor.2017.02.006.
Further reading
- Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001) [1990]. "35.5: The subset-sum problem". Introduction to Algorithms (2nd ed.). MIT Press and McGraw-Hill. ISBN 0-262-03293-7.
- Michael R. Garey and David S. Johnson (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness. W.H. Freeman. ISBN 0-7167-1045-5. A3.2: SP13, pg.223.