Bin packing problem

In the bin packing problem, items of different volumes must be packed into a finite number of bins or containers each of a fixed given volume in a way that minimizes the number of bins used. In computational complexity theory, it is a combinatorial NP-hard problem.[1] The decision problem (deciding if items will fit into a specified number of bins) is NP-complete.[2]

There are many variations of this problem, such as 2D packing, linear packing, packing by weight, packing by cost, and so on. They have many applications, such as filling up containers, loading trucks with weight capacity constraints, creating file backups in media and technology mapping in field-programmable gate array semiconductor chip design.

The bin packing problem can also be seen as a special case of the cutting stock problem. When the number of bins is restricted to 1 and each item is characterised by both a volume and a value, the problem of maximising the value of items that can fit in the bin is known as the knapsack problem.

Despite the fact that the bin packing problem has an NP-hard computational complexity, optimal solutions to very large instances of the problem can be produced with sophisticated algorithms. In addition, many heuristics have been developed: for example, the first fit algorithm provides a fast but often non-optimal solution, involving placing each item into the first bin in which it will fit. It requires Θ(n log n) time, where n is the number of items to be packed. The algorithm can be made much more effective by first sorting the list of items into decreasing order (sometimes known as the first-fit decreasing algorithm), although this still does not guarantee an optimal solution, and for longer lists may increase the running time of the algorithm. It is known, however, that there always exists at least one ordering of items that allows first-fit to produce an optimal solution.[3]

A variant of bin packing that occurs in practice is when items can share space when packed into a bin. Specifically, a set of items could occupy less space when packed together than the sum of their individual sizes. This variant is known as VM packing[4] since when virtual machines (VMs) are packed in a server, their total memory requirement could decrease due to pages shared by the VMs that need only be stored once. If items can share space in arbitrary ways, the bin packing problem is hard to even approximate. However, if the space sharing fits into a hierarchy, as is the case with memory sharing in virtual machines, the bin packing problem can be efficiently approximated.

Another variant of bin packing of interest in practice is the so-called online bin packing. Here the items of different volume are supposed to arrive sequentially, and the decision maker has to decide whether to select and pack the currently observed item, or else to let it pass. Each decision is without recall. In contrast, offline bin packing allows rearranging the items in the hope of achieving a better packing once additional items arrive. This will of course require additional storage for holding the items to be rearranged.

Formal statement

In Computers and Intractability[5] Garey and Johnson list the bin packing problem under the reference [SR1]. They define its decision variant as follows.

Instance: A finite set I of items, a size s(i) ∈ ℤ⁺ for each i ∈ I, a positive integer bin capacity B, and a positive integer K.
Question: Is there a partition of I into disjoint sets I_1, …, I_K such that the sum of the sizes of the items in each I_j is B or less?

Note that in the literature often an equivalent notation is used, where B = 1 and s(i) ∈ ℚ ∩ (0, 1] for each i ∈ I. Furthermore, research is mostly interested in the optimization variant, which asks for the smallest possible value of K. A solution is optimal if it has minimal K. The K-value of an optimal solution for a set of items I is denoted by OPT(I), or just OPT if the set of items is clear from the context.

A possible integer linear programming formulation of the problem is:

    minimize      K = Σ_{j=1..n} y_j
    subject to    Σ_{i=1..n} s(i)·x_{ij} ≤ B·y_j    for all j ∈ {1, …, n}
                  Σ_{j=1..n} x_{ij} = 1             for all i ∈ {1, …, n}
                  y_j ∈ {0, 1},  x_{ij} ∈ {0, 1}    for all i, j

where y_j = 1 if bin j is used and x_{ij} = 1 if item i is put into bin j.[6]
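
This formulation can be handed directly to an off-the-shelf MILP solver. The following sketch (not taken from the cited sources) expresses it with the PuLP modelling library; the item sizes and the capacity are made-up example data.

    # Sketch of the ILP above using PuLP (assumed available); any MILP solver works the same way.
    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

    sizes = [4, 8, 1, 4, 2, 1]    # hypothetical item sizes s(i)
    B = 10                        # bin capacity
    n = len(sizes)                # at most n bins are ever needed

    prob = LpProblem("bin_packing", LpMinimize)
    # y[j] = 1 if bin j is used, x[i][j] = 1 if item i is placed in bin j
    y = [LpVariable(f"y_{j}", cat=LpBinary) for j in range(n)]
    x = [[LpVariable(f"x_{i}_{j}", cat=LpBinary) for j in range(n)] for i in range(n)]

    prob += lpSum(y)                                    # minimize the number of bins used
    for i in range(n):
        prob += lpSum(x[i][j] for j in range(n)) == 1   # every item goes into exactly one bin
    for j in range(n):
        prob += lpSum(sizes[i] * x[i][j] for i in range(n)) <= B * y[j]   # capacity of bin j

    prob.solve()
    print("bins used:", int(sum(v.value() for v in y)))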

Hardness of bin packing

The bin packing problem is strongly NP-complete.[5] This can be proven by reducing the strongly NP-complete 3-partition problem to bin packing. On the other hand, it is solvable in pseudo-polynomial time for each fixed number of bins K, and solvable in polynomial time for each fixed bin capacity B.[5] Furthermore, a reduction from the partition problem shows that there can be no approximation algorithm with absolute approximation ratio smaller than 3/2 unless P = NP.[7]

In the online version of the bin packing problem, the items arrive one after another and the (irreversible) decision where to place an item has to be made before knowing the next item or even whether there will be another one. Yao[8] proved in 1980 that there can be no online algorithm with an asymptotic competitive ratio smaller than 3/2. Brown[9] and Liang[10] improved this bound to 1.53635. Afterward, this bound was improved to 1.54014 by van Vliet.[11] In 2012, this lower bound was again improved by Balogh, Békési and Galambos[12] to 248/161 ≈ 1.5403.

Approximation algorithms for bin packing

To measure the performance of an approximation algorithm there are two approximation ratios considered in the literature. For a given list of items L, the number A(L) denotes the number of bins used when algorithm A is applied to list L, while OPT(L) denotes the optimum number of bins for this list. The absolute worst-case performance ratio R_A for an algorithm A is defined as

    R_A := inf { r ≥ 1 : A(L)/OPT(L) ≤ r for all lists L }.

On the other hand, the asymptotic worst-case ratio R^∞_A is defined as

    R^∞_A := inf { r ≥ 1 : there exists N > 0 such that A(L)/OPT(L) ≤ r for all lists L with OPT(L) ≥ N }.

Additionally, one can restrict attention to lists in which all items have a size of at most α. For such lists, the bounded-size performance ratios are denoted by R_A(size ≤ α) and R^∞_A(size ≤ α).

Approximation algorithms for bin packing can be classified into two categories. The first category contains heuristics that consider the items in a given order and place them one by one inside the bins. These heuristics are also applicable to the online version of the problem. The other class contains the offline algorithms. It contains heuristics as well, but these modify the given list of items, e.g., by sorting the items by size. Such algorithms are no longer applicable to the online variant of the problem. However, they have an improved approximation guarantee compared to their online counterparts while maintaining the advantage of a small time complexity. This class also contains asymptotic approximation schemes. These algorithms have an approximation guarantee of the form (1 + ε)·OPT(L) + C for some constant C that may depend on 1/ε. For arbitrarily small ε > 0 these algorithms get arbitrarily close to OPT(L). However, this comes at the cost of a (drastically) increased time complexity compared to the heuristic approaches.

Online heuristics

A diverse set of offline and online heuristics for bin packing has been studied by Johnson.[13] He introduced the following two characterizations for online heuristics. An algorithm is an any-fit (AF) algorithm if it fulfills the following property: a new bin is opened for the current item only if it does not fit into any already open bin. An algorithm is an almost-any-fit (AAF) algorithm if it additionally has the property: if a bin is the unique bin with the lowest non-zero level, it is not chosen unless the item does not fit into any other bin with a non-zero level. He proved that each AAF-algorithm has an approximation guarantee of R^∞ = 17/10, meaning that it has an asymptotic approximation ratio of at most 17/10 and that there are lists for which it reaches an asymptotic approximation ratio of at least 17/10.

An online algorithm uses k-bounded space if, for each new item, the number of bins in which it may be packed is at most k.[14] Examples for these algorithms are Next-k-Fit and Harmonic-k.

Algorithm | Approximation guarantee | Worst case list | Time-complexity
Next-fit (NF) | NF(L) ≤ 2·OPT(L)[13] | NF(L) = 2·OPT(L) − 2[13] | O(n)[13]
First-fit (FF) | FF(L) ≤ ⌊1.7·OPT(L)⌋[15] | FF(L) = ⌊1.7·OPT(L)⌋[15] | O(n log n)[13]
Best-fit (BF) | BF(L) ≤ ⌊1.7·OPT(L)⌋[16] | BF(L) = ⌊1.7·OPT(L)⌋[16] | O(n log n)[13]
Worst-Fit (WF) | WF(L) ≤ 2·OPT(L)[13] | WF(L) = 2·OPT(L) − 2[13] | O(n log n)[13]
Almost-Worst-Fit (AWF) | R^∞_AWF ≤ 17/10[13] | R^∞_AWF = 17/10[13] | O(n log n)[13]
Refined-First-Fit (RFF) | R^∞_RFF ≤ 5/3[8] | RFF(L) = (5/3)·OPT(L) (for OPT(L) = 6k + 1)[8] | O(n)[8]
Harmonic-k (Hk) | R^∞_Hk ≈ 1.691 for k → ∞[17] | matching worst-case family[17] | O(n log k)[17]
Refined Harmonic (RH) | R^∞_RH ≤ 373/228 ≈ 1.636[17] | — | O(n)[17]
Modified Harmonic (MH) | R^∞_MH ≤ 1.616[18] | — | —
Modified Harmonic 2 (MH2) | R^∞_MH2 ≤ 1.613[18] | — | —
Harmonic + 1 (H+1) | R^∞_H+1 ≥ 1.59[19] | — | —
Harmonic ++ (H++) | R^∞_H++ ≤ 1.589[19], R^∞_H++ ≥ 1.583[19] | — | —

Next-Fit (NF)

Next-Fit (NF) is a bounded-space AF-algorithm that has only one partially filled bin open at any time. The algorithm works as follows: it considers the items in the order given by a list L. If an item fits inside the currently open bin, the item is placed inside it. Otherwise, the current bin is closed, a new bin is opened and the current item is placed inside this new bin.
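
As an illustration, a minimal Python sketch of Next-Fit (the function name and the example list are my own, not taken from the references):

    def next_fit(sizes, capacity=1.0):
        """Pack the items of `sizes` (in the given order) into bins using Next-Fit."""
        bins, current, load = [], [], 0.0
        for s in sizes:
            if load + s <= capacity:           # the item fits into the single open bin
                current.append(s)
                load += s
            else:                              # otherwise close it and open a new bin
                bins.append(current)
                current, load = [s], s
        if current:
            bins.append(current)
        return bins

    # The worst-case family described below: alternating items of size 1/2 and 1/(2N)
    N = 5
    print(len(next_fit([0.5, 1 / (2 * N)] * (2 * N))))   # 2N = 10 bins, while OPT needs only N + 1 = 6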

This algorithm was studied by Johnson in his doctoral thesis[13] in 1973. It has the following properties:

  • The running time can be bounded by O(n), where n is the number of items.[13]
  • For each list L it holds that NF(L) ≤ 2·OPT(L) and hence R^∞_NF ≤ 2.[13]
  • For each N there exists a list L such that OPT(L) = N and NF(L) = 2·OPT(L) − 2.[13]
  • R^∞_NF(size ≤ α) = 2 for all α ≥ 1/2.[13]
  • R^∞_NF(size ≤ α) = 1/(1 − α) for all α ≤ 1/2.[13]
  • For each algorithm A that is an AF-algorithm it holds that R^∞_A(size ≤ α) ≤ R^∞_NF(size ≤ α).[13]

The intuition behind the proof of the upper bound is the following: the number of bins used by this algorithm is no more than twice the optimal number of bins. In other words, it is impossible for two bins to be at most half full, because such a possibility would imply that at some point exactly one bin was at most half full and a new one was opened to accommodate an item of size at most 1/2. But since the first bin still has at least 1/2 of free space, the algorithm will not open a new bin for any item whose size is at most 1/2. Only after the bin is filled to more than 1/2, or if an item with a size larger than 1/2 arrives, may the algorithm open a new bin. Thus if we have K bins, at least K − 1 bins are more than half full. Therefore, the total size of the items is larger than (K − 1)/2. Because the total size of the items is a lower bound on the optimum value OPT(L), we get that K − 1 < 2·OPT(L) and therefore K ≤ 2·OPT(L).[20]
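
Written out, the bound is the following chain of inequalities (a sketch in the notation introduced above, with the bin capacity normalized to 1):

    \sum_{i \in L} s(i) > \frac{K-1}{2}
    \quad\text{and}\quad
    \mathrm{OPT}(L) \ge \sum_{i \in L} s(i)
    \;\Longrightarrow\; K - 1 < 2\,\mathrm{OPT}(L)
    \;\Longrightarrow\; K \le 2\,\mathrm{OPT}(L).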

The family of lists for which NF(L) = 2·OPT(L) − 2 holds is given by L := (1/2, 1/(2N), 1/2, 1/(2N), …, 1/2, 1/(2N)) with 4N items. The optimal solution for this list has N bins containing two items of size 1/2 each and one bin with the 2N items of size 1/(2N) (i.e., N + 1 bins in total), while the solution generated by NF has 2N bins, each with one item of size 1/2 and one item of size 1/(2N).

Next-k-Fit (NkF)

NkF works like NF, but instead of keeping only one bin open, the algorithm keeps the last k bins open and chooses the first of these bins in which the item fits.

For k ≥ 2, NkF delivers results that are improved compared to the results of NF; however, increasing k to constant values larger than 2 improves the algorithm no further in its worst-case behavior. If algorithm A is an AAF-algorithm and k ≥ 2, then R^∞_NkF(size ≤ α) ≥ R^∞_A(size ≤ α).[13]
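
A minimal sketch of Next-k-Fit (my own illustrative code; keeping the k most recently opened bins in a deque is an implementation choice):

    from collections import deque

    def next_k_fit(sizes, k, capacity=1.0):
        """Pack items in the given order, keeping at most the k most recently opened bins open."""
        closed, open_bins = [], deque()           # each open bin is a pair [load, items]
        for s in sizes:
            for bin_ in open_bins:                # first of the k open bins where the item fits
                if bin_[0] + s <= capacity:
                    bin_[0] += s
                    bin_[1].append(s)
                    break
            else:                                 # no open bin fits: open a new one ...
                if len(open_bins) == k:           # ... closing the oldest if k bins are already open
                    closed.append(open_bins.popleft())
                open_bins.append([s, [s]])
        return [items for _, items in list(closed) + list(open_bins)]

    print(len(next_k_fit([0.5, 0.7, 0.5, 0.2, 0.4, 0.2], k=2)))   # 3 bins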

First-Fit (FF)

First-Fit is an AF-algorithm that processes the items in a given arbitrary order L. For each item in L, it attempts to place the item in the first bin that can accommodate it. If no such bin is found, it opens a new bin and puts the item into that new bin.
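
A minimal Python sketch of First-Fit (illustrative only; this is the straightforward O(n²) scan, whereas the O(n log n) running time mentioned earlier requires keeping the bin loads in a suitable search tree):

    def first_fit(sizes, capacity=1.0):
        """Place each item into the first open bin that can accommodate it."""
        loads, bins = [], []
        for s in sizes:
            for j, load in enumerate(loads):
                if load + s <= capacity:      # first bin with enough residual capacity
                    loads[j] += s
                    bins[j].append(s)
                    break
            else:                             # no open bin fits: open a new bin
                loads.append(s)
                bins.append([s])
        return bins

    print(len(first_fit([0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5])))   # FF uses 4 bins on this list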

The first upper bound of FF(L) ≤ 1.7·OPT(L) + 3 was proven by Ullman[21] in 1971. In 1972, this upper bound was improved to FF(L) ≤ 1.7·OPT(L) + 2 by Garey et al.,[22] and in 1976 it was improved by Garey et al.[23] to FF(L) ≤ ⌈1.7·OPT(L)⌉, which is equivalent to FF(L) ≤ 1.7·OPT(L) + 0.9 due to the integrality of FF(L) and OPT(L). The next improvement, by Xia and Tan[24] in 2010, lowered the bound to FF(L) ≤ 1.7·OPT(L) + 0.7. Finally, in 2013, this bound was improved to FF(L) ≤ ⌊1.7·OPT(L)⌋ by Dósa and Sgall.[15] They also present an example input list L for which FF(L) matches this bound.

Best-Fit (BF)

Best-Fit is an AAF-algorithm similar to First-Fit. Instead of placing the next item in the first bin where it fits, the item is placed inside the bin with the maximum load in which it still fits.
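
A minimal sketch of Best-Fit along the same lines (illustrative; a naive linear scan per item):

    def best_fit(sizes, capacity=1.0):
        """Place each item into the feasible open bin with the maximum load (least residual space)."""
        loads, bins = [], []
        for s in sizes:
            best = None
            for j, load in enumerate(loads):
                if load + s <= capacity and (best is None or load > loads[best]):
                    best = j                  # fullest bin that still has room for the item
            if best is None:                  # no open bin fits: open a new bin
                loads.append(s)
                bins.append([s])
            else:
                loads[best] += s
                bins[best].append(s)
        return bins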

The first upper bound of BF(L) ≤ 1.7·OPT(L) + 3 was proven by Ullman[21] in 1971. This upper bound was improved to BF(L) ≤ 1.7·OPT(L) + 2 by Garey et al.,[22] and afterward it was improved by Garey et al.[23] to BF(L) ≤ ⌈1.7·OPT(L)⌉. Finally, this bound was improved to BF(L) ≤ ⌊1.7·OPT(L)⌋ by Dósa and Sgall.[16] They also present an example input list L for which BF(L) matches this bound.

Worst-Fit (WF)

This algorithm is similar to Best-Fit. Instead of placing the item inside the bin with the maximum load, the item is placed inside the open bin with the minimum load in which it fits.

This algorithm can behave as badly as Next-Fit and will do so on the worst-case list for NF given above.[13] Furthermore, it holds that R^∞_WF(size ≤ α) = R^∞_NF(size ≤ α). Since WF is an AF-algorithm, there exists an AF-algorithm A such that R^∞_A(size ≤ α) = R^∞_NF(size ≤ α).[13]

Almost Worst-Fit (AWF)

AWF is an AAF-algorithm that considers the items in the order of a given list L. It tries to place the next item inside the second most empty open bin (or the emptiest bin if there are two such bins). If it does not fit, it tries the most empty one, and if it does not fit there either, the algorithm opens a new bin. As AWF is an AAF-algorithm, it has an asymptotic worst-case ratio of 17/10.[13]
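
A minimal sketch of Almost-Worst-Fit under the description above (illustrative; the tie handling and data structures are my own choices):

    def almost_worst_fit(sizes, capacity=1.0):
        """Try the second-emptiest open bin first, then the emptiest, otherwise open a new bin."""
        bins = []                                       # each bin is [load, items]
        for s in sizes:
            order = sorted(range(len(bins)), key=lambda j: bins[j][0])   # open bins, emptiest first
            candidates = [order[1], order[0]] if len(order) >= 2 else order
            for j in candidates:
                if bins[j][0] + s <= capacity:
                    bins[j][0] += s
                    bins[j][1].append(s)
                    break
            else:                                       # neither candidate fits: open a new bin
                bins.append([s, [s]])
        return [items for _, items in bins]

Note that trying only the two emptiest bins suffices: if the item does not fit into either of them, it cannot fit into any fuller bin, so AWF still opens a new bin only when no open bin has room (the AF property).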

Refined-First-Fit (RFF)

The items are categorized into four classes. An item is called an A-piece, B1-piece, B2-piece, or X-piece if its size is in the interval (1/2, 1], (2/5, 1/2], (1/3, 2/5], or (0, 1/3] respectively. Similarly, the bins are categorized into four classes. Let m be a fixed integer. The next item i is assigned according to the following rules: it is placed using First-Fit into a bin in

  • Class 1, if i is an A-piece,
  • Class 2, if i is a B1-piece,
  • Class 3, if i is a B2-piece, but not the (mk)-th B2-piece seen so far, for any integer k,
  • Class 1, if i is the (mk)-th B2-piece seen so far,
  • Class 4, if i is an X-piece.

Note that this algorithm is not an any-fit algorithm, since it may open a new bin despite the fact that the current item fits inside an open bin. This algorithm was first presented by Andrew Chi-Chih Yao,[8] who proved that it has an approximation guarantee of R^∞_RFF ≤ 5/3 and presented a family of lists L_k with RFF(L_k) = (5/3)·OPT(L_k).
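
A sketch of Refined-First-Fit following the classification rules above (illustrative; the choice m = 6 and the bookkeeping are assumptions of this sketch, not taken from Yao's paper):

    def refined_first_fit(sizes, m=6, capacity=1.0):
        """Route each item to one of four bin classes, then run First-Fit inside that class."""
        classes = {1: [], 2: [], 3: [], 4: []}         # bin class -> list of [load, items]
        b2_seen = 0
        for s in sizes:
            if s > 1 / 2:
                c = 1                                  # A-piece
            elif s > 2 / 5:
                c = 2                                  # B1-piece
            elif s > 1 / 3:
                b2_seen += 1                           # B2-piece: every m-th one goes to class 1
                c = 1 if b2_seen % m == 0 else 3
            else:
                c = 4                                  # X-piece
            for bin_ in classes[c]:                    # First-Fit within the chosen class
                if bin_[0] + s <= capacity:
                    bin_[0] += s
                    bin_[1].append(s)
                    break
            else:
                classes[c].append([s, [s]])
        return [items for cls_bins in classes.values() for _, items in cls_bins]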

Harmonic-k

The Harmonic-k algorithm partitions the interval (0, 1] harmonically into k pieces I_j := (1/(j + 1), 1/j] for 1 ≤ j < k and I_k := (0, 1/k], whose union is (0, 1]. An item i is called an I_j-item if s(i) ∈ I_j. The algorithm divides the set of empty bins into k infinite classes, one bin type for each item type. A bin of type I_j is only used to pack items of type I_j. Each bin of type I_j for 1 ≤ j < k can contain exactly j I_j-items. The algorithm now acts as follows: if the next item i is an I_j-item for 1 ≤ j < k, the item is placed in the (only open) bin of type I_j that contains fewer than j pieces, or a new one is opened if no such bin exists. If the next item i is an I_k-item, the algorithm places it into the bins of type I_k using Next-Fit.
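
A minimal sketch of Harmonic-k under this description (illustrative; each class keeps a single open bin):

    import math

    def harmonic_k(sizes, k, capacity=1.0):
        """Classify items by harmonic interval and pack each class into its own bins."""
        bins = []
        open_bin = {}                                  # class j -> index of its single open bin
        for s in sizes:
            # class of the item: j such that s is in (capacity/(j+1), capacity/j], capped at k
            j = min(int(math.floor(capacity / s)), k)
            idx = open_bin.get(j)
            if j < k:
                if idx is None or len(bins[idx][1]) == j:      # open bin of class j already has j items
                    bins.append([0.0, []])
                    idx = open_bin[j] = len(bins) - 1
            else:                                              # I_k-items are packed with Next-Fit
                if idx is None or bins[idx][0] + s > capacity:
                    bins.append([0.0, []])
                    idx = open_bin[j] = len(bins) - 1
            bins[idx][0] += s
            bins[idx][1].append(s)
        return [items for _, items in bins]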

This algorithm was first described by Lee and Lee.[17] It has a time complexity of O(n log k) and at each step there are at most k open bins that can potentially be used to place items, i.e., it is a k-bounded-space algorithm. Lee and Lee also studied its asymptotic approximation ratio. They defined the sequence σ_1 := 1, σ_{i+1} := σ_i·(σ_i + 1) for i ≥ 1 and proved that the asymptotic worst-case ratio of Harmonic-k converges, as k → ∞, to Σ_{i≥1} 1/σ_i ≈ 1.6910. Additionally, they presented a family of worst-case examples showing that this analysis is tight.

Refined-Harmonic (RH)

Refined-Harmonic combines ideas from the Harmonic-k algorithm with ideas from Refined-First-Fit. It places the items larger than 1/3 similarly to Refined-First-Fit, while the smaller items are placed using Harmonic-k. The intuition behind this strategy is to reduce the huge waste in bins containing pieces that are just larger than 1/2.

The algorithm classifies the items with regard to the following intervals: I_1 = (59/96, 1], I_a = (1/2, 59/96], I_2 = (37/96, 1/2], I_b = (1/3, 37/96], I_j = (1/(j+1), 1/j] for 3 ≤ j < k, and I_k = (0, 1/k]. The algorithm places the I_j-items as in Harmonic-k, while it follows a different strategy for the items in I_a and I_b. There are four possibilities to pack I_a-items and I_b-items into bins.

  • An I_a-bin contains only one I_a-item.
  • An I_b-bin contains only one I_b-item.
  • An I_ab-bin contains one I_a-item and one I_b-item.
  • An I_bb-bin contains two I_b-items.

An I_b'-bin denotes a bin that is designated to receive a second I_b-item. The algorithm uses the counters N_a, N_b, N_ab, N_bb, and N_b' for the numbers of the corresponding bins in the current solution. Furthermore, N_c := N_b + N_ab.

Algorithm Refined-Harmonic-k for a list L = (i_1, ..., i_n), applied to the items i_j in the order given by L:
1. N_a = N_b = N_ab = N_bb = N_b' = N_c = 0
2. If i_j is an I_p-piece for some p (i.e., neither an I_a- nor an I_b-item)
       then use algorithm Harmonic-k to pack it
3.     else if i_j is an I_a-item
           then if N_b > 0,   // a bin with a single I_b-item is waiting for a partner
               then pack i_j into any I_b-bin; N_b--;  N_ab++;
               else place i_j in a new (empty) bin; N_a++;
4.         else if i_j is an I_b-item
               then if N_b' = 1   // a bin has been reserved for a second I_b-item
                   then place i_j into the I_b'-bin; N_b' = 0; N_bb++;
5.                 else if N_bb <= 3N_c
                       then place i_j in a new bin and designate it as an I_b'-bin; N_b' = 1
                       else if N_a != 0
                           then place i_j into any I_a-bin; N_a--; N_ab++; N_c++
                           else place i_j in a new bin; N_b++; N_c++

This algorithm was first described by Lee and Lee.[17] They proved that for k = 20 it achieves an asymptotic worst-case ratio of at most 373/228 ≈ 1.636.

Offline algorithms

Algorithm | Approximation guarantee | Worst case instance
First-fit-decreasing (FFD) | FFD(I) ≤ 11/9·OPT(I) + 6/9[25] | FFD(I) = 11/9·OPT(I) + 6/9[25]
Modified-first-fit-decreasing (MFFD) | MFFD(I) ≤ 71/60·OPT(I) + 1[26] | R^∞_MFFD ≥ 71/60[27]
Hoberg and Rothvoss | ≤ OPT(I) + O(log OPT(I))[28] | —

First Fit Decreasing (FFD)

This algorithm works analogously to First-Fit. However, before starting to place the items, they are sorted in non-increasing order of their sizes. The algorithm can be implemented to run in time O(n log n).
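
Since FFD is simply First-Fit applied to the list sorted by non-increasing size, a sketch is short (illustrative only):

    def first_fit_decreasing(sizes, capacity=1.0):
        """Sort the items by non-increasing size, then run First-Fit."""
        loads, bins = [], []
        for s in sorted(sizes, reverse=True):
            for j, load in enumerate(loads):
                if load + s <= capacity:
                    loads[j] += s
                    bins[j].append(s)
                    break
            else:
                loads.append(s)
                bins.append([s])
        return bins

    items = [0.38, 0.38, 0.38, 0.6, 0.6, 0.6]
    print(len(first_fit_decreasing(items)))   # 3 bins, whereas First-Fit on the unsorted list needs 4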

In 1973, D. S. Johnson proved in his doctoral thesis[13] that FFD(L) ≤ 11/9·OPT(L) + 4. In 1985, B. S. Baker[29] gave a slightly simpler proof and showed that the additive constant is not more than 3. Yue Minyi[30] proved that FFD(L) ≤ 11/9·OPT(L) + 1 in 1991 and, in 1997, improved this analysis to FFD(L) ≤ 11/9·OPT(L) + 7/9 together with Li Rongheng.[31] In 2007, György Dósa[25] proved the tight bound FFD(L) ≤ 11/9·OPT(L) + 6/9 and presented an example for which FFD(L) = 11/9·OPT(L) + 6/9.

The lower bound example given by Dósa[25] consists of two bin configurations B1 and B2. If there are 4 copies of B1 and 2 copies of B2 in the optimal solution (6 bins), FFD computes 8 bins. Therefore, the upper bound is tight, because 11/9 · 6 + 6/9 = 72/9 = 8. This example can be extended to all values of OPT(L).[25]

Modified First Fit Decreasing (MFFD)

Modified first fit decreasing (MFFD)[27] improves on FFD for items larger than half a bin by classifying items by size into four classes: large, medium, small, and tiny, corresponding to items with size > 1/2 bin, > 1/3 bin, > 1/6 bin, and smaller, respectively. Then it proceeds through five phases:

  1. Allot a bin for each large item, ordered largest to smallest.
  2. Proceed forward through the bins. On each: If the smallest remaining medium item does not fit, skip this bin. Otherwise, place the largest remaining medium item that fits.
  3. Proceed backward through those bins that do not contain a medium item. On each: If the two smallest remaining small items do not fit, skip this bin. Otherwise, place the smallest remaining small item and the largest remaining small item that fits.
  4. Proceed forward through all bins. If the smallest remaining item of any size class does not fit, skip this bin. Otherwise, place the largest item that fits and stay on this bin.
  5. Use FFD to pack the remaining items into new bins.

This algorithm was first studied by Johnson and Garey[27] in 1985, where they proved an upper bound of the form MFFD(L) ≤ 71/60·OPT(L) + c for an additive constant c. This bound was improved in 1995 by Yue and Zhang,[26] who proved that MFFD(L) ≤ 71/60·OPT(L) + 1.

Asymptotic approximation schemes

The first asymptotic approximation scheme was described by Fernandez de la Vega and Lueker.[32] For every fixed ε > 0 it runs in time linear in n (with a constant depending only on ε) and generates a solution of size at most (1 + ε)·OPT(L) + 1. The dependence on ε was improved by Karmarkar and Karp,[33] whose algorithm runs in time polynomial in n and 1/ε.

Exact algorithm

Martello and Toth[34] developed an exact algorithm for the 1-D bin-packing problem, called MTP. A faster alternative is the Bin Completion algorithm proposed by Korf in 2002[35] and later improved.[36]

A further improvement was presented by Schreiber and Korf in 2013.[37] The new Improved Bin Completion algorithm is shown to be up to five orders of magnitude faster than Bin Completion on non-trivial problems with 100 items, and outperforms the BCP (branch-and-cut-and-price) algorithm by Belov and Scheithauer on problems that have fewer than 20 bins as the optimal solution. Which algorithm performs best depends on problem properties like number of items, optimal number of bins, unused space in the optimal solution and value precision.

Related problems

In the bin packing problem, the size of the bins is fixed and their number can be enlarged (but should be as small as possible).

In contrast, in the multiway number partitioning problem, the number of bins is fixed and their size can be enlarged. The objective is to find a partition in which the bin sizes are as nearly equal as possible (in the variant called the multiprocessor scheduling problem or minimum makespan problem, the goal is specifically to minimize the size of the largest bin).

In the inverse bin packing problem,[38] both the number of bins and their sizes are fixed, but the item sizes can be changed. The objective is to achieve the minimum perturbation to the item size vector so that all the items can be packed into the prescribed number of bins.

In the maximum resource bin packing problem,[39] the goal is to maximize the number of bins used, such that, for some ordering of the bins, no item in a later bin fits in an earlier bin. In a dual problem, the number of bins is fixed, and the goal is to minimize the total number or the total size of items placed into the bins, such that no remaining item fits into an unfilled bin.

In the bin covering problem, the bin size is bounded from below: the goal is to maximize the number of bins used such that the total size in each bin is at least a given threshold.

In the fair indivisible chore allocation problem (a variant of fair item allocation), the items represent chores, and there are different people each of whom attributes a different difficulty-value to each chore. The goal is to allocate to each person a set of chores with an upper bound on its total difficulty-value (thus, each person corresponds to a bin). Many techniques from bin packing are used in this problem too.[40]

In the guillotine cutting problem, both the items and the "bins" are two-dimensional rectangles rather than one-dimensional numbers, and the items have to be cut from the bin using end-to-end cuts.

References

  1. Korte, Bernhard; Vygen, Jens (2006). "Bin-Packing". Combinatorial Optimization: Theory and Algorithms. Algorithms and Combinatorics 21. Springer. pp. 426–441. doi:10.1007/3-540-29297-7_18. ISBN 978-3-540-25684-7.
  2. Barrington, David Mix (2006). "Bin Packing". Archived from the original on 2019-02-16. Retrieved 2016-02-27.
  3. Lewis 2009
  4. Sindelar, Sitaraman & Shenoy 2011, pp. 367–378
  5. Garey, M. R.; Johnson, D. S. (1979). Victor Klee (ed.). Computers and Intractability: A Guide to the Theory of NP-Completeness. A Series of Books in the Mathematical Sciences. San Francisco, Calif.: W. H. Freeman and Co. pp. x+338. ISBN 0-7167-1045-5. MR 0519066.
  6. Martello & Toth 1990, p. 221
  7. Vazirani, Vijay V. (14 March 2013). Approximation Algorithms. Springer Berlin Heidelberg. p. 74. ISBN 978-3662045657.
  8. Yao, Andrew Chi-Chih (April 1980). "New Algorithms for Bin Packing". Journal of the ACM. 27 (2): 207–227. doi:10.1145/322186.322187. S2CID 7903339.
  9. Brown, Donna J. (1979). "A Lower Bound for On-Line One-Dimensional Bin Packing Algorithms" (PDF). Technical Report.
  10. Liang, Frank M. (1980). "A lower bound for on-line bin packing". Information Processing Letters. 10 (2): 76–79. doi:10.1016/S0020-0190(80)90077-0.
  11. van Vliet, André (1992). "An improved lower bound for on-line bin packing algorithms". Information Processing Letters. 43 (5): 277–284. doi:10.1016/0020-0190(92)90223-I.
  12. Balogh, János; Békési, József; Galambos, Gábor (July 2012). "New lower bounds for certain classes of bin packing algorithms". Theoretical Computer Science. 440–441: 1–13. doi:10.1016/j.tcs.2012.04.017.
  13. Johnson, David S (1973). "Near-optimal bin packing algorithms" (PDF). Massachusetts Institute of Technology.
  14. Gonzalez, Teofilo F. (23 May 2018). Handbook of approximation algorithms and metaheuristics. Volume 2 Contemporary and emerging applications. ISBN 9781498770156.
  15. Dósa, György; Sgall, Jiri (2013). "First Fit bin packing: A tight analysis". 30th International Symposium on Theoretical Aspects of Computer Science (STACS 2013). Schloss Dagstuhl–Leibniz-Zentrum für Informatik. 20: 538–549. doi:10.4230/LIPIcs.STACS.2013.538.
  16. Dósa, György; Sgall, Jirí (2014). "Optimal Analysis of Best Fit Bin Packing". Automata, Languages, and Programming – 41st International Colloquium (ICALP). Lecture Notes in Computer Science. 8572: 429–441. doi:10.1007/978-3-662-43948-7_36. ISBN 978-3-662-43947-0.
  17. Lee, C. C.; Lee, D. T. (July 1985). "A simple on-line bin-packing algorithm". Journal of the ACM. 32 (3): 562–572. doi:10.1145/3828.3833. S2CID 15441740.
  18. Ramanan, Prakash; Brown, Donna J; Lee, C.C; Lee, D.T (September 1989). "On-line bin packing in linear time". Journal of Algorithms. 10 (3): 305–326. doi:10.1016/0196-6774(89)90031-X. hdl:2142/74206.
  19. Seiden, Steven S. (2002). "On the online bin packing problem". Journal of the ACM. 49 (5): 640–671. doi:10.1145/585265.585269. S2CID 14164016.
  20. Vazirani 2003, p. 74.
  21. Ullman, J. D. (1971). "The performance of a memory allocation algorithm". Technical Report 100 Princeton Univ.
  22. Garey, M. R.; Graham, R. L.; Ullman, J. D. (1972). "Worst-case analysis of memory allocation algorithms". Proceedings of the Fourth Annual ACM Symposium on Theory of Computing: 143–150. doi:10.1145/800152.804907. S2CID 26654056.
  23. Garey, M. R; Graham, R. L; Johnson, D. S; Yao, Andrew Chi-Chih (1976). "Resource constrained scheduling as generalized bin packing". Journal of Combinatorial Theory, Series A. 21 (3): 257–298. doi:10.1016/0097-3165(76)90001-7. ISSN 0097-3165.
  24. Xia, Binzhou; Tan, Zhiyi (August 2010). "Tighter bounds of the First Fit algorithm for the bin-packing problem". Discrete Applied Mathematics. 158 (15): 1668–1675. doi:10.1016/j.dam.2010.05.026.
  25. Dósa, György (2007). "The Tight Bound of First Fit Decreasing Bin-Packing Algorithm Is FFD(I) ≤ 11/9OPT(I) + 6/9". Combinatorics, Algorithms, Probabilistic and Experimental Methodologies. ESCAPE. doi:10.1007/978-3-540-74450-4_1.
  26. Yue, Minyi; Zhang, Lei (July 1995). "A simple proof of the inequality MFFD(L) ≤ 71/60 OPT(L) + 1,L for the MFFD bin-packing algorithm". Acta Mathematicae Applicatae Sinica. 11 (3): 318–330. doi:10.1007/BF02011198. S2CID 118263129.
  27. Johnson, David S.; Garey, Michael R. (October 1985). "A 71/60 theorem for bin packing". Journal of Complexity. 1 (1): 65–106. doi:10.1016/0885-064X(85)90022-6.
  28. Hoberg, Rebecca; Rothvoss, Thomas (2017-01-01), "A Logarithmic Additive Integrality Gap for Bin Packing", Proceedings of the 2017 Annual ACM-SIAM Symposium on Discrete Algorithms, Proceedings, Society for Industrial and Applied Mathematics, pp. 2616–2625, doi:10.1137/1.9781611974782.172, retrieved 2020-11-22
  29. Baker, Brenda S. (1985). "A New Proof for the First-Fit Decreasing Bin-Packing Algorithm". J. Algorithms. 6 (1): 49–70. doi:10.1016/0196-6774(85)90018-5.
  30. Yue, Minyi (October 1991). "A simple proof of the inequality FFD (L) ≤ 11/9 OPT (L) + 1, ∀L for the FFD bin-packing algorithm". Acta Mathematicae Applicatae Sinica. 7 (4): 321–331. doi:10.1007/BF02009683. S2CID 189915733.
  31. Li, Rongheng; Yue, Minyi (August 1997). "The proof of FFD(L) ≤ 11/9·OPT(L) + 7/9". Chinese Science Bulletin. 42 (15): 1262–1265. Bibcode:1997ChSBu..42.1262L. doi:10.1007/BF02882754. S2CID 93280100.
  32. Fernandez de la Vega, W.; Lueker, G. S. (1981). "Bin packing can be solved within 1 + ε in linear time". Combinatorica. 1 (4): 349–355. doi:10.1007/BF02579456. ISSN 1439-6912. S2CID 10519631.
  33. Karmarkar, Narendra; Karp, Richard M. (November 1982). "An efficient approximation scheme for the one-dimensional bin-packing problem". 23rd Annual Symposium on Foundations of Computer Science (SFCS 1982): 312–320. doi:10.1109/SFCS.1982.61. S2CID 18583908.
  34. Martello & Toth 1990, pp. 237–240.
  35. Korf 2002
  36. R. E. Korf (2003), An improved algorithm for optimal bin packing. Proceedings of the International Joint Conference on Artificial Intelligence, (pp. 1252–1258)
  37. Schreiber & Korf 2013
  38. Chung, Yerim; Park, Myoung-Ju (2015-01-01). "Notes on inverse bin-packing problems". Information Processing Letters. 115 (1): 60–68. doi:10.1016/j.ipl.2014.09.005. ISSN 0020-0190.
  39. Boyar, Joan; Epstein, Leah; Favrholdt, Lene M.; Kohrt, Jens S.; Larsen, Kim S.; Pedersen, Morten M.; Wøhlk, Sanne (2006-10-11). "The maximum resource bin packing problem". Theoretical Computer Science. 362 (1): 127–139. doi:10.1016/j.tcs.2006.06.001. ISSN 0304-3975.
  40. Huang, Xin; Lu, Pinyan (2020-11-10). "An Algorithmic Framework for Approximating Maximin Share Allocation of Chores". arXiv:1907.04505 [cs].

Bibliography

  1. Korf, Richard E. (2002), A new algorithm for optimal bin packing. (PDF)
  2. Vazirani, Vijay V. (2003), Approximation Algorithms, Berlin: Springer, ISBN 3-540-65367-8
  3. Yue, Minyi (October 1991), "A simple proof of the inequality FFD (L) ≤ 11/9 OPT (L) + 1, ∀L for the FFD bin-packing algorithm", Acta Mathematicae Applicatae Sinica, 7 (4): 321–331, doi:10.1007/BF02009683, ISSN 0168-9673, S2CID 189915733
  4. Dósa, György (2007). "The Tight Bound of First Fit Decreasing Bin-Packing Algorithm Is FFD(I) ≤ (11/9)OPT(I)+6/9". In Chen, Bo; Paterson, Mike; Zhang, Guochuan (eds.). Combinatorics, Algorithms, Probabilistic and Experimental Methodologies. Lecture Notes in Computer Science. 7000 (2011). Lecture Notes in Computer Science. 4614/2007. Springer Berlin / Heidelberg. pp. 1–11. doi:10.1007/978-3-540-74450-4. ISBN 978-3-540-74449-8. ISSN 0302-9743.
  5. Xia, Binzhou; Tan, Zhiyi (2010), "Tighter bounds of the First Fit algorithm for the bin-packing problem", Discrete Applied Mathematics, 158 (15): 1668–1675, doi:10.1016/j.dam.2010.05.026, ISSN 0166-218X
  6. Garey, Michael R.; Johnson, David S. (1985), "A 71/60 theorem for bin packing", Journal of Complexity, 1: 65–106, doi:10.1016/0885-064X(85)90022-6
  7. Yue, Minyi; Zhang, Lei (July 1995), "A simple proof of the inequality MFFD(L) ≤ 71/60 OPT(L) + 1,L for the MFFD bin-packing algorithm", Acta Mathematicae Applicatae Sinica, 11 (3): 318–330, doi:10.1007/BF02011198, ISSN 0168-9673, S2CID 118263129
  8. Fernandez de la Vega, W.; Lueker, G. S. (December 1981), "Bin packing can be solved within 1 + ε in linear time", Combinatorica, Springer Berlin / Heidelberg, 1 (4): 349–355, doi:10.1007/BF02579456, ISSN 0209-9683, S2CID 10519631
  9. Lewis, R. (2009), "A General-Purpose Hill-Climbing Method for Order Independent Minimum Grouping Problems: A Case Study in Graph Colouring and Bin Packing" (PDF), Computers and Operations Research, 36 (7): 2295–2310, doi:10.1016/j.cor.2008.09.004
  10. Martello, Silvano; Toth, Paolo (1990), "Bin-packing problem" (PDF), Knapsack Problems: Algorithms and Computer Implementations, Chichester, UK: John Wiley and Sons, ISBN 0471924202
  11. Michael R. Garey and David S. Johnson (1979), Computers and Intractability: A Guide to the Theory of NP-Completeness. W.H. Freeman. ISBN 0-7167-1045-5. A4.1: SR1, p. 226.
  12. David S. Johnson, Alan J. Demers, Jeffrey D. Ullman, M. R. Garey, Ronald L. Graham. Worst-Case Performance Bounds for Simple One-Dimensional Packing Algorithms. SICOMP, Volume 3, Issue 4. 1974.
  13. Lodi A., Martello S., Monaci, M., Vigo, D. (2010) "Two-Dimensional Bin Packing Problems". In V.Th. Paschos (Ed.), Paradigms of Combinatorial Optimization, Wiley/ISTE, pp. 107–129
  14. Dósa, György; Sgall, Jiří (2013). "First Fit bin packing: A tight analysis". 30th International Symposium on Theoretical Aspects of Computer Science (STACS 2013). Dagstuhl, Germany. pp. 538–549. ISBN 978-3-939897-50-7.
  15. Benkő A., Dósa G., Tuza Z. (2010) "Bin Packing/Covering with Delivery, Solved with the Evolution of Algorithms," Proceedings 2010 IEEE 5th International Conference on Bio-Inspired Computing: Theories and Applications, BIC-TA 2010, art. no. 5645312, pp. 298–302.
  16. Sindelar, Michael; Sitaraman, Ramesh; Shenoy, Prashant (2011), "Sharing-Aware Algorithms for Virtual Machine Colocation" (PDF), Proceedings of 23rd ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), San Jose, CA, June 2011: 367–378
  17. Schreiber, Ethan L.; Korf, Richard E. (2013), Improved Bin Completion for Optimal Bin Packing and Number Partitioning, IJCAI '13, Beijing, China: AAAI Press, pp. 651–658, ISBN 978-1-57735-633-2