Predicting Learnt Clauses Quality in Modern SAT Solvers

Gilles Audemard, Univ. Lille-Nord de France, CRIL/CNRS UMR 8188, Lens, F-62307, [email protected]
Laurent Simon, Univ. Paris-Sud, LRI/CNRS UMR 8623 / INRIA Saclay, Orsay, F-91405, [email protected]

(Supported by ANR UNLOC project no. BLAN08-1 328904.)

Abstract

Beside the impressive progress made by SAT solvers over the last ten years, only few works have tried to understand why Conflict Directed Clause Learning (CDCL) algorithms are so strong and efficient on most industrial applications. We report in this work a key observation of CDCL solver behavior on this family of benchmarks and explain it by an unsuspected side effect of their particular clause learning scheme. This new paradigm allows us to solve an important, still open, question: how to design a fast, static, accurate, and predictive measure of new learnt clause pertinence. Our paper is followed by empirical evidence that shows how our new learning scheme improves state-of-the-art results by an order of magnitude on both SAT and UNSAT industrial problems.

1 Introduction

Only fifteen years ago, the SAT problem was mainly considered as theoretical, and polynomial reduction to it was a tool to show the intractability of any new problem. Nowadays, with the introduction of lazy data-structures, efficient learning mechanisms and activity-based heuristics [Moskewicz et al., 2001; Eén and Sörensson, 2003], the picture is quite different. In many combinatorial fields, the state-of-the-art schema is often to use an adequate polynomial reduction to this canonical NP-Complete problem, and then to solve it efficiently with a SAT solver. This is especially true with the help of Conflict Directed Clause Learning (CDCL) algorithms, an efficient SAT algorithmic framework, where our work takes place. With the help of those algorithms, valuable problems are efficiently solved every day [Prasad et al., 2005].

However, the global picture is not so perfect. For instance, since the introduction of ZCHAFF [Moskewicz et al., 2001] and MINISAT [Eén and Biere, 2005], the global architecture of solvers is stalling, despite important efforts of the whole community [Le Berre et al., 2007]. Of course, improvements have been made, but they are sometimes reduced to data-structure tricks. It may thus be argued that most solvers are often described as a compact re-encoding of the ideas of Chaff, an eight-year-old solver, with small improvements only (phase caching [Pipatsrisawat and Darwiche, 2007], Luby restarts [Huang, 2007], ...).

Since the breakthrough of Chaff, most effort in the design of efficient SAT solvers has been focused on efficient Boolean Constraint Propagation (BCP), the heart of all modern SAT solvers. The global idea is to reach conflicts as soon as possible, but with no guarantee on the usefulness of the new learnt clause. Following the successful idea of the Variable State Independent Decaying Sum (VSIDS) heuristic, which favours variables that were often, and recently, used in conflict analysis, future learnt clause usefulness was supposed to be related to its activity in recent conflict analyses.

In this context, detecting in advance what is a good learnt clause is still considered a challenge, and of first importance: deleting useful clauses can be dramatic in practice. To prevent this, solvers have to let the maximum number of learnt clauses grow exponentially. On very hard benchmarks, CDCL solvers hang up for memory problems and, even if they do not, their greedy learning scheme deteriorates their heart: BCP performance.

We report, in section 2, experimental evidence showing an unsuspected phenomenon of CDCL solvers on industrial problems. Based on these observations, we present, in section 3, our static measure of learnt clause usefulness and, in section 4, we discuss the introduction of our measure in CDCL solvers. In section 5, we compare our solver with state-of-the-art ones, showing order of magnitude improvements.

2 Decision levels regularly decrease

Writing an experimental study in the first section of a paper is not usual. Paradoxically, CDCL solvers lack strong empirical studies: most papers contain experimental sections, but focus is often given to achievement illustrations only. In this section, we emphasize an observation, decision level is decreasing, made on most CDCL solvers on most industrial benchmarks. This behavior may shed a new light on the underlying reasons of the good performances of the CDCL paradigm. The relationship between performances and decreasing is the basis of our work in the following sections.

For lack of space, we suppose the reader familiar with Satisfiability notions (variables x_i, literals x_i or ¬x_i, clause, unit clause and so on). We just recall the global schema of CDCL solvers.

A typical branch of a CDCL solver can be seen as a sequence of decisions followed by propagations, repeated until a conflict is reached. Each decision literal is assigned at its own level (starting from 1), shared with all propagated literals assigned at the same level. Each time a conflict is reached, a nogood is extracted using a particular method, usually the First UIP (Unique Implication Point) one [Zhang and Madigan, 2001]. The learnt clause is then added to the clause database and a backjumping level is computed from it. The interested reader can refer to [Marques-Silva et al., 2009] for more details.

2.1 Observing the decreasing

The experimental study is done as follows. We run MINISAT on a selection of benchmarks from the last SAT contests and races. For a given benchmark, each time a conflict is reached, we store its index x_c and the decision level y_l at which it occurs. We limit the search to 2 million conflicts. Then, we compute the simple least-square linear regression y = m × x + n that fits the set of couples (x_c, y_l). If m is negative (resp. positive) then decision levels decrease (resp. increase) during search. In case of decreasing, we can trivially "predict" when the solver will finish the search. This should occur, if the solver follows its characteristic line, when this line intersects the x-axis. We call this point the "look-back justification" of the solver performance (looking back is necessary to compute the characteristic line). The coordinates of this point are (x_j = −n/m, 0). This value also gives a good intuition of how decision levels decrease during search. Note that we make very strong hypotheses here: (1) the solver follows a linear decreasing of its decision levels (this is false in most of the cases, but sufficient to compute the look-back justification), (2) finding a contradiction or a solution gives the same look-back justification, and (3) the solution (or contradiction) is not found by chance at any point of the computation.

Table 1 shows the median values of x_j (fourth column) over some series of benchmarks. "#Benchs" gives the number of benchmarks in each series, "%Decr." gives the percentage of benchmarks that exhibit a decreasing of decision levels. If a cut-off occurs, then the last value of the estimation is taken. In most of the cases (167 over 199 benchmarks), the regression line is decreasing. This phenomenon was not observed on random problems and seems heavily related to the "industrial" origin of benchmarks: the "mizh" series is 100% increasing, but it encodes cryptographic problems.

[Table 1 (columns: Series, #Benchs, %Decr. (>0), x_j = −n/m, Reduc.; series: een, goldb, grieu, hoons, ibm-2002, ibm-2004, manol-pipe, miz, schup, simon, vange, velev, and all 199 benchmarks): x_j = −n/m (fourth column) is the positive look-back "justification" of the number of conflicts needed to solve benchmarks, median value over all benchmarks that show a decreasing of their decision level (%Decr.). The last column will be discussed in section 5.]

2.2 Justifying the number of reached conflicts

At this point of the discussion, we showed that decision levels are decreasing in most of the cases. However, is it really a strong, but unsuspected, explanation of the power of CDCL solvers, or just a natural side effect of their learning schema? It is indeed possible that learnt clauses trivially add more constraints, and conflicts are obviously reached at lower and lower levels in the search tree. However, if the look-back justification is a strong estimation of the effective number of conflicts needed by the solver to find the contradiction (or, amazingly, a solution), then this probably means that the decreasing reveals an important characteristic of their overall behavior and strength: CDCL solvers enforce the decision level to decrease along the computation, whatever the fitting quality of the linear regression.

Figure 1 exhibits the relationship between look-back justification and effective number of conflicts needed by MINISAT to solve the instance. It is built as follows: each dot (x, y) represents an instance; x corresponds to the effective number of conflicts needed by MINISAT to solve the instance, y corresponds to the look-back justification. Only instances solved in less than 2 million conflicts are represented (otherwise, we do not have the real look-back justification). Figure 1 clearly shows the strong relationship between justification and effective number of conflicts needed: in all the cases, the look-back justification is bounded between 0.90 and 8.33 times the real number of conflicts needed by MINISAT to solve the problem. In most of the cases, the justification is around 1.37 times the effective number of conflicts.

[Figure 1: Relationship between look-back justification and effective number of conflicts needed by MINISAT to solve the instance. Scatter plot, prediction vs. reality on SAT and UNSAT instances; linear regression m = 3.75, restricted regression m = 1.37, lower bound m = 0.90, upper bound m = 8.33. X-axis: effective number of conflicts reached.]

One may also notice that no distinction can be made between justification over SAT and UNSAT instances.

It is important here to notice that the aim of our study is not to predict the CPU time of the solver like in [Hutter et al., 2006]. Our goal is, whatever the error obtained when computing the regression line, to show the strong relationship between the overall decreasing of decision levels and the performances of the solver. What is really striking is that the look-back justification is also strong when the solver finds a solution. When a solution exists, it is never found suddenly, by chance. This suggests that, on SAT instances, the solver does not correctly guess a value for a literal, but learns that the opposite value directly leads to a contradiction. Thus, if we find the part of the learning schema that enforces this decreasing, we may be able (1) to speed up the decreasing, and thus the CPU time of the solver, and (2) to identify in advance the clauses that play this particular role, for protection and aggressive clause database deletion.

3 Identifying good clauses in advance

Trying to enhance the decreasing of decision levels during search is not new. It was already pointed out, but from a general perspective only, in earlier papers on CDCL solvers: "A good learning scheme should reduce the number of decisions needed to solve certain problems as much as possible." [Zhang and Madigan, 2001]. In the previous section, it was however demonstrated for the first time how crucial this ability is for CDCL solvers.

3.1 Some known results on CDCL behavior

Let us first recall some high-level principles of CDCL solvers, by firstly focusing on their restart policies. In earlier works, restarting was proposed as an efficient way to prevent heavy-tailed phenomena [Gomes et al., 2000]. However, in the last years, restart policies became more and more aggressive, and, now, a typical run of a state-of-the-art CDCL solver sees most of its restarts occurring before the 1000th conflict [Huang, 2007; Biere, 2008] (this strategy especially pays when it is associated with phase savings [Pipatsrisawat and Darwiche, 2007]). So, in a few years, the concept of restarting has seen its meaning move from "restart elsewhere" (trying to reach an easier contradiction elsewhere in the search space) to "restart in a different dependencies order", which just reorders variable dependencies, leading the solver to the same search space by a different path ([Biere, 2008] has for instance already pointed out this phenomenon).

In addition to the above remark, it is striking to notice how CDCL solvers are particularly efficient on the so-called "industrial" benchmarks. Those benchmarks share a very particular structure: they often contain very small backdoor sets [Williams et al., 2003], which, intuitively, encode the inputs of formulas. Most of the remaining variables directly depend on their values. Those dependencies implicitly link sets of variables that depend on the same inputs. During search, those "linked" variables will probably be propagated together again and again, especially with the locality of the search mechanism of CDCL solvers.

3.2 Measuring learnt clause quality

During search, each decision is generally followed by a number of unit propagations. All literals from the same level are what we call "blocks" of literals in the following. At the semantic level, there is a chance that they are linked with each other by direct dependencies. Thus, a good learning schema should add explicit links between independent blocks of propagated (or decision) literals. If the solver stays in the same search space, such a clause will probably help reduce the number of next decision levels in the remaining computation.

Definition 1 (Literals Blocks Distance (LBD)) Given a clause C, and a partition of its literals into n subsets according to the current assignment, s.t. literals are partitioned w.r.t. their decision level, the LBD of C is exactly n.

From a practical point of view, we compute and store the LBD score of each learnt clause when it is produced. This measure is static, even if it is possible (as we will see later) to update it during search. Intuitively, it is easy to understand the importance of learnt clauses of LBD 2: they only contain one variable of the last decision level (they are FUIP), and, later, this variable will be "glued" with the block of literals propagated above, no matter the size of the clause. We suspect all those clauses to be very important during search, and we give them a special name: "Glue Clauses".

3.3 First results on the literals blocks distance

We first ran MINISAT on the set of all SAT-Race 06 benchmarks (used as a basis for the SAT-Race 08 too), with a CPU cut-off fixed at 1000s. For each learnt clause, we measured the number of times it was useful in unit propagation and in conflict analysis. Figure 2 shows the cumulative distribution function of the values over all benchmarks (statistics over unfinished jobs were also gathered). For instance, 40% of the unit propagations on learnt clauses are done on glue clauses, and only 20% are done on clauses of size 2. Half of the learnt clauses used in the resolution mechanism during all conflict analyses have LBD < 6, whereas we need to consider clauses of size smaller than 13 for the same result. Let us look at some details in table 2. What is striking is how the usefulness drops from 1827 times in conflict analysis for glue clauses to 136 for clauses of LBD 4.

[Figure 2: CDF of usefulness of clauses w.r.t. LBD and size. Curves: LBD, used in propagations; Length, used in propagations; LBD, used in conflict analysis; Length, used in conflict analysis. X-axis: measure (length or dependencies level), 5 to 25; y-axis: CDF of measure, 0% to 100%.]
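Definition 1 translates directly into code. The sketch below is illustrative only (the representation of literals and the level map are our assumptions, not an actual solver's internals): the LBD of a clause is the number of distinct decision levels among its literals under the current assignment.

```python
# Minimal sketch of Definition 1: the LBD of a clause is the number of
# distinct decision levels among its literals. `level` maps a variable
# to its decision level; literals are DIMACS-style non-zero ints.
# Names and data layout are illustrative, not MiniSat's internals.

def lbd(clause, level):
    """Return the Literals Blocks Distance of `clause`."""
    return len({level[abs(lit)] for lit in clause})

# A long clause can still be a "glue clause" (LBD 2): here six literals
# span only two decision levels.
level = {1: 3, 2: 3, 3: 3, 4: 7, 5: 7, 6: 7}
print(lbd([1, -2, 3, -4, 5, -6], level))  # → 2
```

This also illustrates the experimental remark that clauses may have a large size (hundreds of literals) and still a very small LBD.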

We ran some additional experiments to ensure that clauses may have a large size (hundreds of literals) and a very small LBD.

Table 2: Average number of times a clause of a given measure value was used (propagation / conflict analysis).

    Measure    2           3          4          5
    LBD        202 / 1827  50 / 295   26 / 136   19 / 80
    Length     209 / 2452  127 / 884  46 / 305   33 / 195

From a theoretical point of view, it is interesting to notice that the LBD of the FUIP learnt clause is optimal over all other possible UIP learning schemas [Jabbour and Sais, 2008]. If our empirical study of this measure shows its accuracy, this theoretical result will cast a good explanation of the efficiency of First UIP over all other UIP mechanisms (see for instance [Marques-Silva et al., 2009] for a complete survey of UIP mechanisms): FUIP efficiency would then be partly explained by its ability to produce clauses of small LBD.

Property 1 (Optimality of LBD for FUIP Clauses) Given a conflict graph, any First UIP asserting clause has the smallest LBD value over all other UIPs.

Sketch of proof: This property was proved as an extension of [Audemard et al., 2008]. When analyzing the conflict, no decision level can be deleted by resolution on a variable of the same decision level, because the resolution is done with reason clauses: if a variable is propagated at level l, then the reason clause also contains another variable set at the same level l, otherwise the variable would have been set at a lower decision level by unit propagation.

We compare four versions of MINISAT: the classical one (MINISAT), the classical one with aggressive clause deletion (MINISAT+ag) based on the original clause activity, the classical one based on LBD (MINISAT+lbd), and the one combining aggressive clause deletion with the LBD measure (MINISAT+ag+lbd).

Table 3: Comparison of four versions of MINISAT.

    Solver           #N (sat – unsat)   #avg time
    MINISAT          70 (35 – 35)       209
    MINISAT+ag       74 (41 – 33)       194
    MINISAT+lbd      79 (47 – 32)       145
    MINISAT+ag+lbd   82 (45 – 37)       175

[Figure 3: Comparison of BCP rate (left y scale) and learnt clauses database size (right y scale) between MINISAT and MINISAT+ag+lbd, as a function of CPU time.]
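As a rough illustration of the kind of aggressive, LBD-based database reduction compared in table 3, one can periodically keep only the half of the learnt clauses with the smallest LBD while protecting glue clauses. This is a deliberately simplified sketch of the MINISAT+ag+lbd policy, not its exact implementation; all names are ours.

```python
# Hedged sketch of an LBD-based aggressive reduction step: keep the
# half of the learnt clause database with the smallest LBD, but never
# delete glue clauses (LBD <= 2). A simplification of the
# MINISAT+ag+lbd policy discussed above, not the exact implementation.

def reduce_db(learnts):
    """learnts: list of (clause, lbd) pairs; returns the kept pairs."""
    by_quality = sorted(learnts, key=lambda cl: cl[1])  # small LBD first
    half = len(by_quality) // 2
    keep = by_quality[:half]
    # Protect glue clauses that fell into the deleted half.
    keep += [cl for cl in by_quality[half:] if cl[1] <= 2]
    return keep

db = [([1, 2], 2), ([3, 4, 5], 8), ([6, 7], 5), ([8, 9, 10], 3)]
print(len(reduce_db(db)))  # → 2: the LBD-2 and LBD-3 clauses survive
```

Because the measure is static and cheap, such a reduction can be triggered frequently, which is what keeps the database small and the BCP rate high in Figure 3.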

5 Comparison with the best CDCL solvers

In this final section, we show how our ideas can be embedded in an efficient solver. We used as a basis the well-known core of MINISAT, using a Luby restarts strategy (starting at 32) with phase savings. We call this solver GLUCOSE for its ability to detect and keep "Glue Clauses". We added two tricks to it. Firstly, each time a learnt clause is used in unit propagation (it is the reason of a propagated literal), we compute a new LBD score and update it if necessary. Secondly, we explicitly increase the score of variables of the learnt clause that were propagated by a glue clause.

We propose here to compare GLUCOSE with the three winners of the last SAT competition: MINISAT (version 2-070721) [Eén and Sörensson, 2003], RSAT (version 2.02) [Pipatsrisawat and Darwiche, 2007] and PICOSAT (version 846) [Biere, 2008]. We also add in this comparison ZCHAFF [Moskewicz et al., 2001], the first CDCL solver (versions of years 2001 and 2004), and a version of MINISAT including Luby restarts (starting at 100) and phase polarity (called MINISAT+ in the rest of the section, as the last, but unreleased, version of it was described). We use the same (234) industrial benchmarks and running characteristics (10000s and 1Gb memory limit) as the SAT'07 competition [Le Berre et al., 2007] in the second stage. According to a well-admitted idea, we pre-processed instances with SATELITE [Eén and Biere, 2005] and ran all solvers on these pre-processed instances. We used a farm of Xeon 3.2 GHz with 2 GB RAM for this experiment.

Figure 4 shows the classical "cactus" plot used in SAT competitions. The x-axis represents the number of instances and the y-axis represents the time needed to solve them if they were run in parallel. First of all, comparing ZCHAFF and the last winners of the 2007 SAT competition, we can see the gap made during the last seven years. We can also remark that MINISAT, RSAT and MINISAT+ have approximately the same performances. Before GLUCOSE, PICOSAT was the better solver. We see how GLUCOSE outperforms all of these solvers. It is able to solve 140 problems in 2500 seconds, when currently state-of-the-art solvers need between 4000 and 10000 seconds for this. One of the advantages of this kind of plot is its ability to emphasize when each solver reaches some kind of CPU time limit, when adding more CPU time does not add much chance to solve additional problems. For instance, RSAT solves approximately 100 instances in 1000 seconds but only 40 more in 10000. It is not the case for GLUCOSE, which shows a better scaling up (it solves around 120 instances in 1000 seconds and 60 more in 10000). This is clearly due to our aggressive learnt clause database reduction combined with our clause usefulness measure (BCP performance is maintained and good clauses are kept).

[Figure 4: Cactus plot on the 234 industrial instances of the SAT'07 competition. Solvers: zchaff (2001), zchaff (2004), minisat (2007), minisat+, picosat (2008), rsat (2007), glucose.]

Table 4 shows some detailed results. GLUCOSE solves 176 instances, for approximately 140 for RSAT, MINISAT and MINISAT+ (note that GLUCOSE needs only 2500 seconds to solve 140 instances) and 153 for PICOSAT. GLUCOSE especially shows impressive results on UNSAT instances, which really matches our initial motivation, i.e. enhancing the decision levels decreasing. However, we can assume that this strategy is also interesting on SAT instances, since GLUCOSE obtains, with PICOSAT, the best result on these instances. An interesting point is the number of times GLUCOSE is the only solver to solve a given problem (column #U). This shows the power of our method. Finally, the last two columns of table 4 show that GLUCOSE is much faster than all other solvers. It is the fastest solver in 68 cases and performs a speed-up on common instances of at least 1.2.

Table 4: Relative performances of solvers. Column #N reports the number of solved benchmarks with, in parenthesis, the number of SAT and UNSAT instances. Column #U shows the number of times the solver is the only one to solve an instance, column #B gives the number of times it is the fastest solver, and column #S gives the relative speed-up of GLUCOSE when considering only the subset of common solved instances between GLUCOSE and each solver (for instance, GLUCOSE is 1.7 times faster than RSAT on the subset of benchmarks they both solve).

    Solver      #N (SAT – UNSAT)   #U   #B   #S
    ZCHAFF 01   84 (47 – 37)       0    13   2.9
    ZCHAFF 04   80 (39 – 41)       0    5    3.9
    MINISAT+    136 (66 – 74)      0    15   1.5
    MINISAT     132 (53 – 79)      1    16   1.2
    PICOSAT     153 (75 – 78)      1    26   …
    RSAT        139 (63 – 75)      1    14   1.7
    GLUCOSE     176 (75 – 101)     22   68   –

Before concluding this section, Figure 5 allows us to focus on the relative performances of GLUCOSE and PICOSAT, which behaves very well (as shown in table 4). The x-axis (resp. y-axis) corresponds to the CPU time tx (resp. ty) obtained by PICOSAT (resp. GLUCOSE). Each dot (tx, ty) matches the same benchmark. Thus, dots below (resp. above) the diagonal indicate that GLUCOSE is faster (resp. slower) than PICOSAT. In many cases, GLUCOSE outperforms PICOSAT. When PICOSAT wins, GLUCOSE is still close to its performances. On UNSAT instances, and except a few cases, even when GLUCOSE loses, it is close to PICOSAT. Most bad performances are observed on SAT instances only.
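The first trick above (re-scoring a learnt clause each time it becomes the reason of a propagation) can be sketched as follows, assuming, as our reading of the description suggests, that an update only ever keeps the better (smaller) score. Names and the assumption itself are illustrative, not GLUCOSE's actual source.

```python
# Illustrative sketch of the dynamic LBD update: when a learnt clause
# becomes the reason of a propagated literal, recompute its LBD under
# the current assignment and keep the smaller value. The "keep the
# minimum" policy is our assumption about "update it if necessary".

def update_lbd(clause, level, stored_lbd):
    """Return the (possibly improved) LBD score of `clause`."""
    current = len({level[abs(lit)] for lit in clause})
    return min(stored_lbd, current)

# A clause learnt with LBD 5 may later be seen spanning only 2 levels.
print(update_lbd([1, -2, 3], {1: 1, 2: 1, 3: 2}, 5))  # → 2
```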

After a deeper look at those instances, we noticed that seven of those difficult instances are from the mizh family. This result is paradoxically yet again encouraging because, as pointed out in table 1, the evolution of decision levels of CDCL solvers on those instances is increasing, and thus does not match our initial aim: accelerating the decreasing of decision levels (let us recall here that mizh benchmarks are not "industrial" ones, in the classical sense).

[Figure 5: Comparison between GLUCOSE and PICOSAT. Log-log scatter plot of CPU times; dot categories: SAT (others), SAT (mizh), UNSAT.]

In order to come full circle, let us now explain the last column of table 1. This percentage is the relative value of the look-back justification of GLUCOSE by comparison with MINISAT. For instance, on the ibm-2002 instances, the look-back justification of GLUCOSE is 28% smaller than the one of MINISAT. Thus, our overall strategy pays: we accelerated the decreasing of decision levels in most of the cases.

6 Conclusion

In this paper, we introduced a new static measure of learnt clause quality that enhances the decreasing of decision levels along the computation on both SAT and UNSAT instances. This measure seems to be so accurate that very aggressive learnt clause database management is possible. We expect this new clause measure to have a number of great impacts. We may try to use it to propose preprocessing techniques (adding short LBD clauses), but also more exotic ones, like trying to reorder the formula in order to get a good starting point for the decreasing. More classically, we think that our measure will also be very useful in the context of parallel SAT solvers. Indeed, such solvers need to share good clauses, which can now be identified with confidence by our measure. Finally, we can now put efforts into designing an efficient framework for incomplete algorithms for Unsatisfiability. We know what efficient solvers should look for, and the future of our work is clearly on that path. We think that this work opens a new and promising path to answer one of the strongest challenges proposed more than ten years ago, in [Selman et al., 1997], and still wide open.

References

[Audemard et al., 2008] G. Audemard, L. Bordeaux, Y. Hamadi, S. Jabbour, and L. Sais. A generalized framework for conflict analysis. In proceedings of SAT, pages 21–27, 2008.
[Biere, 2008] A. Biere. PicoSAT essentials. Journal on Satisfiability, 4:75–97, 2008.
[Eén and Biere, 2005] N. Eén and A. Biere. Effective preprocessing in SAT through variable and clause elimination. In proceedings of SAT, pages 61–75, 2005.
[Eén and Sörensson, 2003] N. Eén and N. Sörensson. An extensible SAT-solver. In proceedings of SAT, pages 502–518, 2003.
[Gomes et al., 2000] C. Gomes, B. Selman, N. Crato, and H. Kautz. Heavy-tailed phenomena in satisfiability and constraint satisfaction problems. Journal of Automated Reasoning, 24:67–100, 2000.
[Huang, 2007] J. Huang. The effect of restarts on the efficiency of clause learning. In proceedings of IJCAI, pages 2318–2323, 2007.
[Hutter et al., 2006] F. Hutter, Y. Hamadi, H. Hoos, and K. Leyton-Brown. Performance prediction and automated tuning of randomized and parametric algorithms. In proceedings of CP, pages 213–228, 2006.
[Jabbour and Sais, 2008] S. Jabbour and L. Sais. Personal communication, February 2008.
[Le Berre et al., 2007] D. Le Berre, O. Roussel, and L. Simon. SAT competition, 2007. http://www.satcompetition.org/.
[Marques-Silva et al., 2009] J. Marques-Silva, I. Lynce, and S. Malik. Handbook of Satisfiability, chapter 4. 2009.
[Moskewicz et al., 2001] M. Moskewicz, C. Madigan, Y. Zhao, L. Zhang, and S. Malik. Chaff: Engineering an efficient SAT solver. In proceedings of DAC, pages 530–535, 2001.
[Pipatsrisawat and Darwiche, 2007] K. Pipatsrisawat and A. Darwiche. A lightweight component caching scheme for satisfiability solvers. In proceedings of SAT, pages 294–299, 2007.
[Prasad et al., 2005] M. Prasad, A. Biere, and A. Gupta. A survey of recent advances in SAT-based formal verification. Journal on Software Tools for Technology Transfer, 7(2):156–173, 2005.
[Selman et al., 1997] B. Selman, H. Kautz, and D. McAllester. Ten challenges in propositional reasoning and search. In proceedings of IJCAI, pages 50–54, 1997.
[Williams et al., 2003] R. Williams, C. Gomes, and B. Selman. Backdoors to typical case complexity. In proceedings of IJCAI, 2003.
[Zhang and Madigan, 2001] L. Zhang and C. Madigan. Efficient conflict driven learning in a boolean satisfiability solver. In proceedings of ICCAD, pages 279–285, 2001.
