How to Round Any CSP

Prasad Raghavendra*
Microsoft Research New England, Cambridge, MA 02142

David Steurer†
Department of Computer Science, Princeton University, Princeton, NJ 08540-5233

Abstract. A large number of interesting combinatorial optimization problems like Max Cut, Max k-SAT, and Unique Games fall under the class of constraint satisfaction problems (CSPs). Recent work [32] by one of the authors identifies a semidefinite programming (SDP) relaxation that yields the optimal approximation ratio for every CSP, under the Unique Games Conjecture (UGC). Very recently [33], the authors also showed unconditionally that the integrality gap of this basic SDP relaxation cannot be reduced by adding large classes of valid inequalities (e.g., in the fashion of Sherali–Adams LP hierarchies).

In this work, we present an efficient rounding scheme that achieves the integrality gap of this basic SDP relaxation for every CSP (and by [33] it also achieves the gap of much stronger SDP relaxations). The SDP relaxation we consider is stronger than or equivalent to any relaxation used in the literature to approximate CSPs. Thus, irrespective of the truth of the UGC, our work yields an efficient generic algorithm that, for every CSP, achieves an approximation at least as good as the best known algorithm in the literature.

The rounding algorithm in this paper can be summarized succinctly as follows: reduce the dimension of the SDP solution by random projection, discretize the projected vectors, and solve the resulting CSP instance by brute force! Even the proof is simple in that it avoids the use of the machinery from unique games reductions such as dictatorship tests, Fourier analysis or the invariance principle. A common theme of this paper and the subsequent paper [33] is a robustness lemma for SDP relaxations, which asserts that approximately feasible solutions can be made feasible by "smoothing" without changing the objective value significantly.

Keywords: semidefinite programming, approximation algorithms, rounding algorithm, integrality gap, dimension reduction, constraint satisfaction problem.

1. Introduction

A vast majority of approximation algorithms involve two distinct phases: relaxation and rounding. More precisely, given an instance ℑ of a combinatorial optimization problem Γ, most approximation algorithms consist of the following two stages:

Relaxation: A convex relaxation ℑ_relax (linear or semidefinite) of the instance ℑ is constructed. These relaxations can be optimized efficiently using interior point methods [36], [30]. Being a relaxation, the optimum of ℑ_relax is trivially at least as good (larger for maximization, smaller otherwise) as the optimum of ℑ. Hence, the value of the optimum solution to the relaxation ℑ_relax serves as a bound on the optimum value for the original instance ℑ. The integrality gap of the relaxation is the maximum possible gap, over all instances of the problem Γ, between the optimum of the original instance ℑ and that of the relaxation ℑ_relax.

Rounding: In this step, the optimal solution to the convex relaxation ℑ_relax is used to obtain a solution to the original instance ℑ. By exhibiting a solution to ℑ which is within an α factor of the solution to ℑ_relax, the rounding yields an α-factor approximation guarantee. Further, the rounding algorithm serves as a constructive proof that the integrality gap of the relaxation is at most α. Hence, this is easily the most difficult step, requiring ingenuity and techniques from areas such as linear programming duality and metric embeddings. We will call a rounding scheme optimal for the relaxation if it achieves an approximation guarantee equal to the integrality gap of the relaxation. In other words, an optimal rounding scheme extracts the best solution that could possibly be obtained from the relaxation.

Among the relaxation techniques, perhaps the most general and powerful is semidefinite programming. A semidefinite program consists of vector-valued variables, with linear constraints on their inner products. The objective being optimized is a linear function of the inner products of the variables. Since the seminal work of Goemans and Williamson [15], SDPs have fueled some of the major advances in approximation algorithms. They have found application to problems ranging from constraint satisfaction problems [10], [5], [17], [20], [27], [6], [19] to vertex coloring [2], vertex ordering [11], [9], [12], [7], graph decomposition [13], [3], and discrete optimization [29], [8], [1], [24].

Despite all the successes, rounding the solution of a semidefinite program remains a difficult task. Contrast this to linear programming, which has seen the development of primal-dual [18] and iterative rounding techniques [35], [34], leading to simple combinatorial algorithms. Part of the problem is that the approximation ratios involved in SDP-based algorithms are irrational numbers stemming from the geometry of vectors. Even for problems like Max 3-SAT, where the optimal approximation ratio is a simple fraction like 7/8 [20], the analysis of the rounding procedure is fairly involved.

* Work done while the author was at University of Washington, and visiting Carnegie Mellon University. Supported by NSF grant CCF-0343672.
† Research supported by NSF grants 0830673, 0832797, 528414.

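The three-step recipe summarized in the abstract (random projection, discretization, brute force on the folded instance) can be caricatured in a few lines of code. The sketch below is purely illustrative, not the paper's precise algorithm: it assumes a simplified setting with Boolean variables, a single SDP vector per variable, and clauses encoded as tuples of (variable, negated) literals; all function names are our own.

```python
import itertools
import numpy as np

def round_csp_sketch(clauses, vectors, d=4, eps=0.25, seed=1):
    """Caricature of the rounding scheme: project, discretize, fold, brute-force."""
    n, D = vectors.shape
    rng = np.random.default_rng(seed)
    # 1. Dimension reduction: random Gaussian projection into d dimensions.
    Phi = rng.normal(0.0, 1.0 / np.sqrt(d), size=(d, D))
    projected = vectors @ Phi.T
    # 2. Discretization: snap each projected vector to a grid of cell width eps.
    snapped = [tuple(np.round(v / eps).astype(int)) for v in projected]
    # 3. Folding: merge variables whose snapped vectors coincide.
    clusters, fold_map = {}, []
    for key in snapped:
        clusters.setdefault(key, len(clusters))
        fold_map.append(clusters[key])
    folded = [tuple((fold_map[v], neg) for v, neg in cl) for cl in clauses]
    # 4. Brute force over the folded variables, then unfold.
    def frac_sat(cls, assign):
        return sum(any(assign[v] != neg for v, neg in cl) for cl in cls) / len(cls)
    best = max(itertools.product([0, 1], repeat=len(clusters)),
               key=lambda a: frac_sat(folded, a))
    return [best[fold_map[v]] for v in range(n)]
```

Since d and eps are constants, the number of folded variables is bounded by the number of grid cells, so the brute-force step takes constant time per instance.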
In this work, we study the problem of rounding a natural SDP relaxation for general constraint satisfaction problems.

In a constraint satisfaction problem (CSP), the objective is to find an assignment to a set of variables that satisfies the maximum number of a given set of constraints on them. Formally, a CSP Λ is specified by a set of predicates over a finite domain [q] = {1, ..., q}. Every instance of the problem Λ consists of a set of variables V, and a set of M constraints on them. Each constraint consists of a predicate from Λ applied to a subset of variables. The objective is to find an assignment to the variables that satisfies the maximum number of constraints. More generally, the predicates can be replaced by bounded real-valued payoff functions, while the objective is to maximize the total payoff. A large number of the fundamental optimization problems, such as Max Cut and Max k-SAT, are examples of CSPs.

Beginning with the work of Goemans and Williamson [15] on Max Cut, semidefinite programs have been instrumental in approximation algorithms for several well-known CSPs like Max 2-SAT [27], [6], Max 3-SAT [20], Max 4-SAT [16], [17], Max k-CSP [6], [37], Max DiCut [28], Max Cut Gain [8] and Unique Games [5], [10]. Underlying all these works on seemingly diverse CSPs lies the simplest semidefinite relaxation for the problems. Even though the rounding techniques vary from one problem to another, the SDP relaxation involved in all these algorithms is roughly the same.

In fact, the Unique Games Conjecture of Khot [21] implies that indeed the simplest semidefinite relaxation yields the optimal approximation ratio for CSPs. While several UG-hardness results for problems like Max Cut [23], [25], [31] and Max 2-SAT [22], [4] suggested such an implication, its full generality was realized in recent works by Austrin [4] and one of the authors [32]. Specifically, it was shown in [32] that a simple generic SDP relaxation yields the optimal approximation for every CSP assuming the Unique Games Conjecture. Henceforth, we shall use SDP_gen to denote this generic relaxation, which is either equivalent to or stronger than every SDP relaxation used for an algorithm in the literature.

Surprisingly, this pursuit of UG-hardness results has led to new rounding schemes for CSPs. The connection between rounding schemes and UG hardness was first pointed out in the work of Austrin [4] on Boolean 2-CSPs. Fleshing out this connection in its full generality, [32] designed rounding schemes that achieve the optimal approximation ratio for every CSP assuming the Unique Games Conjecture. Consequently, assuming the Unique Games Conjecture, for every CSP it is clear what the optimal SDP relaxation is, and how to round it.

Yet, the situation is not entirely satisfactory, since the optimality of these generic rounding schemes relies on the Unique Games Conjecture. In other words, if the Unique Games Conjecture were to be false, there would be no guarantee on the performance of the generic rounding scheme in [32]. Towards rectifying this, the work [32] also obtains an unconditional guarantee on the rounding scheme for the case of 2-CSPs. Specifically, for every 2-CSP, irrespective of the truth of the UGC, the generic rounding scheme in [32] achieves an approximation equal to the integrality gap of the SDP relaxation SDP_gen. In a related work, O'Donnell and Wu [31] obtained rounding schemes that achieve the integrality gap of the relaxation unconditionally for the Max Cut problem.

For general CSPs with arity greater than 2, there are no known rounding schemes that are optimal for the relaxation SDP_gen, unless one assumes the UGC. To show unconditional guarantees, the approach of [32] relies on the Khot–Vishnoi integrality gap [26] for unique games. Extending this approach to CSPs of arity greater than 2 is tied to the problem of constructing integrality gaps for stronger SDP relaxations of unique games. Specifically, extending the unconditional result of [32] to a CSP of arity r would require an SDP integrality gap for roughly r rounds of any of the SDP hierarchies. Integrality gaps for such strong SDP relaxations of Unique Games were not known when this work was conceived. In subsequent work [33], the authors used techniques from this paper to construct integrality gaps for strong SDP relaxations of Unique Games. However, the resulting rounding scheme from [32] remains very complex, with its proof relying on a web of reductions between integrality gaps, UG-hardness results and dictatorship tests.

1.1. Results

In this work, we design a generic rounding scheme that achieves the integrality gap of the relaxation SDP_gen for every CSP unconditionally. To state our result precisely, we need to define the SDP integrality gap curve S_Λ(c) for a CSP Λ. Let sdp(𝒫) denote the objective value of an optimal solution for the SDP_gen relaxation of an instance 𝒫. Let opt(𝒫) denote the value of the optimal solution to 𝒫. The integrality gap curve S_Λ(c) is the minimum value of opt(𝒫), given that sdp(𝒫) = c, where the minimum is over all instances 𝒫 of the problem Λ. Formally,

    S_Λ(c) = inf_{𝒫 ∈ Λ, sdp(𝒫) = c} opt(𝒫).

Theorem 1.1: For every CSP Λ and for every ε > 0, there exists a polynomial-time approximation algorithm for Λ-CSP that, on an instance with SDP value c, returns an assignment of value at least S_Λ(c − ε) − ε. The algorithm runs in time exp(exp(poly(kq/ε))).

The above result also holds in the more general setting where predicates are replaced by bounded real-valued payoff functions. For a traditional CSP Λ consisting of predicates, the above theorem implies the following corollary.

Corollary 1.2: Given a Λ-CSP with non-negative-valued payoff functions, for every ε > 0, there exists a polynomial-time approximation algorithm for Λ-CSP with approximation ratio at most the integrality gap α_Λ defined as

    α_Λ = sup_𝒫 sdp(𝒫)/opt(𝒫).

The algorithm runs in time exp(exp(poly(kq/ε))).

As already pointed out, the relaxation SDP_gen is either equivalent to or stronger than every SDP relaxation used to approximate a CSP. Hence, this work yields a generic algorithm that, for every CSP, achieves an approximation ratio at least as good as the best known algorithm in the literature.

On the downside, the proof of optimality of the rounding scheme is non-explicit. To show the optimality of the rounding scheme, we proceed as follows: given an instance on which the rounding scheme only achieves an α approximation, we exhibit an instance on which the integrality gap of SDP_gen is at least α. In particular, this yields no information on the approximation ratio α achieved by the rounding scheme. However, the techniques in this work yield an algorithm to compute the integrality gap of SDP_gen for any given CSP Λ.

Theorem 1.3: For every constant ε > 0 and every CSP Λ, the integrality gap curve S_Λ(c) can be computed to an additive approximation of ε in time exp(exp(poly(kq/ε))).

Not only does our approach bypass the need for strong SDP gaps for unique games, but it is in fact surprisingly simple. Underlying this work are two ideas: dimension reduction and discretization of SDP vectors. In fact, this work does not make use of any of the machinery from UG-hardness results like dictatorship tests, Fourier analysis, Hermite analysis or the invariance principle.

2. Proof Overview

Underlying this work are two surprisingly simple ideas: dimension reduction and discretization of SDP vectors. In this section, we elucidate how these are employed to obtain rounding schemes for CSPs. We begin by describing the generic SDP relaxation SDP_gen for a well-known CSP: Max 3-SAT. Fix a Max 3-SAT instance Φ consisting of the variables V = {y_1, ..., y_n} and clauses 𝒫 = {P_1, ..., P_m}. The variables in SDP_gen are as follows:

– For each variable y_i, introduce two vector variables {u_(i,0), u_(i,1)}. In the intended solution, the assignment y_i = 1 is represented by u_(i,1) = 1 and u_(i,0) = 0, while y_i = 0 implies u_(i,1) = 0 and u_(i,0) = 1.

– For each clause, we introduce 8 variables to denote the 8 different states possible. For instance, with the clause P = (y_1 ∨ y_2 ∨ y_3) we shall associate 8 variables μ_P = {μ_(P,000), μ_(P,001), ..., μ_(P,111)}. In general, the variables μ_P form a probability distribution locally over integral solutions.

– Let u_0 denote a variable representing the constant 1.

The relaxation SDP_gen has the minimal set of constraints necessary to ensure that for every clause P ∈ 𝒫, the following hold: firstly, μ_P is a valid probability distribution over local assignments ({0,1}^3); further, the inner products of the vectors corresponding to variables in P match the distribution μ_P. The objective value to be maximized can be written in terms of the local integral distributions μ_P as follows:

    Σ_{P ∈ 𝒫} Σ_{x ∈ {0,1}^3} μ_(P,x) P(x).

We wish to point out that SDP_gen is an extremely minimal SDP relaxation. For instance, if two variables y_i, y_j do not occur in a clause together, then SDP_gen does not impose any constraints on the inner products of the corresponding vectors {u_(i,0), u_(i,1), u_(j,0), u_(j,1)}. Specifically, the inner product of u_(i,0) and u_(j,0) could take negative values in a feasible SDP_gen solution. While this might suggest that SDP_gen is too weak a relaxation, recall that no stronger relaxation has been used to approximate a CSP yet, and indeed no stronger relaxation helps, under the Unique Games Conjecture.

Given the SDP solution to the instance Φ, we construct a constant-sized 3-SAT instance Φ′ which serves as a model for Φ. More specifically, we construct a partition V = S_1 ∪ S_2 ∪ ... ∪ S_m of the set of variables V into m subsets for some constant m. The instance Φ′ is over m variables {s_1, s_2, ..., s_m} corresponding to the subsets S_1, ..., S_m. Essentially, the instance Φ′ is obtained by merging all the variables in each of the sets S_i into a corresponding variable s_i. We will refer to Φ′ as a folding of the instance Φ.

Observe that any assignment A′ to Φ′ yields a corresponding assignment A to Φ by simple unfolding, i.e., assign A(y_j) = A′(s_i) for every variable y_j in the set S_i. Clearly, the fraction of clauses satisfied by the assignment A on Φ is exactly the same as the fraction satisfied by A′ on Φ′. Thus any folding operation immediately yields a rounding scheme: "Find the optimal assignment to Φ′ by brute force, and unfold it to an assignment for Φ."

To show the optimality of this scheme, the crucial property we require of the folding operation is that it preserves the SDP value. Clearly, any folding operation can only decrease the value of the optimum for the SDP relaxation, i.e., sdp(Φ′) ⩽ sdp(Φ). We will exhibit a folded instance Φ′ with approximately the same clauses as Φ and roughly the same SDP value, i.e., sdp(Φ′) ≈ sdp(Φ). Such a folded instance Φ′ will serve as a certificate for the optimality of the rounding scheme. Recall that the folded instance is an integrality gap instance with SDP value sdp(Φ′) ≈ sdp(Φ) and optimum value opt(Φ′). By definition, the scheme returns an assignment of value opt(Φ′) on the instance Φ with SDP value sdp(Φ). Thus the rounding scheme achieves an approximation no worse than the integrality gap of the SDP.

At this juncture, we would like to draw a parallel between this approach and the work of Frieze–Kannan [14] on approximating dense instances of NP-hard problems.
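The unfolding observation above, that the fraction of clauses satisfied by A on Φ equals the fraction satisfied by A′ on Φ′, is easy to verify mechanically. The encoding below (clauses as tuples of (variable, negated) literals) is our own assumption for illustration.

```python
def frac_satisfied(clauses, assign):
    # A clause is satisfied if some literal is: value 1 on a positive literal
    # (negated == 0) or value 0 on a negated literal (negated == 1).
    return sum(any(assign[v] != neg for v, neg in cl) for cl in clauses) / len(clauses)

def fold(clauses, part):
    # part[y_j] = s_i: replace every variable by the cluster it belongs to.
    return [tuple((part[v], neg) for v, neg in cl) for cl in clauses]

def unfold(folded_assign, part):
    # Simple unfolding: assign A(y_j) = A'(s_i) for every y_j in the set S_i.
    return {v: folded_assign[c] for v, c in part.items()}
```

Because every clause of Φ′ is evaluated on exactly the values that its unfolded counterpart sees, the two fractions agree literal by literal, not just in aggregate.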

Given a dense instance of Max Cut, they construct a finite model that approximates the instance using the Szemerédi regularity lemma. This finite model is nothing but a folding of the Max Cut instance that preserves the optimum value over all possible integral solutions. In contrast, we construct a finite model for arbitrary instances that need not be dense, while preserving an arguably simpler property: the SDP optimum.

Summarizing the discussion, the problem of rounding has been reduced to finding an algorithm to merge the variables in the instance into a few clusters, while preserving the SDP value. Intuitively, the most natural way to preserve the SDP value would be to merge variables whose SDP vectors are close to each other. In other words, we would like to cluster the SDP vectors {u_(i,b)} into a constant number of clusters. A first attempt at such a clustering would be as follows: partition the ambient space into bins of diameter at most ε, and merge all the SDP vectors that fall into the same bin. The number of clusters created is at most the number of bins in the partition.

In general, the optimum SDP vectors {u_(i,b)} lie in a space of dimension equal to the number of variables in the SDP (say n). A partition of the n-dimensional sphere into bins of diameter at most ε would require roughly (1/ε)^n bins, while our goal is to use a constant number of bins. Simply put, there is little chance that n vectors in an n-dimensional space are clustered into a few clusters. To address this issue, we pursue the most natural approach: first perform a dimension reduction on the SDP vectors using random projections, and then cluster them.

Heuristically, for a large enough constant d, when projected into a random d-dimensional space, at least a 1 − ε fraction of the inner products would change by at most ε. Further, merging variables within the same bin of diameter ε could affect the inner products by at most ε. Thus the SDP value of the folded instance should be within O(ε) of the original SDP value. The number of variables in the folded instance would be (1/ε)^d, a constant.

Making the above heuristic argument precise forms the technical core of the paper. While this is easy for some 2-CSPs like Max Cut, extending it to CSPs of larger arity and alphabet size is non-trivial. The central issue to be addressed is how one respects all the constraints of the SDP during dimension reduction. In fact, for stronger SDP relaxations such as the one in [3], it is unclear whether a dimension reduction can be carried out at all. For a subset of CSP variables involved in a constraint P, the SDP relaxation SDP_gen requires the inner products of the corresponding SDP vectors to be consistent with a local integral distribution μ_P. This translates into the SDP vectors satisfying special constraints amongst themselves. For instance, even for a CSP of arity 3 such as Max 3-SAT, this implies the triangle inequalities on every 3-tuple of variables involved in a clause.

To make the argument precise, we use the smoothing operation defined in [32], which in some sense introduces noise to the SDP vectors. Interestingly, the smoothing operation was applied for an entirely different purpose in [32]. For every CSP instance, there is a canonical SDP solution {u*_(i,b)} corresponding to the uniform distribution over all possible integral solutions. Given an arbitrary SDP solution {u_(i,b)}, the ε-smoothed solution is obtained by replacing each u_(i,b) with √(1−ε) u_(i,b) ⊕ √ε u*_(i,b), where ⊕ denotes the direct sum of vectors. Clearly, the SDP objective value changes by at most O(ε) due to smoothing. We observe that if the vectors {u_(i,b)} satisfy a valid inequality (say a triangle inequality) approximately, then after smoothing, the new solution satisfies the inequality. We present a separate argument to handle the equality constraints in the SDP.

In the original instance Φ, for every clause P, the inner products of the vectors involved match a local integral distribution μ_P. After random projection and discretization, for at least a 1 − ε fraction of the clauses in Φ, the corresponding inner products match a local integral distribution up to an error ε. Let us refer to this 1 − ε fraction of the clauses as good. Apply the smoothing operation to the discretized SDP solution. For each good clause, the smoothed SDP solution is consistent with a local integral distribution. To finish the argument, we discard the ε-fraction of bad clauses from the folded instance Φ′. By the definition of SDP_gen, once a bad clause P is dropped from the instance, it is no longer necessary to satisfy the SDP constraints corresponding to P. Hence, we conclude sdp(Φ′) ≈ sdp(Φ).

3. Preliminaries

3.1. Constraint Satisfaction Problems

In this work, we consider a generalization of constraint satisfaction problems where we allow payoff functions taking values in [−1, 1], instead of predicates taking values in {0, 1}. Formally, let Λ be a family of payoff functions P : [q]^k → [−1, 1]. We say Λ has alphabet [q] = {1, ..., q} and arity k.

A function P : [q]^V → [−1, 1] has type Λ if for some P′ ∈ Λ and some i_1, ..., i_k ∈ V, we have P(x) = P′(x_{i_1}, ..., x_{i_k}) for all x ∈ [q]^V. We define V(P) ⊆ V to be the set of coordinates that P depends on. In other words, if P(x) = P′(x_{i_1}, ..., x_{i_k}) then V(P) = {i_1, ..., i_k}. In particular, |V(P)| ⩽ k. A Λ-CSP instance 𝒫 with variable set V is a distribution over payoff functions P : [q]^V → [−1, 1] of type Λ.

Problem 3.1 (Λ-CSP): Given a variable set V and a distribution 𝒫 over payoff functions P : [q]^V → [−1, 1] of type Λ, the goal is to find an assignment x ∈ [q]^V so as to maximize E_{P∼𝒫} P(x). We define the value opt(𝒫) as

    opt(𝒫) := max_{x ∈ [q]^V} E_{P∼𝒫} P(x).

Observe that if the payoff functions P are predicates, then maximizing the payoff amounts to maximizing the number of constraints satisfied.

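The ε-smoothing operation described in the proof overview can be checked numerically. The sketch below assumes unit vectors and uses a random stand-in for the canonical solution u*; it verifies that the direct-sum construction turns every inner product into a convex combination, and hence shifts it by at most 2ε.

```python
import numpy as np

def eps_smooth(u, u_star, eps):
    # Direct sum of sqrt(1 - eps) * u and sqrt(eps) * u_star, realized as
    # block concatenation of the two scaled vectors.
    return np.concatenate([np.sqrt(1 - eps) * u, np.sqrt(eps) * u_star])

def unit(v):
    # Normalize a vector to unit length.
    return v / np.linalg.norm(v)
```

Because the two blocks are orthogonal, the smoothed inner product satisfies ⟨ũ, ṽ⟩ = (1−ε)⟨u, v⟩ + ε⟨u*, v*⟩ exactly, so for unit vectors it moves by at most 2ε.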
3.2. SDP Relaxation

We consider the following relaxation for Λ-CSP: given an instance 𝒫 with variable set V, the goal is to find a collection of vectors {u_{i,a}}_{i∈V, a∈[q]} ⊆ ℝ^d and a collection {μ_P}_{P∈supp(𝒫)} of distributions over local assignments. Each distribution μ_P is over [q]^{V(P)} (the set of assignments to the variable set V(P)). We will write Pr_{x∼μ_P}{E} to denote the probability of an event E under the distribution μ_P.

SDP_gen:

    maximize   E_{P∼𝒫} E_{x∼μ_P} P(x)                                    (1)
    subject to ⟨u_{i,a}, u_{j,b}⟩ = Pr_{x∼μ_P}{x_i = a, x_j = b}          (2)
                   (P ∈ supp(𝒫), i, j ∈ V(P), a, b ∈ [q]),
               ⟨u_0, u_{i,a}⟩ = Pr_{x∼μ_P}{x_i = a}                       (3)
                   (P ∈ supp(𝒫), i ∈ V(P), a ∈ [q]).

Here u_0 can be any fixed unit vector in ℝ^d, and d can be any sufficiently large number, say d = q|V|.

We denote by sdp(𝒫) the maximum value of an SDP solution for 𝒫. Clearly, sdp(𝒫) ⩾ opt(𝒫). We claim that the optimization problem SDP_gen is (equivalent to) a semidefinite program of polynomial size, and thus it can be solved in polynomial time (to arbitrary accuracy). To show this claim, let us introduce additional real-valued variables μ_{P,x} for P ∈ supp(𝒫) and x ∈ [q]^{V(P)}. We add the constraints μ_{P,x} ⩾ 0 and Σ_{x∈[q]^{V(P)}} μ_{P,x} = 1. We can now make the following substitutions to eliminate the distributions μ_P:

    E_{x∼μ_P} P(x) = Σ_{x∈[q]^{V(P)}} μ_{P,x} P(x),
    Pr_{x∼μ_P}{x_i = a} = Σ_{x∈[q]^{V(P)}, x_i = a} μ_{P,x},
    Pr_{x∼μ_P}{x_i = a, x_j = b} = Σ_{x∈[q]^{V(P)}, x_i = a, x_j = b} μ_{P,x}.

After substituting the distributions μ_P by the scalar variables μ_{P,x}, we are left with a linear optimization problem over the cone of positive semidefinite matrices, an SDP, of size poly(q^k, |supp(𝒫)|).

4. Rounding General CSPs

Variable folding: Let 𝒫 be a Λ-CSP instance with variable set V = [n]. For a mapping φ : V → W, we define a new Λ-CSP instance 𝒫/φ on the variable set W by identifying variables of 𝒫 that get mapped to the same variable in W. Formally, the payoff functions in 𝒫/φ are of the form P(x_{φ(1)}, ..., x_{φ(n)}) for x ∈ [q]^W. Since any assignment for 𝒫/φ corresponds to an assignment for 𝒫, we can note the following fact.

Fact 4.1: opt(𝒫) ⩾ opt(𝒫/φ).

In general, the optimal value of the folded instance might be significantly lower than the optimal value of the original instance. However, we will show that we can always find a variable folding that approximately preserves the SDP value (of an instance that is close to the original instance).

Theorem 4.2: Given ε > 0 and a Λ-CSP instance 𝒫, we can efficiently compute another Λ-CSP instance 𝒫′ and a variable folding φ such that
1) 𝒫′ is obtained by discarding an ε fraction of payoffs from the instance 𝒫, i.e., ‖𝒫 − 𝒫′‖_1 ⩽ ε,
2) sdp(𝒫′/φ) ⩾ sdp(𝒫) − ε,
3) the variable set of 𝒫′/φ has cardinality exp(poly(kq/ε)).

Given the above theorem, we can immediately show the main results of the paper.

Proof of Theorem 1.1: Given a Λ-CSP instance 𝒫 with variable set V = [n], we first compute another instance 𝒫′ and a folding φ according to Theorem 4.2. Since 𝒫′/φ has only exp(poly(kq/ε)) variables, we can compute an optimal assignment for 𝒫′/φ in time exp(exp(poly(kq/ε))). This assignment can be unfolded to an assignment x ∈ [q]^n with the same value for 𝒫′. Since ‖𝒫 − 𝒫′‖_1 ⩽ ε, the assignment x has value at least opt(𝒫′/φ) − ε for the instance 𝒫. By definition of S_Λ, we have opt(𝒫′/φ) ⩾ S_Λ(sdp(𝒫′/φ)) ⩾ S_Λ(sdp(𝒫) − ε). Hence the assignment x ∈ [q]^n has value at least S_Λ(sdp(𝒫) − ε) − ε, as claimed.

Proof of Theorem 1.3: By Theorem 4.2, to compute the SDP integrality gap within ε, it is sufficient to go over all instances of size exp(poly(kq/ε)). Thus the algorithm would just discretize the space of instances with exp(poly(kq/ε)) many variables, and compute the SDP and optimum value for each instance.

The rest of this section is devoted to the proof of Theorem 4.2. The construction of 𝒫′/φ is described below.

Dimension reduction: Let {u_{i,a}}, {μ_P} be an SDP solution for a Λ-CSP instance 𝒫 on the variable set V. Suppose u_{i,a} ∈ ℝ^D. We apply the following procedure to reduce the dimension from D to d:
1) Sample a d × D Gaussian matrix Φ, where each entry is independently distributed according to the Gaussian distribution N(0, 1/d).
2) Replace every vector u_{i,a} by its image Φu_{i,a} under the map, and likewise replace u_0 by Φu_0.

Discarding bad constraints: Let B_ε ⊆ supp(𝒫) be the set of payoff functions P such that the vectors u_{i,a} and the distributions μ_P violate one of the SDP constraints corresponding to P by more than ε. Define the instance 𝒫′ on the set of variables V by removing all payoff functions in B_ε from 𝒫; formally, 𝒫′ is obtained by conditioning the distribution 𝒫 on the event P ∉ B_ε.
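The dimension-reduction step and the set B_ε can be sketched directly. As a simplification we assume for illustration that each payoff is summarized by the list of index pairs whose inner products its SDP constraints pin down; the function names are our own.

```python
import numpy as np

def reduce_dimension(vectors, d, rng):
    # Sample a d x D matrix with independent N(0, 1/d) entries and map every
    # vector to its image under it.
    D = vectors.shape[1]
    Phi = rng.normal(0.0, 1.0 / np.sqrt(d), size=(d, D))
    return vectors @ Phi.T

def bad_payoffs(constraint_pairs, original, projected, eps):
    # B_eps: payoffs for which some prescribed inner product moved by more
    # than eps under the projection.
    return [p for p, pairs in enumerate(constraint_pairs)
            if any(abs(projected[i] @ projected[j] - original[i] @ original[j]) > eps
                   for i, j in pairs)]
```

An instance would then be conditioned on its payoff not appearing in the returned list, mirroring the "discarding bad constraints" step.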

Folding by discretization: Let N be an ε-net for the unit ball in ℝ^d. We have |N| ⩽ (c/ε)^d for some absolute constant c. For every vector u_{i,a}, let w_{i,a} denote its closest vector in N. We identify variables of 𝒫′ that have the same vectors w_{i,a}. Formally, we output the Λ-CSP instance 𝒫′/φ, where φ : V → N^q is defined as

    φ(i) := (w_{i,1}, ..., w_{i,q}).

4.1. Property of the Dimension Reduction

The key property of the dimension reduction is that it approximately preserves inner products between vectors.

Lemma 4.3 (Inner products are preserved approximately): For any two vectors u_1, u_2 ∈ ℝ^D in the unit ball,

    Pr_Φ { |⟨Φu_1, Φu_2⟩ − ⟨u_1, u_2⟩| > t } ⩽ O(1/(t²d)).

For the sake of completeness, we present the straightforward proof of this property in Appendix 6.1. It is clear that the dimension-reduced vectors u_{i,a} together with the distributions μ_P need not form a feasible SDP solution. However, we can deduce from Lemma 4.3 that with good probability most of the constraints will be nearly satisfied. It follows that not too many payoffs are discarded from 𝒫 to construct 𝒫′.

Lemma 4.4: For every payoff P ∈ supp(𝒫),

    Pr_Φ { P ∈ B_ε } ⩽ O(k²q²/(ε²d)).

4.2. Discretization

Consider two vectors u_{i,a} and u_{j,b} in the unit ball. It is clear that if we move the vectors to their closest points in N, their inner product changes by at most 2ε. (Since N is an ε-net of the unit ball, each vector is moved by at most ε.) A minor technical issue is that some of the points u_{i,a} might be outside of the unit ball. However, vectors of norm more than √(1+ε) can be ignored, because they violate the constraint ⟨u_{i,a}, u_{i,a}⟩ ⩽ 1 by more than ε. In particular, the following lemma holds.

Lemma 4.5: For small enough ε > 0, suppose the vectors u_{i,a} satisfy all constraints corresponding to some payoff function P ∈ supp(𝒫′) up to an error of ε. Then the vectors w_{i,a} satisfy all constraints corresponding to P up to an error of 4ε.

4.3. Robustness of the SDP_gen Relaxation

To finish the proof of Theorem 4.2, we need to construct a completely feasible SDP solution to 𝒫′/φ from the vectors w_{i,a}, which nearly satisfy all the constraints. We will show the following theorem in Section 5.

Theorem 4.6 (Robustness of SDP_gen): Let 𝒫 be a Λ-CSP instance on variable set V. Suppose {u_{i,a}}_{i∈V, a∈[q]}, {μ_P}_{P∈supp(𝒫)} is an ε-infeasible SDP solution for 𝒫 of value α. Here, ε-infeasible means that all constraints (2)–(3) of the relaxation SDP_gen are satisfied up to an additive error of at most ε. Then,

    α ⩾ sdp(𝒫) − √ε · poly(kq).

4.4. Proof of Theorem 4.2

Assuming Theorem 4.6 (robustness of SDP_gen), we can now complete the proof of Theorem 4.2. For simplicity, we assume that the SDP solution {u_{i,a}}, {μ_P} that was used in the construction of 𝒫′/φ has value sdp(𝒫). (The proof also works if the value of this SDP solution is close to the optimal value.)

Recall that B_ε ⊆ supp(𝒫) is the set of payoff functions P whose constraints are violated by more than ε by the dimension-reduced vectors u_{i,a}. For d ⩾ k²q²/ε³, Lemma 4.4 implies that with high probability ‖𝒫 − 𝒫′‖_1 ⩽ ε. Note that the vectors {u_{i,a}} together with the original local distributions {μ_P} form an ε-infeasible SDP solution for 𝒫′. Hence, by Lemma 4.5, the SDP solution {w_{i,a}}, {μ_P} is 4ε-infeasible. The value of this SDP solution for the instance 𝒫′ is at least sdp(𝒫) − ‖𝒫 − 𝒫′‖_1 ⩾ sdp(𝒫) − ε. The key observation is now that the SDP solution {w_{i,a}}, {μ_P} is also a solution for the folded instance 𝒫′/φ. We see that 𝒫′/φ has a 4ε-infeasible SDP solution of value at least sdp(𝒫) − ε. Theorem 4.6 (robustness of SDP_gen) asserts that in this situation we can conclude sdp(𝒫′/φ) ⩾ sdp(𝒫) − √ε · poly(kq). Finally, we observe that the cardinality of the variable set of 𝒫′/φ is at most |N|^q ⩽ (c/ε)^{dq} = exp(poly(kq/ε)).

5. Surgery & Smoothing

Let {u_{i,a}}, {μ_P} be an ε-infeasible SDP solution for a Λ-CSP instance 𝒫 on the variable set V = [n]. Recall that an ε-infeasible SDP solution satisfies, for all P ∈ supp(𝒫), i, j ∈ V(P), and a, b ∈ [q],

    |⟨u_{i,a}, u_{j,b}⟩ − Pr_{x∼μ_P}{x_i = a, x_j = b}| ⩽ ε,        (4)

and for all i ∈ V(P) and a ∈ [q],

    |⟨u_0, u_{i,a}⟩ − Pr_{x∼μ_P}{x_i = a}| ⩽ ε.                     (5)

Here we are using the fact that, for each payoff P ∈ supp(𝒫′), the corresponding constraints in the relaxation involve just a single inner product. We also use the fact that the vectors u_{i,a} for a variable i ∈ V(P) with P ∈ supp(𝒫′) have norms at most √(1+ε).

We construct a feasible solution that is close to the given SDP solution in two steps. In the first step, called "surgery", we construct vectors {u′_{i,a}} that satisfy the equality constraints on the SDP vectors, i.e., ⟨u′_{i,a}, u′_{i,b}⟩ = 0 for all i ∈ V and all a ≠ b ∈ [q], and

    Σ_{a∈[q]} u′_{i,a} = u_0 for all i ∈ V.

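The two repair steps that Section 5 develops, "surgery" (making the equality constraints hold exactly) and "smoothing" (a convex combination with the uniform distribution that absorbs small negative dips), can be sketched as follows. The toy numbers and the bare Gram–Schmidt repair are assumptions for illustration only; in particular, the sketch omits the rotation that aligns each variable's vector sum with a common u₀:

```python
import numpy as np

rng = np.random.default_rng(1)
q, dim, delta = 3, 8, 0.2

# Noisy "SDP vectors" u_{i,1..q}: nearly orthogonal, sum nearly a unit vector.
U = np.eye(q, dim) / np.sqrt(q) + 0.01 * rng.standard_normal((q, dim))

# Surgery: Gram-Schmidt makes the vectors exactly pairwise orthogonal ...
def gram_schmidt(rows):
    out = []
    for r in rows:
        for o in out:
            r = r - (r @ o) / (o @ o) * o
        out.append(r)
    return np.array(out)

V = gram_schmidt(U)
# ... then rescale so that the sum of the vectors is exactly a unit vector.
V /= np.linalg.norm(V.sum(axis=0))
assert abs(V[0] @ V[1]) < 1e-9                     # orthogonality, as in (6)
assert abs(np.linalg.norm(V.sum(axis=0)) - 1) < 1e-9

# Smoothing: an "almost distribution" f' (sums to 1, with negative entries
# of magnitude at most delta/q^k) becomes a proper distribution after a
# convex combination with the uniform distribution.
f_prime = np.array([0.50, 0.55, -0.05])            # toy case: q^k = 3 outcomes
h = (1 - delta) * f_prime + delta / len(f_prime)
assert np.all(h >= 0) and abs(h.sum() - 1) < 1e-12
```

The final assertion mirrors the lemma below: mixing with the uniform distribution preserves the total mass and lifts every entry by δ/qᵏ, which dominates any negative dip of that magnitude.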
In the second step, called "smoothing", we construct a feasible SDP solution {w_{i,a}}, {μ′_P}. In this step, the vectors and the local distributions are "smoothed", which allows us to modify the local distributions so that they match the vectors perfectly.

Lemma 5.1 (Surgery): The vectors {u_{i,a}} can be transformed to vectors {u′_{i,a}} such that for all i ∈ V and all a ≠ b ∈ [q],

  ⟨u′_{i,a}, u′_{i,b}⟩ = 0,  (6)

and for all i ∈ V,

  ∑_{a∈[q]} u′_{i,a} = u₀.  (7)

Furthermore, for all i ∈ V and a ∈ [q],

  ‖u_{i,a} − u′_{i,a}‖ ≤ √ε · poly(q).  (8)

In particular, the SDP solution {u′_{i,a}}, {μ_P} is η-infeasible for η = √ε · poly(q).

Proof: From (4) it follows that ‖u_{i,a}‖² ≤ 1 + ε and |⟨u_{i,a}, u_{i,b}⟩| ≤ ε for all a ≠ b ∈ [q]. Therefore, if we apply the Gram–Schmidt orthogonalization process to the vectors u_{i,1}, …, u_{i,q}, the resulting vectors u″_{i,1}, …, u″_{i,q} satisfy ‖u_{i,a} − u″_{i,a}‖ ≤ O(ε · q). For every variable i ∈ V, we compute a rescaling factor α_i such that u_{i,0} := α_i ∑_{a∈[q]} u″_{i,a} is a unit vector. Note that α_i = 1 ± ε · poly(q). Furthermore, ⟨u_{i,0}, u₀⟩ ≥ 1 − ε · poly(q). Therefore, the angle ∠(u_{i,0}, u₀) is at most √ε · poly(q). For every variable i ∈ V, we define a rotation U_i which maps the vector u_{i,0} to u₀ and acts as the identity on the space orthogonal to the plane span{u_{i,0}, u₀}. We claim that the vectors u′_{i,a} := α_i U_i u″_{i,a} satisfy the conditions of the lemma. By construction, the vectors satisfy the constraints (6) and (7). Since U_i is a rotation by an angle of at most √ε · poly(q), we have ‖U_i − I‖₂ ≤ √ε · poly(q) and therefore ‖u′_{i,a} − α_i u″_{i,a}‖ ≤ √ε · poly(q). The previous observations imply that ‖α_i u″_{i,a} − u_{i,a}‖ ≤ ε · poly(q). Thus, the vectors {u′_{i,a}} also satisfy the third condition (8). ∎

The existence of a local distribution μ_P imposes constraints on the vectors corresponding to V(P). Specifically, the inner products of the vectors corresponding to V(P) must lie in a certain polytope Q_P of constant dimension, to ensure the existence of a matching local distribution μ_P. The SDP solution {u′_{i,a}} has local distributions that match up to an error of η. In other words, for every payoff P, the vectors corresponding to V(P) are within distance η from the corresponding polytope Q_P.

The idea of smoothing is to take a convex combination of the SDP solution {u′_{i,a}} with the SDP solution corresponding to the uniform distribution over all assignments. By a suitable basis change, the local polytopes Q_P can be made full-dimensional, in that they are defined by a set of inequalities (no equations involved). The SDP solution corresponding to the uniform distribution over all assignments lies at the center of each of these local polytopes Q_P. As {u′_{i,a}} is only η away from each of these polytopes, it moves into the polytope on taking a convex combination with the center. The above intuition is formalized in the following lemma.

Lemma 5.2 (Smoothing): The local distributions {μ_P} can be transformed to distributions {μ′_P} such that for all P ∈ supp(𝒫), i, j ∈ V(P), and a, b ∈ [q],

  Pr_{x∼μ′_P}{ x_i = a, x_j = b } = (1 − δ) · ⟨u′_{i,a}, u′_{j,b}⟩ + δ · (1/q²),  (9)

where δ = k²q⁴η. Furthermore, for every P ∈ supp(𝒫),

  ‖μ_P − μ′_P‖₁ ≤ 3δ.

Proof: Let us fix a payoff function P ∈ supp(𝒫). Let S = V(P). We may assume S = {1, …, k}. We can think of μ_P as a function f : [q]^k → ℝ such that f(x) is the probability of the assignment x under the distribution μ_P. For the case q = 2, the constraint (9) translates to a condition on the degree-2 Fourier coefficients of f. For larger q, we introduce the following generalization of Fourier bases.

Let χ₁, …, χ_q be an orthonormal basis of the vector space {f : [q] → ℝ} such that χ₁ ≡ 1. (Here, orthonormal means E_{a∈[q]} χ_i(a)χ_j(a) = δ_{ij} for all i, j ∈ [q].) By tensoring this basis, we obtain the orthonormal basis {χ_σ | σ ∈ [q]^k} of the vector space {f : [q]^k → ℝ}. For σ ∈ [q]^k, we have χ_σ(x) = χ_{σ₁}(x₁) ··· χ_{σ_k}(x_k). For a function f : [q]^k → ℝ, we denote by f̂(σ) the χ_σ-coefficient of f, i.e., f̂(σ) = ∑_{x∈[q]^k} f(x) χ_σ(x). Note that f = E_{σ∈[q]^k} f̂(σ) χ_σ.

Therefore, if we let f again be the function corresponding to μ_P, then for all i, j ∈ S and a, b ∈ [q],

  Pr_{x∼μ_P}{ x_i = a, x_j = b } = E_{σ∈[q]^k} f̂(σ) ( ∑_{x∈[q]^k : x_i=a, x_j=b} χ_σ(x) )  (10)

  = E_{σ∈[q]²} f̂_{ij}(σ) · χ_{σ₁}(a) χ_{σ₂}(b).  (11)

Here, f̂_{ij}(s, t) is defined as the coefficient f̂(σ) where σ_i = s, σ_j = t, and σ_r = 1 for all r ∈ S \ {i, j}. In the second equality, we used that for every σ with σ_r ≠ 1 for some r ∈ S \ {i, j}, the sum over the values of χ_σ in (10) vanishes.

For every variable pair i, j ∈ S, let g_{ij} : [q]² → ℝ be the function g_{ij}(a, b) = ⟨u′_{i,a}, u′_{j,b}⟩. Similarly, we let g_i : [q] → ℝ be the function g_i(a) = ⟨u₀, u′_{i,a}⟩ = ⟨u′_{i,a}, u′_{i,a}⟩. We define a function f′ : [q]^k → ℝ as follows:

  f̂′(σ) = ĝ_i(s)  if σ_i = s and σ_r = 1 for all r ∈ S \ {i},
  f̂′(σ) = ĝ_{ij}(s, t)  if σ_i = s, σ_j = t, and σ_r = 1 for all r ∈ S \ {i, j},
  f̂′(σ) = f̂(σ)  otherwise.

The conditions (6) and (7) imply that ĝ_{ij}(s, 1) = ĝ_i(s) for all i, j ∈ S, so the first two cases are consistent. We also have ĝ_{ij}(1, 1) = ĝ_i(1) = f̂(1, …, 1) = 1. Therefore, the identity in (10)–(11) applied to f′ shows that for all i, j ∈ S and a, b ∈ [q],

  ⟨u′_{i,a}, u′_{j,b}⟩ = E_{σ∈[q]^k} f̂′(σ) ( ∑_{x∈[q]^k : x_i=a, x_j=b} χ_σ(x) ) = ∑_{x∈[q]^k : x_i=a, x_j=b} f′(x).  (12)

We could finish the proof at this point if the function f′ would correspond to a distribution μ′_P over assignments [q]^k. The function f′ satisfies ∑_{x∈[q]^k} f′(x) = f̂′(1, …, 1) = 1. However, in general, the function f′ might take negative values. We will show that these values cannot be too negative and that the function can be made into a proper distribution by smoothing.

Let K be an upper bound on the values of the functions χ₁, …, χ_q. From the orthonormality of the functions, it follows that K ≤ √q. Let f_{ij}(a, b) = Pr_{x∼μ_P}{ x_i = a, x_j = b }. Recall that we computed in (11) the coefficients of f_{ij} in the basis {χ_s χ_t | s, t ∈ [q]}. Since the SDP solution {u′_{i,a}}, {μ_P} is η-infeasible, we have

  ĝ_{ij}(s, t) = ∑_{a,b∈[q]} g_{ij}(a, b) χ_{st}(a, b) = ∑_{a,b∈[q]} f_{ij}(a, b) χ_{st}(a, b) ± q²K²η = f̂_{ij}(s, t) ± q²K²η.

Therefore, |f̂′(σ) − f̂(σ)| ≤ K²q²η for all σ ∈ [q]^k. Thus,

  f′(x) = E_{σ∈[q]^k} f̂′(σ) χ_σ(x) = E_{σ∈[q]^k} f̂(σ) χ_σ(x) ± δ/q^k = f(x) ± δ/q^k,  (13)

where δ := k²K⁴q²η ≤ k²q⁴η. Hence, if we let h = (1 − δ) · f′ + δ · U, where U : [q]^k → ℝ is the uniform distribution U ≡ 1/q^k, then

  h = (1 − δ) f′ + δ/q^k ≥ (1 − δ)(f − δ/q^k) + δ/q^k ≥ 0.

It follows that h corresponds to a distribution μ′_P over assignments [q]^k. Furthermore, from (12) it follows that for all i, j ∈ S and a, b ∈ [q],

  Pr_{x∼μ′_P}{ x_i = a, x_j = b } = (1 − δ) · ⟨u′_{i,a}, u′_{j,b}⟩ + δ · (1/q²).

Finally, let us estimate the statistical distance between the distributions μ_P and μ′_P:

  ‖h − f‖₁ = ‖δ · (U − f) + (1 − δ)(f′ − f)‖₁
    ≤ 2δ + ‖f′ − f‖₁  (using the triangle inequality)
    ≤ 3δ  (using (13)).

In this way, we can construct a suitable distribution μ′_P for every P ∈ supp(𝒫), which proves the lemma. ∎

5.1. Proof of Theorem 4.6 (Robustness of SDP_gen)

Let us consider an ε-infeasible SDP solution {u_{i,a}}, {μ_P} for a Λ-CSP instance 𝒫. Suppose that this SDP solution has value α.

First, we construct vectors {u′_{i,a}} as in Lemma 5.1. These vectors together with the original local distributions {μ_P} form an η-infeasible SDP solution for 𝒫, where η = √ε · poly(q). Next, we construct local distributions {μ′_P} as in Lemma 5.2. Define new vectors

  w_{i,a} := √(1−δ) · u′_{i,a} ⊕ √δ · u″_{i,a},

where ⊕ denotes the direct sum of vectors and {u″_{i,a}} are vectors corresponding to the uniform average over all feasible SDP solutions (which satisfy ⟨u″_{i,a}, u″_{j,b}⟩ = 1/q² for all i, j ∈ V and all a, b ∈ [q]). From Lemma 5.1 and Lemma 5.2 it follows that {w_{i,a}}, {μ′_P} is a feasible SDP solution for 𝒫.

It remains to estimate the value of this feasible SDP solution:

  E_{P∼𝒫} E_{x∼μ′_P} P(x) = α − E_{P∼𝒫} ( E_{x∼μ_P} P(x) − E_{x∼μ′_P} P(x) )
    ≥ α − E_{P∼𝒫} ‖μ_P − μ′_P‖₁
    ≥ α − poly(kq) · η.

For the first inequality, we used that |P(x)| ≤ 1. The second inequality follows from Lemma 5.2. (In the last calculation, we just verified that the value of SDP solutions is Lipschitz in the statistical distance of the local distributions.)

References

[1] N. Alon and A. Naor, "Approximating the cut-norm via Grothendieck's inequality," SIAM J. Comput., vol. 35, no. 4, pp. 787–803, 2006.
[2] S. Arora, E. Chlamtac, and M. Charikar, "New approximation guarantee for chromatic number," in STOC, 2006, pp. 215–224.
[3] S. Arora, S. Rao, and U. V. Vazirani, "Expander flows, geometric embeddings and graph partitioning," in STOC, 2004, pp. 222–231.
[4] P. Austrin, "Towards sharp inapproximability for any 2-CSP," in FOCS, 2007, pp. 307–317.
[5] M. Charikar, K. Makarychev, and Y. Makarychev, "Near-optimal algorithms for unique games," in STOC, 2006, pp. 205–214.
[6] ——, "Near-optimal algorithms for maximum constraint satisfaction problems," in SODA, 2007, pp. 62–68. [Online]. Available: http://doi.acm.org/10.1145/1283383.1283391
[7] ——, "On the advantage over random for maximum acyclic subgraph," in FOCS, 2007, pp. 625–633.
[8] M. Charikar and A. Wirth, "Maximizing quadratic programs: Extending Grothendieck's inequality," in FOCS, 2004, pp. 54–60.
[9] E. Chlamtac, "Approximation algorithms using hierarchies of semidefinite programming relaxations," in FOCS, 2007, pp. 691–701.
[10] E. Chlamtac, K. Makarychev, and Y. Makarychev, "How to play unique games using embeddings," in FOCS, 2006, pp. 687–696.
[11] E. Chlamtac and G. Singh, "Improved approximation guarantees through higher levels of SDP hierarchies," in APPROX-RANDOM, 2008, pp. 49–62.
[12] B. Chor and M. Sudan, "A geometric approach to betweenness," SIAM J. Discrete Math., vol. 11, no. 4, pp. 511–523 (electronic), 1998.
[13] A. Frieze and M. Jerrum, "Improved approximation algorithms for MAX k-CUT and MAX BISECTION," Algorithmica, vol. 18, no. 1, pp. 67–81, 1997.
[14] A. M. Frieze and R. Kannan, "Quick approximation to matrices and applications," Combinatorica, vol. 19, no. 2, pp. 175–220, 1999.

[15] M. X. Goemans and D. P. Williamson, "Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming," J. ACM, vol. 42, no. 6, pp. 1115–1145, 1995.
[16] E. Halperin and U. Zwick, "Approximation algorithms for MAX 4-SAT and rounding procedures for semidefinite programs," J. Algorithms, vol. 40, no. 2, pp. 184–211, 2001.
[17] G. Hast, "Beating a random assignment," in APPROX-RANDOM, 2005, pp. 134–145.
[18] K. Jain, "A factor 2 approximation algorithm for the generalized Steiner network problem," Combinatorica, vol. 21, no. 1, pp. 39–60, 2001.
[19] D. R. Karger, R. Motwani, and M. Sudan, "Approximate graph coloring by semidefinite programming," J. ACM, vol. 45, no. 2, pp. 246–265, 1998.
[20] H. J. Karloff and U. Zwick, "A 7/8-approximation algorithm for MAX 3SAT?" in FOCS, 1997, pp. 406–415.
[21] S. Khot, "On the power of unique 2-prover 1-round games," in STOC, 2002, pp. 767–775.
[22] ——, "On the power of unique 2-prover 1-round games," in STOC, 2002, pp. 767–775.
[23] S. Khot, G. Kindler, E. Mossel, and R. O'Donnell, "Optimal inapproximability results for MAX-CUT and other 2-variable CSPs?" SIAM J. Comput., vol. 37, no. 1, pp. 319–357, 2007.
[24] S. Khot and A. Naor, "Linear equations modulo 2 and the L₁ diameter of convex bodies," SIAM J. Comput., vol. 38, no. 4, pp. 1448–1463, 2008.
[25] S. Khot and R. O'Donnell, "SDP gaps and UGC-hardness for MAXCUTGAIN," in FOCS, 2006, pp. 217–226.
[26] S. Khot and N. K. Vishnoi, "The unique games conjecture, integrality gap for cut problems and embeddability of negative type metrics into ℓ₁," in FOCS, 2005, pp. 53–62.
[27] M. Lewin, D. Livnat, and U. Zwick, "Improved rounding techniques for the MAX 2-SAT and MAX DI-CUT problems," in IPCO, 2002, pp. 67–82.
[28] S. Matuura and T. Matsui, "0.863-approximation algorithm for MAX DICUT," in RANDOM-APPROX, 2001, pp. 138–146.
[29] Y. Nesterov, "Semidefinite relaxation and nonconvex quadratic optimization," Université catholique de Louvain, Center for Operations Research and Econometrics (CORE), CORE Discussion Papers 1997044, Jun. 1997. [Online]. Available: http://ideas.repec.org/p/cor/louvco/1997044.html
[30] J. Nocedal and S. J. Wright, Numerical optimization. New York: Springer-Verlag, 1999.
[31] R. O'Donnell and Y. Wu, "An optimal SDP algorithm for max-cut, and equally optimal long code tests," in STOC, 2008, pp. 335–344.
[32] P. Raghavendra, "Optimal algorithms and inapproximability results for every CSP?" in STOC, 2008, pp. 245–254.
[33] P. Raghavendra and D. Steurer, "Integrality gaps for strong SDP relaxations of unique games," in FOCS, 2009, to appear.
[34] M. Singh and L. C. Lau, "Approximating minimum bounded degree spanning trees to within one of optimal," in STOC, 2007, pp. 661–670.
[35] V. V. Vazirani, Approximation algorithms. Berlin: Springer-Verlag, 2001.
[36] S. J. Wright, Primal-dual interior-point methods. Philadelphia, PA: Society for Industrial and Applied Mathematics (SIAM), 1997.
[37] U. Zwick, "Finding almost-satisfying assignments," in STOC, 1998, pp. 551–560.

6. Appendix

6.1. Property of Dimension Reduction

Lemma 4.3 (Inner products are preserved approximately, restated): For any two vectors u₁, u₂ ∈ ℝ^D in the unit ball,

  Pr_Φ { |⟨Φu₁, Φu₂⟩ − ⟨u₁, u₂⟩| > t } ≤ O(1/(t²d)).

Proof: Note that we may assume that both vectors are unit vectors (otherwise, we can normalize them). Suppose ⟨u₁, u₂⟩ = α. By rotational invariance, we can assume that u₁ = (1, 0, …, 0) and u₂ = (α, β, 0, …, 0), where β = √(1 − α²). Hence, ⟨Φu₁, Φu₂⟩ has the same distribution as

  (1/d) ∑_{i=1}^{d} ξ_i (α ξ_i + β ξ′_i),

where ξ₁, …, ξ_d, ξ′₁, …, ξ′_d are independent standard Gaussian variables (mean 0 and standard deviation 1). For each i, the expectation of ξ_i(α ξ_i + β ξ′_i) is equal to α and the variance is bounded (at most 2). Hence, the expectation of ⟨Φu₁, Φu₂⟩ is equal to α and the standard deviation is O(√(1/d)). The lemma follows from Chebyshev's inequality. ∎
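The Chebyshev argument above is easy to check numerically. This Python sketch (the dimension, trial count, threshold, and 0.05 slack are arbitrary illustrative choices) simulates the distribution (1/d) ∑ᵢ ξᵢ(αξᵢ + βξ′ᵢ) and compares the empirical deviation frequency with the O(1/(t²d)) bound:

```python
import numpy as np

rng = np.random.default_rng(2)
d, trials, t = 100, 2000, 0.3
alpha = 0.6
beta = np.sqrt(1 - alpha**2)

# Each row simulates <Phi u1, Phi u2> = (1/d) * sum_i xi_i (alpha xi_i + beta xi'_i).
xi = rng.standard_normal((trials, d))
xi_p = rng.standard_normal((trials, d))
est = (xi * (alpha * xi + beta * xi_p)).mean(axis=1)

# Each term has mean alpha and variance 2*alpha^2 + beta^2 <= 2, so
# Chebyshev gives Pr{|est - alpha| > t} <= 2 / (t^2 * d).
emp = np.mean(np.abs(est - alpha) > t)
cheb = 2 / (t**2 * d)
assert emp <= cheb + 0.05   # empirical rate respects the bound (with slack)
```

With these parameters the empirical deviation frequency is far below the Chebyshev bound of 2/(t²d) ≈ 0.22, as expected, since Chebyshev is loose for sums of this many independent terms.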