# Balancing Vectors in Any Norm

## Transcript

1 Balancing Vectors in Any Norm. Aleksandar (Sasho) Nikolov, University of Toronto. Based on joint work with Daniel Dadush, Kunal Talwar, and Nicole Tomczak-Jaegermann. Sasho Nikolov (U of T), Balancing Vectors, 1 / 25.

2 Outline: 1. Introduction; 2. Volume Lower Bound; 3. Factorization Upper Bounds; 4. Conclusion.

3 Discrepancy. The slide works through an example: a $0/1$ matrix $U \in \{0,1\}^{4 \times 10}$ multiplied by a sign vector $\varepsilon \in \{\pm 1\}^{10}$, illustrating the definition
$$\mathrm{disc}(U, \|\cdot\|_\infty) = \min_{\varepsilon \in \{\pm 1\}^N} \|U\varepsilon\|_\infty.$$
It is natural to consider arbitrary norms: any norm can be written as $\|U \cdot\|_\infty$.
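To make the definition concrete, $\mathrm{disc}(U, \|\cdot\|_\infty)$ can be computed by brute force for small instances. A sketch; the incidence matrix below is an arbitrary small example, not the one on the slide:

```python
import itertools
import numpy as np

def disc_inf(U):
    """disc(U, ||.||_inf): minimize ||U @ eps||_inf over eps in {-1,1}^N."""
    N = U.shape[1]
    return min(int(np.abs(U @ np.array(s)).max())
               for s in itertools.product([-1, 1], repeat=N))

# A small 0/1 incidence matrix (rows = sets, columns = elements).
U = np.array([[1, 1, 1, 0],
              [0, 1, 1, 1],
              [1, 0, 1, 1]])
print(disc_inf(U))  # prints 1: every row has an odd number of ones
```

Since each row of this $U$ contains an odd number of ones, no coloring can balance it perfectly, and the brute-force minimum is 1.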

5 Basic Bounds. Most combinatorial discrepancy bounds are implied by geometric vector balancing arguments.

- [Spencer, 1985; Gluskin, 1989]: For any matrix $U \in \{0,1\}^{n \times N}$, $\mathrm{disc}(U) \lesssim \sqrt{n}$. Implied by: for any $u_1, \dots, u_N \in B_\infty^n = [-1,1]^n$ there exist $\varepsilon_1, \dots, \varepsilon_N \in \{-1,+1\}$ s.t. $\|\varepsilon_1 u_1 + \dots + \varepsilon_N u_N\|_\infty \lesssim \sqrt{n}$.
- [Beck and Fiala, 1981]: For any matrix $U \in \{0,1\}^{n \times N}$ with at most $t$ ones per column, $\mathrm{disc}(U) \leq 2t - 1$. Implied by: for any $u_1, \dots, u_N \in B_1^n$ there exist $\varepsilon_1, \dots, \varepsilon_N \in \{-1,+1\}$ s.t. $\|\varepsilon_1 u_1 + \dots + \varepsilon_N u_N\|_\infty < 2$.
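The vector balancing form of the Beck–Fiala bound can be checked numerically: for vectors in the $\ell_1$ ball, some signing has $\ell_\infty$ norm below 2. A brute-force sketch on random $\ell_1$-normalized vectors:

```python
import itertools
import numpy as np

def best_signing_inf(vectors):
    """min over eps in {-1,1}^N of ||eps_1 u_1 + ... + eps_N u_N||_inf."""
    best = float("inf")
    for signs in itertools.product([-1, 1], repeat=len(vectors)):
        s = sum(e * u for e, u in zip(signs, vectors))
        best = min(best, np.abs(s).max())
    return best

rng = np.random.default_rng(0)
# Random vectors normalized into the l1 ball B_1^n.
vs = [v / np.abs(v).sum() for v in rng.standard_normal((8, 4))]
print(best_signing_inf(vs) < 2)  # prints True, as the bound guarantees
```

The guarantee is worst case, so it holds for any choice of vectors in $B_1^n$, not just this random instance.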

10 The Vector Balancing Problem. Given $u_1, \dots, u_N \in \mathbb{R}^n$ and a symmetric convex body $K \subset \mathbb{R}^n$ ($K = -K$), find the smallest $t$ such that $\exists\, \varepsilon_1, \dots, \varepsilon_N \in \{-1,+1\} : \varepsilon_1 u_1 + \dots + \varepsilon_N u_N \in tK$. (The slide illustrates this with a picture of the four signed sums $\pm u_1 \pm u_2$ of two vectors.)

- Minkowski norm: $\|x\|_K = \inf\{t : x \in tK\}$; the smallest such $t$ is exactly $\mathrm{disc}((u_i)_{i=1}^N, \|\cdot\|_K)$.
- Vector balancing constant (worst case over sequences in $C$): $\mathrm{vb}(C, K) = \sup\{\mathrm{disc}(U, \|\cdot\|_K) : N \in \mathbb{N},\ u_1, \dots, u_N \in C,\ U = (u_i)_{i=1}^N\}$.
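The Minkowski norm above can be computed from a membership oracle for $K$ by bisection on $t$. A minimal sketch; the oracle interface and the bisection bounds are assumptions for illustration:

```python
import numpy as np

def minkowski_norm(x, in_K, hi=1e6, tol=1e-9):
    """||x||_K = inf{t : x in tK}, by bisection on t, given a membership
    oracle in_K for the symmetric convex body K (assumes ||x||_K < hi)."""
    lo = 0.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if in_K(x / mid):   # x in mid*K  <=>  x/mid in K
            hi = mid
        else:
            lo = mid
    return hi

# Sanity check: for K = B_1 (the l1 ball), ||x||_K is the l1 norm.
x = np.array([0.5, -1.5, 2.0])
print(round(minkowski_norm(x, lambda y: np.abs(y).sum() <= 1), 6))  # prints 4.0
```

For $K = B_1$ the gauge is the $\ell_1$ norm, so the bisection converges to $0.5 + 1.5 + 2.0 = 4$.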

13 Questions and Prior Results.

- [Dvoretzky, 1963] "What can be said" about $\mathrm{vb}(K, K)$?
- [Bárány and Grinberg, 1981] $\mathrm{vb}(K, K) \leq n$ for all $K$.
- [Spencer, 1985; Gluskin, 1989] $\mathrm{vb}(B_\infty^n, B_\infty^n) \lesssim \sqrt{n}$.
- [Beck and Fiala, 1981] $\mathrm{vb}(B_1^n, B_\infty^n) < 2$.
- [Banaszczyk, 1998] $\mathrm{vb}(B_2^n, K) \leq 5$ if $K$ has Gaussian measure $\gamma_n(K) \geq \frac{1}{2}$.
- Komlós Problem: prove or disprove $\mathrm{vb}(B_2^n, B_\infty^n) \lesssim 1$. Banaszczyk's theorem implies $\mathrm{vb}(B_2^n, B_\infty^n) \lesssim \sqrt{\log 2n}$.

16 Vector Balancing and Rounding. For any $u_1, \dots, u_N \in C$, any $w \in [0,1]^N$, $U = (u_i)_{i=1}^N$, and any symmetric convex $K$, there exists an $x \in \{0,1\}^N$ such that $\|Ux - Uw\|_K \leq \mathrm{vb}(C, K)$.
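The rounding guarantee can be explored by brute force on a tiny instance: enumerate all integral $x$ and take the one closest to $Uw$ in the norm. A sketch; the matrix and fractional vector here are arbitrary, not from the talk:

```python
import itertools
import numpy as np

def best_rounding_error_inf(U, w):
    """min over x in {0,1}^N of ||U x - U w||_inf (brute force)."""
    N = U.shape[1]
    target = U @ w
    return min(np.abs(U @ np.array(x) - target).max()
               for x in itertools.product([0, 1], repeat=N))

U = np.array([[1, 1, 0, 1],
              [0, 1, 1, 1],
              [1, 0, 1, 0]])
w = np.array([0.3, 0.7, 0.5, 0.2])
print(best_rounding_error_inf(U, w))  # ≈ 0.6 for this instance
```

The theorem says this minimum is controlled by $\mathrm{vb}(C, K)$ for any $C$ containing the columns of $U$, no matter how $w$ is chosen.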

17 Our Results. We initiate a systematic study of upper and lower bounds on $\mathrm{vb}(C, K)$ and its computational complexity:

- A natural volumetric lower bound on $\mathrm{vb}(C, K)$ is tight up to an $O(\log n)$ factor. The proof implies an efficient algorithm to compute $\varepsilon \in \{-1, 1\}^N$, given $u_1, \dots, u_N \in C$, so that $\|\varepsilon_1 u_1 + \dots + \varepsilon_N u_N\|_K \lesssim (1 + \log n)\,\mathrm{vb}(C, K)$. There is also a rounding version.
- An efficiently computable upper bound on $\mathrm{vb}(C, K)$ is tight up to factors polynomial in $\log n$. It is based on an optimal application of Banaszczyk's theorem, and implies an efficient approximation algorithm for $\mathrm{vb}(C, K)$.
- The results extend to hereditary discrepancy with respect to arbitrary norms. Prior work [Bansal, 2010; Nikolov and Talwar, 2015] implies bounds which deteriorate with the number of facets of $K$.

22 Outline, marking the start of the Volume Lower Bound section: 1. Introduction; 2. Volume Lower Bound; 3. Factorization Upper Bounds; 4. Conclusion.

23 Hereditary Discrepancy.

- Issue: $\mathrm{disc}(U, K) = \mathrm{disc}(U, \|\cdot\|_K)$ is not robust to slight changes in $U$ (e.g. repeating each column) and is hard to approximate [Charikar, Newman, and Nikolov, 2011].
- $\mathrm{vb}(C, K)$ is more robust, but is not about a specific matrix $U$.
- Hereditary discrepancy is a robust analog of discrepancy: $\mathrm{hd}(U, K) = \max_{S \subseteq [N]} \mathrm{disc}(U_S, K)$, where $U_S = (u_i)_{i \in S}$ is the submatrix of $U$ indexed by $S$.
- Observation: $\mathrm{vb}(C, K) = \sup\{\mathrm{hd}(U, K) : N \in \mathbb{N},\ u_1, \dots, u_N \in C,\ U = (u_i)_{i=1}^N\}$.
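The robustness contrast can be seen directly: repeating every column drives plain discrepancy to zero, while hereditary discrepancy is unchanged. A brute-force sketch in the $\ell_\infty$ norm:

```python
import itertools
import numpy as np

def disc_inf(U):
    N = U.shape[1]
    return min(int(np.abs(U @ np.array(s)).max())
               for s in itertools.product([-1, 1], repeat=N))

def hd_inf(U):
    """hd(U) = max over nonempty column subsets S of disc(U_S)."""
    N = U.shape[1]
    return max(disc_inf(U[:, list(S)])
               for r in range(1, N + 1)
               for S in itertools.combinations(range(N), r))

U = np.array([[1, 0, 1],
              [1, 1, 0]])
U2 = np.column_stack([U, U])       # repeat each column
print(disc_inf(U2), hd_inf(U2))    # prints 0 1: disc collapses, hd does not
```

Pairing each column with its copy under opposite signs cancels everything, so $\mathrm{disc}$ drops to 0; but the singleton subsets still force $\mathrm{hd} \geq 1$.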

27 The Volume Lower Bound.

- Define $L = \{x \in \mathbb{R}^N : Ux \in K\}$: the set of "good" $x$. Then $\mathrm{disc}(U, K) = \min\{t : tL \cap \{-1,1\}^N \neq \emptyset\}$.
- [Lovász, Spencer, and Vesztergombi, 1986]: If $t = \mathrm{hd}(U, K)$, then $[0,1]^N \subseteq \bigcup_{x \in \{0,1\}^N} (x + tL)$.
- [Banaszczyk, 1993]: $\mathrm{vol}(tL) = t^N \mathrm{vol}(L) \geq \mathrm{vol}([0,1]^N) = 1 \iff \mathrm{hd}(U, K) \geq \mathrm{vol}(L)^{-1/N}$.
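The volume bound can be evaluated exactly in a special case where $\mathrm{vol}(L)$ has a closed form: take $U = I_n$ and $K = B_1^n$, so $L$ is the $\ell_1$ ball with $\mathrm{vol}(L) = 2^n/n!$. A sketch comparing the bound to the true discrepancy:

```python
import itertools
import math
import numpy as np

# Example: U = I_n, K = B_1^n. Then L = {x : ||x||_1 <= 1}, with
# vol(L) = 2^n / n!, so the bound gives hd(U, K) >= (n! / 2^n)^(1/n).
n = 6
vol_L = 2**n / math.factorial(n)
lower = vol_L ** (-1 / n)

# True discrepancy: every full coloring of I_n has l1 norm exactly n.
U = np.eye(n)
true_disc = min(np.abs(U @ np.array(s)).sum()
                for s in itertools.product([-1, 1], repeat=n))
print(lower, true_disc)  # lower ≈ 1.5 here, true discrepancy is 6.0
```

The bound holds with room to spare in this example ($n/(2e)$ versus $n$ asymptotically); the theorem on the next slides shows the gap is at most logarithmic after the hereditary strengthening.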

31 A Hereditary Volume Lower Bound. A simple strengthening:
$$\mathrm{hd}(U, K) \geq \mathrm{volLB}(U, K) = \max_{S \subseteq [N]} \mathrm{vol}(\{x \in \mathbb{R}^S : U_S x \in K\})^{-1/|S|}.$$
Lower bound on $\mathrm{vb}(C, K)$:
$$\mathrm{vb}(C, K) \geq \mathrm{volLB}(C, K) = \sup\{\mathrm{volLB}((u_i)_{i=1}^N, K) : u_1, \dots, u_N \in C\}.$$
Theorem. For any matrix $U \in \mathbb{R}^{n \times N}$, any $C$, and any symmetric convex $K \subset \mathbb{R}^n$:
$$\mathrm{volLB}(U, K) \leq \mathrm{hd}(U, K) \lesssim (1 + \log n) \cdot \mathrm{volLB}(U, K),$$
$$\mathrm{volLB}(C, K) \leq \mathrm{vb}(C, K) \lesssim (1 + \log n) \cdot \mathrm{volLB}(C, K).$$

34 Rothvoß's Algorithm. Algorithm [Rothvoß, 2014]: given $K \subset \mathbb{R}^n$:

1. Sample a standard Gaussian $G \sim N(0, I_n)$;
2. Output $X = \arg\min\{\|x - G\|_2^2 : x \in K \cap [-1,1]^n\}$.

Goal: $|\{i : X_i \in \{-1,+1\}\}| \geq \alpha n$ for a constant $\alpha$. ($X$ is a partial coloring.) Intuition: if $K$ is "big enough," then in an average direction $\partial [-1,1]^n$ is closer to the origin than $\partial K$ and is more likely to be hit by $X$. [Rothvoß, 2014]: For any small enough $\alpha$ there is a $\delta$ so that if $K$ has Gaussian measure $\gamma_n(K) \geq e^{-\delta n}$, or more generally if there exists a $(1-\delta)n$-dimensional subspace $W$ for which $\gamma_W(K \cap W) \geq e^{-\delta n}$, then with high probability $|\{i : X_i \in \{-1,+1\}\}| \geq \alpha n$.
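In the special case $K = s \cdot B_\infty^n$ with $s \geq 1$ (an illustrative assumption, so that $K \cap [-1,1]^n = [-1,1]^n$), the Euclidean projection is just coordinatewise clipping, and the algorithm is a one-liner. A sketch:

```python
import numpy as np

def rothvoss_partial_coloring_box(n, seed=0):
    """Rothvoss's algorithm when K contains the cube [-1,1]^n: the
    arg min of ||x - G||_2 over the cube is coordinatewise clipping."""
    G = np.random.default_rng(seed).standard_normal(n)
    return np.clip(G, -1.0, 1.0)

X = rothvoss_partial_coloring_box(1000)
frac = np.mean(np.abs(X) == 1.0)
print(frac)  # ≈ 0.32, since P(|g| >= 1) ≈ 0.317 for a standard Gaussian
```

A constant fraction of coordinates lands exactly on $\pm 1$, exactly the partial coloring behavior the theorem guarantees; for general $K$ the projection requires convex optimization rather than clipping.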

37 Tightness of the Volume Lower Bound. Need to show: for any $U \in \mathbb{R}^{n \times N}$ and symmetric convex $K \subset \mathbb{R}^n$, $\mathrm{hd}(U, K) \lesssim (1 + \log n) \cdot \mathrm{volLB}(U, K)$. Proof by an algorithm: find a partial coloring with discrepancy $\lesssim \mathrm{volLB}(U, K)$ and recurse.

1. Preprocess so that $N = n$, $U = I_n$;
2. Apply Rothvoß's algorithm to $tK$, $t \approx \mathrm{volLB}(I_n, K)$; if the conditions hold, this gives a partial coloring $X \in tK$;
3. Project $K$ on $\mathbb{R}^S$, $S = \{i : -1 < X_i < 1\}$, and recurse. (This needs a "recentered" variant of Rothvoß's algorithm.)

After $k \lesssim 1 + \log n$ iterations, we have $X^1, \dots, X^k$ so that $X^1 + \dots + X^k \in \{-1,1\}^n$ and $\|X^1 + \dots + X^k\|_K \leq kt \lesssim (1 + \log n)\,\mathrm{volLB}(I_n, K)$. Main challenge: show that the conditions of Rothvoß's algorithm are satisfied.
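The recursion can be mimicked in the same cube special case as before: clip a Gaussian, freeze the coordinates that hit $\pm 1$, and redraw on the remaining ones. A toy sketch only; it omits the recentering and the discrepancy bookkeeping that the actual proof needs:

```python
import numpy as np

def full_coloring_by_recursion(n, seed=0):
    """Toy recursion: repeat the clipping step on the not-yet-frozen
    coordinates until a full ±1 coloring is produced."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    alive = np.ones(n, dtype=bool)
    rounds = 0
    while alive.any():
        X = np.clip(rng.standard_normal(alive.sum()), -1.0, 1.0)
        idx = np.flatnonzero(alive)
        frozen = np.abs(X) == 1.0
        x[idx[frozen]] = X[frozen]
        alive[idx[frozen]] = False
        rounds += 1
    return x, rounds

x, rounds = full_coloring_by_recursion(512)
print(set(np.unique(x)), rounds)  # all entries end at ±1 after O(log n) rounds
```

Each round freezes roughly a constant fraction of the live coordinates, so the loop runs a logarithmic number of rounds in expectation, matching the $k \lesssim 1 + \log n$ count above.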

42 From Volume to Gaussian Measure. For Rothvoß's algorithm, we need that on some subspace of large dimension, the body $tK$, $t \approx \mathrm{volLB}(I_n, K)$, has large Gaussian measure. From the definition of $\mathrm{volLB}(I_n, K)$:
$$\forall S \subseteq [n] :\ \mathrm{vol}\big((\mathrm{volLB}(I_n, K) \cdot K) \cap \mathbb{R}^S\big) \geq 1.$$
Theorem (Structural result). For any $\delta$ there exists an $m = m(\delta)$ so that the following holds. Let $L$ be a symmetric convex body s.t. $\mathrm{vol}(L \cap \mathbb{R}^S) \geq 1$ for all $S \subseteq [n]$. Then there exists a subspace $W$ of dimension $(1-\delta)n$ for which $\gamma_W((mL) \cap W) \geq e^{-\delta n}$.

Apply this to $L = \mathrm{volLB}(I_n, K) \cdot K$ to get that the conditions of Rothvoß's algorithm are satisfied.

45 Proof Ideas. Generally applicable strategy:

1. Prove the theorem for an ellipsoid $E = T(B_2^n)$. Reduces to linear algebra!
2. Approximate a general convex body $L$ by an appropriate ellipsoid.

Theorem (Regular $M$-ellipsoid, [Milman, 1986; Pisier, 1989]). For any symmetric convex $L \subseteq \mathbb{R}^n$ there exists an ellipsoid $E$ such that for any $t \geq 1$, $\max\{N(L, tE), N(E, tL)\} \leq e^{cn/t}$, where $c$ is a constant and $N(K, L)$ is the number of translates of $L$ needed to cover $K$. The ellipsoid $E$ preserves "large scale" information about $L$: if $L \cap \mathbb{R}^S$ has large volume, then $E \cap \mathbb{R}^S$ has large volume; and if $E \cap W$ has large Gaussian measure, then $L \cap W$ has large Gaussian measure.

48 Partial Colorings. The bound $\mathrm{hd}(U, K) \lesssim (1 + \log n)\,\mathrm{volLB}(U, K)$ is in general tight. Is the hereditary discrepancy of partial colorings $\approx \mathrm{volLB}(U, K)$? The hereditary discrepancy of partial colorings is $\lesssim \mathrm{volLB}(U, K)$; a matching lower bound would follow from:

Conjecture. Suppose $K \subset \mathbb{R}^n$ is a symmetric convex body of volume $\leq 1$. Then there exists an $S \subseteq [n]$ s.t. $\mathrm{diam}_{\ell_2}(K \cap \mathbb{R}^S) \lesssim \sqrt{|S|}$.

The conjecture is true for ellipsoids, where it reduces to the Restricted Invertibility Principle. It is also true for general bodies $K$ if we replace $\mathbb{R}^S$ with an arbitrary subspace $W$ and $|S|$ with $\dim W$.

53 Outline, marking the start of the Factorization Upper Bounds section: 1. Introduction; 2. Volume Lower Bound; 3. Factorization Upper Bounds; 4. Conclusion.

54 K ). , C We do not know how to efficiently compute volLB( We need a natural K , C ). on vb( upper bound Recall [Banaszczyk, 1998]: 1 n n ≥ γ 5. R ⊂ K For any convex ≤ ( ) such that ) , vb( B K , K n 2 2 : Observations 1 I . 1 for G ≥ ) K (2 ≤ γ ), then ‖ ‖ , (0 N ∼ If E G n n K 2 n vb( ‖ ) . G K , ‖ B E . K 2 ). vb( diam · ) C ‖ G ‖ C ( ( . ) K , E K ` 2 Last bound can be very loose! Can we do better? Factorization Upper Bounds Upper Bounds from Banaszczyk’s Theorem We showed how to efficiently compute near optimal signs u ε ,...,ε ∈{− 1 , 1 } . for any u ,..., 1 1 N N But what if we want to compute vb( C K ) or hd( )? U , , K Sasho Nikolov (U of T) Balancing Vectors 19 / 25


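The first observation above is a one-line Markov-inequality step; spelled out (my gloss, not from the slides), with G ∼ N(0, I_n) and K symmetric convex:

```latex
% Claim: if E\|G\|_K \le 1 for G \sim N(0, I_n), then \gamma_n(2K) \ge 1/2.
% Since K is symmetric convex, x \in 2K exactly when \|x\|_K \le 2, so
\[
  1 - \gamma_n(2K) \;=\; \Pr\bigl[\|G\|_K > 2\bigr]
  \;\le\; \frac{\mathbb{E}\,\|G\|_K}{2} \;\le\; \frac{1}{2}
  \quad\text{(Markov's inequality)},
\]
% and hence \gamma_n(2K) \ge 1/2, the hypothesis of Banaszczyk's theorem.
```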
59 Factorization Upper Bounds A Better Upper Bound
Idea: Map C into B_2^n using a linear map.
λ(C, K) = inf { (E‖G‖_{T(K)}) · diam_{ℓ2}(T(C)) : T a linear map }.
Claim: vb(C, K) ≲ λ(C, K).
Take a linear map T achieving λ(C, K);
Can assume diam_{ℓ2}(T(C)) = 1, so E‖G‖_{T(K)} = λ(C, K);
vb(C, K) = vb(T(C), T(K)) and apply Banaszczyk's theorem.
Sasho Nikolov (U of T) Balancing Vectors 20 / 25


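The claim on the slide above compresses three steps; as a sketch (my reconstruction, using that C, and hence T(C), is symmetric):

```latex
\[
  \mathrm{vb}(C,K) \;=\; \mathrm{vb}\bigl(T(C),\,T(K)\bigr)
  \;\le\; \mathrm{vb}\bigl(B_2^n,\,T(K)\bigr)
  \;\lesssim\; \mathbb{E}\,\|G\|_{T(K)}
  \;=\; \lambda(C,K).
\]
% The middle inequality uses T(C) \subseteq B_2^n: T(C) is symmetric with
% \mathrm{diam}_{\ell_2}(T(C)) = 1, so every point has Euclidean norm at most 1/2.
% The last inequality is the second observation from the previous slide.
```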
62 Factorization Upper Bounds Tightness of the Upper Bound
Theorem. For any symmetric convex C, K ⊂ Rⁿ,
λ(C, K) / (1 + log n)^{5/2} ≲ vb(C, K) ≲ λ(C, K).
Moreover, given membership oracle access to K and a vertex representation of C, we can efficiently compute λ(C, K).
For a matrix U ∈ R^{n×N}, we can take C = conv{±u_1, ..., ±u_N}, and then λ(C, K) approximates hd(U, K).
Proof outline:
1 Formulate λ(C, K) as a convex minimization problem;
2 Derive the Lagrange dual: an equivalent maximization problem;
3 Relate dual solutions to the volume lower bound.
Sasho Nikolov (U of T) Balancing Vectors 21 / 25

64 Factorization Upper Bounds Convex Formulation
‖x‖_{T(K)} = ‖T⁻¹x‖_K.
First attempt: inf { E‖T⁻¹G‖_K : diam_{ℓ2}(T(C)) ≤ 1 }.
Not convex: the objective is ∞ for T = 0 and finite for any invertible T, but 0 = (1/2)(T + (−T)).
Observation: E‖T⁻¹G‖_K is defined entirely by A = T*T, because the covariance of T⁻¹G is given by A⁻¹.
Formulation:
λ(C, K) = inf f(A)
s.t. ⟨x, Ax⟩ ≤ 1 ∀x ∈ C,
A ⪰ 0.
Here f(A) = E‖T⁻¹G‖_K for any T such that T*T = A;
f is well defined over positive definite A;
The first constraint encodes diam_{ℓ2}(T(C)) ≤ 1:
⟨x, Ax⟩ = ⟨x, T*Tx⟩ = ⟨Tx, Tx⟩ = ‖Tx‖_2².
Sasho Nikolov (U of T) Balancing Vectors 22 / 25


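To make the objective f concrete, here is a small numerical sketch (my own illustration, not part of the talk): it estimates f(A) = E‖T⁻¹G‖_K by Monte Carlo, taking T from a Cholesky factorization of A (any T with T*T = A gives the same value, since only the covariance A⁻¹ of T⁻¹G matters), and choosing K to be the unit ℓ∞ ball so that ‖x‖_K = max_i |x_i|.

```python
import numpy as np

def f_monte_carlo(A, norm_K, n_samples=20000, seed=0):
    """Monte Carlo estimate of f(A) = E ||T^{-1} G||_K with G ~ N(0, I_n).

    T is any linear map with T* T = A; we use the Cholesky factor
    A = L L^T and set T = L^T, so that T^{-1} G has covariance A^{-1}.
    (Illustrative sketch only; K is supplied as a norm function.)
    """
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(A)          # A = L @ L.T, so T = L.T works
    T_inv = np.linalg.inv(L.T)
    G = rng.standard_normal((n_samples, A.shape[0]))
    return float(np.mean([norm_K(T_inv @ g) for g in G]))

# K = unit ell_infinity ball: ||x||_K = max_i |x_i|.
linf = lambda x: float(np.max(np.abs(x)))

n = 3
est = f_monte_carlo(np.eye(n), linf)         # roughly E max_i |G_i|
# Homogeneity check: A -> c^2 A replaces T^{-1}G by (1/c) T^{-1}G,
# so f scales by 1/c (here c = 2, same random seed).
est_scaled = f_monte_carlo(4 * np.eye(n), linf)
```

With A = I_3 the estimate is about 1.3 (the mean of the maximum of three folded Gaussians), and scaling A by 4 halves it, matching the homogeneity of f in A.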
68 Factorization Upper Bounds Properties of the Formulation
The function f(A) is convex in A, and the constraints are also convex;
Lagrange duality: there exists an equivalent dual maximization problem, whose value also equals λ(C, K);
Each dual solution gives a lower bound on volLB(C, K), and, therefore, on vb(C, K);
Tools: K-convexity, and Sudakov minoration;
⇒ λ(C, K) gives a lower bound on vb(C, K).
Computation: The convex optimization problem can be solved using the ellipsoid method, given a membership oracle for K and a vertex representation of C.
Sasho Nikolov (U of T) Balancing Vectors 23 / 25

71 Conclusion Outline 1 Introduction 2 Volume Lower Bound 3 Factorization Upper Bounds 4 Conclusion Sasho Nikolov (U of T) Balancing Vectors 24 / 25

72 Conclusion Conclusion
In this work:
Tightness of natural upper and lower bounds for vector balancing.
Efficient algorithms to find nearly optimal vector balancing signs, to compute vb(C, K), and to compute hereditary discrepancy with respect to any norm.
Our results strongly use the geometry of the underlying discrepancy problem.
Open questions:
Does volLB(C, K) give lower bounds on partial colorings?
vb(K, K) ≍ volLB(K, K)? (True for ℓ_p.)
Can the bounds for λ(C, K) be improved?
Sasho Nikolov (U of T) Balancing Vectors 25 / 25

74 References
W. Banaszczyk. Balancing vectors and Gaussian measures of n-dimensional convex bodies. Random Structures & Algorithms, 12(4):351–360, 1998.
Wojciech Banaszczyk. Balancing vectors and convex bodies. Studia Math., 106(1):93–100, 1993.
Nikhil Bansal. Constructive algorithms for discrepancy minimization. In 51st Annual IEEE Symposium on Foundations of Computer Science (FOCS 2010), pages 3–10. IEEE, 2010.
Nikhil Bansal, Moses Charikar, Ravishankar Krishnaswamy, and Shi Li. Better algorithms and hardness for broadcast scheduling via a discrepancy approach. In SODA, pages 55–71, 2014.
I. Bárány and V. S. Grinberg. On some combinatorial questions in finite-dimensional spaces. Linear Algebra and its Applications, 41:1–9, 1981.
J. Beck and T. Fiala. Integer-making theorems. Discrete Applied Mathematics, 3(1):1–8, 1981.
József Beck. Balanced two-colorings of finite sets in the square I. Combinatorica, 1(4):327–335, 1981.
Sasho Nikolov (U of T) Balancing Vectors 25 / 25

75 References
Moses Charikar, Alantha Newman, and Aleksandar Nikolov. Tight hardness results for minimizing discrepancy. In SODA, pages 1607–1614, 2011.
Aryeh Dvoretzky. Problem. In Proc. Sympos. Pure Math., Vol. VII. Amer. Math. Soc., Providence, R.I., 1963.
Efim Davydovich Gluskin. Extremal properties of orthogonal parallelepipeds and their applications to the geometry of Banach spaces. Mathematics of the USSR-Sbornik, 64(1):85, 1989.
Rebecca Hoberg and Thomas Rothvoss. A logarithmic additive integrality gap for bin packing. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 2616–2625. SIAM, Philadelphia, PA, 2017. doi: 10.1137/1.9781611974782.172.
Kasper Green Larsen. On range searching in the group model and combinatorial discrepancy. SIAM J. Comput., 43(2):673–686, 2014. doi: 10.1137/120865240.
L. Lovász, J. Spencer, and K. Vesztergombi. Discrepancy of set-systems and matrices. European Journal of Combinatorics, 7(2):151–160, 1986.
Sasho Nikolov (U of T) Balancing Vectors 25 / 25

76 References
Jiri Matousek. Approximations and optimal geometric divide-and-conquer. Journal of Computer and System Sciences, 50(2):203–208, 1995.
Vitali D. Milman. Inégalité de Brunn-Minkowski inverse et applications à la théorie locale des espaces normés. C. R. Acad. Sci. Paris Sér. I Math., 302(1):25–28, 1986.
Alantha Newman, Ofer Neiman, and Aleksandar Nikolov. Beck's three permutations conjecture: a counterexample and some consequences. In 2012 IEEE 53rd Annual Symposium on Foundations of Computer Science (FOCS 2012), pages 253–262. IEEE Computer Soc., Los Alamitos, CA, 2012.
Aleksandar Nikolov. An improved private mechanism for small databases. In Magnús M. Halldórsson, Kazuo Iwama, Naoki Kobayashi, and Bettina Speckmann, editors, Automata, Languages, and Programming - 42nd International Colloquium, ICALP 2015, Kyoto, Japan, July 6-10, 2015, Proceedings, Part I, volume 9134 of Lecture Notes in Computer Science, pages 1010–1021. Springer, 2015. doi: 10.1007/978-3-662-47672-7_82.
Sasho Nikolov (U of T) Balancing Vectors 25 / 25

77 References
Aleksandar Nikolov and Kunal Talwar. Approximating hereditary discrepancy via small width ellipsoids. In Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 324–336. SIAM, Philadelphia, PA, 2015. doi: 10.1137/1.9781611973730.24.
Aleksandar Nikolov, Kunal Talwar, and Li Zhang. The geometry of differential privacy: The small database and approximate cases. SIAM J. Comput., 45(2):575–616, 2016. doi: 10.1137/130938943.
Gilles Pisier. A new approach to several results of V. Milman. J. Reine Angew. Math., 393:115–131, 1989. doi: 10.1515/crll.1989.393.115.
Thomas Rothvoss. The entropy rounding method in approximation algorithms. In Symposium on Discrete Algorithms (SODA), pages 356–372, 2012.
Sasho Nikolov (U of T) Balancing Vectors 25 / 25

78 References
Thomas Rothvoß. Constructive discrepancy minimization for convex sets. In 55th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2014, Philadelphia, PA, USA, October 18-21, 2014, pages 140–145. IEEE Computer Society, 2014. doi: 10.1109/FOCS.2014.23.
Joel Spencer. Six standard deviations suffice. Trans. Amer. Math. Soc., 289:679–706, 1985.
Zhewei Wei and Ke Yi. The space complexity of 2-dimensional approximate range counting. In Sanjeev Khanna, editor, Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2013, New Orleans, Louisiana, USA, January 6-8, 2013, pages 252–264. SIAM, 2013. doi: 10.1137/1.9781611973105.19.
Sasho Nikolov (U of T) Balancing Vectors 25 / 25
