All courses will be given in English.
«Euclidean and hyperbolic models of random geometric graphs»
Dieter Mitsche, Université Jean Monnet, France
Teaching assistant: Amitai Linker, Université Jean Monnet, France.
Abstract: The study of random geometric graphs was initiated by Gilbert in the 1960s. On the one hand, from a mathematical point of view, the basic model in the Euclidean plane is simple to state, yet rich enough to exhibit many properties not observed in the G(n,p) model: perhaps the most basic difference is that, in geometric graph models, edges are not independent of each other. On the other hand, from a more applied point of view, the infinite-space version of such graphs (continuum percolation) is mainly motivated by the statistical physics of inhomogeneous materials, whereas the finite version of random geometric graphs finds applications in the study of wireless communication networks.
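As a quick illustration of the basic Euclidean model (this sketch is not part of the course materials; the values n = 200 and r = 0.1 are arbitrary choices):

```python
# A minimal sketch of a Euclidean random geometric graph G(n, r) on the
# unit square: n uniform points, an edge whenever two points are within
# distance r. Note that edges sharing a vertex are correlated, unlike in
# the G(n,p) model.
import itertools
import math
import random

def random_geometric_graph(n, r, seed=None):
    rng = random.Random(seed)
    points = [(rng.random(), rng.random()) for _ in range(n)]
    edges = [(i, j)
             for i, j in itertools.combinations(range(n), 2)
             if math.dist(points[i], points[j]) <= r]
    return points, edges

points, edges = random_geometric_graph(n=200, r=0.1, seed=0)
# Ignoring boundary effects, the expected edge count is roughly
# n(n-1)/2 * pi * r^2.
print(len(edges))
```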
Hyperbolic random geometric graphs were introduced by Krioukov et al. about 10 years ago to model the existing Internet architecture and social networks: properties specific to hyperbolic space make it possible to capture phenomena such as the inhomogeneity of vertex degrees more appropriately. In this course we define both models and analyze a few of their classic properties.
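For the hyperbolic model, a minimal sampling sketch in the same spirit (assuming the standard Krioukov et al. parametrization; the values of alpha and nu are illustrative):

```python
# Sketch of a hyperbolic random geometric graph: n points in a hyperbolic
# disk of radius R = 2 ln(n/nu), radial density proportional to
# sinh(alpha * r), uniform angles; an edge whenever the hyperbolic
# distance is at most R. For alpha > 1/2 the degrees are known to follow
# a power law with exponent 2*alpha + 1.
import math
import random

def hyperbolic_rgg(n, alpha=0.75, nu=1.0, seed=None):
    rng = random.Random(seed)
    R = 2.0 * math.log(n / nu)
    pts = []
    for _ in range(n):
        u = rng.random()  # invert the radial CDF (cosh(alpha*r)-1)/(cosh(alpha*R)-1)
        r = math.acosh(1.0 + u * (math.cosh(alpha * R) - 1.0)) / alpha
        pts.append((r, rng.uniform(0.0, 2.0 * math.pi)))

    def dist(p, q):  # hyperbolic distance via the hyperbolic law of cosines
        (r1, t1), (r2, t2) = p, q
        dt = math.pi - abs(math.pi - abs(t1 - t2))
        x = (math.cosh(r1) * math.cosh(r2)
             - math.sinh(r1) * math.sinh(r2) * math.cos(dt))
        return math.acosh(max(x, 1.0))

    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if dist(pts[i], pts[j]) <= R]
    return pts, edges
```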
Main topics: Euclidean random geometric graphs: edge count, connectivity, Hamiltonicity; Hyperbolic random geometric graphs: degree distribution, diameter, emergence of a giant component, spectral gap.
Slides:
- lecture_DMitsche_1
- Exercises_DMitsche_1
- Lecture_DMitsche_2
- Exercises_DMitsche_2
- Lecture_DMitsche_3
- Exercises_DMitsche_3
- Lectures_DMitsche_4
- Slides_DMitsche_4
- Exercises_DMitsche_4
- Lectures_DMitsche_5
«Sample complexity and uniform convergence for learning and data analysis»
Eli Upfal, Brown University, USA.
Teaching assistant: Patricio Foncea, MIT, USA.
Abstract: Sampling is a powerful technique at the core of statistical data analysis and machine learning. Using a finite, often small, set of observations, we attempt to estimate properties of an entire sample space. How good are estimates obtained from a sample? Any rigorous application of sampling requires an understanding of the sample complexity of the problem – the minimum sample size needed to obtain the required results. In this course we will cover some of the rich mathematical theory that has been developed in recent years to address this question, in particular in the context of statistical machine learning and rigorous data analysis.
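As a toy example of a sample-complexity calculation for a single estimate (illustrative, not from the course; uniform convergence over a whole hypothesis class, which the course covers, requires the VC and Rademacher machinery listed below):

```python
# Hoeffding's inequality: for n i.i.d. draws in [0,1], the sample mean is
# within eps of the true mean with probability >= 1 - 2*exp(-2*n*eps^2).
# Solving 2*exp(-2*n*eps^2) <= delta for n gives the sample size below.
import math

def hoeffding_sample_size(eps, delta):
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

print(hoeffding_sample_size(eps=0.05, delta=0.01))  # 1060
```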
Main topics: Review of large deviation bounds for i.i.d. and martingale random variables; Uniform convergence; VC-dimension; The ϵ-net and ϵ-sample theorems; Rademacher complexity; Sample complexity through Rademacher complexity; Applications in machine learning, data analysis, and multiple-comparisons statistics.
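For instance, the VC-dimension of a simple class can be checked by brute force (an illustrative sketch, not course material): the class of indicators of intervals on the line shatters some 2-point set but no 3-point set, so its VC-dimension is 2.

```python
# Brute-force shattering check for the class {1[a <= x <= b]} on the line:
# the subsets an interval can cut out of a sorted point set are exactly
# the contiguous blocks (plus the empty set).
def shattered_by_intervals(points):
    pts = sorted(points)
    n = len(pts)
    labelings = {frozenset()}
    for i in range(n):
        for j in range(i, n):
            labelings.add(frozenset(pts[i:j + 1]))
    return len(labelings) == 2 ** n

print(shattered_by_intervals([0.0, 1.0]))       # True: 2 points are shattered
print(shattered_by_intervals([0.0, 1.0, 2.0]))  # False: no 3 points can be
```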
Slides:
1. Part-1
2. Part-2
3. Learning
4. VC Dimension
Homework:
1. Exercises 1.1 and 1.3.
2. Exercises 2.3 and 2.7.
Note: The random graph G_{n,N} has n vertices and N edges, chosen uniformly at random without replacement from the n(n-1)/2 possible edges (see the sampling sketch after this list).
3. Exercises 3.2 and 3.6.
4. Exercises 3.1, 3.7 and 3.8.
Please hand in the homework before 10:30 on Friday.
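Sampling sketch for the G_{n,N} model mentioned in the note above (illustrative only):

```python
# G_{n,N}: choose N of the n(n-1)/2 possible edges uniformly at random
# without replacement.
import itertools
import random

def g_n_N(n, N, seed=None):
    rng = random.Random(seed)
    all_edges = list(itertools.combinations(range(n), 2))
    return rng.sample(all_edges, N)

print(g_n_N(n=5, N=4, seed=0))
```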
«The traveling salesman: classical tools and recent advances»
Anke van Zuylen, College of William & Mary, USA
Teaching assistant: Sebastián Pérez-Salazar, Georgia Tech, USA.
Abstract: The traveling salesman problem (TSP) is one of the most famous and widely studied combinatorial optimization problems. Given a set of cities and pairwise distances, the goal is to find a tour of minimum length that visits every city exactly once. Even if the distances are required to form a metric, the problem remains NP-hard. The classic Christofides-Serdyukov algorithm finds a tour of length at most 3/2 times that of the optimal tour. Despite significant effort, in over 40 years no efficient algorithm has been found that improves on this 3/2 approximation guarantee. A well-known, natural direction for making progress, which has also defied improvement, is the use of a linear programming (LP) relaxation. The famous «four-thirds» conjecture states that there always exists a tour of length at most 4/3 times the optimal value of the so-called subtour LP relaxation of the TSP. Although this conjecture has been open for decades, recent years have seen exciting progress toward resolving it, including improved approximation algorithms for several variants such as graph-TSP (where the input metric is the shortest-path metric of an unweighted graph) and the s-t path TSP (where the start and end of the tour are distinct).
In this course, we cover some of the recent advances for the TSP and the s-t path TSP. We discuss classical (and widely used) techniques in polyhedral combinatorics and combinatorial optimization on trees, matroids, matchings, T-joins, flows, and shortest paths, as well as newly developed techniques that build on and enhance these classical tools.
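As a concrete illustration of the 3/2-approximation mentioned above, here is a compact sketch of the Christofides-Serdyukov algorithm using networkx (illustrative code, not part of the course materials; it assumes a complete graph whose 'weight' attributes satisfy the triangle inequality):

```python
import networkx as nx

def christofides(G):
    # 1. Minimum spanning tree of the metric instance.
    T = nx.minimum_spanning_tree(G, weight="weight")
    # 2. Minimum-weight perfect matching on the odd-degree tree vertices
    #    (their number is even by the handshake lemma).
    odd = [v for v, d in T.degree() if d % 2 == 1]
    M = nx.min_weight_matching(G.subgraph(odd), weight="weight")
    # 3. T + M is Eulerian; shortcut an Eulerian circuit to a tour.
    #    By the triangle inequality, shortcutting never increases length.
    H = nx.MultiGraph(T)
    H.add_edges_from(M)
    tour, seen = [], set()
    for u, _ in nx.eulerian_circuit(H):
        if u not in seen:
            seen.add(u)
            tour.append(u)
    return tour

# Example: four points on a line (a metric instance).
G = nx.complete_graph(4)
for u, v in G.edges():
    G[u][v]["weight"] = abs(u - v)
print(christofides(G))
```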
Slides:
- Lecture1_handout, Lecture1_annotated
- Lecture2_handout, Lecture2_annotated
- Lecture3_handout, Lecture3_annotated
- Lecture4_handout, Lecture4_annotated
- Lecture5_handout
Exercises: