Abstract: Billions of years of evolutionary forces have shaped life as we know it: diverse, complex and fascinating. Over the last two centuries there have been tremendous scientific and mathematical advances in our understanding of evolution, life and its mysteries. Recently, the relatively new and powerful tool of computation has joined the effort to develop this understanding further: the underlying tenet is that several natural processes, including evolution itself, can be viewed as forms of computation. Not only does this viewpoint give us fundamental insights into life, it holds the promise that, in this quest, we will unveil new computational models and techniques. In this talk, we will present some vignettes on this interplay between evolution and computation.
Abstract: Inspired by the computational abilities of the slime mold, several researchers have proposed dynamical systems aimed at solving fundamental problems such as min-cost flow and, more generally, linear programming. In this talk, I will present some of our recent work on Physarum dynamics, which shows that these dynamics provably work and, surprisingly, can be interpreted as an interior-point method in linear programming and as an iteratively reweighted least squares (IRLS) method in sparse recovery, implying the convergence of these methods. Together, these results not only shed some light on how nature might be solving instances of perhaps the most complex problem in P, but also suggest new algorithmic approaches to them.
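To give a flavor of these dynamics (a minimal sketch, not the exact formulation from the work above): in the discretized Physarum dynamics for shortest path, each edge carries a conductance, the demanded flow is routed electrically with edge resistance cost/conductance, and each conductance is then nudged toward the magnitude of the flow it carries. On the toy three-node instance below (all numbers are illustrative assumptions), the conductances concentrate on the cheaper of the two s-t paths:

```python
import numpy as np

# Hypothetical instance: nodes 0 (source), 1, 2 (sink);
# path 0-1-2 has total cost 2, the direct edge 0-2 costs 3.
edges = [(0, 1), (1, 2), (0, 2)]
cost = np.array([1.0, 1.0, 3.0])
n, m = 3, len(edges)

# Node-edge incidence matrix B (n x m): +1 at the tail, -1 at the head.
B = np.zeros((n, m))
for j, (u, v) in enumerate(edges):
    B[u, j], B[v, j] = 1.0, -1.0

b = np.array([1.0, 0.0, -1.0])      # route one unit of flow from node 0 to node 2

x = np.ones(m)                      # initial edge conductances
h = 0.5                             # discretization step
for _ in range(200):
    w = x / cost                    # electrical conductance of each edge
    L = B @ np.diag(w) @ B.T        # weighted graph Laplacian
    p = np.linalg.pinv(L) @ b       # node potentials (L is singular, so use pinv)
    q = w * (B.T @ p)               # induced electrical flow on each edge
    x = (1 - h) * x + h * np.abs(q) # Physarum update: move x toward |flow|

print(np.round(x, 3))               # conductances concentrate on the cheap path
```

At a fixed point the conductances equal the flow magnitudes, and here the iterates drive the expensive direct edge toward zero while the two edges of the shortest path approach unit conductance.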
Abstract: Game theory is used widely in economics, sociology, psychology and political science, where a large number of individuals independently make decisions that result in a collective outcome. Moreover, individuals rarely interact directly with all other members, but rather only with their friends, leading to significant network effects on the end outcome. Understanding how and why an individual makes decisions becomes even more crucial in the realm of computer science, where we must design algorithms and systems that function in such strategic environments. In this talk I will introduce the tenets of algorithmic game theory and discuss some vignettes on how these principles relate to social networks.
In many optimization problems, the form of the underlying objective function is unknown (and even if it is known, the function is expensive to evaluate or compute gradients of). In such problems, all we can obtain is a small number of function evaluations. Examples of such problems abound in many applications, such as the design of complex systems that require simultaneously tuning hundreds or thousands of parameters, or the design of deep learning models with a large number of hyperparameters. Traditional optimization methods that require expensive gradient computations and many function evaluations are not suitable for such problems. Bayesian Optimization allows us to optimize such functions in a "query-efficient" way, i.e. requiring as few function evaluations as possible, while "learning" the function along the way. In this talk, we will look at the basic ideas behind Bayesian Optimization and examine some examples of how Bayesian Optimization problems are formulated and solved.
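To make the loop concrete, here is a minimal one-dimensional sketch of the idea (the objective, kernel length scale, and budget below are all illustrative assumptions, not from the talk): a Gaussian-process surrogate is fit to the evaluations gathered so far, and the next query is chosen by maximizing the expected-improvement acquisition over a grid.

```python
import numpy as np
from math import erf, sqrt, pi

def f(x):
    # Hypothetical expensive black-box objective; its maximum is at x = 0.3.
    return -(x - 0.3) ** 2

def rbf(a, b, ell=0.1):
    # Squared-exponential kernel between two 1-d point sets.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # GP regression: posterior mean and std at query points Xs given data (X, y).
    Kinv = np.linalg.inv(rbf(X, X) + noise * np.eye(len(X)))
    Ks = rbf(Xs, X)
    mu = Ks @ Kinv @ y
    var = 1.0 - np.einsum('ij,jk,ik->i', Ks, Kinv, Ks)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, s, best):
    # EI for maximization: E[max(f - best, 0)] under the Gaussian posterior.
    z = (mu - best) / s
    Phi = 0.5 * (1 + np.array([erf(zi / sqrt(2)) for zi in z]))
    phi = np.exp(-0.5 * z ** 2) / sqrt(2 * pi)
    return (mu - best) * Phi + s * phi

grid = np.linspace(0, 1, 201)
X = np.array([0.05, 0.95])          # two initial evaluations
y = f(X)
for _ in range(10):                 # query budget: 10 further evaluations
    mu, s = gp_posterior(X, y, grid)
    xn = grid[np.argmax(expected_improvement(mu, s, y.max()))]
    X, y = np.append(X, xn), np.append(y, f(xn))

print(X[np.argmax(y)])              # best point found with 12 evaluations total
```

The acquisition function trades off exploitation (high posterior mean) against exploration (high posterior uncertainty), which is what makes the loop query-efficient.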
The field of convex optimization has a rich history of deep and beautiful results that have made possible innumerable advances in diverse areas of science and manufacturing. However, the modern areas of large-scale signal processing, machine learning, and data analytics often present problems that are best modeled as non-convex optimization problems. Examples include recommendation systems, learning in the presence of corruptions, and high-dimensional signal processing. It is often tempting to modify or "relax" these problems into convex problems so that existing techniques can be used. In this talk I will present evidence that this is a sub-optimal approach. I will also introduce a few "non-convex" optimization techniques that have been applied to these problems with great success.
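As one concrete instance of such a technique (an illustrative sketch, not necessarily a method from the talk), alternating minimization attacks the non-convex low-rank matrix completion problem underlying recommendation systems: with one factor fixed, each row of the other factor solves a small least-squares problem, and the two factors are updated in turn. The instance below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 30, 30, 2
M = rng.normal(size=(n, r)) @ rng.normal(size=(r, m))  # ground-truth rank-2 matrix
mask = rng.random((n, m)) < 0.5                        # observe roughly half the entries

# Alternating minimization over the factorization M ~ U @ V.T.
U = rng.normal(size=(n, r))
V = rng.normal(size=(m, r))
for _ in range(30):
    # Fix V: each row of U solves a least-squares fit to its observed entries.
    for i in range(n):
        obs = mask[i]
        U[i] = np.linalg.lstsq(V[obs], M[i, obs], rcond=None)[0]
    # Fix U: symmetrically update each row of V.
    for j in range(m):
        obs = mask[:, j]
        V[j] = np.linalg.lstsq(U[obs], M[obs, j], rcond=None)[0]

err = np.linalg.norm(U @ V.T - M) / np.linalg.norm(M)
print(err)                          # relative error of the completed matrix
```

The objective is non-convex in (U, V) jointly, yet each alternating step is a convex subproblem, and on well-sampled low-rank instances like this one the iterates recover the full matrix from the observed half.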