Algorithms, a random walk: A conversation with Nikhil Bansal

Bansal is known for his use of a broad mathematical toolset to find algorithmic solutions to problems once thought computationally intractable.
Nikhil Bansal, Patrick C. Fischer Professor of Theoretical Computer Science

Nikhil Bansal joined the faculty at the University of Michigan as the Patrick C. Fischer Professor of Theoretical Computer Science in 2021. His work spans the field of algorithm design, with past and current work on combinatorics, mathematical optimization, approximation algorithms, discrete mathematics, and scheduling. His research has garnered a great deal of attention in mathematics and theoretical computer science, and he has proposed a number of algorithmic approaches to hard problems once thought intractable.

Bansal earned his PhD at Carnegie Mellon University and has held positions as a researcher at IBM and as a faculty member at the Eindhoven University of Technology in the Netherlands. We sat down with him to learn about the main thrusts of his work and his motivations as a professor.

What are the key research problems that motivate you?

I work in an area called algorithm design. It’s a broad spectrum; algorithms are used all over the place. I focus more on the theoretical aspects: the mathematical questions underlying the field.

There are a few subfields within algorithms; one is approximation algorithms. This covers problems that are so hard you can’t solve them by brute force. Say, for example, that you want to design a course timetable for the department with various constraints: you want to find a feasible schedule of who can teach when. If you try to brute-force this problem and check every possibility, then even our fastest computers will take millions of years to finish.
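To get a feel for the numbers he is describing, here is a back-of-the-envelope sketch in Python; the instance size and the machine speed are hypothetical illustrations, not figures from the interview.

```python
# Rough illustration of why brute-force timetabling blows up: with `courses`
# courses and `slots` candidate time slots per course, exhaustive search must
# examine slots ** courses schedules. (All numbers here are made up.)

courses, slots = 40, 10            # a hypothetical department-sized instance
schedules = slots ** courses       # every way to place each course in a slot

checks_per_second = 10 ** 9        # an optimistic rate for a fast computer
years = schedules / checks_per_second / (60 * 60 * 24 * 365)

print(f"{schedules:.2e} candidate schedules")
print(f"about {years:.1e} years to enumerate them all")
# -> 1e40 schedules and roughly 3e23 years, far beyond "millions of years"
```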

So you need smarter ideas to reduce the solution’s search space. This, in turn, ties into a lot of beautiful mathematics that has been developed over the last few years, connecting high-dimensional geometry and combinatorics. These types of problems bring together a variety of mathematical fields.

I work across this whole spectrum. Sometimes you need to develop new mathematical techniques to understand challenging questions. Sometimes it’s about using existing techniques to solve algorithmic problems that arise in computer science.

Do the problems you solve tend to spring from specific practical problems, or do you work more on open mathematical questions?

My work includes both. Sometimes you hear a nice question from somebody working in practice. This happened a lot when I was at IBM, for instance. There are various research groups at IBM, some whose work is very applied, and when they ask a question you start thinking about it. Then you realize it’s connected to all of these other questions in the realm of theory.

There is the other direction too. There are some questions which seem very removed, having been abstracted over the years, that are long-standing questions for theoretical computer science or mathematics. They’re like prize bounties: a question may have been unanswered for twenty years because of some inherent difficulty where everyone else is also stuck, and if you solve it, other researchers take notice and build on those ideas.

What are some of your most recent research projects?

One thing I’ve been focusing on is what’s called discrepancy theory. It’s a classical area in mathematics, but many people had conjectured that problems in this area could not be solved efficiently by any algorithm.

Ten years ago I actually came up with such an algorithm for a problem in this area, and it was a big surprise to the community. [“Constructive algorithms for discrepancy minimization,” at the 2010 IEEE Symposium on Foundations of Computer Science.] It led to a whole new research area called algorithmic discrepancy theory, which is still being studied actively.

One way to think about discrepancy theory is to imagine that you have a bunch of people with a variety of different attributes — age, sex, education, and so on. Discrepancy theory is concerned with splitting these people into groups, such that the groups turn out to be as similar to each other as possible. The name comes from wanting to minimize the discrepancy between the resulting parts.

There are a lot of problems that follow this sort of process, like dividing a network into two similar parts. And what mathematicians showed was that, somehow, there is always a solution with low discrepancy: you can always divide things into two groups that are nearly balanced with respect to every attribute.

But, of course, doing this quickly or efficiently is a different matter. You can try all possible ways of splitting things into two groups, but that takes exponential time. It was believed that there was no better way than brute force, trying every possibility.
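For concreteness, here is a minimal sketch of the set-up he describes, with people as rows and attributes as columns of a 0/1 matrix; the brute-force search below is the exponential-time baseline, not the semidefinite-programming algorithm discussed next, and the data is invented for illustration.

```python
from itertools import product
import random

def discrepancy(A, signs):
    """Largest imbalance any attribute (column) sees under a +1/-1 split."""
    n_people, n_attrs = len(A), len(A[0])
    return max(abs(sum(A[i][j] * signs[i] for i in range(n_people)))
               for j in range(n_attrs))

def best_split_brute_force(A):
    """Try all 2^n ways to split n people into two groups: exponential time."""
    return min(discrepancy(A, signs)
               for signs in product((-1, 1), repeat=len(A)))

# Toy instance: 10 people, 6 random 0/1 attributes (hypothetical data).
random.seed(0)
A = [[random.randint(0, 1) for _ in range(6)] for _ in range(10)]

random_split = [random.choice((-1, 1)) for _ in A]
print("random split discrepancy:", discrepancy(A, random_split))
print("best possible discrepancy:", best_split_brute_force(A))
```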

I made use of techniques from another area of mathematics called semidefinite programming to show that it could be done. It involved probability, optimization, and other techniques that had been overlooked. Since then the study of algorithmic discrepancy has led to several new connections between these areas, which is of interest to both computer scientists and mathematicians. I was invited to talk about discrepancy at the International Congress of Mathematicians, for example.

What is a past project that stands out in your career?

Another area I’m working in is how to deal with uncertainty. For example, say you’re assigning jobs to machines with a scheduling algorithm. You don’t know all the input you’re going to receive up front, and jobs arrive over time. But you have to make a decision based on what you know right now. You can’t just wait until the end of the day, after all of the jobs have entered the queue, to make an optimal scheduling decision; you have to act right when each job presents itself.
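As a toy illustration of deciding on the fly (my own example, not a method from Bansal's papers), the classic greedy rule simply sends each arriving job to the machine with the smallest current load, without any knowledge of future arrivals.

```python
import heapq

def online_greedy_schedule(jobs, num_machines):
    """Assign each job, in arrival order, to the least-loaded machine.

    This is Graham's list-scheduling rule, shown only to make the idea of an
    online decision concrete; the job lengths below are made up.
    """
    loads = [(0.0, m) for m in range(num_machines)]   # (current load, machine id)
    heapq.heapify(loads)
    for job_length in jobs:                           # jobs revealed one at a time
        load, machine = heapq.heappop(loads)          # least-loaded machine right now
        heapq.heappush(loads, (load + job_length, machine))
    return sorted(load for load, _ in loads)

print(online_greedy_schedule([3, 1, 4, 1, 5, 9, 2, 6], num_machines=3))
# The largest load (the makespan) is provably within a factor of 2 of what an
# optimal schedule could achieve, even though the algorithm never sees the future.
```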

This is a more traditional area within computer science and algorithm design called online algorithms. It essentially deals with how to do things on the fly. For many years there was no systematic theory for how to deal with these problems; people just had lots of clever ideas. So, one thing I’m proud of over the last 10-15 years is developing a systematic way to analyze and design algorithms for these problems. Using these techniques we could solve a central problem in the area that had been open for over 20 years [“A polylogarithmic-competitive algorithm for the k-server problem”]. This work won the Best Paper Award at the IEEE Symposium on Foundations of Computer Science in 2011.

What are some of your aspirational research goals?

There are many problems in mathematics for which we don’t yet know whether efficient algorithms exist. In computer science, when people already know how to complete a task algorithmically, you can ask whether it can be done faster: instead of n³ time, could it be done in n² time?
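A standard example of this kind of question (my illustration, not one from the interview) is matrix multiplication: the textbook method takes about n³ operations, faster methods are known, and whether anything close to n² is achievable remains open.

```python
def matmul_naive(A, B):
    """Multiply two n x n matrices with the textbook triple loop: ~n^3 work."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            a_ik = A[i][k]
            for j in range(n):
                C[i][j] += a_ik * B[k][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul_naive(A, B))   # [[19, 22], [43, 50]]
# Strassen-style algorithms already beat n^3, but nobody knows whether
# n^2 time (up to lower-order factors) is possible.
```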

But we have no clue whether some tasks can be done algorithmically at all. I find this area quite fascinating. As far as algorithms go, it’s the great unknown. 

This includes a diverse set of problems, and at this stage of our understanding it’s hard to say whether any of them have a good deal of overlap with each other or with any solved problems. But once you understand several problems, you can start to see a common pattern, for example why they are all hard or why they are all easy. For certain kinds of problems, we understand those similarities quite well by now. There are likely to be many more such patterns understood 20 years from now.

How do you set out to tackle a new problem, or keep up your research momentum in general?

In some sense theoretical research can be more frustrating than certain kinds of applied work. If you’re writing a program, then you can see when you’re making progress.

In mathematics or theory, it could be that you’re stuck on something for six months or years. Sometimes something will suddenly click, or it may never click. There are months when nothing happens and you wonder, what have I been doing for the past few months? That’s the nature of the job.

I think the way to stay sane is to remember that as long as you feel you are learning something new, you have a positive trajectory even if you’re not solving a problem.

And then, of course, there is always a lot going on in the field. Other people are doing very nice things, so you spend a lot of time trying to understand that work. On a day-to-day basis, you learn a lot by talking to different people or reading papers. You talk to students and colleagues, and work together to brainstorm ideas.

What are some of the most important things that you hope to give graduate students in your lab?

Ultimately you want your students to mature into well-rounded, broad researchers who enjoy what they do. The main goal is to get them excited. We as faculty are passionate about something and, hopefully, the next generation can take this further. You learn a lot from your students as well, so it’s a working relationship; you’re sharing the journey with them in a way.

What I would like most is for them to enjoy the PhD process and come out successfully on the other end.