I work on the theory of scalable data management. One of my goals is to extend the capabilities of modern data management systems in generic ways, so as to allow them to support novel functionalities that seem hard at first. Examples of such functionalities include managing provenance, trust, explanations, and uncertain or inconsistent data. To support these functionalities, I am interested in understanding the fundamental algebraic properties that allow algorithms to scale to large amounts of data: Given a large data or knowledge base, what types of questions can be answered efficiently? And what do we do about those that cannot?
For the hard questions, our work tries to find ways to change the objective so that it qualitatively preserves the original motivation, yet installs those nice algebraic properties (something we call "algebraic cheating"). Our work has shown that approaches that leverage those properties and look at the overall end-to-end goal in a more holistic way can often work with smaller training data and achieve remarkable speed-ups.
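As a toy illustration of the kind of algebraic property at play (a sketch added here for intuition, not code from our lab): when an aggregation operator is associative and commutative, the computation can be partitioned into chunks, processed independently, and recombined without changing the answer. This is what lets such aggregates scale across machines.

```python
from functools import reduce

def combine(a, b):
    # Any associative, commutative operation works here
    # (e.g., the "plus" of a provenance semiring); plain addition
    # is used purely as an illustration.
    return a + b

def sequential_aggregate(values):
    # A single sequential pass over the data.
    return reduce(combine, values, 0)

def chunked_aggregate(values, chunk_size):
    # Each chunk could run on a different worker; partial results
    # are then merged. Associativity guarantees the same answer
    # as the sequential pass, regardless of how the data is split.
    partials = [
        reduce(combine, values[i:i + chunk_size], 0)
        for i in range(0, len(values), chunk_size)
    ]
    return reduce(combine, partials, 0)

data = list(range(100))
assert sequential_aggregate(data) == chunked_aggregate(data, chunk_size=7)
```

Operators lacking such properties (e.g., median) cannot be split this way, which is one reason reformulating an objective to regain these properties can pay off.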
Our DATA lab is growing, and we are actively looking for students with strong foundations in algorithms, theory, discrete math, data management, and machine learning. Please visit our research opportunities page. Note that I am a big fan of applying Ray Dalio's principles to research.

Current PhD students: Neha Makhija, Nikos Tziavelis