Professor Pawan Mudigonda is an Associate Professor of Engineering Science and a Tutorial Fellow at Lady Margaret Hall.
He spent three years (2009-2011) as a postdoctoral researcher with Professor Daphne Koller in the Computer Science Department of Stanford University. Before that, he was a Ph.D. student (2003-2007) in the Computer Vision Group at Oxford Brookes University and a postdoctoral researcher (2008) in Oxford's Visual Geometry Group, supervised by Professor Philip Torr and Professor Andrew Zisserman.
As an undergraduate student (1999-2003) at the International Institute of Information Technology, Hyderabad, Pawan worked at the Center for Visual Information Technology, under the supervision of Professor C.V. Jawahar and Professor P.J. Narayanan.
- B1: Engineering Computation (at the University of Oxford)
- C19: Machine Learning (at the University of Oxford)
- Optimization (at CDT in AIMS, University of Oxford)
- Guest Lecture in Learning from Big Data (at CDT in AIMS, University of Oxford)
- Guest Lectures in Optimization and Learning (at CDT in AIMS, University of Oxford)
- Introduction to Discrete Optimization (at Ecole Centrale Paris)
- Polyhedral Combinatorial Optimization (at Ecole Centrale Paris)
- Discrete Optimization for Vision and Learning (at Ecole Normale Superieure de Cachan with Nikos Komodakis)
- Optimization (at Ecole Centrale Paris with Paul-Henry Cournede)
- Discrete Inference and Learning for Artificial Vision (online course on Coursera with Nikos Paragios)
- Probabilistic Inference (at Ecole Centrale Paris)
Pawan is the Principal Researcher of the Optimization for Vision and Learning group, part of the Information Engineering division.
The group's research focuses on the design and analysis of optimization algorithms for problems that arise in computer vision and machine learning.
P. Dokania, A. Behl, C. V. Jawahar and M. Pawan Kumar
The team considers the problem of using high-order information (for example, people in the same image tend to perform the same action) to improve ranking accuracy, measured by average precision. They develop two learning frameworks: the high-order binary SVM (HOB-SVM), which optimizes a convex upper bound on the 0-1 loss function, and the high-order average precision SVM (HOAP-SVM), which optimizes a difference-of-convex upper bound on the average precision loss function.
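As a point of reference for the loss these frameworks target, the sketch below (hypothetical helper names, not the paper's code) computes the average precision of a ranking induced by real-valued scores over binary labels, and the corresponding AP loss that HOAP-SVM upper-bounds during training.

```python
# Hypothetical sketch: average precision of a score-induced ranking, and
# the AP-based loss (1 - AP). Not the paper's implementation.

def average_precision(scores, labels):
    """AP of the ranking induced by `scores` over binary `labels` (1 = positive)."""
    ranked = sorted(zip(scores, labels), key=lambda p: -p[0])  # best score first
    hits, precision_sum = 0, 0.0
    for rank, (_, y) in enumerate(ranked, start=1):
        if y == 1:
            hits += 1
            precision_sum += hits / rank   # precision at this positive sample
    return precision_sum / hits if hits else 0.0

def ap_loss(scores, labels):
    return 1.0 - average_precision(scores, labels)
```

For a perfect ranking such as scores `[3, 2, 1]` with labels `[1, 1, 0]`, the AP is 1.0 and the loss is 0; ranking the sole positive last, as in scores `[1, 3, 2]` with labels `[1, 0, 0]`, gives an AP of 1/3.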
A. Behl, C. V. Jawahar and M. Pawan Kumar
The authors develop a novel latent AP-SVM that minimizes a carefully designed upper bound on the AP-based loss function over a weakly supervised dataset. The approach is based on the hypothesis that, in the challenging setting of weakly supervised learning, it is crucial to optimize the right accuracy measure. Using publicly available datasets, they demonstrate the advantage of their approach over standard loss-based binary classifiers on challenging computer vision problems.
M. Pawan Kumar, B. Packer and D. Koller
The group developed an accurate iterative algorithm for learning the parameters of latent variable models such as the latent structural SVM. Their approach builds on the intuition that the learner should be presented with the training samples in a meaningful order: easy samples first, hard samples later. At each iteration, the approach simultaneously chooses the easy samples and updates the parameters.
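The easy-first idea can be illustrated on a deliberately simple problem. The sketch below (a toy stand-in, not the paper's latent structural SVM code) alternates between selecting samples whose current loss falls below a threshold and refitting a one-parameter linear model on the selected samples, with the threshold growing so that harder samples are gradually admitted; all names and constants here are illustrative assumptions.

```python
# Hypothetical sketch of the self-paced idea on 1-D least-squares regression:
# alternate between (a) selecting "easy" samples whose current loss is below
# a threshold and (b) refitting on the selected samples, while the threshold
# grows so harder samples are admitted in later rounds.

def self_paced_fit(xs, ys, threshold=1.0, growth=2.0, rounds=5):
    w = 0.0                                    # 1-D linear model: y ~ w * x
    for _ in range(rounds):
        losses = [(w * x - y) ** 2 for x, y in zip(xs, ys)]
        easy = [i for i, loss in enumerate(losses) if loss < threshold]
        if easy:                               # closed-form fit on the easy set
            num = sum(xs[i] * ys[i] for i in easy)
            den = sum(xs[i] ** 2 for i in easy)
            w = num / den if den else w
        threshold *= growth                    # admit harder samples next round
    return w
```

On data with one gross outlier, e.g. `xs = [1, 2, 3, 4]`, `ys = [2, 4, 6, 100]` with `threshold=50.0`, the outlier's loss never drops below the threshold, so the model fits the clean trend `w = 2` instead of being dragged towards the outlier.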
M. Pawan Kumar and D. Koller
The group considers the problem of simultaneously dividing an image into coherent regions and assigning labels to those regions using a global energy function. They form a large dictionary of putative regions using bottom-up over-segmentation techniques and formulate the problem of selecting the regions and their labels as an integer program. They provide an efficient dual decomposition method to solve an accurate linear programming relaxation of the integer program.
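The dual decomposition principle itself is easy to demonstrate on a toy problem. The sketch below (an illustrative assumption, not the paper's region-selection solver) minimizes f(x) + g(x) over a discrete label set by duplicating the variable, solving each term independently under a Lagrangian price on the copy constraint, and updating that price by projected subgradient until the two copies agree.

```python
# Hypothetical illustration of dual decomposition: duplicate x into two
# copies, solve min_x f(x) + lam*x and min_x g(x) - lam*x independently,
# and move the multiplier lam by a diminishing subgradient step until the
# copies agree (at which point the labelling is optimal for f + g).

def dual_decomposition(f, g, labels, steps=100):
    lam = 0.0
    for t in range(1, steps + 1):
        xf = min(labels, key=lambda x: f(x) + lam * x)   # subproblem for f
        xg = min(labels, key=lambda x: g(x) - lam * x)   # subproblem for g
        if xf == xg:                                     # copies agree: done
            return xf
        lam += (1.0 / t) * (xf - xg)                     # subgradient step
    return xf                                            # heuristic primal answer
```

For instance, with `f(x) = (x - 3)**2`, `g(x) = (x - 1)**2` and labels `range(5)`, the subproblems first return 3 and 1, the price moves, and both copies then agree on the joint minimizer x = 2.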
M. Pawan Kumar and P. Torr
The team developed a new st-MINCUT-based move-making method for MAP estimation of discrete MRFs with arbitrary unary potentials and truncated convex pairwise potentials. They prove that, in polynomial time, their method provides the best known multiplicative bounds for these problems (matching the bounds obtained by solving the standard linear programming relaxation followed by randomized rounding). They demonstrate the efficacy of their approach using synthetic and real data experiments.
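To make the problem setting concrete, the sketch below computes an exact MAP labelling with truncated linear pairwise potentials on a *chain* MRF by dynamic programming. This is not the paper's st-MINCUT move-making method: on loopy grids dynamic programming no longer applies, which is exactly where move-making algorithms become necessary. All function and parameter names here are illustrative.

```python
# Hypothetical sketch: exact MAP on a chain MRF via Viterbi-style dynamic
# programming, with truncated linear pairwise potentials
#   pairwise(l, l') = min(lam * |l - l'|, trunc).

def chain_map(unary, lam=1.0, trunc=2.0):
    """unary[i][l] is the cost of giving node i the label l."""
    n, num_labels = len(unary), len(unary[0])
    cost = list(unary[0])                  # best cost of each label at node 0
    back = []                              # backpointers for the backtrack
    for i in range(1, n):
        new_cost, ptr = [], []
        for l in range(num_labels):
            def trans(lp):                 # cost of arriving at l from lp
                return cost[lp] + min(lam * abs(l - lp), trunc)
            best = min(range(num_labels), key=trans)
            ptr.append(best)
            new_cost.append(trans(best) + unary[i][l])
        cost = new_cost
        back.append(ptr)
    l = min(range(num_labels), key=lambda x: cost[x])
    labels = [l]                           # backtrack the MAP labelling
    for ptr in reversed(back):
        l = ptr[l]
        labels.append(l)
    return labels[::-1]
```

With unaries strongly preferring labels 0, 0 and 2 on a three-node chain, the truncation caps the 0-to-2 jump penalty at `trunc`, so the MAP labelling keeps the preferred labels rather than smoothing the jump away.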
M. Pawan Kumar, V. Kolmogorov and P. Torr
An analysis of several convex relaxations for MAP estimation of discrete MRFs. The team shows that the standard linear programming relaxation dominates (provides a tighter approximation than) a large class of quadratic programming and second-order cone programming relaxations. Their analysis leads to new second-order cone programming relaxations that are tighter than the linear programming relaxation.
M. Pawan Kumar, P. Torr and A. Zisserman
Given an image containing an instance of a known object category, the team obtains an accurate, object-like segmentation automatically. They match a parts-based object category model to the image; each sample of the model provides cues about the shape of the particular instance, which are incorporated into a global energy function. The segmentation is obtained by minimizing the energy using a single st-MINCUT.
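The st-MINCUT step can be illustrated on a tiny 1-D "image". The sketch below (an illustrative assumption, not the paper's code, and without the shape cues) minimizes a binary submodular energy, unary costs plus a Potts smoothness term, exactly with one min cut: unaries become terminal edges, the smoothness term becomes neighbour edges, and the cut found by Edmonds-Karp max-flow yields the labelling.

```python
from collections import deque

# Hypothetical sketch: exact binary labelling of a 1-D signal by st-MINCUT.
# Energy: E(x) = sum_i unary_{x_i}(i) + lam * sum_{i~j} [x_i != x_j].

def segment_1d(unary0, unary1, lam):
    """unary0[i] / unary1[i]: cost of giving pixel i label 0 / 1."""
    n = len(unary0)
    S, T = n, n + 1                       # source and sink node indices
    cap = [[0.0] * (n + 2) for _ in range(n + 2)]
    for i in range(n):
        cap[S][i] = unary0[i]             # cut edge paid iff pixel i gets label 0
        cap[i][T] = unary1[i]             # cut edge paid iff pixel i gets label 1
        if i + 1 < n:                     # Potts smoothness between neighbours
            cap[i][i + 1] = cap[i + 1][i] = lam

    def bfs():                            # BFS tree of the residual graph from S
        parent = [-1] * (n + 2)
        parent[S] = S
        queue = deque([S])
        while queue:
            u = queue.popleft()
            for v in range(n + 2):
                if parent[v] == -1 and cap[u][v] > 1e-12:
                    parent[v] = u
                    queue.append(v)
        return parent

    while True:                           # Edmonds-Karp max-flow
        parent = bfs()
        if parent[T] == -1:
            break
        path, v = [], T                   # recover the augmenting path
        while v != S:
            path.append((parent[v], v))
            v = parent[v]
        flow = min(cap[u][v] for u, v in path)
        for u, v in path:                 # push flow, update residuals
            cap[u][v] -= flow
            cap[v][u] += flow
    reach = bfs()                         # S-side of the min cut -> label 1
    return [1 if reach[i] != -1 else 0 for i in range(n)]
```

On a four-pixel signal whose unaries prefer labels 0, 0, 1, 1, the optimal cut pays a single smoothness penalty at the boundary and recovers exactly that labelling.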
M. Pawan Kumar, P. Torr and A. Zisserman
Given a video, the group learns a layered representation of the scene for motion segmentation in an unsupervised manner. The layered representation consists of the shape and appearance of the rigidly moving segments in the scene, their occlusion ordering, and their framewise transformations. These parameters are estimated from the video by minimizing a global energy function using block coordinate descent.
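Block coordinate descent itself reduces to a simple pattern: repeatedly minimize the objective exactly over one block of parameters while holding the others fixed. The sketch below (a toy stand-in, not the layered-representation estimation) applies it to a two-block quadratic; the objective and both closed-form block updates are illustrative assumptions.

```python
# Hypothetical sketch of block coordinate descent on a coupled objective
#   f(a, b) = (a - 2*b)**2 + (b - 3)**2,
# alternating exact minimization over a (with b fixed) and over b (with a
# fixed), in the spirit of alternating over the layered model's parameter
# groups.

def bcd(a=0.0, b=0.0, iters=50):
    for _ in range(iters):
        a = 2 * b                  # argmin over a: drives (a - 2b)^2 to zero
        b = (2 * a + 3) / 5        # argmin over b: set d/db f(a, b) = 0
    return a, b
```

Each update can only decrease the objective, and here the iterates contract towards the joint minimizer (a, b) = (6, 3), where f = 0. In the layered-representation setting, the blocks are far richer (shapes, appearances, orderings, transformations), but the alternation follows the same scheme.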
I am always looking for motivated PhD students. Please read some of my recent papers to learn more about my research. Due to time constraints, I cannot accept any internship applications.