Robust Visual Tracking via Exclusive Context Modeling

Tianzhu Zhang, Bernard Ghanem, Si Liu, Narendra Ahuja
"Robust Visual Tracking via Exclusive Context Modeling"
IEEE Transactions on Cybernetics 2015

In this paper, we formulate particle filter-based object tracking as an exclusive sparse learning problem that exploits contextual information. To achieve this goal, we propose the context-aware exclusive sparse tracker (CEST) to model particle appearances as linear combinations of dictionary templates that are updated dynamically. Learning the representation of each particle is formulated as an exclusive sparse representation problem, where the overall dictionary is composed of multiple group dictionaries that can contain contextual information. With context, CEST is less prone to tracker drift. Interestingly, we show that the popular L1 tracker [1] is a special case of our CEST formulation. The proposed learning problem is efficiently solved using an accelerated proximal gradient method that yields a sequence of closed form updates. To make the tracker much faster, we reduce the number of learning problems to be solved by using the dual problem to quickly and systematically rank and prune particles in each frame. We test our CEST tracker on challenging benchmark sequences that involve heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that CEST consistently outperforms state-of-the-art trackers.
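To make the solver described above concrete, the sketch below shows an accelerated proximal gradient (FISTA-style) loop for a representation problem of the form min_c 0.5*||y - Dc||^2 + lam * sum_g (||c_g||_1)^2, where each group g corresponds to one group dictionary. This is only an illustration under standard exclusive-lasso assumptions, not the authors' implementation; the function names (prox_sq_l1, represent_particle), the group layout, and the parameter values are placeholders.

```python
import numpy as np


def prox_sq_l1(v, lam):
    """Closed-form prox of lam * (||x||_1)^2 at v, computed by sorting.

    Standard exclusive-lasso prox for a single group; interface and names
    are illustrative, not taken from the paper's code.
    """
    if lam <= 0:
        return v.copy()
    u = np.abs(v)
    u_sorted = np.sort(u)[::-1]                # magnitudes, largest first
    k = np.arange(1, u.size + 1)
    s_cand = np.cumsum(u_sorted) / (1.0 + 2.0 * lam * k)  # total magnitude if top-k stay active
    valid = u_sorted > 2.0 * lam * s_cand
    if not np.any(valid):                      # only happens when v == 0
        return np.zeros_like(v)
    s = s_cand[np.nonzero(valid)[0][-1]]       # largest consistent active set
    return np.sign(v) * np.maximum(u - 2.0 * lam * s, 0.0)


def represent_particle(y, D, groups, lam=0.01, n_iter=100):
    """FISTA-style accelerated proximal gradient for
        min_c  0.5 * ||y - D c||^2 + lam * sum_g (||c_g||_1)^2.

    `groups` is a list of index arrays partitioning the dictionary columns
    (e.g., one group per target/context group dictionary).
    """
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the smooth part
    c = np.zeros(D.shape[1])
    z, t = c.copy(), 1.0
    for _ in range(n_iter):
        grad = D.T @ (D @ z - y)               # gradient of 0.5 * ||y - D z||^2
        v = z - grad / L
        c_new = np.empty_like(c)
        for g in groups:                       # groupwise closed-form prox update
            c_new[g] = prox_sq_l1(v[g], lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = c_new + ((t - 1.0) / t_new) * (c_new - c)   # Nesterov momentum step
        c, t = c_new, t_new
    return c
```

As a usage example, with a 40-column dictionary split into one target group and two context groups, `groups` could be `[np.arange(20), np.arange(20, 30), np.arange(30, 40)]`; each FISTA iteration is a gradient step followed by one closed-form prox per group, which is what makes the per-particle updates cheap.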