Motion estimation has been studied extensively in neuroscience over the last two decades. The general consensus that has emerged from studies of primate vision is that it is carried out in a two-stage process involving the cortical areas V1 and MT. Spatio-temporal filters are the leading contenders among models that capture the characteristics exhibited in these areas. Even though the biological vision literature contains many models addressing optical flow estimation based on spatio-temporal filters, little is known about their performance on modern computer vision datasets such as Middlebury. In this paper we start from a mostly classical feedforward V1-MT model, introducing an additional decoding step to obtain an optical flow estimate. Two extensions are also discussed, using nonlinear filtering of the MT response for a better handling of discontinuities. One essential contribution of this paper is to show how a neural model can be adapted to deal with real sequences, and it is here, for the first time, that such a neural model is benchmarked on the modern computer vision dataset Middlebury. Results are promising and suggest several possible improvements. We think that this work could act as a good starting point for building bio-inspired, scalable computer vision algorithms. For that reason, we also share the code in order to encourage research in this direction.
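To make the two-stage pipeline concrete, here is a minimal sketch of a feedforward V1-MT motion-energy model with a decoding step, assuming a grayscale image sequence `frames` of shape (T, H, W). The filter bank (small 3D Gabor quadrature pairs tuned to a few directions and a single speed), the Gaussian MT pooling, and the weighted-vector-average decoding are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

def gabor3d(theta, speed, size=7, sigma=1.5, freq=0.25):
    """Quadrature pair of spatio-temporal Gabor kernels tuned to direction
    theta (radians) and a given image speed (pixels/frame)."""
    r = np.arange(size) - size // 2
    t, y, x = np.meshgrid(r, r, r, indexing="ij")
    # Spatial coordinate along the preferred direction, shifted over time
    # so the filter responds best to motion at `speed` along `theta`.
    u = x * np.cos(theta) + y * np.sin(theta) - speed * t
    env = np.exp(-(x**2 + y**2 + t**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * freq * u), env * np.sin(2 * np.pi * freq * u)

def v1_mt_flow(frames, n_dirs=8, speed=1.0, mt_sigma=3.0):
    """Return a coarse (u, v) flow field for the central frame of `frames`."""
    thetas = np.linspace(0, 2 * np.pi, n_dirs, endpoint=False)
    responses = []
    for theta in thetas:
        g_even, g_odd = gabor3d(theta, speed)
        # V1 stage: motion energy from the squared quadrature-pair outputs.
        energy = (convolve(frames, g_even, mode="nearest") ** 2 +
                  convolve(frames, g_odd, mode="nearest") ** 2)
        # MT stage: spatial pooling of V1 energies (here, a Gaussian blur).
        mt = gaussian_filter(energy[frames.shape[0] // 2], mt_sigma)
        responses.append(mt)
    responses = np.stack(responses)                  # (n_dirs, H, W)
    weights = responses / (responses.sum(0) + 1e-9)  # normalize the population
    # Decoding step: weighted vector average of the preferred velocities.
    u = speed * (weights * np.cos(thetas)[:, None, None]).sum(0)
    v = speed * (weights * np.sin(thetas)[:, None, None]).sum(0)
    return u, v
```

The nonlinear filtering of the MT response mentioned in the abstract would replace the simple Gaussian pooling above; handling discontinuities would require an edge-preserving operator there rather than isotropic smoothing.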
from HAL : Dernières publications http://ift.tt/12YLAWB