CMU-CS-22-129
Computer Science Department
School of Computer Science, Carnegie Mellon University




Foveated Attention for Neural Nets

Chittesh Thavamani

M.S. Thesis

August 2022

CMU-CS-22-129.pdf


Keywords: Attention, Fovea, Retina, Foveation, Adaptive Downsampling, Dynamic Neural Networks, Object Detection, Autonomous Navigation, Warp Inversion, Streaming Perception, 3D Object Detection, Semantic Segmentation, COCO, Argoverse, Cityscapes, nuScenes, BDD100K, Faster R-CNN, PSPNet, RetinaNet, YOLOF, FCOS3D

Efficient processing of high-res video streams is safety-critical for many robotics applications such as autonomous driving. To maintain real-time performance, many practical systems downsample the video stream. But this can hurt downstream tasks such as (small) object detection. Instead, we take inspiration from biological vision systems that allocate more foveal "pixels" to salient parts of the scene. We introduce FOVEA, an approach for intelligent downsampling that ensures salient image regions remain "magnified" in the downsampled output. Given a high-res image, FOVEA applies a differentiable resampling layer that outputs a small fixed-size image canvas, which is processed by an object detector whose outputs are then differentiably mapped back onto the original image. To maintain overall efficiency, FOVEA makes use of cheap and readily available saliency cues, including dataset-specific spatial priors or temporal priors computed from recent object predictions. On the autonomous driving datasets Argoverse-HD and BDD100K, our proposed method boosts detection AP over standard Faster R-CNN, both with and without finetuning. Without any noticeable increase in compute, we improve accuracy on small objects by over 2x without degrading performance on large objects. Finally, FOVEA sets a new record for streaming AP (from 17.8 to 23.0 on a GTX 1080 Ti GPU), a metric designed to capture both accuracy and latency.

However, FOVEA is designed specifically for 2D object detection. To generalize to arbitrary spatial tasks, in our follow-up work we "learn to zoom" in on the input image, compute spatial features, and then "unzoom" to revert any deformations (LZU). To enable efficient and differentiable unzooming, we approximate the zooming warp with a piecewise bilinear mapping that is invertible. LZU can be applied to any task with spatial input and any model with spatial features, and we demonstrate this versatility by evaluating on a variety of tasks and datasets: object detection on Argoverse-HD and a synthetic video version of COCO, semantic segmentation on Cityscapes, and RGB-based 3D detection on nuScenes. Interestingly, we observe boosts in performance even when high-resolution sensor data is unavailable, implying that LZU can be used to "learn to upsample" as well.
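To make the core mechanism concrete, the snippet below is a minimal sketch of saliency-guided separable resampling in the spirit of FOVEA, assuming a PyTorch-style implementation: the per-axis marginal saliency is smoothed (a simple box filter here, standing in for the KDE-style kernel used in the thesis), integrated into a CDF, and inverted so that salient regions receive more output pixels. The function names, shapes, and smoothing choice are illustrative assumptions rather than the thesis code; the same one-dimensional warps can be inverted to map detections or features back to original image coordinates ("unzooming").

# Minimal sketch (not the thesis code): saliency-guided separable zoom.
import torch
import torch.nn.functional as F

def separable_zoom_grid(saliency, out_h, out_w, smooth_frac=0.1):
    """Build a grid_sample grid that magnifies salient regions.

    saliency: (H, W) nonnegative tensor; larger values -> more output pixels.
    Returns a (1, out_h, out_w, 2) grid with x/y coordinates in [-1, 1].
    """
    eps = 1e-6

    def axis_coords(marginal, n_out):
        # Smooth the 1D marginal saliency (box filter here; the thesis uses
        # a KDE-style kernel), then integrate it into a monotone CDF.
        k = max(3, int(smooth_frac * marginal.numel()) | 1)   # odd width
        kernel = torch.ones(1, 1, k) / k
        sal = F.conv1d(marginal.view(1, 1, -1), kernel, padding=k // 2).view(-1)
        cdf = torch.cumsum(sal, dim=0)
        cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0] + eps)
        # Invert the CDF: evenly spaced output pixels land densely where
        # saliency (and hence the CDF slope) is high, so those regions are
        # magnified in the fixed-size output canvas.
        targets = torch.linspace(0.0, 1.0, n_out)
        src = torch.searchsorted(cdf, targets).clamp(max=marginal.numel() - 1)
        return src.float() / (marginal.numel() - 1) * 2 - 1   # to [-1, 1]

    xs = axis_coords(saliency.sum(dim=0) + eps, out_w)        # (out_w,)
    ys = axis_coords(saliency.sum(dim=1) + eps, out_h)        # (out_h,)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    return torch.stack([gx, gy], dim=-1).unsqueeze(0)

# Usage: warp a high-res frame into a small canvas before running the detector.
# Predicted boxes can later be "unzoomed" by inverting the two 1D warps.
frame = torch.rand(1, 3, 1200, 1920)          # e.g. an Argoverse-HD frame
prior = torch.rand(1200, 1920)                # spatial/temporal saliency prior
grid = separable_zoom_grid(prior, out_h=600, out_w=960)
canvas = F.grid_sample(frame, grid, align_corners=True)   # (1, 3, 600, 960)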

67 pages

Thesis Committee:
Deva Ramanan (Chair)
Deepak Pathak

Srinivasan Seshan, Head, Computer Science Department
Martial Hebert, Dean, School of Computer Science

