Depth-Quality-Aware Salient Object Detection
Abstract
Existing fusion-based RGB-D salient object detection methods usually adopt a bistream structure to balance the fusion of RGB and depth (D) information. However, depth quality varies widely across scenes, and the state-of-the-art bistream approaches are depth-quality-unaware; as a result, they struggle to reach a complementary fusion status between RGB and D and produce poor fusion results when the D is of low quality. This paper therefore integrates a novel depth-quality-aware subnet into the classic bistream structure to assess depth quality before conducting selective RGB-D fusion. Compared with the SOTA bistream methods, the major advantage of our method is its ability to down-weight low-quality, no-contribution, or even negative-contribution D regions during RGB-D fusion, achieving a much improved complementary status between RGB and D. Our source code and data are available online at https://github.com/qdu1995/DQSD.
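The guiding principle, estimating depth quality first and then gating the depth stream accordingly during fusion, can be sketched as follows. This is a minimal illustration only, not the paper's actual subnet: the contrast-based quality score, the additive fusion, and all function names here are assumptions for the sake of the example.

```python
import numpy as np

def depth_quality_weight(depth):
    """Hypothetical per-map quality score in [0, 1]: a flat,
    uninformative depth map gets a weight near 0, while a
    high-contrast depth map gets a weight near 1."""
    contrast = depth.std()
    return float(np.clip(contrast / (contrast + 0.1), 0.0, 1.0))

def quality_aware_fusion(rgb_feat, depth_feat, depth):
    """Gate the depth stream by its estimated quality before a simple
    additive fusion, so low-quality D contributes less to the result."""
    w = depth_quality_weight(depth)
    return rgb_feat + w * depth_feat

# A flat (uninformative) depth map is almost ignored:
rgb = np.ones((4, 4))
d_feat = np.ones((4, 4))
flat_depth = np.full((4, 4), 0.5)
fused_flat = quality_aware_fusion(rgb, d_feat, flat_depth)   # equals rgb

# A high-contrast depth map contributes strongly:
sharp_depth = np.zeros((4, 4))
sharp_depth[:, 2:] = 1.0
fused_sharp = quality_aware_fusion(rgb, d_feat, sharp_depth)
```

In the real method the quality assessment is learned by a subnet and applied region-wise, but the same gating idea applies: the fused output leans on RGB alone wherever the depth is judged unreliable.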
| Original language | English |
|---|---|
| Article number | 9334419 |
| Pages (from-to) | 2350-2363 |
| Number of pages | 14 |
| Journal | IEEE Transactions on Image Processing |
| Volume | 30 |
| DOIs | |
| State | Published - 2021 |
Keywords
- RGB-D salient object detection
- weakly supervised learning