Mobile-Seed: Joint Semantic Segmentation and Boundary Detection for Mobile Robots


Is a
Academic paper

Academic Paper attributes

arXiv ID
2311.12651
arXiv Classification
Computer science
Publication URL
arxiv.org/pdf/2311.1...51.pdf
Publisher
ArXiv
DOI
doi.org/10.48550/ar...11.12651
Paid/Free
Free
Academic Discipline
Computer Vision
Artificial Intelligence (AI)
Computer science
Robotics
Submission Date
November 21, 2023
November 23, 2023
Author Names
Yang Liu
Zhen Dong
Yun Liu
Xieyuanli Chen
Youqi Liao
Bisheng Yang
Jianping Li
Shuhao Kang
Paper abstract

Precise and rapid delineation of sharp boundaries and robust semantics is essential for numerous downstream robotic tasks, such as robot grasping and manipulation, real-time semantic mapping, and online sensor calibration performed on edge computing units. Although boundary detection and semantic segmentation are complementary tasks, most studies focus on lightweight models for semantic segmentation but overlook the critical role of boundary detection. In this work, we introduce Mobile-Seed, a lightweight, dual-task framework tailored for simultaneous semantic segmentation and boundary detection. Our framework features a two-stream encoder, an active fusion decoder (AFD), and a dual-task regularization approach. The encoder is divided into two pathways: one captures category-aware semantic information, while the other discerns boundaries from multi-scale features. The AFD module dynamically adapts the fusion of semantic and boundary information by learning channel-wise relationships, allowing for precise weight assignment of each channel. Furthermore, we introduce a regularization loss to mitigate the conflicts in dual-task learning and deep diversity supervision. Compared to existing methods, the proposed Mobile-Seed offers a lightweight framework to simultaneously improve semantic segmentation performance and accurately locate object boundaries. Experiments on the Cityscapes dataset have shown that Mobile-Seed achieves notable improvement over the state-of-the-art (SOTA) baseline by 2.2 percentage points (pp) in mIoU and 4.2 pp in mF-score, while maintaining an online inference speed of 23.9 frames-per-second (FPS) with 1024x2048 resolution input on an RTX 2080 Ti GPU. Additional experiments on the CamVid and PASCAL Context datasets confirm our method's generalizability. Code and additional results are publicly available at <https://martin-liao.github.io/Mobile-Seed/>.
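The abstract describes the AFD module as fusing semantic and boundary features by learning channel-wise relationships. A minimal sketch of one such channel-wise gated fusion is below; the function name, the sigmoid gating form, and the use of a single logit vector `w` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def active_fusion(sem_feat, bnd_feat, w):
    """Blend two feature maps with per-channel learned weights.

    sem_feat, bnd_feat: (C, H, W) feature maps from the semantic and
        boundary streams (hypothetical shapes for illustration).
    w: (C,) channel-wise logits standing in for the AFD's learned
        channel relationships.
    """
    gate = 1.0 / (1.0 + np.exp(-w))  # sigmoid -> per-channel weight in (0, 1)
    # Broadcast the (C,) gate over the spatial dimensions and mix the streams.
    return gate[:, None, None] * sem_feat + (1.0 - gate)[:, None, None] * bnd_feat

# A logit of 0 gives a 50/50 mix; a large logit favors the semantic stream.
fused = active_fusion(np.ones((2, 4, 4)), np.zeros((2, 4, 4)),
                      np.array([0.0, 100.0]))
```

In the paper the weights come from a learned sub-network rather than a fixed vector; the point of the sketch is only the per-channel convex combination of the two streams.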


