Towards Unsupervised Representation Learning: Learning, Evaluating and Transferring Visual Representations

Is a: Academic paper

Academic Paper attributes

arXiv ID: 2312.00101
arXiv Classification: Computer science
Publication URL: arxiv.org/pdf/2312.0...01.pdf
Publisher: ArXiv
DOI: doi.org/10.48550/ar...12.00101
Paid/Free: Free
Academic Discipline: Artificial Intelligence (AI), Computer graphics, Machine learning, Computer Vision, Computer science
Submission Date: November 30, 2023
Author Names: Bonifaz Stuhr
Paper abstract

Unsupervised representation learning aims at finding methods that learn representations from data without annotation-based signals. Abstaining from annotations not only leads to economic benefits but may, and to some extent already does, result in advantages regarding the representations' structure, robustness, and generalizability to different tasks. In the long run, unsupervised methods are expected to surpass their supervised counterparts due to the reduction of human intervention and the inherently more general setup that does not bias the optimization towards an objective originating from specific annotation-based signals. While major advantages of unsupervised representation learning have recently been observed in natural language processing, supervised methods still dominate in vision domains for most tasks. In this dissertation, we contribute to the field of unsupervised (visual) representation learning from three perspectives: (i) Learning representations: We design unsupervised, backpropagation-free Convolutional Self-Organizing Neural Networks (CSNNs) that utilize self-organization- and Hebbian-based learning rules to learn convolutional kernels and masks to achieve deeper backpropagation-free models. (ii) Evaluating representations: We build upon the widely used (non-)linear evaluation protocol to define pretext- and target-objective-independent metrics for measuring and investigating the objective function mismatch between various unsupervised pretext tasks and target tasks. (iii) Transferring representations: We contribute CARLANE, the first 3-way sim-to-real domain adaptation benchmark for 2D lane detection, and a method based on prototypical self-supervised learning. Finally, we contribute a content-consistent unpaired image-to-image translation method that utilizes masks, global and local discriminators, and similarity sampling to mitigate content inconsistencies.
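The (non-)linear evaluation protocol that perspective (ii) builds on is a standard procedure in the field: the pretext-trained encoder is frozen, and a simple (usually linear) classifier is trained on its representations for a target task. The following is a minimal sketch of that common protocol in PyTorch, not the dissertation's implementation; encoder, train_loader, feature_dim, and num_classes are assumed placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def linear_evaluation(encoder, train_loader, feature_dim, num_classes,
                      epochs=10, lr=0.1, device="cpu"):
    """Train a linear probe on top of a frozen, pretext-trained encoder.

    Assumed setup: `encoder` is a pretrained nn.Module, `train_loader`
    yields (images, labels) batches for the target task.
    """
    encoder = encoder.to(device).eval()          # freeze the unsupervised encoder
    for param in encoder.parameters():
        param.requires_grad = False

    classifier = nn.Linear(feature_dim, num_classes).to(device)
    optimizer = torch.optim.SGD(classifier.parameters(), lr=lr, momentum=0.9)

    for _ in range(epochs):
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            with torch.no_grad():                # representations only, no encoder updates
                features = encoder(images).flatten(start_dim=1)
            loss = F.cross_entropy(classifier(features), labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return classifier
```

The target-task accuracy of such a probe is then read as a proxy for representation quality; the pretext- and target-objective-independent mismatch metrics described in the abstract build upon this kind of protocol.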

