Auditing and Generating Synthetic Data with Controllable Trust Trade-offs

Is a: Academic paper

Academic Paper attributes

arXiv ID: 2304.10819
arXiv Classification: Computer science
Publication URL: arxiv.org/pdf/2304.1...19.pdf
Publisher: arXiv
DOI: doi.org/10.48550/ar...04.10819
Paid/Free: Free
Academic Disciplines: Machine learning; Artificial Intelligence (AI); Computer science; Statistics
Submission Dates: May 2, 2023; April 21, 2023
Author Names: Radhika Vedpathak, Youssef Mroueh, Yair Schiff, Pierre Dognin, Richard A. Young, Adam Ivankay, Aleksandra Mojsilovic, Apoorva Nitsure, ...
Paper abstract

Data collected from the real world tends to be biased, unbalanced, and at risk of exposing sensitive and private information. This reality has given rise to the idea of creating synthetic datasets to alleviate risk, bias, harm, and privacy concerns inherent in the real data. This concept relies on Generative AI models to produce unbiased, privacy-preserving synthetic data while being true to the real data. In this new paradigm, how can we tell if this approach delivers on its promises? We present an auditing framework that offers a holistic assessment of synthetic datasets and AI models trained on them, centered around bias and discrimination prevention, fidelity to the real data, utility, robustness, and privacy preservation. We showcase our framework by auditing multiple generative models on diverse use cases, including education, healthcare, banking, human resources, and across different modalities, from tabular, to time-series, to natural language. Our use cases demonstrate the importance of a holistic assessment in order to ensure compliance with socio-technical safeguards that regulators and policymakers are increasingly enforcing. For this purpose, we introduce the trust index that ranks multiple synthetic datasets based on their prescribed safeguards and their desired trade-offs. Moreover, we devise a trust-index-driven model selection and cross-validation procedure via auditing in the training loop that we showcase on a class of transformer models that we dub TrustFormers, across different modalities. This trust-driven model selection allows for controllable trust trade-offs in the resulting synthetic data. We instrument our auditing framework with workflows that connect different stakeholders from model development to audit and certification via a synthetic data auditing report.
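The abstract describes a "trust index" that ranks synthetic datasets by aggregating scores for prescribed safeguards (fidelity, utility, privacy, fairness, robustness) under desired trade-offs. As a minimal illustrative sketch only, the ranking idea can be expressed as a weighted aggregation of per-safeguard scores; the function names, weights, and metric names below are assumptions for illustration, not the paper's actual formulation.

```python
# Hypothetical sketch: rank candidate synthetic datasets by a weighted
# aggregate of per-safeguard scores in [0, 1]. Higher is better on every axis
# (privacy/fairness scores are assumed already oriented that way).

def trust_index(scores: dict, weights: dict) -> float:
    """Weighted average of safeguard scores; weights encode the trade-offs."""
    total = sum(weights.values())
    return sum(weights[k] * scores[k] for k in weights) / total

def rank_datasets(candidates: list, weights: dict) -> list:
    """Sort (name, scores) pairs by descending trust index."""
    return sorted(candidates,
                  key=lambda item: trust_index(item[1], weights),
                  reverse=True)

# Illustrative weights and candidates (made up for this sketch).
weights = {"fidelity": 0.3, "utility": 0.3, "privacy": 0.2, "fairness": 0.2}
candidates = [
    ("gan_v1", {"fidelity": 0.9, "utility": 0.8, "privacy": 0.4, "fairness": 0.7}),
    ("vae_v2", {"fidelity": 0.7, "utility": 0.7, "privacy": 0.9, "fairness": 0.8}),
]
ranked = rank_datasets(candidates, weights)
# With these weights, the more privacy-preserving candidate ranks first,
# even though its fidelity is lower: the weights control the trade-off.
```

Changing the weights (e.g. raising `fidelity` for a use case where realism dominates) reorders the ranking, which is the controllable-trade-off behavior the abstract attributes to trust-index-driven model selection.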
