
US Patent 11270201 Communication optimizations for distributed machine learning

Patent 11270201 was granted and assigned to Intel on March 8, 2022, by the United States Patent and Trademark Office.


Is a: Patent

Patent attributes

Patent Applicant: Intel
Current Assignee: Intel
Patent Jurisdiction: United States Patent and Trademark Office
Patent Number: 11270201
Patent Inventor Names: Dipankar Das, Karthikeyan Vaidyanathan, Mikhail E. Smorkalov, Srinivas Sridharan, Chandrasekaran Sakthivel
Date of Patent: March 8, 2022
Patent Application Number: 15/859,180
Date Filed: December 29, 2017
Patent Citations

US Patent 10891538 Sparse convolutional neural network accelerator
US Patent 10860922 Sparse convolutional neural network accelerator
US Patent 10528864 Sparse convolutional neural network accelerator
Patent Citations Received

US Patent 11977971 Data volume sculptor for deep learning acceleration
US Patent 11610362 Data volume sculptor for deep learning acceleration
US Patent 11645507 Providing models to client devices
US Patent 11687762 Acceleration unit for a deep learning engine
US Patent 11853897 Neural network training with decreased memory consumption and processor utilization
US Patent 11915147 Large model support in deep learning
US Patent 11941528 Neural network training in a distributed system
US Patent 11586907 Arithmetic unit for deep learning acceleration

Patent Primary Examiner: Shane D Woolwine
Patent abstract

Embodiments described herein provide a system to configure distributed training of a neural network, the system comprising memory to store a library to facilitate data transmission during distributed training of the neural network; a network interface to enable transmission and receipt of configuration data associated with a set of worker nodes, the worker nodes configured to perform distributed training of the neural network; and a processor to execute instructions provided by the library, the instructions to cause the processor to create one or more groups of the worker nodes, the one or more groups of worker nodes to be created based on a communication pattern for messages to be transmitted between the worker nodes during distributed training of the neural network.
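The abstract describes creating groups of worker nodes based on the communication pattern of the messages they exchange during distributed training. A minimal sketch of that idea, assuming a hypothetical hierarchical-allreduce pattern (intra-node groups plus a smaller inter-node "leader" group; the function and names here are illustrative, not taken from the patent):

```python
def create_worker_groups(world_size: int, workers_per_node: int):
    """Partition worker ranks into groups for a hierarchical allreduce.

    Hypothetical illustration: workers on the same node form intra-node
    groups that exchange gradients locally, and one leader rank per node
    joins a smaller inter-node group, reducing cross-node traffic.
    """
    intra_node = [
        list(range(start, start + workers_per_node))
        for start in range(0, world_size, workers_per_node)
    ]
    # The first rank of each intra-node group acts as that node's leader.
    inter_node = [group[0] for group in intra_node]
    return intra_node, inter_node

# 8 workers, 4 per node: two intra-node groups and a 2-rank leader group.
intra, inter = create_worker_groups(8, 4)
```

In a real training framework, rank lists like these would be handed to a communicator-creation call (e.g. `torch.distributed.new_group`); the point of the sketch is only the pattern-driven grouping the abstract describes.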
