PyTorch is an open-source machine learning framework designed to accelerate the path from research prototyping to production deployment. It was created to provide flexibility and speed during the development and deployment of deep learning neural networks. Examples of deep learning software built on top of PyTorch include Tesla's Autopilot, Uber's Pyro, Hugging Face's Transformers, PyTorch Lightning, and Catalyst.
In September 2022, PyTorch moved to the Linux Foundation as a top-level project under the name PyTorch Foundation. Members of the PyTorch Foundation's governing board include Meta, Amazon Web Services (AWS), Google Cloud, AMD, Microsoft Azure, and NVIDIA.
PyTorch is an optimized tensor library for deep learning that uses GPUs and CPUs to greatly accelerate computation. It is a Python-based package that provides two high-level features: tensor computation (like NumPy) with strong GPU acceleration, and deep neural networks built on a tape-based autograd system. PyTorch provides a wide variety of tensor routines to meet scientific computing needs, including slicing, indexing, mathematical operations, linear algebra, and reductions.
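Both features can be illustrated in a few lines of Python. The following is a minimal sketch using the standard torch API, with a CPU fallback when no GPU is present (the tensor shapes and variable names are arbitrary):

    import torch

    # Tensor computation with GPU acceleration: tensors can be placed on
    # a CUDA device when one is available, falling back to the CPU.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    x = torch.arange(12, dtype=torch.float32, device=device).reshape(3, 4)

    # A few of the tensor routines mentioned above.
    row = x[1, :]        # slicing and indexing
    y = x * 2.0 + 1.0    # elementwise mathematical operations
    prod = x @ x.T       # linear algebra (matrix multiplication)
    total = x.sum()      # reduction

    # Tape-based autograd: operations on tensors with requires_grad=True
    # are recorded, and backward() replays that tape in reverse to
    # compute gradients.
    w = torch.randn(4, 1, device=device, requires_grad=True)
    loss = (x @ w).pow(2).mean()
    loss.backward()
    print(w.grad.shape)  # gradient of loss w.r.t. w: torch.Size([4, 1])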

History
PyTorch was developed by Facebook's AI Research lab (FAIR), now part of Meta. Development began in 2016 as an internship project by Adam Paszke, who was working under Soumith Chintala, one of the core developers of Torch. As the project grew, it gained two more core developers and around one hundred alpha testers from various companies and universities. PyTorch's original authors are Adam Paszke, Soumith Chintala, Sam Gross, and Gregory Chanan.
Further Resources
Automatic differentiation in PyTorch
Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer
Academic paper
Deep Learning with PyTorch
Eli Stevens, Luca Antiga
Web