Lamini is a large language model (LLM) platform for developers to customize models on their own data and infrastructure. The Lamini platform aims to run and optimize LLMs for the user; this includes fine-tuning, reinforcement learning from human feedback (RLHF), retrieval-augmented training, data augmentation, and GPU optimization. The company markets its technology as akin to making "ChatGPT from GPT-3" or "GitHub Copilot from Codex."
Lamini builds on a range of latest-generation LLMs, including any LLM on Hugging Face and OpenAI's models. The company works with customers to determine the most appropriate model to use as a starting point, depending on their use case and data constraints. Lamini takes the initial model and adapts it to the customer's specific use cases using their data, improving performance compared to a general-purpose model. The customized model can be deployed to any cloud service or on-premises infrastructure.
A free version of Lamini is available for training small models, and each new account receives a $20 credit for inference. A paid version, Lamini Pro, is available for a monthly subscription of $99. Customers receive access to large models with up to 1 million tokens per job and 10,000 inference calls per month. Users also get hypertuning, retrieval-augmented generation (RAG), hosted fast inference, access to the full SDK, evaluation results, and email/Slack support. In addition to Lamini Pro, the company offers customized subscriptions for larger organizations with greater capacity and additional features.
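The retrieval-augmented generation feature mentioned above can be illustrated with a minimal sketch: retrieve the document most relevant to a query and prepend it to the prompt sent to the LLM. This is a toy illustration of the general technique, not Lamini's actual implementation — it uses bag-of-words cosine similarity in place of a learned embedding model, and the final LLM call is omitted. All names and documents are illustrative.

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": bag-of-words counts. A real RAG system would
    # use a learned embedding model and a vector database.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs):
    # Return the document most similar to the query.
    q = embed(query)
    return max(docs, key=lambda d: cosine(q, embed(d)))

def build_prompt(query, docs):
    # RAG prompt assembly: prepend the retrieved context to the
    # question before passing the prompt to an LLM (call omitted).
    context = retrieve(query, docs)
    return f"Context: {context}\nQuestion: {query}\nAnswer:"

docs = [
    "Lamini Pro costs $99 per month and includes hosted inference.",
    "The free tier supports training small models.",
]
print(build_prompt("How much does Lamini Pro cost?", docs))
```

The design point is that retrieval happens at query time, so the underlying model need not be retrained when documents change — one reason RAG is offered alongside, rather than instead of, fine-tuning.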
Lamini was founded in 2022 by Sharon Zhou (CEO) and Gregory Diamos (ML Sys). Zhou received her PhD in generative AI from Stanford, under Dr. Andrew Ng. She was also a faculty member in computer science at Stanford University, leading a research group of over 50 Stanford PhD and undergraduate students in AI. Before her doctorate at Stanford, Zhou was a product manager at Google. Diamos holds a PhD from the Georgia Institute of Technology in high-performance computing. Before Lamini, Diamos co-founded MLPerf and MLCommons, developing industry standards for benchmarking machine learning systems. Diamos also led LLM scaling at Baidu and was an architect at Nvidia.
Lamini emerged from stealth on April 28, 2023, allowing users to try its hosted data generator to train their own LLMs, test an open-source LLM fine-tuned on Lamini library-generated data, and sign up for early access to the company's full LLM training module. Lamini released a public alpha of its platform on July 12, 2023. On September 26, 2023, Lamini announced that it had been running LLMs on AMD Instinct GPUs for the past year and that the company would run exclusively on AMD Instinct GPUs. On the same day, it announced the LLM Superstation, combining Lamini's enterprise LLM infrastructure with AMD Instinct MI210 and MI250 accelerators.
Lamini enables customers to use their private data in their own secure environment, own the LLM they train, swap out models as new ones are released, and control the cost and latency of the model through its interface. Lamini has partnered with Databricks, Nutanix, Snowflake, and AMD, and exclusively uses AMD's Instinct GPUs.
September 26, 2023
Lamini announced that it had been running LLMs on AMD Instinct GPUs and would run exclusively on them, and introduced the LLM Superstation, combining Lamini's enterprise LLM infrastructure with AMD Instinct MI210 and MI250 accelerators.
August 23, 2023
Lamini CEO Sharon Zhou co-created a course with Stanford colleague Andrew Ng.
July 12, 2023
Lamini released a public alpha of its platform.
April 28, 2023
Upon launch, Lamini allowed users to try its hosted data generator to train their own LLMs, test an open-source LLM fine-tuned on Lamini library-generated data, and sign up for early access to the company's full LLM training module.
2022
Lamini was founded by Sharon Zhou and Gregory Diamos.