
Lamini Raises $25M to Help Enterprises Deploy Generative AI with High Accuracy and Scalability

Lamini, a Palo Alto-based startup, has secured $25 million in funding from investors, including Stanford computer science professor Andrew Ng. The company is building a platform designed to help enterprises effectively deploy and leverage generative AI technology across their organizations.

Lamini’s co-founders, Sharon Zhou and Greg Diamos, argue that many existing generative AI platforms are too general-purpose and lack the necessary solutions and infrastructure to meet the specific needs of corporations. In contrast, Lamini’s platform is built from the ground up with enterprises in mind, focusing on delivering high generative AI accuracy and scalability.

The company says its technology stack is optimized for enterprise-scale generative AI workloads from the hardware up through the software, including engines for model orchestration, fine-tuning, running, and training. Lamini is also pioneering a technique it calls “memory tuning,” which aims to reduce hallucinations (instances where a model makes up facts) by training models on proprietary data so they recall specific facts, numbers, and figures with high precision.
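Lamini has not published the internals of memory tuning, but the underlying idea of fine-tuning a model on proprietary question-and-answer pairs so it reproduces exact facts can be sketched with standard open-source tooling. The snippet below is a minimal, hypothetical illustration using Hugging Face Transformers with GPT-2 as a stand-in base model; the model name, the FactDataset helper, and the sample Q&A facts are assumptions for illustration only, not Lamini’s API, data, or method.

```python
# Minimal, hypothetical sketch: supervised fine-tuning of a small causal LM
# on proprietary Q&A pairs so it recalls exact facts. This is NOT Lamini's
# memory-tuning implementation, only a generic illustration of the idea.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "gpt2"  # stand-in; an enterprise would use its own base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical proprietary facts the tuned model should reproduce verbatim.
facts = [
    "Q: What was Q3 revenue for the Widget division? A: $4.2 million.",
    "Q: Which SKU replaced part 7741-B? A: SKU 9903-C.",
]

class FactDataset(torch.utils.data.Dataset):
    """Wraps the fact strings as a causal language-modeling dataset."""
    def __init__(self, texts):
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=64, return_tensors="pt")

    def __len__(self):
        return self.enc["input_ids"].size(0)

    def __getitem__(self, i):
        ids = self.enc["input_ids"][i]
        mask = self.enc["attention_mask"][i]
        labels = ids.clone()
        labels[mask == 0] = -100  # ignore padding positions in the loss
        return {"input_ids": ids, "attention_mask": mask, "labels": labels}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="fact-tuned", num_train_epochs=3,
                           per_device_train_batch_size=2, report_to="none"),
    train_dataset=FactDataset(facts),
)
trainer.train()
```

The sketch shows only the plain supervised fine-tuning starting point; memory tuning, as Lamini describes it, aims to push recall of such facts toward exactness rather than plausible paraphrase.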

Lamini’s platform can operate in highly secure environments, including air-gapped ones, and lets companies run, fine-tune, and train models across a range of configurations, from on-premises data centers to public and private clouds. It can also scale workloads “elastically,” reaching more than 1,000 GPUs when an application or use case demands it.

With investors like AMD Ventures, First Round Capital, and Amplify Partners, Lamini plans to triple its team, expand its compute infrastructure, and further develop technical optimizations for its platform. The company aims to put control back into the hands of enterprises, enabling them to leverage their proprietary data securely and effectively with generative AI technology.