Amazon’s cloud computing arm AWS is going all-in on making its Bedrock service the premier destination for enterprises looking to host, customize and deploy generative artificial intelligence models. Today, AWS announced the launch of Custom Model Import, a pivotal new capability in Bedrock that allows companies to import their proprietary, in-house generative AI models and serve them as fully managed APIs alongside models like Meta’s Llama and Anthropic’s Claude that are already available in Bedrock’s library.
The Custom Model Import feature addresses a growing trend: many enterprises are taking matters into their own hands by building tailored generative AI models for their specific use cases rather than relying solely on off-the-shelf foundation models. According to a recent industry survey, the biggest roadblock these companies face is the infrastructure needed to cost-effectively train, experiment with, and ultimately deploy their AI models at scale.
By bringing their custom models into AWS Bedrock via the new import functionality, organizations gain access to the same high-powered infrastructure, compute resources, and suite of model optimization tools used to serve Bedrock’s pre-loaded AI model offerings. This includes the ability to expand models’ knowledge bases, fine-tune them for improved performance, and implement AI safety guardrails to mitigate issues like bias, toxicity, or leakage of sensitive information in outputs.
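Once imported, a custom model is invoked through the same Bedrock runtime API as the pre-loaded models. The sketch below shows what that call looks like with boto3; the model ARN and the prompt/response schema are illustrative placeholders (an imported model keeps whatever input format it was trained with), not values from the announcement.

```python
import json

# Hypothetical ARN of a model brought in via Custom Model Import.
CUSTOM_MODEL_ARN = "arn:aws:bedrock:us-east-1:123456789012:imported-model/example"

def build_invoke_request(prompt: str, max_tokens: int = 256) -> dict:
    """Assemble keyword arguments for bedrock-runtime's invoke_model call.

    The body schema ("prompt", "max_gen_len") follows a Llama-style layout
    and is an assumption here; it depends on the imported model.
    """
    return {
        "modelId": CUSTOM_MODEL_ARN,
        "contentType": "application/json",
        "accept": "application/json",
        "body": json.dumps({"prompt": prompt, "max_gen_len": max_tokens}),
    }

# With AWS credentials configured, the request would be sent like this:
# import boto3
# client = boto3.client("bedrock-runtime", region_name="us-east-1")
# response = client.invoke_model(**build_invoke_request("Summarize our Q3 report."))
# print(json.loads(response["body"].read()))
```

The point of the fully managed API is exactly this uniformity: swapping between an in-house model and, say, Claude or Llama changes only the `modelId` and body schema, not the serving plumbing.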
Vasi Philomin, VP of generative AI at AWS, positioned Bedrock’s breadth and depth of model customization capabilities as a key differentiator versus cloud competitors like Google Cloud’s Vertex AI and Databricks’ offerings. He highlighted Bedrock’s Guardrails for filtering unsafe model outputs and its Model Evaluation tools for benchmarking performance across different criteria.
Another touted advantage is Bedrock’s integration with AWS’ proprietary Titan family of generative AI models, which also saw some major updates. The Titan Image Generator model, which can generate new images from text prompts or manipulate existing images, reached general availability after months in preview. AWS claims the GA version exhibits increased “creativity” in its outputs compared to the preview, though details were scant.
On the transparency front, Philomin revealed that AWS uses a combination of proprietary data sources and licensed third-party data sets to train its Titan models like the Image Generator, paying licensing fees to some copyright owners. However, he declined to provide much additional detail around the training data used, likely due to intensifying legal battles over whether text-to-image models violate intellectual property rights by regurgitating copyrighted imagery.
AWS is standing by its policy to indemnify customers if its models output verbatim copies of potentially copyrighted training examples. It’s also doubling down on countermeasures against deepfakes by upgrading the “tamper-resistant” watermarking system for images generated by Titan to be more resilient against compression and editing.
Rounding out the Titan updates, AWS released version 2 of its Titan Text Embeddings model, which converts text into numerical representations usable for search and recommendations. AWS claims the new version reduces storage requirements by up to 4x while retaining 97% accuracy compared to the prior model.
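AWS did not spell out how the 4x reduction is achieved, but a plausible reading is that the new model supports smaller output dimensions. The back-of-the-envelope sketch below (the 1024- and 256-dimension figures are assumptions, not confirmed specs) shows why dimension choice dominates storage cost for a vector index:

```python
import numpy as np

FLOAT32_BYTES = 4  # standard precision for stored embedding vectors

def storage_bytes(num_vectors: int, dims: int) -> int:
    """Storage needed for num_vectors float32 embeddings of size dims."""
    return num_vectors * dims * FLOAT32_BYTES

corpus = 1_000_000                    # one million indexed documents
full = storage_bytes(corpus, 1024)    # ~4.1 GB at 1024 dimensions
small = storage_bytes(corpus, 256)    # ~1.0 GB at 256 dimensions
print(f"reduction: {full / small:.0f}x")

# Search and recommendation systems compare these vectors with cosine
# similarity, which works the same way at the reduced dimension:
rng = np.random.default_rng(0)
a, b = rng.standard_normal(256), rng.standard_normal(256)
cosine = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
```

The "97% accuracy" claim would then amount to saying that rankings computed over the smaller vectors agree with the full-size ones nearly all of the time.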
While AWS did not announce any video generation capabilities, a technology area garnering immense interest, Philomin teased that the company is “constantly looking” at new generative AI use cases that customers want explored.
The slew of Bedrock enhancements underscores AWS’ ambitions to cement its cloud as the preferred platform for enterprises embracing generative AI, from importing custom models to leveraging AWS’ Titan capabilities. With the torrid pace of advancements in generative AI showing no signs of slowing, the battle between cloud providers to become enterprises’ partner of choice is heating up.