Meta has strengthened its relationship with Amazon Web Services (AWS) as a key cloud provider. Meta will expand its use of AWS compute, storage, database, and security services to gain greater privacy, reliability, and scale in the cloud, leveraging AWS's proven architecture and comprehensive capabilities to complement its existing on-premises infrastructure.

Meta will use AWS to host third-party collaborations and to support acquisitions of companies that already run on AWS. It will also use AWS compute services to accelerate artificial intelligence (AI) research and development for the Meta AI group. In addition, Meta and AWS will work together to improve performance for customers running PyTorch on AWS and to speed the development, training, deployment, and operation of AI and machine learning (ML) models.

Jason Kalich, Vice President of Production Engineering at Meta, said, “We are excited to extend our strategic relationship with AWS to help us innovate faster and expand the scale and scope of our research and development work. The global reach and reliability of AWS will help us continue to deliver innovative experiences for the billions of people around the world that use Meta products and services and for customers running PyTorch on AWS.”

AWS and Meta will support machine learning researchers and developers by improving PyTorch performance and deepening its integration with core managed services such as Amazon Elastic Compute Cloud (Amazon EC2) and Amazon SageMaker for building, training, and deploying AI models. The companies have already enabled PyTorch on AWS to orchestrate large-scale training jobs across a distributed system of AI accelerators, making it easier for developers to build large deep learning models for natural language processing and computer vision.
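The distributed training described above is typically driven by PyTorch's built-in `DistributedDataParallel` wrapper, which all-reduces gradients across workers. The snippet below is a minimal single-process CPU illustration of that mechanism (the toy linear model and random data are stand-ins, not Meta's or AWS's actual workloads); in practice such jobs would be launched across many nodes and accelerators, for example with `torchrun` on EC2 or SageMaker.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def train_one_step() -> float:
    # Single-process "cluster" for illustration; a real AWS job would set
    # MASTER_ADDR/MASTER_PORT per node and use world_size > 1.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("gloo", rank=0, world_size=1)

    model = torch.nn.Linear(8, 2)   # toy stand-in for a large model
    ddp_model = DDP(model)          # gradients are all-reduced across ranks
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.1)
    loss_fn = torch.nn.MSELoss()

    x, y = torch.randn(16, 8), torch.randn(16, 2)  # synthetic batch
    loss = loss_fn(ddp_model(x), y)
    loss.backward()
    optimizer.step()

    dist.destroy_process_group()
    return loss.item()
```

With more than one rank, the same code runs unchanged on each worker while DDP keeps model replicas synchronized after every backward pass.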

Kathrin Renz, Vice President of Business Development and Industries at Amazon Web Services, Inc., said, “Meta and AWS have been expanding our collaboration over the last five years. With this agreement, AWS will continue to help Meta support research and development, drive innovation, and collaborate with third parties and the open-source community at scale. Customers can rely on Meta and AWS to collaborate on PyTorch, making it easier for them to build, train, and deploy deep learning models on AWS.”

The companies will collaborate on native tools for PyTorch that improve performance and explainability and reduce the cost of inference. They will also continue to improve TorchServe, the native PyTorch serving engine that makes it easy to deploy trained PyTorch models at scale, further simplifying production deployment. Building on these open-source contributions, AWS and Meta aim to help enterprises move massive deep learning models from research to production faster and more easily, with improved performance on AWS.
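The production deployment path sketched above usually starts from a serialized model artifact. As a minimal illustration (the tiny model and the `model.pt` file name are assumptions for the example, not a production setup), a trained PyTorch model can be compiled to TorchScript so it can run outside the Python training environment:

```python
import torch

# Toy network standing in for a trained production model (illustrative only).
model = torch.nn.Sequential(
    torch.nn.Linear(4, 8),
    torch.nn.ReLU(),
    torch.nn.Linear(8, 1),
)
model.eval()

# Compile to TorchScript and serialize; this is the kind of artifact
# a serving worker loads at inference time.
scripted = torch.jit.script(model)
scripted.save("model.pt")

# Reload and verify the scripted model matches the eager one.
reloaded = torch.jit.load("model.pt")
x = torch.randn(2, 4)
assert torch.allclose(model(x), reloaded(x))
```

A file like `model.pt` would then typically be packaged with the `torch-model-archiver` tool into a `.mar` archive and served via `torchserve`, which handles batching, worker scaling, and HTTP endpoints.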