{"id":21385,"date":"2026-02-24T13:05:22","date_gmt":"2026-02-24T13:05:22","guid":{"rendered":"https:\/\/scannn.com\/meta-and-amd-partner-for-longterm-ai-infrastructure-agreement\/"},"modified":"2026-02-24T13:05:22","modified_gmt":"2026-02-24T13:05:22","slug":"meta-and-amd-partner-for-longterm-ai-infrastructure-agreement","status":"publish","type":"post","link":"https:\/\/scannn.com\/lv\/meta-and-amd-partner-for-longterm-ai-infrastructure-agreement\/","title":{"rendered":"Meta and AMD Partner for Long-Term AI Infrastructure Agreement"},"content":{"rendered":"<div>\n<p><span style=\"font-weight: 400\">Today, we\u2019re <\/span><a href=\"https:\/\/www.amd.com\/en\/newsroom\/press-releases\/2026-2-24-amd-and-meta-announce-expanded-strategic-partnersh.html\"><span style=\"font-weight: 400\">announcing a multi-year agreement with AMD<\/span><\/a><span style=\"font-weight: 400\"> to power our AI infrastructure with up to 6GW of AMD Instinct GPUs, the silicon computing technology used to support modern AI models. <\/span><\/p>\n<p><span style=\"font-weight: 400\">At Meta, we\u2019re working to build the next generation of AI and enable personal superintelligence for all. To do this, we need massive, scalable compute power that can handle the growing demands of our AI workloads. <\/span><span style=\"font-weight: 400\">Our partnership with AMD, which builds on our existing collaboration, will help us meet those needs. <\/span><\/p>\n<h2>Working With an Industry Leader<\/h2>\n<p><span style=\"font-weight: 400\">Under our new agreement, we will also work with AMD to align our roadmaps across silicon, systems, and software, enabling vertical integration across our infrastructure stack. 
This collaboration across both software and hardware will enable us to innovate quickly and at scale.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">\u201cWe are proud to expand our strategic partnership with Meta as they push the boundaries of AI at unprecedented scale,\u201d said Dr. Lisa Su, chair and CEO, AMD. \u201cThis multi-year, multi-generation collaboration across Instinct GPUs, EPYC CPUs and rack-scale AI systems aligns our roadmaps to deliver high-performance, energy-efficient infrastructure optimized for Meta\u2019s workloads, accelerating one of the industry\u2019s largest AI deployments and placing AMD at the center of the global AI buildout.\u201d<\/span><\/p>\n<p><span style=\"font-weight: 400\">Shipments to support the first GPU deployments will begin in the second half of 2026. These deployments will be built on the Helios rack-scale architecture, which <\/span><a href=\"https:\/\/about.fb.com\/news\/2025\/10\/open-hardware-future-data-center-infrastructure\/\"><span style=\"font-weight: 400\">we developed and announced<\/span><\/a><span style=\"font-weight: 400\"> in collaboration with AMD at last year\u2019s Open Compute Project Global Summit.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400\">\u201cWe\u2019re excited to form a long-term partnership with AMD to deploy efficient inference compute and deliver personal superintelligence,\u201d said Mark Zuckerberg, Founder and CEO of Meta. \u201cThis is an important step for Meta as we diversify our compute. I expect AMD to be an important partner for many years to come.\u201d<\/span><\/p>\n<h2>Our Portfolio-Based Approach<\/h2>\n<p><span style=\"font-weight: 400\">Our agreement with AMD is part of our Meta Compute initiative, an effort to massively scale our infrastructure for the era of personal superintelligence, future-proofing our leadership in AI. By diversifying our partnerships and technology stack, we\u2019re building a more resilient and flexible infrastructure. 
We\u2019re combining hardware sourced from a range of partners with our own rapidly advancing Meta Training and Inference Accelerator (MTIA) silicon program.<\/span><\/p>\n<p><span style=\"font-weight: 400\">We believe this portfolio approach will enable us to advance and innovate at an unmatched pace, rolling out powerful, efficient new hardware co-designed with our software stack to handle massive growth. We look forward to working with AMD to power our AI innovations and secure our ability to deliver world-class AI experiences to billions of people globally.<\/span><\/p>\n<p><i><span style=\"font-weight: 400\">This post contains forward-looking statements, including about Meta\u2019s business. You should not rely on these statements as predictions of future events. Additional information regarding potential risks and uncertainties can be found in our most recent Form 10-K filed with the Securities and Exchange Commission. Meta undertakes no obligation to update these statements as a result of new information or future events.<\/span><\/i><\/p>\n<\/div>\n<p><a href=\"https:\/\/about.fb.com\/news\/2026\/02\/meta-amd-partner-longterm-ai-infrastructure-agreement\/\">Source link<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Today, we\u2019re announcing a multi-year agreement with AMD to power our AI infrastructure with up to 6GW of AMD Instinct GPUs, the silicon computing technology used to support modern AI models. At Meta, we\u2019re working to build the next generation of AI and enable personal superintelligence for all. 
To do this, we need massive, scalable [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":21386,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[123],"tags":[],"class_list":["post-21385","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-facebook"],"_links":{"self":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts\/21385","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/comments?post=21385"}],"version-history":[{"count":0,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/posts\/21385\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/media\/21386"}],"wp:attachment":[{"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/media?parent=21385"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/categories?post=21385"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/scannn.com\/lv\/wp-json\/wp\/v2\/tags?post=21385"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}