Speaker 1: Welcome to everyone joining us today in Taipei and from around the world as we open Computex 2024 with AMD. We love gaming, and today I'm excited to show you what's next for PC gaming with Ryzen. Our new Ryzen 9000 CPUs are the world's fastest consumer PC processors, bringing our new Zen 5 cores to the AM5 platform with support for the latest I/O and memory technologies, including PCIe 5 (00:00:30) and DDR5. I'm happy to show you that our brand-new Zen 5 core is truly the next big step in high-performance CPUs. And when you look at the technology behind this, there is a lot that's new. We have a new parallel dual-pipeline front end that improves branch prediction accuracy and reduces latency, and it enables us to deliver more performance in each clock cycle. We also designed Zen (00:01:00) 5 with a wider CPU engine and instruction window to run more instructions in parallel, for leadership compute throughput and efficiency. As a result, compared to Zen 4, we get double the instruction bandwidth, double the data bandwidth between the cache and the floating-point unit, and double the AI performance with full AVX-512 throughput. All of this comes together in the Ryzen 9000 series, and we're delivering an average of 16% more IPC (00:01:30) across a broad range of application benchmarks and games compared to Zen 4.

Speaker 1: So now I'm going to show you the top of the Ryzen 9000 line, the Ryzen 9 9950X, for the first time. There you go.

Speaker 1: We have 16 Zen 5 cores, 32 threads, up to 5.7 GHz boost, (00:02:00) a large 80 MB cache, and a 170-watt TDP. This is the world's fastest consumer CPU. We know all our fans love gaming, and the 9950X offers best-in-class gaming performance across a wide range of popular games. Now with desktops, we know that enthusiasts want an infrastructure that allows you to upgrade across multiple product generations, and with Ryzen, that's exactly what we've done (00:02:30). Our original Ryzen platform, Socket AM4, launched in 2016. We are now entering our ninth year with 145 CPUs and APUs across 11 different product families in Socket AM4, and we are actually still launching new products: we have a few Ryzen 5000 CPUs coming next month. We're taking the same strategy with Socket AM5, which we now plan to support through 2027 and beyond, (00:03:00) so you'll be seeing AM5 processors from us for many years to come. Now in addition to the top-of-the-stack Ryzen 9 9950X, we're also announcing 12-, 8-, and 6-core versions that will bring Zen 5's leadership performance to mainstream price points, all of which will go on sale in July.

Speaker 1: We're also very excited today to announce our third generation of Ryzen AI processors. Our new addition to the AI (00:03:30) category brings a really significant increase in compute and AI performance, and Copilot+ sets the bar for what a PC should be able to do. Thank you. Here we go. This is Strix. Strix is our next-generation processor for ultra-thin and premium notebooks, and it combines our new Zen 5 CPU with faster RDNA 3.5 (00:04:00) graphics and the new XDNA 2 NPU. Thank you.

Speaker 1: And when you look at what we have, it's really all the best technology on one chip. We have a new NPU that delivers an industry-leading 50 TOPS, and we're going to talk a lot about TOPS today.
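As a rough aside on the units behind these claims (a minimal sketch, not part of the keynote): relative CPU throughput scales roughly with IPC times clock frequency, and a TOPS rating simply counts trillions of low-precision operations per second. The workload size in the snippet below is a hypothetical placeholder used only to illustrate the arithmetic.

```python
# Back-of-the-envelope arithmetic for two headline numbers (hypothetical workload size,
# used only to illustrate the units; not figures from the keynote demos).

# 1) IPC uplift: relative CPU throughput scales roughly with IPC x clock frequency.
zen4_ipc = 1.00              # normalized baseline
zen5_ipc = zen4_ipc * 1.16   # ~16% average IPC uplift claimed for Zen 5
print(f"Zen 5 vs Zen 4 at equal clocks: ~{zen5_ipc / zen4_ipc:.2f}x throughput")

# 2) NPU TOPS: 50 TOPS means 50 trillion low-precision operations per second.
ops_per_second = 50 * 1e12
hypothetical_ops_per_inference = 2e12   # assumed model cost, purely illustrative
latency_s = hypothetical_ops_per_inference / ops_per_second
print(f"Time per inference at full utilization: {latency_s * 1e3:.0f} ms")
```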
Speaker 1: That is 50 TOPS of compute that can power new AI experiences at very low power. We have our new Zen 5 cores (00:04:30) that deliver leadership compute performance for ultrathin notebooks, and we have fast RDNA 3.5 graphics that bring best-in-class application acceleration as well as console-level gaming to notebooks. Now we have a few SKUs. The flagship, the Ryzen AI 9 HX 370, has 12 Zen 5 cores, 24 threads, 36 megabytes of cache, the industry's most powerful integrated NPU, and our latest RDNA graphics. Strix is simply the best (00:05:00) mobile CPU. So let's talk about what is special about these new NPUs. NPUs are really new, and they are there for all of these AI applications and workloads.

Speaker 1: XDNA 2 now has a wider array of 32 AI tiles with double the multitasking performance compared to our previous-generation XDNA. It's also a highly efficient architecture that (00:05:30) delivers twice the energy efficiency of our previous generation when running general AI workloads. In addition to Microsoft, we are also working with leading software developers like Adobe, Epic Games, SolidWorks, Sony, Zoom, and many others to accelerate the adoption of AI-enabled PC apps. And by the end of 2024, we are on track to have more than 150 ISVs developing for AMD AI platforms across content creation, consumer, gaming, and productivity applications. Now, (00:06:00) to give us a look at some of these upcoming Copilot+ PCs, let me welcome our next guest, a very close partner and good friend: Enrique Lores, HP President and CEO.

Speaker 2: Hi, hi. And we're super excited about a new family of products that we'll be launching in a few weeks.

Speaker 1: I think you've got something to show us. Is that true?

Speaker 2: Exactly, I hope so. And this is actually a new (00:06:30) generation. Because we've done it together, we can show it together. It's amazing. This is the next-generation OmniBook, where we integrate the latest Ryzen AI 300 series which, as Lisa said before, will be the first product to have 50 TOPS integrated into a device. And performance, as Pavan was saying, is important because it (00:07:00) will enable us to continue to deliver incredible experiences to our customers.

Speaker 1: Let's welcome Luca Rossi, President of Lenovo's Intelligent Devices Group. Now Luca, you are holding onto something, and I think you will show us what it is. Is that true?

Speaker 3: Well, I wasn't even supposed to show this yet, because it's something we won't be announcing until later this year, but Lisa, you're right. I thought it would be very exciting (00:07:30) for you.

Speaker 1: Very happy indeed.

Speaker 3: Yes, very happy. So straight from our R&D lab, here's a first look at our new Yoga laptop powered by third-generation Ryzen AI.

Speaker 1: It's beautiful.

Speaker 3: Maybe we want to show it left and right. Yeah. That's all I can (00:08:00) share for today. It is very beautiful. Thank you. I can tell you that this device represents a significant leap forward into the next-generation AI computing I mentioned, and this is just the beginning.

Speaker 1: Next, I'd like to welcome one of the most important visionaries and innovators in the Taiwan ecosystem and a very close partner: Jonney Shih, the ASUS Chairman.

Speaker 4: (00:08:30) At 4:00 this evening, we'll be unveiling a range of state-of-the-art AI PCs in our portfolio, with the brand-new Zenbook, ProArt, Vivobook, and ROG laptops powered by third-generation AMD Ryzen AI processors.
The new laptops are equipped with the world's most powerful NPU, with 50 TOPS, (00:09:00) and the best AMD x86 architecture that leads the industry in compute and AI performance. The new third-generation Ryzen AI processor is the catalyst to bring AI personal computing to everyone, from content creators to gamers and business professionals, and to empower them like never before. This advancement gives the new Zenbook more AI performance than the MacBook (00:09:30) while also making it thinner and lighter. In fact, it is a matter of great pride and honor to be the first OEM partner to make third-generation Ryzen AI systems available to customers. They will be ready for purchase in July. Isn't that incredible? Thank you.

Speaker 1: So today I want to show you something. It's actually a preview of our upcoming fifth-generation EPYC processor, codenamed Turin.

Speaker 1: (00:10:00) So please take a look at Turin for the first time. It has 192 cores and 384 threads, with 13 different chiplets built in 3-nanometer and 6-nanometer process technology. There is a lot of technology in Turin. It supports all the latest memory and I/O standards and is a drop-in replacement for our existing (00:10:30) fourth-generation EPYC platforms. Turin will extend EPYC's leadership in general-purpose and high-performance computing workloads. So let's take a look at that performance. NAMD is very compute-intensive scientific software that simulates complex molecular systems and structures. When simulating a model with 20 million atoms, the 128-core version of Turin is three times faster than the best of the competition, enabling researchers to complete models faster, (00:11:00) which could lead to breakthroughs in drug research, materials science, and other fields. Now, Turin also excels in AI inference performance when running small and large language models.

Speaker 1: So I want to show you a demo here. What this demo compares is Turin's performance when running a typical enterprise deployment of Llama 2 virtual assistants, with a minimum guaranteed latency to ensure a high-quality user experience. Both servers start by loading multiple Llama 2 instances, (00:11:30) and each assistant is asked to summarize a document. You can immediately see that the Turin server on the right handles double the number of sessions in the same amount of time while responding to user requests significantly faster than the competition. And when the other server reaches its maximum number of sessions, you will see it stop shortly after, because it basically can no longer meet the latency requirements. Turin continues scaling and delivers a sustained throughput (00:12:00) of approximately four times more tokens per second. Our customers are very excited about Turin, and I know many of our partners are actually in the audience today. I would like to say that we are on track to launch in the second half of this year. Thanks for being such a great audience, and have a great Computex 2024. Thank you.

Speaker 5: Thank you.
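A minimal sketch of the benchmark idea described in the demo, under simplifying assumptions (this is not AMD's demo harness): concurrent assistant sessions are added as long as per-token latency stays within a fixed budget, and the comparison is how many sessions each server sustains. The per-session rate cap, latency budget, and server capacities below are hypothetical placeholders.

```python
# Sketch of the session-scaling idea from the Turin demo: keep adding concurrent
# assistant sessions as long as per-token latency stays under a service-level budget,
# then compare the sustained session counts of two servers with different throughput.

def max_sessions_within_budget(per_session_rate_cap: float,
                               server_tokens_per_s: float,
                               latency_budget_s: float) -> int:
    """Largest session count whose per-token latency fits the budget, assuming the
    server's total token throughput is shared evenly across active sessions."""
    sessions = 0
    while True:
        candidate = sessions + 1
        # Each session is limited by its own cap and by its share of server capacity.
        rate = min(per_session_rate_cap, server_tokens_per_s / candidate)
        if 1.0 / rate > latency_budget_s:   # per-token latency exceeds the budget
            return sessions
        sessions = candidate

if __name__ == "__main__":
    budget = 0.10  # 100 ms per token, a hypothetical quality-of-service target
    for name, capacity in [("baseline server", 400.0), ("4x-throughput server", 1600.0)]:
        n = max_sessions_within_budget(per_session_rate_cap=20.0,
                                       server_tokens_per_s=capacity,
                                       latency_budget_s=budget)
        print(f"{name}: sustains {n} sessions within the {budget * 1000:.0f} ms/token budget")
```

Under these assumptions, a server with roughly four times the aggregate token throughput sustains roughly four times as many sessions before the latency budget is exceeded, which is the relationship the demo illustrates.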