Intel has unveiled its latest 14nm Cascade Lake processors, including the new Cascade Lake-AP family, expected to scale up to 56 CPU cores. This is significantly more than previously anticipated; the company was expected to unveil 48-core parts, not 56-core ones.
These new Cascade Lake chips will arrive in two flavors. The 8200 series will continue what we might call the standard Xeon family, with up to 28 cores. The 9200 series (Cascade Lake-AP) will offer increased core counts, with up to 56 cores and a 400W TDP.
The mainstream Cascade Lake chips use the same architecture as the previous Skylake platform and the same Purley motherboard standard. Total memory capacity has been sharply increased, however, with the family now supporting a minimum of 1.5TB of memory per socket, up from 768GB in the last generation. Clock speeds and core counts have also been tweaked, making some SKUs slightly faster or giving them more cores for the same amount of money. Intel is also introducing a new naming scheme for the family.
“L” CPUs with Optane support can handle up to 4.5TB of memory altogether — 3TB of Optane and 1.5TB of DRAM. This is part of Intel’s push to roll out Optane Persistent Memory, which kicks off with Cascade Lake.
Other Cascade Lake features include partial hardware support for various Spectre and Meltdown fixes (some of these will continue to be baked into firmware) and the addition of AVX-512 VNNI instructions, which Intel markets as DL Boost. DL Boost is a method of improving inference performance by issuing a single instruction for operations that used to require three. Intel has previously stated that inference workloads on Cascade Lake will be up to 11x faster compared with original Xeon Scalable performance at launch. At least some of these gains, however, are due to software optimizations.
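As a rough illustration of what that fused instruction does: VNNI's dot-product operation multiplies four unsigned bytes against four signed bytes and adds the sum into a 32-bit accumulator, per lane. The sketch below models a single lane of that behavior in plain Python; the function name and values are illustrative, not Intel's API, and real hardware does this across 16 lanes of a 512-bit register at once:

```python
def vnni_dot_lane(acc: int, a_bytes, b_bytes) -> int:
    """Model one 32-bit lane of a VNNI int8 dot-product-accumulate:
    four u8 x s8 products are summed and added to the accumulator.
    Pre-VNNI AVX-512 needed three instructions for the same result."""
    assert len(a_bytes) == len(b_bytes) == 4
    return acc + sum(u * s for u, s in zip(a_bytes, b_bytes))

# Four int8 activations against four int8 weights, one accumulator lane:
acc = vnni_dot_lane(0, [1, 2, 3, 4], [10, -20, 30, -40])
print(acc)  # 10 - 40 + 90 - 160 = -100
```

Collapsing the multiply-widen-add sequence into one instruction is what lets int8 inference loops retire more useful work per cycle.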
The exact performance uplift attributable to VNNI varies, but appears to be between 2x and 4x depending on the task.
Intel’s goal with Cascade Lake isn’t just to kick another iteration of Xeon into the channel. The company is building a platform for dense compute and AI, with features like AVX-512 VNNI, while doubling down on using Optane and its additional memory capacity to enable new types of enterprise computing.
In Memory Mode, Optane serves as a large pool of slower system RAM, with the 1.5TB of DDR4 acting as a cache in front of it to keep latency from becoming a problem; the sheer amount of memory should provide its own benefits. In App Direct mode, Optane is exposed like a storage device and applications access it directly, as they would a persistent RAM disk. This mode may require applications to be modified to take advantage of it, which means App Direct support will be slower to appear.
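The App Direct access pattern looks roughly like the sketch below: the application memory-maps a file and performs loads and stores directly against the mapping. This is a minimal illustration using Python's mmap module, with an ordinary temporary file standing in for a file on a DAX-mounted persistent-memory filesystem; the path is illustrative, and on real Optane hardware durability also depends on flushing CPU caches to the media:

```python
import mmap
import os
import tempfile

# Stand-in for a file on persistent memory (e.g. something like
# /mnt/pmem/data on a DAX mount -- path is hypothetical).
path = os.path.join(tempfile.gettempdir(), "pmem_standin.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)          # reserve one page

# Map the file and store bytes directly into the mapping,
# the way an App Direct-aware application would.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as m:
        m[0:5] = b"hello"            # direct store into the mapping
        m.flush()                    # real pmem also needs cache flushes

# The data persists after the mapping is gone.
with open(path, "rb") as f:
    data = f.read(5)
print(data)  # b'hello'
os.remove(path)
```

The need to restructure I/O around mappings like this, rather than read/write calls, is why App Direct requires application changes.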
AMD’s 7nm ‘Rome’ CPU will challenge Intel directly when it launches later this year. Rome isn’t expected to support features like AVX-512 or Optane, but it will offer up to 64 cores, eight more than Intel’s top Cascade Lake-AP part, along with as-yet-unknown improvements in clock speed compared with AMD’s previous generation of Epyc processors.