Category Archives: Parallel Computing

About parallel computing, programming models, infrastructure, tools, libraries, parallel computer architecture, etc.

Intel’s New 10 nm Process: The Wind in our Sails

Welcome to the 10 nm CMOS era! In its first Technology and Manufacturing Day event (press kit, all presentations), Intel unveiled and detailed the highlights of their forthcoming 10 nm process technology node. It's better than I expected. Intel's new 10 nm process almost triples the capacity of new integrated circuits, so that the performance and capabilities of our systems can again "leap ahead" and deliver new platforms and experiences. It's like a three year contract extension. Laissez les bons temps rouler!

The Autumn of Moore’s Law

Computer performance necessarily surfs transistor technology scaling trends. (See The Autumn of Moore’s Law: Scaling up Computing Performance 2011-2020.) For the past 50 years, transistor scaling has been the wind in the sails of the computer industry. Every couple of years transistors per chip double and cost per transistor halves, and this powers disruptive innovations like iPhones, wireless broadband, datacenters with 100 Gbps networking, self-driving cars and self-piloting drones, mixed reality, and deep learning.

These scaling trends have slowed somewhat in the 2010s. We have left the Dennard scaling era. Transistor performance (the reciprocal of area times delay) and energy efficiency (gate switches per unit of energy) no longer double as feature widths and heights each scale down by a factor of 1/√2 ≈ 0.7. And each such lithographic feature shrink now takes more than two years.

This decade, Intel’s innovations in high volume manufacturing with high-K metal gates, FinFETs and SADP (self-aligned double patterning) lithography have kept Moore’s Law ticking along. But now Intel has been stuck on the 14 nm technology node for at least one year longer than we expected. The familiar yearly tick, tock of process and architecture advances has been a tick, tock, tock, tock. Has Intel manufacturing lost its mojo?

Scaling Down

Emphatically: no. As introduced by EVP Stacy Smith and detailed in Intel Senior Fellow Mark Bohr's presentation and CVP Kaizad Mistry's presentation, Intel's new 10 nm process, announced 32 months after the 14 nm process launch, achieves a remarkable 2.7× transistor density improvement over its predecessor. It combines pure lithography scaling with new transistor topology and circuit layout advances that together scale the area of an average logic cell not by 0.5× but by 0.37×. (This compounds with the impressive 0.37× scaling Intel achieved moving from 22 nm to 14 nm.) Unfortunately, SRAM cell scaling, at ~0.6× from 14 nm, is less dramatic.

Intel 10 nm lithography now requires self-aligned quad patterning (SAQP), at least for the 36 nm pitch metal interconnect. Apparently soft x-ray extreme UV (EUV) litho is still not ready for prime time. So the challenge is to pattern very narrow rows of lines on the die, to lay out FET structures and wires. How can you image such narrow lines using fuzzy 193 nm deep UV laser light? You first pattern the finest lines you can optically, using every trick in the book (high numerical aperture immersion, phase shift masks, computational lithography), then etch/deposit sidewall spacers alongside those lines; after more processing steps the spacers themselves become masks, patterning twice as many lines at half the line pitch. This is self-aligned double patterning (SADP), also known as sidewall image transfer, and as I understand it, SAQP is simply SADP applied twice. These ultra fine features are achieved only with many extra (expensive) lithographic processing steps.
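
Here's a back-of-the-envelope sketch of the pitch arithmetic (mine, not Intel's; the 144 nm starting pitch is an assumption chosen so that two halvings land on the stated 36 nm):

    # Each self-aligned patterning (SADP) pass halves the line pitch.
    def patterned_pitch(optical_pitch_nm, passes):
        """Pitch after `passes` rounds of sidewall-spacer pitch halving."""
        return optical_pitch_nm / (2 ** passes)

    optical = 144.0                     # nm; assumed single-exposure 193i pitch
    print(patterned_pitch(optical, 1))  # SADP: 72.0 nm
    print(patterned_pitch(optical, 2))  # SAQP (SADP twice): 36.0 nm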

Beyond lithography, the third generation FinFET transistors themselves are taller (54 nm) and on a narrower pitch (34 nm), versus 42 nm tall on a 42 nm pitch in 14 nm, and 34 nm tall on a 60 nm pitch in 22 nm. The gate contact can now be placed atop the gate ("contact over active gate"), which achieves a ~10% area savings versus prior nodes' alongside-the-gate placement. Unspecified process innovations also enable a new standard cell layout with a single, shared dummy gate per cell border, for a further ~20% area savings. Together with litho scaling, these one-time "hyper scaling" improvements boost density scaling from 2.0× to 2.7×. This slide summarizes the improvements.
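
As a sanity check, here is how those one-time savings compound with the litho shrink. The decomposition is my own sketch of the stated percentages, not Intel's published math:

    litho_area = 0.50   # a ~0.7x linear shrink in x and y halves cell area
    coag_area  = 0.90   # contact over active gate: ~10% area savings
    dummy_area = 0.80   # single dummy gate cell border: ~20% area savings

    cell_area_scale = litho_area * coag_area * dummy_area
    print(cell_area_scale)      # 0.36 -- the ~0.37x logic cell area scaling
    print(1 / cell_area_scale)  # ~2.8x -- close to the headline 2.7x density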

In the context of an exemplary microprocessor with its mix of logic and regular SRAM, Intel expects overall area scaling at 0.43×, reducing a (complex) processor + cache tile from 17.7 mm² to 7.6 mm². This portends a feasible doubling in core counts and cache areas in both client processors and Xeon servers, and a doubling in programmable logic and embedded SRAM resources in future 10 nm FPGAs.

As transistors scale down, the big challenge in spending them to scale up system performance is energy. If you keep doubling transistors per die without doubling gate energy efficiency, eventually you can't afford to power or cool your integrated circuit, or you have to run it at a lower frequency than it is capable of. This is the dark silicon problem (and for FPGAs, dark fabric). Here too Intel's 10 nm process makes great strides. Compared to 14 nm, you can get 25% faster switching, or the same performance at 0.55× the power. (I'll take the latter, thank you.) Furthermore, Intel anticipates follow-on 10+ and 10++ nodes with additional performance or power savings. This is welcome news, and just as significant as the headline 2.7× density scaling.

Despite good progress on gate switching energy scaling, the best way forward is still to selectively run serial bottlenecks at higher voltages/frequencies while devoting most of the computation to slower, but more energy efficient, parallel compute fabrics. For a perfectly parallel workload, at the same power, you can either run the same cores 25% faster, or spend some of your new transistor budget windfall on more processing elements, in pursuit of ~82% (1/0.55 ≈ 1.82) greater throughput.
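
In a quick sketch (idealized, assuming perfect parallel scaling and ignoring uncore power):

    power_scale = 0.55             # same per-core performance at 0.55x power
    more_cores = 1 / power_scale   # ~1.82x the cores in the same power budget
    print(f"iso-power throughput gain: {more_cores - 1:.0%}")   # ~82%
    print("alternative: run the same cores 25% faster")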

Apples to Apples

Intel underscores their process technology lead versus competing fabs, which are also underway on so-called 10 nm and even 7 nm nodes. Much like the FPGA industry's "marketing system logic cells" (of which there are zero in any FPGA; go open an FPGA device view and see for yourself) versus real, delivered 6-LUTs, in the one-upsmanship of process technology specs there is Intel 10 nm, and then there is everybody else's 10 nm.

In his editorial Let’s Clear Up the Node Naming Mess, Mark Bohr proposes a benchmark of transistors per square millimeter implementing logic standard cells of 60% NAND2 and 40% SFF (scan flip-flop).
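
As I read the proposal, the metric weights the two cells' transistor densities 60/40; here is a sketch (the transistor counts and cell areas below are illustrative placeholders, not published values):

    def mtr_per_mm2(nand2_area_um2, sff_area_um2, nand2_tr=4, sff_tr=30):
        # 1 transistor per um^2 equals 1 MTr/mm^2
        return 0.6 * nand2_tr / nand2_area_um2 + 0.4 * sff_tr / sff_area_um2

    print(mtr_per_mm2(nand2_area_um2=0.03, sff_area_um2=0.20))  # placeholder cells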

Using this metric, the new 10 nm process achieves 100.8 M transistors per square millimeter. This compares to 37.5 MTr/mm² in today's 14 nm and just 7.5 MTr/mm² in 2010's 32 nm process. That's a big leap forward that underscores that Moore's Law is not dead – not yet.

Agility and Heterogeneous Integration: More than Moore

Slides 37-42 of Bohr's deck underscore Intel's EMIB (Embedded Multi-Die Interconnect Bridge) technology, which enables cost effective, high bandwidth, low latency composition of heterogeneous dice in an SiP (system in package). EMIB enables the forthcoming Stratix 10 MX FPGA with HBM2 DRAM die stacks in package, targeting up to 1 TB/s of DRAM bandwidth.

At FPGA 2017, Andrew Putnam of Microsoft Research pointed out that if you already have an EMIB- or SSI-interposer-based FPGA, it's straightforward to build a new FPGA + ASIC (or CPU + ASIC) SiP. Better than a standalone ASIC, a SiP-ASIC doesn't need a PCIe or QPI interface to the FPGA/CPU, and doesn't need high powered 10-28 Gb/s multi-gigabit serial transceivers with clock-data recovery; rather, it can employ many hundreds of ~2 GHz low-voltage-swing nets to the FPGA/CPU.
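
Rough bandwidth arithmetic, with assumed link counts and rates chosen only to size the opportunity:

    serdes_lanes, serdes_gbps = 16, 25   # e.g. a 16-lane 25G serdes interface
    wide_nets, net_gbps = 400, 2         # assumed: 400 in-package nets at ~2 GHz

    print("serdes:", serdes_lanes * serdes_gbps, "Gb/s")      # 400 Gb/s
    print("in-package nets:", wide_nets * net_gbps, "Gb/s")   # 800 Gb/s

Comparable or better raw bandwidth, without the serdes power, latency, or die area.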

EMIB SiP enables a new kind of agility that Intel should leverage, in both Xeon-ASIC and Xeon-FPGA-ASIC SiP solutions. For example, if a particular binary weight neural network (BNN) machine learning platform catches on, Intel's Altera asset enables them to 1) rapidly develop and ship an acceleration solution on a CPU+FPGA SiP (plus a software library version for down-level systems), while concurrently developing a BNN-ASIC bare die; then 2) assemble and ship a CPU+FPGA+BNN-ASIC SiP, without impact to CPU or FPGA dies, costs, or schedules. Compared to a four year product cycle of new feature pathfinding, value-proposition-proving, architecture review, and so forth — finally achieving production silicon but typically missing first mover advantage — an EMIB-powered ASIC-SiP methodology could cut two years from the process, capturing new business and providing new work for older fab lines.

What Intel Didn’t Say

The elephant in the room is cost per transistor (CPT) scaling. As per-fab equipment costs rise, and per-wafer processing costs rise with multiple patterning, CPT no longer halves as transistors per mm² double. A few years back, NVIDIA complained that transitioning to partner TSMC's then-new 20 nm planar process could see negligible CPT improvement due to these increased costs. Here, though, Intel states that their lithographic shrink plus one-time hyper scaling techniques (here, contact over active gate and single dummy gate standard cells) overcome the increasing cost per mm² to continue the trend of exponentially cheaper transistors — "hyper scaling allows the economics of Moore's Law to continue".
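
The arithmetic behind that claim, in a sketch with illustrative cost ratios (Intel published no such figures):

    # CPT = (cost per mm^2) / (transistors per mm^2)
    density_gain = 2.7   # 14 nm -> 10 nm transistors per mm^2
    for cost_rise in (1.0, 1.35, 2.7):   # hypothetical cost/mm^2 increases
        print(f"cost/mm^2 x{cost_rise}: CPT x{cost_rise / density_gain:.2f}")

So wafer cost per mm² must rise by no more than ~1.35× for cost per transistor to at least halve.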

I am also curious whether, and to what extent (as Intel's Shekhar Borkar discussed 10 years ago), transistor variability across the die and across chips has become a problem that requires (e.g. microarchitectural) attention. Are FinFETs less susceptible to dopant distribution variability than planar transistors?

It is unclear how quickly Intel will be able to ramp up high volume production in this process, what yields they expect in 2018, and how it compares with the competing TSMC 7 nm process that will power the next generation of Xilinx FPGAs.

Also, no sign of silicon photonics in the mainstream.

Into the Grand and Glorious Future

Intel is well positioned, with leadership manufacturing, processors, memory, FPGAs, SoCs, networking and wireless infrastructure, with its Software and Solutions Group assets, and with new business investments like machine learning and ADAS. Not content to merely fill x86 ISA sockets until oblivion, it is investing and striving to climb up the technology stack and capture more value in new markets.

Next year Intel will crank out several hundred million 10 nm processors, and soon FPGAs and other chips. As an FPGA technologist I am particularly excited about the opportunity to integrate FPGAs into processors — whether in monolithic die designs or via EMIB bridges. For forty years, growing transistor budgets have brought integration and democratization of new functions into the platform. By 1989, as transistors doubled, Intel not only pipelined their 386 core but also integrated the FPU and an on-chip cache to make the 486, which quickly became a standard platform that software stacks take for granted. Similarly, rather than double a client CPU from 4 cores to 8, or a server from 16 to 32, it may make sense to spend some of the new transistor and power budget to add some FPGA fabric to the system. We'll see.

My career was built on Intel processors, and my work today still relies upon them. Beyond that, Intel's remarkable process scaling and manufacturing leadership has led the industry forward. When I use Xilinx Virtex UltraScale+ (16 nm TSMC FinFET) FPGAs, running at 0.72 V at 50 A, I appreciate that many of the requisite lithography, process, and circuit technologies involved were invented, nurtured, or perfected at Intel first.

Thank you, Intel. Well done. Hope to see you again in 2020.

GRVI Phalanx joins The Kilocore Club

The work-in-progress GRVI Phalanx massively parallel accelerator framework has been ported to the Xilinx Virtex UltraScale+ XCVU9P.

On Dec. 30, 2016, a design with 30 rows by 7 columns of clusters of 8 GRVI RISC-V cores + 128 KB CRAM (cluster RAM) + a 300-bit Hoplite NOC router — a total of 1680 cores and 26 MB of SRAM — booted up and tested successfully, running a message passing matrix multiply workload on all 1680 cores, in a XCVU9P-FLGA2104-2L-E-ES1 device in a Xilinx VCU118 evaluation kit.

This 1680 core GRVI Phalanx is the first operational kilocore RISC-V, the first kilocore 32b RISC in an FPGA, and the most 32b RISC cores on a chip in any technology.
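
The headline numbers follow directly from the array dimensions:

    rows, cols, cores_per_cluster = 30, 7, 8
    clusters = rows * cols                  # 210 cluster tiles
    cores = clusters * cores_per_cluster    # 1680 RISC-V cores
    cram_mb = clusters * 128 / 1024         # 26.25 MB of cluster RAM
    print(clusters, cores, cram_mb)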


1 core, 32 cores, 1680 cores — RISC-V scales up! A 1-core SiFive HiFive1, a 2x2x8=32-core GRVI Phalanx in a Digilent Arty / XC7A35T, and a 30x7x8=1680-core GRVI Phalanx in a Xilinx VCU118 / XCVU9P.

Here is the basic cluster tile architecture redesigned for UltraScale+ and its new 288 Kb UltraRAM jumbo-SRAM blocks. The present design includes 210 instances of this tile.


A GRVI cluster tile with 8 GRVI RISC-V cores, 128 KB multiported bank interleaved shared cluster RAM, optional accelerators (here, none), message passing NOC interface, and a 300-bit wide Hoplite NOC router.

An example 1680 GRVI system implemented in a Xilinx Virtex UltraScale+ VU9P. This GRVI Phalanx comprises NX=7 x NY=30 = 210 clusters, each cluster with 8 GRVI cores and an 8-ported 128 KB cluster shared memory. The clusters are interconnected on a Hoplite NOC, with the Hoplite routers configured with 290b data payloads (including 32b address and 256b data), achieving a bandwidth of about 70 Gb/s/link and a NOC bisection bandwidth of 900 Gb/s. Each cluster can send or receive 32 B per cycle into the NOC. The GRVI Phalanx architecture anticipates a variety of configurable accelerators coupled to the processors, the cluster shared RAM, or the NOC.

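Working backward from those figures (the NOC clock below is my inference from ~70 Gb/s over a 290b payload; the post does not state it):

    payload_bits, clock_mhz = 290, 240   # clock assumed
    print(payload_bits * clock_mhz / 1e3, "Gb/s per link")    # 69.6
    print(32 * clock_mhz / 1e3, "GB/s per cluster in/out")    # 7.68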

An extended abstract with additional detail on this work has been submitted to, and hopefully will be presented at, the OLAF’17 workshop at FPGA’17.

‘Computing on Programmable Logic’ at Microsoft Research Faculty Summit 2016

Yesterday I had the privilege of speaking on Computing on Programmable Logic (slides, video) in the ‘Computing with Exotic Technologies and Platforms’ session at the Microsoft Research Faculty Summit 2016.

Abstract: “We have seen the birth of many exotic architectures in recent years, from a quantum computer that promises to achieve exponential speed-ups over conventional computers, to DNA computation that performs disease diagnostics and therapy, to Field Programmable Gate Arrays (FPGAs) that provide a flexible toolkit for implementing architectures such as Microsoft’s Catapult fabric for large-scale datacenters. Each of these exotic technologies enable novel solutions to challenging problems and require equally novel methods to program and design them. We will highlight the advances in their applications and the challenges behind developing their toolchains and programming environments.”

GRVI Phalanx Update

An update on the work-in-progress GRVI Phalanx.

Conferences

An extended abstract and brief talk on GRVI Phalanx was presented at the 2nd International Workshop on Overlay Architectures (OLAF-2) at FPGA 2016.

GRVI Phalanx was discussed in the short talk Software-First, Software Mostly: Fast Starting with Parallel Programming for Processor Array Overlays at the Arduino-like Fast-Start for FPGAs pre-conference workshop at FCCM 2016. [Slides]

The first refereed paper on GRVI Phalanx was presented yesterday at the 24th IEEE International Symposium on Field-Programmable Custom Computing Machines (FCCM 2016): GRVI Phalanx: A Massively Parallel RISC-V FPGA Accelerator Accelerator and received the FCCM 2016 Best Short Paper Award. [PDF]

Hardware Changes: Version 0.2

Here are some of the changes made to the GRVI Phalanx design since it was first described at the 3rd RISC-V Workshop. This is now version 0.2.

GRVI

  • LB/LBU/LH/LHU/SB/SH: Load/store byte and halfword alignment functionality is now configured OFF in the GRVI PEs. The LdMux and StMux units have been factored out of GRVI and into the GRVI cluster, each set of muxes shared by a pair of cores.
  • MUL/MULH/MULHU/MULHSU: The multiply instructions from the RISC-V “M” extension are now enabled by default and are implemented in the GRVI cluster. Each pair of processors shares one DSP-based multiplier. This consumes 200 DSP48s in the 400-PE GRVI Phalanx for the Kintex UltraScale KU040, leaving 1720 DSP48s for use by accelerators.
  • SL*/SR*: By default, fast left and right shift instructions are also implemented in these DSP-based multipliers.
  • LR/SC: These atomic instructions from the RISC-V “A” extension are now enabled by default. Part of the implementation is in the GRVI core and part in the GRVI cluster memory arbiters. The implementation considerations were discussed on the RISC-V mailing lists here.

Phalanx

  • A Phalanx system may be configured to replace the cluster at (NX-1,NY-1) with a character mode VGA cluster with a 32 KB text frame buffer.
  • Hoplite multicast message routing is now enabled by default. An agent can send a message to every cluster on a given row, on a given column, or to every cluster on the NOC. If desired, all IRAMs in all clusters in a Phalanx may be updated with a single burst of 1024 XY-multicast messages (see the sizing sketch below).
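
Sizing that burst (the 32 B per message assumes a 256b data payload, as in the Hoplite configuration described above; the per-cluster IRAM split is my assumption):

    messages, payload_bytes = 1024, 32
    per_cluster_kb = messages * payload_bytes // 1024
    print(per_cluster_kb, "KB delivered to every cluster")   # 32 KB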

Introducing GRVI Phalanx: A Massively Parallel RISC-V FPGA Accelerator Accelerator

GRVI is an FPGA-efficient RISC-V RV32I soft processor core, hand technology mapped and floorplanned for best performance/area as a processing element (PE) in a parallel processor. GRVI implements a 2 or 3 stage single issue pipeline, typically consumes 320 6-LUTs in a Xilinx UltraScale FPGA, and currently runs at 300-375 MHz in a Kintex UltraScale (-2) in a standalone configuration with most favorable placement of local BRAMs.

Phalanx is a massively parallel FPGA accelerator framework, designed to reduce the effort and cost of developing and maintaining FPGA accelerators. A Phalanx is a composition of many clusters of soft processors and accelerator cores with extreme bandwidth memory and I/O interfaces on a Hoplite NOC.

GRVI Phalanx was introduced today at the 3rd RISC-V Workshop at Redwood Shores, CA.

A work-in-progress 10x5x8 = 400 processor configuration in a KU040 in a Xilinx KCU105 and a 2x2x8 = 32 processor configuration in a Xilinx Artix-7 35T in a Digilent Arty were demonstrated in the demo/poster session.

A 10x5x8 = 400 processor GRVI Phalanx

For more information please visit the GRVI Phalanx page.

Introducing Hoplite: An FPGA-Optimal Router for Extreme Bandwidth NOCs

Hoplite is a configurable, general purpose, FPGA-optimal 2D router, plus tools, for implementing efficient network-on-chip (NOC) interconnection of diverse processors, accelerators, other client cores, and extreme bandwidth (100+ Gb/s) interfaces.

The first paper on Hoplite is presented today at FPL 2015:

Hoplite: Building Austere Overlay NoCs for FPGAs, Nachiket Kapre, Jan Gray.
25th International Conference on Field-Programmable Logic and Applications, Sept. 2015. Received the Michal Servit Best Paper Award. [PDF]

For more information please visit the Hoplite page.

The Past and Future of FPGA Soft Processors

Earlier this month I had the privilege of giving a keynote at Reconfig 2014. I decided to speak on the past and future of FPGA soft processors. This year marks my twentieth anniversary of working (on and off) in this field, so it seemed an apt time and opportunity to share my perspective on where FPGA soft processors came from and what their continuing utility and prospects might be in the decade ahead — the autumn of Moore's Law, the winter of Dennard scaling.

Abstract
Design productivity is still a challenge for reconfigurable computing. It is expensive to port a software workload to RTL, to maintain the RTL as the workload evolves, and to wait for hours to recompile a bitstream after each design change. Soft processors can help mitigate these costs, and provide new pathways to application acceleration. A mid-range FPGA can now host hundreds of soft CPUs and their interconnection network, and such heterogeneous massively parallel processor and accelerator arrays can sustain hundreds of operations, memory accesses, and branches per cycle.

This talk will look back on the history and diversity of soft processor cores for FPGAs, and their continuing relevance for the decade ahead. What new tools, IP, and infrastructure will help us to exploit the coming million LUT, 10 TFLOPS FPGAs? Along the way we will revisit an austere design esthetic and an implementation methodology for crafting FPGA-optimized soft cores, and see how the lessons of mapping one processor into one 1995 FPGA can inform us how to design massively parallel programmable accelerators going forward.

Here are the slides.

Microsoft Catapult at ISCA 2014, In the News

This week at ISCA 2014 Andrew Putnam presented A Reconfigurable Fabric for Accelerating Large-Scale Datacenter Services (PDF).

Some of the Catapult team members (Microsoft Research and Bing)

Abstract: Datacenter workloads demand high computational capabilities, flexibility, power efficiency, and low cost. It is challenging to improve all of these factors simultaneously. To advance datacenter capabilities beyond what commodity server designs can provide, we have designed and built a composable, reconfigurable fabric to accelerate portions of large-scale software services. Each instantiation of the fabric consists of a 6×8 2-D torus of high-end Stratix V FPGAs embedded into a half-rack of 48 machines. One FPGA is placed into each server, accessible through PCIe, and wired directly to other FPGAs with pairs of 10 Gb SAS cables.
In this paper, we describe a medium-scale deployment of this fabric on a bed of 1,632 servers, and measure its efficacy in accelerating the Bing web search engine. We describe the requirements and architecture of the system, detail the critical engineering challenges and solutions needed to make the system robust in the presence of failures, and measure the performance, power, and resilience of the system when ranking candidate documents. Under high load, the large-scale reconfigurable fabric improves the ranking throughput of each server by a factor of 95% for a fixed latency distribution—or, while maintaining equivalent throughput, reduces the tail latency by 29%.


FPGAs, Then and Now

On the left, from 1995, J32, one 32-bit RISC SoC in an XC4010. It had 20x20x2=800 4-LUTs (and 400 3-LUTs).

On the right, from 2013, 1000 32-bit RISC datapaths and 250 router cores in an XC7VX690T (which provides over 433,000 6-LUTs and 1470 BRAMs). A work in progress.

In other words, in the past 18 years Moore’s Law has taken us from 1K LUTs per FPGA to 1K 32-bit CPUs per FPGA.
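
That is right on the classic doubling cadence:

    luts_1995, luts_2013 = 800, 433_000
    print(luts_2013 / luts_1995)   # ~541x growth over 18 years
    print(2 ** (18 / 2))           # 512x: nine 2-year doublings, on trend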

1995: One 32-bit RISC SoC in an XC4010 --- 2013: 1000 32-bit RISC datapaths and 250 router cores in an XC7VX690T.

FCCM 2013 Panel: Reconfigurable Computing in the Era of Dark Silicon

At FCCM 2013, I was on a panel to discuss Reconfigurable Computing in the Era of Dark Silicon. If you haven't heard of the Dark Silicon meme in the computer architecture community, I recommend you review the DaSi 2012 slides of Michael Taylor (UCSD).

It’s difficult to take these things out of context, but here for posterity’s sake are my position slides: Gray-Dark Silicon and HeMPPAAs. I emphasize that orders of magnitude energy efficiency improvements might be achieved by building workload-optimized computers in FPGAs using a HeMPPAA (heterogeneous massively parallel processor and accelerator arrays) architecture. I also propose infrastructure investments so that FPGA design in the large is much more like the software development experience.