GRVI is an FPGA-efficient RISC-V RV32I soft processor core, hand technology-mapped and floorplanned for best performance/area as a processing element (PE) in a parallel processor. GRVI implements a 2- or 3-stage single-issue pipeline, typically consumes 320 6-LUTs in a Xilinx UltraScale FPGA, and currently runs at 300-375 MHz in a Kintex UltraScale (-2) in a standalone configuration with the most favorable placement of local BRAMs.
Phalanx is a massively parallel FPGA accelerator framework, designed to reduce the effort and cost of developing and maintaining FPGA accelerators. A Phalanx is a composition of many clusters of soft processors and accelerator cores with extreme-bandwidth memory and I/O interfaces on a Hoplite NoC. Across clusters, cores and accelerators communicate by message passing.
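To make the message-passing model concrete, here is a minimal C sketch of one PE sending a word to a neighboring PE over the NoC and waiting for a reply. The register addresses, destination encoding, and helper names (noc_send, noc_recv) are illustrative assumptions for this sketch only, not the actual Phalanx interface.

```c
/* Hypothetical sketch of PE-to-PE message passing in a GRVI Phalanx
 * cluster. The memory-mapped register addresses and message format
 * below are assumptions made for illustration, not the real API. */
#include <stdint.h>

/* Assumed memory-mapped NoC interface registers (hypothetical addresses). */
#define NOC_SEND_DATA  ((volatile uint32_t *)0xFFFF0000u)
#define NOC_SEND_DEST  ((volatile uint32_t *)0xFFFF0004u)
#define NOC_RECV_DATA  ((volatile uint32_t *)0xFFFF0008u)
#define NOC_RECV_VALID ((volatile uint32_t *)0xFFFF000Cu)

/* Send one 32-bit word to the PE at assumed NoC coordinates (x, y). */
static void noc_send(uint32_t x, uint32_t y, uint32_t word)
{
    *NOC_SEND_DEST = (y << 16) | x;  /* assumed destination encoding */
    *NOC_SEND_DATA = word;           /* writing the data launches the message */
}

/* Busy-wait until a message arrives, then return its payload. */
static uint32_t noc_recv(void)
{
    while (*NOC_RECV_VALID == 0)
        ;                            /* spin until a word is available */
    return *NOC_RECV_DATA;
}

int main(void)
{
    noc_send(1, 0, 42u);             /* forward a value to a neighbor PE */
    uint32_t reply = noc_recv();     /* block until the reply arrives */
    return (int)reply;
}
```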
Talks and Publications
GRVI Phalanx was introduced on January 5, 2016, at the 3rd RISC-V Workshop at Redwood Shores, CA. Presentation slides and video.
Jan Gray, GRVI Phalanx: A Massively Parallel RISC-V FPGA Accelerator Accelerator, 24th IEEE International Symposium on Field-Programmable Custom Computing Machines (FCCM 2016), May 2016. Received the FCCM 2016 Best Short Paper Award. [PDF]
Nov. 29, 2017: GRVI Phalanx Update: Plowing the Cloud with Thousands of RISC-V Chickens (slides PDF) (12 min video) at the 7th RISC-V Workshop. This talk for the RISC-V community recaps the purpose, design, and implementation of the GRVI Phalanx Accelerator Kit, recent work, and work in progress to deliver an SDK for AWS EC2 F1 and PYNQ-Z1, including an OpenCL-like programming model built upon Xilinx SDAccel.
Examples
Here are some example GRVI Phalanx designs:
Other Conference Sightings
An extended abstract and talk on GRVI Phalanx was presented at the 2nd International Workshop on Overlay Architectures (OLAF-2) at FPGA 2016.
GRVI Phalanx was discussed in the short talk Software-First, Software Mostly: Fast Starting with Parallel Programming for Processor Array Overlays at the Arduino-like Fast-Start for FPGAs pre-conference workshop at FCCM 2016. Slides.