SC16 (Super Computing 2016 Conference)


November 13-18, 2016
Salt Palace Convention Center
Salt Lake City, UT


Join RapidIO.org along with member companies Fidus Systems, Integrated Device Technology (IDT), Prodrive Technologies and Western Digital at the SC16 Conference at booth #731. Event Information »

At SC16, WDC Research will display a prototype Storage Class Memory (SCM) controller that attaches via RapidIO to a heterogeneous set of compute engines, including x86, ARM, RISC-V, FPGA and DSP. SCM requires a new kind of memory controller: one that appears to caches and cores like a DRAM controller, while seamlessly incorporating much of the media management functionality found today only in solid state drives. The RapidIO coherence protocol can deliver cache lines directly from SCM into caches while adding less than 250 ns of round-trip latency. As an open, standardized interconnect technology that is widely deployed in critical mobile infrastructure, RapidIO is well suited to coherent scale-out of disaggregated SCM in HPC and hyperscale deployments.

Fidus will demonstrate its FPGA-based NVMe over Fabrics network controller, a pure PCIe-based solution that allows data center architects to extend PCIe access to Solid State Drive (SSD) solutions over the network without SCSI and SAS protocol translations. Additionally, multiple nodes of FPGA+CPU+SSD-based storage can be scaled out over a low-latency 50 Gbps RapidIO network using the direct RapidIO connectivity and IDT RapidIO-enabled switching on the Sidewinder-100 card. Beyond the NVMe over Fabrics capability, additional data center acceleration functions may be executed in the FPGA, or in directly RapidIO-networked FPGA units on other Sidewinder cards in the same server rack. The Sidewinder-100 leverages the industry’s latest 50 Gbps RapidIO interconnect. By eliminating translation to and from legacy storage device protocols, it allows data centers to exploit the true low latencies that SSDs have to offer. For storage vendors and IP developers, the Sidewinder-100 provides the lowest-risk path to technology and market validation and demonstration.

While the exponential growth of single-processor performance is coming to an end, a new era has arrived in which system-level performance will be key. Parallel processing across many nodes presents new challenges for the design of electronic hardware, software and mechanical hardware. From the start, Prodrive Technologies has built up extensive experience with all common types of processing, communication interfaces and the associated low- to mid-level software. Prodrive is unique in making development, highly automated manufacturing and vertical integration part of the company’s DNA. The result is a COTS portfolio and a unique mix of competences that enable its clients to optimize architecture, performance and reliability, at competitive prices and with fast time-to-market. Prodrive combines the latest technology with superior cooling concepts for the best overall system solution. For over 10 years, RapidIO has enabled Prodrive to offer low-latency compute solutions that combine the best of different processing architectures with the best internal and external network properties. Visit Prodrive in the RapidIO.org booth at SC16 to learn about its RapidIO 10xN switches, which open new doors in the Industrial-HPC, Medical-HPC and general-HPC markets, building on the strong footprint Prodrive already has.

IDT will also be at the RapidIO.org booth at SC16, showcasing its RapidIO-based solutions for the HPC market. These include 100 ns latency switching, NICs, and board- and subsystem-level platforms such as:

  • 50 Gbps RapidIO switching systems development platforms for HPC and Hyperscale cloud
  • The industry’s lowest cost-per-Gb 50 Gbps switching appliance
  • Connectivity solutions for heterogeneous computing that cover x86, ARM and OpenPower, as well as FPGA and GPU accelerators
  • Supercomputing at the edge node with an IBM Power8 server
  • High-performance link aggregation up to 100 Gbps with MoSys LineSpeed Flex MUX/deMUX technology