Datacenter-In-A-Box at LOw Cost (DIABLO)

  • The basic hardware building block is the BEE3 multi-FPGA system, co-developed by Berkeley and Microsoft Research in our previous research project, RAMP. The hardware can currently be purchased from BEEcube.

    DIABLO is a modularized single-FPGA design that uses only the FPGA's high-speed transceivers to scale up to a larger system, so it can easily be ported to many off-the-shelf Xilinx FPGA development boards.

    The BEE3 board
  • Mapping a datacenter to FPGAs

    We map several simulated server racks, together with their top-of-rack switches, onto Rack FPGAs, and the array/datacenter switches onto Switch FPGAs. The two kinds of FPGAs are connected through high-speed SERDES links following the simulated network topology. We currently simulate four server racks per Rack FPGA; a minimal sketch of this mapping follows the figure below.

    Rack/Switch FPGA architecture
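
    To make the mapping concrete, here is a minimal Python sketch of how simulated racks could be assigned to Rack FPGAs and tied to a Switch FPGA over SERDES links. The names and constants are illustrative assumptions for exposition; DIABLO itself is an FPGA hardware design, not Python.

        # Hypothetical sketch of the datacenter-to-FPGA mapping described above.
        # RACKS_PER_FPGA follows the "four server racks per FPGA" figure on this page;
        # everything else is an assumption, not DIABLO's actual implementation.

        RACKS_PER_FPGA = 4

        def map_datacenter(num_racks):
            """Assign simulated racks (plus their ToR switches) to Rack FPGAs and
            connect every Rack FPGA to a single Switch FPGA that models the
            array/datacenter switches. Returns the SERDES link list."""
            num_fpgas = (num_racks + RACKS_PER_FPGA - 1) // RACKS_PER_FPGA
            rack_fpgas = [
                {"fpga_id": i,
                 "racks": list(range(i * RACKS_PER_FPGA,
                                     min((i + 1) * RACKS_PER_FPGA, num_racks)))}
                for i in range(num_fpgas)
            ]
            switch_fpga = {"fpga_id": "switch-0"}
            # One high-speed SERDES link per Rack FPGA, following the simulated topology.
            serdes_links = [(f["fpga_id"], switch_fpga["fpga_id"]) for f in rack_fpgas]
            return rack_fpgas, switch_fpga, serdes_links

        if __name__ == "__main__":
            rack_fpgas, switch_fpga, links = map_datacenter(num_racks=16)
            for fpga in rack_fpgas:
                print(f"Rack FPGA {fpga['fpga_id']}: racks {fpga['racks']}")
            print(f"SERDES links to {switch_fpga['fpga_id']}: {links}")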
  • We use DIABLO to successfully reproduce the classic TCP incast throughput-collapse problem, and to explore it further from a full-system perspective at 10 Gbps, gaining insights that differ from previous work. A sketch of the traffic pattern behind incast follows the figure below.

    TCP Incast throughput collapse
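
    The sketch below illustrates the synchronized-read pattern that triggers incast: one client requests a block striped across many servers, and all servers answer at once through the same switch port. Run on loopback it will not actually collapse; the server count, chunk size, and ports are illustrative assumptions, not DIABLO's experiment parameters.

        # Self-contained sketch of the many-to-one synchronized read behind TCP incast.
        import socket
        import threading
        import time

        NUM_SERVERS = 8
        CHUNK_BYTES = 256 * 1024      # per-server stripe of the requested block (assumed)
        BASE_PORT = 9500              # arbitrary local ports for the sketch

        def server(port):
            """Wait for a 1-byte request, then send one fixed-size chunk."""
            with socket.socket() as s:
                s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
                s.bind(("127.0.0.1", port))
                s.listen(1)
                conn, _ = s.accept()
                with conn:
                    conn.recv(1)
                    conn.sendall(b"x" * CHUNK_BYTES)

        def client():
            """Issue the synchronized read: ask every server for its chunk at once."""
            socks = [socket.create_connection(("127.0.0.1", BASE_PORT + i))
                     for i in range(NUM_SERVERS)]
            start = time.time()
            for s in socks:
                s.sendall(b"R")                  # all requests fire back-to-back
            total = 0
            for s in socks:
                remaining = CHUNK_BYTES
                while remaining:
                    data = s.recv(65536)
                    if not data:
                        break
                    remaining -= len(data)
                    total += len(data)
                s.close()
            elapsed = time.time() - start
            print(f"fetched {total} bytes in {elapsed * 1e3:.1f} ms "
                  f"({total * 8 / elapsed / 1e6:.0f} Mb/s goodput)")

        if __name__ == "__main__":
            threads = [threading.Thread(target=server, args=(BASE_PORT + i,), daemon=True)
                       for i in range(NUM_SERVERS)]
            for t in threads:
                t.start()
            time.sleep(0.2)                      # let servers start listening
            client()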
  • DIABLO reproduced the memcached request-latency long tail at a scale of 2,000 nodes, which was previously out of reach for academic research. It lets researchers explore Google-scale problems by running unmodified software on a distributed, execution-driven simulation engine. A sketch of one way to measure such a tail follows the figure below.

    memcached request-latency long tail with different interconnects on 2,000 simulated nodes
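
    As a rough illustration of how such a latency tail can be measured, the sketch below times repeated memcached GETs over the standard memcached text protocol and reports tail percentiles. It assumes a memcached instance on 127.0.0.1:11211; the key name, request count, and percentiles are arbitrary choices, not DIABLO's measurement harness.

        # Minimal tail-latency measurement against a local memcached instance.
        import socket
        import time

        HOST, PORT = "127.0.0.1", 11211
        NUM_REQUESTS = 10_000

        def recv_until(sock, marker=b"END\r\n"):
            """Read a GET response until the terminating END line
            (safe here because the stored value cannot contain the marker)."""
            buf = b""
            while not buf.endswith(marker):
                buf += sock.recv(4096)
            return buf

        def main():
            sock = socket.create_connection((HOST, PORT))
            # Store one small value to fetch repeatedly (memcached text protocol).
            value = b"hello"
            sock.sendall(b"set tailkey 0 0 %d\r\n%s\r\n" % (len(value), value))
            assert sock.recv(4096).startswith(b"STORED")

            latencies = []
            for _ in range(NUM_REQUESTS):
                start = time.perf_counter()
                sock.sendall(b"get tailkey\r\n")
                recv_until(sock)
                latencies.append(time.perf_counter() - start)
            sock.close()

            latencies.sort()
            for p in (50, 99, 99.9):
                idx = min(len(latencies) - 1, int(len(latencies) * p / 100))
                print(f"p{p}: {latencies[idx] * 1e6:.1f} us")

        if __name__ == "__main__":
            main()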