We are building a complex, hierarchical computing infrastructure centered on FPGAs as its basic computational blocks. Research focuses on the interconnections both within and among the components of the system.
At the lowest hierarchical level we have the Basic Block (BB), an FPGA-based hardware accelerator. For an effective hardware mapping of software tasks, we need a technique that automates the mapping of the computational blocks/agents found in a polyhedral process network (PPN).
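As a concrete illustration, the sketch below shows one minimal way a PPN could be represented as input to such an automated mapping step: processes (the computational blocks/agents) connected by FIFO channels. The class names, fields, and the three-stage pipeline are our own illustrative assumptions, not the interface of an actual tool.

    from dataclasses import dataclass, field

    @dataclass
    class Process:                 # a computational block/agent of the PPN
        name: str
        workload: int              # assumed metric: estimated ops per firing

    @dataclass
    class Channel:                 # a FIFO connecting two processes
        src: str
        dst: str
        tokens: int                # assumed metric: traffic volume on the FIFO

    @dataclass
    class PPN:
        processes: dict[str, Process] = field(default_factory=dict)
        channels: list[Channel] = field(default_factory=list)

        def add_process(self, name: str, workload: int) -> None:
            self.processes[name] = Process(name, workload)

        def add_channel(self, src: str, dst: str, tokens: int) -> None:
            self.channels.append(Channel(src, dst, tokens))

    # Illustrative three-stage pipeline: read -> compute -> write
    ppn = PPN()
    ppn.add_process("read", workload=100)
    ppn.add_process("compute", workload=500)
    ppn.add_process("write", workload=100)
    ppn.add_channel("read", "compute", tokens=64)
    ppn.add_channel("compute", "write", tokens=64)

An automated mapper would then assign each process to a hardware module on the FPGA and each channel to an on-chip FIFO.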
We target applications that expose a large amount of parallelism, so we expect our supercomputing system to require thousands of Nodes. At this scale, the scalability of the communication infrastructure becomes critical; this is the subject of this research.
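To make the scale argument concrete, the back-of-the-envelope sketch below compares the worst-case hop count of two textbook topologies as the Node count grows; the formulas are standard, but the choice of topologies is purely illustrative and not a design decision of the project.

    import math

    def diameter_2d_torus(n):
        side = math.isqrt(n)            # assume n is a perfect square
        return 2 * (side // 2)          # max hops in each of the two dimensions

    def diameter_hypercube(n):
        return int(math.log2(n))        # assume n is a power of two

    for n in (64, 1024, 4096):
        print(n, diameter_2d_torus(n), diameter_hypercube(n))
    # 64 nodes: 8 vs 6 hops; 4096 nodes: 64 vs 12 hops. Worst-case latency
    # grows as O(sqrt(N)) for the torus but O(log N) for the hypercube,
    # which is why topology choice dominates at thousands of Nodes.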
At the second-lowest level of the hierarchy we have the Cluster Node (N), a platform where multiple BBs are connected to each other via PCI Express and share part of the computation, ideally the most communication-heavy part. A technique to decide how to partition a PPN across multiple BBs is the focus of this specific research.
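A minimal sketch of one plausible heuristic for this partitioning step follows: greedily place each process on the BB that minimizes the FIFO traffic crossing PCI Express, subject to a per-BB workload cap. The process names, workloads, traffic volumes, and the slack parameter are invented for illustration; a real partitioner would use stronger methods (e.g. KL/FM-style refinement).

    # Illustrative inputs: name -> workload, and (src, dst, traffic) FIFOs.
    processes = {"p0": 100, "p1": 500, "p2": 500, "p3": 100}
    channels = [("p0", "p1", 64), ("p1", "p2", 512), ("p2", "p3", 64)]

    def partition(processes, channels, n_bb, slack=1.2):
        cap = slack * sum(processes.values()) / n_bb   # per-BB workload budget
        load = [0.0] * n_bb
        placement = {}

        def cut(proc, bb):
            # Traffic that would cross PCI Express if `proc` lands on `bb`:
            # channels to neighbours already placed on a different BB.
            return sum(vol for s, d, vol in channels
                       if (s == proc and placement.get(d, bb) != bb)
                       or (d == proc and placement.get(s, bb) != bb))

        # Place the heaviest processes first.
        for proc in sorted(processes, key=processes.get, reverse=True):
            feasible = [bb for bb in range(n_bb)
                        if load[bb] + processes[proc] <= cap] or list(range(n_bb))
            best = min(feasible, key=lambda bb: (cut(proc, bb), load[bb]))
            placement[proc] = best
            load[best] += processes[proc]
        return placement

    print(partition(processes, channels, n_bb=2))
    # {'p1': 0, 'p2': 1, 'p0': 0, 'p3': 1} -> only the p1->p2 FIFO crosses PCIe

On these numbers the load cap forces p1 and p2 onto different BBs, so the heaviest FIFO is the one that crosses PCI Express; this cut-versus-balance trade-off is exactly what the partitioning technique has to resolve.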
This research focuses on how to connect multiple BBs effectively and on how to model the BB, in order to define the specifications of a BB prototype.
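As an example of the kind of model involved, the sketch below estimates a BB's execution time from two first-order parameters, compute throughput and PCI Express bandwidth; all the numbers are illustrative placeholders, not measured values or project specifications.

    def bb_execution_time(ops, bytes_moved,
                          compute_gops=400.0,    # FPGA throughput, Gop/s (assumed)
                          pcie_gbs=8.0):         # PCIe bandwidth, GB/s (assumed)
        """First-order estimate: the kernel is bound by the slower of
        compute and data transfer, assuming the two fully overlap."""
        t_compute = ops / (compute_gops * 1e9)
        t_transfer = bytes_moved / (pcie_gbs * 1e9)
        return max(t_compute, t_transfer)

    # A kernel doing 10^10 ops on 4 GB of data is PCIe-bound on these numbers:
    print(bb_execution_time(ops=1e10, bytes_moved=4e9))   # 0.5 s (transfer-bound)

Even such a crude model already tells us whether a candidate prototype would be compute-bound or transfer-bound for a given kernel, which is the kind of question the BB specifications have to answer.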