Improving AI performance
AIDIMM vs. Leading GPUs
Our idea is to place a GPU on a DIMM. We call this new device an AIDIMM because we envision it being used for AI workloads. The AIDIMM can then be plugged into a standard DRAM interface.
Instead of using traditional GPU semiconductor components, we have chosen to use an FPGA (Field Programmable Gate Array).
The FPGA can implement AI/DL algorithms just like a GPU, and it offers the added flexibility and programmability needed to implement our new interconnect architecture.
Over 2x Latency Improvement
AI Plus started with the bold idea of improving AI performance.
Today, GPUs are the workhorses of AI/DL processing. We discovered that the main bottleneck for these GPUs is the PCIe interconnect technology.
Drawing on our background and expertise in memory technology, we developed a new interconnect architecture that moves the GPU to the DRAM interface, which offers significantly lower latency than the PCIe interface.
We also discovered that SSD storage can be moved to the DRAM interface, improving data storage and access while lowering latency.
This allows us to implement our new interconnect architecture with existing, off-the-shelf semiconductors, keeping costs down and shortening time to market. With this in mind, we developed two groundbreaking AI solutions:
AI Plus architecture can be used to empower the following industries…