Abstract
Access patterns and cache utilization play a key role in the analyzability of data-intensive applications. In this demo, we re-examine our previous research on software-hardware co-design to push data transformation closer to memory from a real-time perspective. Deployed in modern CPU+FPGA systems, our design enables efficient and cache-friendly access to large datasets by moving only the relevant bytes from the target memory. This (1) reduces the cache footprint and (2) reorganizes complex memory access patterns into sequential, predictable ones.
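For intuition, the following C sketch (not from the paper; the record layout, sizes, and function names are hypothetical) emulates in software the effect the abstract describes: a query that touches one 4-byte field of a 64-byte record drags whole cache lines through the hierarchy, whereas operating on a densely packed projection, such as the one a near-memory transformation could deliver, yields a sequential access pattern and a much smaller cache footprint.

```c
/* Illustrative sketch only: contrasts a conventional strided scan over a
 * struct-of-records layout with a scan over a densely packed projection,
 * approximating the effect of moving only the relevant bytes to the CPU. */
#include <stddef.h>
#include <stdint.h>

/* Hypothetical 64-byte record: only `key` (4 bytes) is needed by the query. */
struct record {
    uint32_t key;
    uint8_t  payload[60];
};

/* Conventional access: every 64-byte cache line is fetched for 4 useful bytes,
 * producing a sparse, cache-unfriendly access pattern. */
uint64_t sum_keys_strided(const struct record *recs, size_t n)
{
    uint64_t sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += recs[i].key;
    return sum;
}

/* Near-memory transformation (emulated in software here): the relevant field
 * has already been gathered into a dense buffer, so the CPU sees a purely
 * sequential, predictable stream and a 16x smaller cache footprint. */
uint64_t sum_keys_packed(const uint32_t *keys, size_t n)
{
    uint64_t sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += keys[i];
    return sum;
}
```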
Proceedings of the WiP Session at the IEEE Real-Time Systems Symposium (RTSS@Work), 2022
Shahin Roozkhosh, Denis Hoornaert, Renato Mancuso, Manos Athanassoulis