The performance of the loop is bounded by the DRAM bandwidth.
The performance of the loop is bounded by the private cache bandwidth. Limited shared cache and DRAM bandwidth may further degrade performance.
The performance of the loop is bounded by the L1 bandwidth.
The performance of the loop is bounded by the L2 bandwidth.
The performance of the loop is bounded by the L3 bandwidth.
The performance of the loop is bounded by the L4 bandwidth.
The performance of the loop is bounded by the DRAM bandwidth.
The performance of the loop is bounded by the MCDRAM bandwidth.
To improve performance: Improve caching efficiency and eliminate inefficient memory access patterns. The loop is also scalar, and scalar memory instructions might degrade application performance. To fix: Vectorize the loop.
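As a minimal sketch of the advice above (the loop and array names are hypothetical, not taken from the profiled application): a unit-stride loop with no loop-carried dependence uses every byte of each fetched cache line and lets the compiler replace scalar memory instructions with SIMD loads and stores (e.g. with `-O3` or `-O2 -ftree-vectorize` on GCC/Clang).

```c
#include <stddef.h>

/* Hypothetical example of a vectorizable, cache-friendly loop.
 * Stride-1 accesses consume whole cache lines, and the independent
 * iterations allow the compiler to auto-vectorize the body. */
void saxpy(size_t n, float a, const float *x, float *y)
{
    for (size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];  /* unit stride, no loop-carried dependence */
}
```

A strided or indirect access pattern in the same loop would both block vectorization and waste most of each fetched cache line.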
- Data transferred between the L1 and L2 cache levels (counted in cache lines) exceeds the CARM traffic between the CPU registers and the memory subsystem (counted in bytes). This can be caused by inefficient memory access patterns and poor cache-line utilization: in this case, only a single element is accessed from each full cache line stored in L1.
- Memory-Level Roofline
- Vectorization Resources for Intel® Advisor Users
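The cache-line-utilization point above can be illustrated with a sketch (function and array names are hypothetical): with 8-byte doubles and 64-byte cache lines, a stride of 8 touches only one element per line, so traffic counted in whole cache lines between L1 and L2 far exceeds the bytes the loop actually consumes.

```c
#include <stddef.h>

/* Hypothetical illustration of cache-line utilization.
 * With stride == 8 on doubles, each 64-byte cache line fetched into L1
 * contributes only one useful 8-byte element (1/8 utilization);
 * with stride == 1, every byte of every fetched line is used. */
double sum_strided(const double *a, size_t n, size_t stride)
{
    double s = 0.0;
    for (size_t i = 0; i < n; i += stride)
        s += a[i];  /* stride > 1 wastes the rest of each cache line */
    return s;
}
```

Restructuring data (e.g. an array-of-structures to structure-of-arrays transform) so the loop reads elements at unit stride removes this mismatch between cache-line traffic and useful bytes.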