Hi,

I have been evaluating cache replacement policies in gem5, using the test
program below to compare the LRU and BIP policies.
I am using a 256kB L2 cache, so A, B, and C (128kB each) cannot all fit in
the cache at once. I was expecting LRU to perform better here, but my
experiments suggest otherwise: over 5 iterations of the code, LRU gets an
overall L2 data miss rate of 0.630 while BIP gets 0.619.

I am expecting the following to happen.
For BIP : A is loaded and touched, so its lines end up in the MRU position.
Then B is loaded and C is loaded. When B is touched again it cannot be
present in the cache, because A and C occupy it. So BIP will suffer misses.

For LRU : When B is accessed again, B and C will be the most recently used
data (loading C evicted A), so the second pass over B should be all cache
hits.

But my results do not show this, and I cannot figure out why. Can someone
explain?
Assume that I run this code many times.

Thanks,
Charitha

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    int main(void) {
        int lenA = 1<<15;
        int lenB = 1<<15;
        int lenC = 1<<15;

        printf("Running custom\n");

        uint32_t * A = malloc(sizeof(uint32_t)*lenA); // total size 128kB
        uint32_t * B = malloc(sizeof(uint32_t)*lenB); // total size 128kB
        uint32_t * C = malloc(sizeof(uint32_t)*lenC); // total size 128kB

        volatile uint32_t sink = 0; // keeps the reads from being optimized away

        // load A and access A
        for(int i=0; i<lenA; i++){
            A[i] = 1;
            sink += A[i] + 10;
        }

        // load B
        for(int i=0; i<lenB; i++) B[i] = 5;

        // load C
        for(int i=0; i<lenC; i++) C[i] = 10;

        // update B again
        for(int i=0; i<lenB; i++) B[i] += 20;

        free(A);
        free(B);
        free(C);
        return 0;
    }
_______________________________________________
gem5-users mailing list
[email protected]
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users