[gem5-users] How are system calls handled in FS mode?

2021-06-22 Thread Balls Mahoney via gem5-users
I am confused about how system calls work in Full System mode (x86). For
example, in src/arch/x86/isa/decoder/two_byte_opcodes.isa, in FS mode only
sysenter() is called (code included below). However, I don't see this
function defined anywhere else. Is this something passed straight to the
kernel? What about when the system call returns? Any guidance on this
would be greatly appreciated.

0x4: decode FullSystemInt {
    0: SyscallInst::sysenter('xc->syscall()',
                             IsSyscall, IsNonSpeculative,
                             IsSerializeAfter);
    default: sysenter();
}

Thanks.
- Jon

[gem5-users] Re: Regarding Cache Clusivity

2021-06-22 Thread Chongzhi Zhao via gem5-users
Hi gem5 community,
Any suggestions on this topic?


On Thu, May 27, 2021 at 11:29 AM Chongzhi Zhao  wrote:

> Update:
> To enforce strict inclusivity, I changed BaseCache::handleEvictions() as
> shown below (my additions are the block between the "Added" and "End of
> added code" comments). I also added "fully_incl" to the clusivity Enum in
> Cache.py.
> Would this make caches strictly inclusive by back-invalidation upon
> eviction?
>
> bool
> BaseCache::handleEvictions(std::vector<CacheBlk*> &evict_blks,
>                            PacketList &writebacks)
> {
>     bool replacement = false;
>     for (const auto& blk : evict_blks) {
>         if (blk->isValid()) {
>             replacement = true;
>
>             const MSHR* mshr =
>                 mshrQueue.findMatch(regenerateBlkAddr(blk), blk->isSecure());
>             if (mshr) {
>                 // Must be an outstanding upgrade or clean request on a block
>                 // we're about to replace
>                 assert((!blk->isSet(CacheBlk::WritableBit) &&
>                         mshr->needsWritable()) || mshr->isCleaning());
>                 return false;
>             }
>         }
>     }
>
>     // The victim will be replaced by a new entry, so increase the
>     // replacement counter if a valid block is being replaced
>     if (replacement) {
>         stats.replacements++;
>
>         // Evict valid blocks associated to this victim block
>         for (auto& blk : evict_blks) {
>             if (blk->isValid()) {
>                 // Added: before evicting this victim block, snoop
>                 // upper-level caches and attempt to invalidate existing
>                 // copies of the same block if the cache is inclusive.
>                 if (clusivity == Enums::fully_incl) {
>                     RequestPtr req = std::make_shared<Request>(
>                         regenerateBlkAddr(blk), blkSize, 0,
>                         Request::invldRequestorId);
>                     if (blk->isSecure())
>                         req->setFlags(Request::SECURE);
>                     req->taskId(blk->getTaskId());
>
>                     PacketPtr invl_pkt = new Packet(req, MemCmd::InvalidateReq);
>                     invl_pkt->setExpressSnoop();
>                     invl_pkt->senderState = nullptr;
>                     cpuSidePort.sendTimingSnoopReq(invl_pkt);
>                 }
>                 // End of added code.
>
>                 evictBlock(blk, writebacks);
>             }
>         }
>     }
>
>     return true;
> }
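>
> For reference, a minimal sketch of the Cache.py side of the change
> (assuming the value is added to the existing Clusivity Enum; the stock
> values mostly_incl and mostly_excl are kept, and fully_incl is the
> addition):
>
> # src/mem/cache/Cache.py (sketch)
> from m5.params import *
>
> class Clusivity(Enum):
>     vals = ['mostly_incl', 'mostly_excl', 'fully_incl']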
>
>
> On Thu, May 13, 2021 at 8:41 PM Chongzhi Zhao  wrote:
>
>> Hi gem5 community,
>>
>> TL;DR:
>>
>>1. In "classic" memory, the current 2 options, mostly_incl and
>>mostly_excl, seem to apply only to cache fill but NOT eviction. As a
>>result, blocks evicted from L2 may still be present in L1. Is my
>>understanding correct?
>>2. What would be a reasonable way to enforce strict inclusivity and
>>exclusivity?
>>
>> --
>>
>> Details:
>> This link may provide some background:
>> https://m5-dev.m5sim.narkive.com/qRrXUtt7/gem5-dev-review-request-3156-mem-add-cache-clusivity-to-steer-behaviour-on-fill
>>
>>1. In the case of a replacement eviction, BaseCache::allocateBlock()
>>finds a victim and then calls BaseCache::handleEvictions() >>
>>BaseCache::evictBlock() >> Cache::evictBlock(). As a result, a packet is
>>pushed into the writebacks list. doWritebacks() doesn't seem to bother
>>upstream caches that hold the same block.
>>2. To implement strict inclusivity, the most obvious way to me is
>>letting doWritebacks() send an invalidation packet through cpuSidePort. But
>>I don't know how to stitch these things together without breaking other
>>stuff.
>>
>>

[gem5-users] Re: Understanding write timing in MemCtrl

2021-06-22 Thread Jason Lowe-Power via gem5-users
Hi Vincent,

It depends on when/how you're ending the simulation. If you end the
simulation at some particular tick, then you'll see writes left in the
write queue. Just like a real machine, writes don't happen instantaneously,
and at some point in time, there are writes sitting in the write buffer
(and dirty data in the cache, too). In gem5, like a real system, if you
wanted to ensure everything is flushed to persistent storage, you could
call a flush system call. Also like a real system, there is no instruction
to flush the memory controller write queues. The data there is
architecturally visible, so it doesn't matter if it's in the write queue or
in memory.

If for some reason you really need all of the data in gem5's backing memory
(e.g., to take a checkpoint), you can call the drain() function which will
dump everything.
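
If you do need that, a minimal sketch of what it might look like in a run
script (the checkpoint directory name is just an example, and I believe
m5.checkpoint() also drains internally, but calling drain() explicitly makes
the intent clear):

import m5

# ... system configured and m5.instantiate() already called ...
m5.simulate(1000000000)  # run for a while (1e9 ticks here, as an example)

# Drain everything still queued in the simulated system (e.g., the memory
# controller's write queue) into gem5's backing memory, then serialize it.
m5.drain()
m5.checkpoint("m5out/cpt.example")  # example checkpoint directory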

I believe you're really asking about timing accuracy, though. If that's the
case, I have two comments: (1) I would expect that your program runs long
enough that 32 cache lines which haven't been written back to memory make no
difference in the overall execution time. And (2) if you really need to model
things at this level of detail, you should probably be using full system mode
so you can correctly model syscalls, etc.

Hopefully this answers your question!

Cheers,
Jason

On Tue, Jun 22, 2021 at 8:06 AM Vincent R. via gem5-users <
gem5-users@gem5.org> wrote:

> Hi again,
>
> just wanted to give this a second try. No urgent matter here, just some
> lack of understanding and curiosity on my side.
>
> Thank you,
> Vincent
>
>
> On 04.06.2021 at 11:34, Vincent R. wrote:
> > Hi everyone,
> >
> > I am currently doing some experiments with packet timings in the
> > memory controller (gem5 version 20.1.0.2, SE mode). As I understand
> > it, write accesses are serviced instantly by the controller, and their
> > actual timing is only calculated later, when the corresponding
> > nextReqEvent is processed and the packet is removed from the write queue.
> > This works fine; however, with the default parameters there are still
> > a lot of write packets left in the queue when the simulation exits.
> > So, in my understanding, these are never correctly timed.
> >
> > While experimenting, I set the write threshold parameters of MemCtrl
> > so as to force the controller to process all writes. Naturally, the
> > run time increases by a large number of ticks.
> >
> > My question: How does gem5 perform a correct timing simulation while
> > leaving untimed writes in the queue at the end of simulation? Or
> > doesn't it? Have I misunderstood something?
> >
> > Test system is a simple example configuration without caches.
> >
> >
> > Thank you for your help.
> > Vincent

[gem5-users] Re: Understanding write timing in MemCtrl

2021-06-22 Thread Vincent R. via gem5-users

Hi again,

just wanted to give this a second try. No urgent matter here, just some 
lack of understanding and curiosity on my side.


Thank you,
Vincent


On 04.06.2021 at 11:34, Vincent R. wrote:

Hi everyone,

I am currently doing some experiments with packet timings in the
memory controller (gem5 version 20.1.0.2, SE mode). As I understand
it, write accesses are serviced instantly by the controller, and their
actual timing is only calculated later, when the corresponding
nextReqEvent is processed and the packet is removed from the write queue.
This works fine; however, with the default parameters there are still
a lot of write packets left in the queue when the simulation exits.
So, in my understanding, these are never correctly timed.


While experimenting, I set the write threshold parameters of MemCtrl
so as to force the controller to process all writes. Naturally, the
run time increases by a large number of ticks.
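
(For reference, a rough sketch of the kind of parameter change I mean,
assuming the MemCtrl parameter names of recent gem5 versions -- check
src/mem/MemCtrl.py for your version; the values are purely illustrative:)

# mem_ctrl is the MemCtrl instance created in the config script
mem_ctrl.write_high_thresh_perc = 1  # start servicing writes almost immediately
mem_ctrl.write_low_thresh_perc = 0   # keep servicing until the queue is empty
mem_ctrl.min_writes_per_switch = 1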


My question: How does gem5 perform a correct timing simulation while 
leaving untimed writes in the queue at the end of simulation? Or 
doesn't it? Have I misunderstood something?


Test system is a simple example configuration without caches.


Thank you for your help.
Vincent



[gem5-users] Re: Read request and writebacks in gem5

2021-06-22 Thread Aritra Bagchi via gem5-users
Any help from anyone is appreciated. Thanks!

Regards,
Aritra



On Tue, Jun 22, 2021, 01:18 Aritra Bagchi  wrote:

> Hi,
>
> Could anybody help me understand what happens in gem5 when a read request
> reaches a cache (say L3) while L2's write queue holds a pending writeback
> (one that has not yet been written to L3) for the same block as the
> read request? Does the read request get serviced from the write queue,
> since the writeback has the most recent data? If so, where in gem5 can I
> find the code for this?
>
> Thanks and regards,
> Aritra
>
>

[gem5-users] HMC in SE mode using a single vault controller

2021-06-22 Thread hissa alshamsi via gem5-users
Hi everyone,

I am trying to use HMC in SE mode. The problem is that when I run the hello-world
binary (or any other, larger benchmark), stats.txt shows that only one vault
controller is being used. I don't know why the other controllers appear to
be in the IDLE state with zero values.

I have used this command to run the simulation:

build/X86/gem5.opt configs/example/se.py --cpu-type=DerivO3CPU --cpu-clock=2GHz 
--caches --l2cache --mem-type=HMC_2500_1x32 
--cmd=tests/test-progs/hello/bin/x86/linux/hello

Can someone explain why this is happening, and how I can utilize all the
vault controllers?

Thank you in advance,
Hessa.