[gem5-users] Re: finding tag of address in cache in ruby model

2022-04-13 Thread Gabriel Busnot via gem5-users
___
gem5-users mailing list -- gem5-users@gem5.org
To unsubscribe send an email to gem5-users-le...@gem5.org

[gem5-users] Re: Compilation time

2022-03-24 Thread Gabriel Busnot via gem5-users
Hi,

I've also experienced long or very long link times with the GNU linker. I suggest 
you try LLD, the LLVM linker, which has been significantly faster for me.

Regards,
Gabriel


[gem5-users] Re: Read main memory directly in the timing model

2022-03-21 Thread Gabriel Busnot via gem5-users
Hi,

I believe what you are trying to achieve is what is commonly called a 
"functional access". However, I am not sure what you intend when you say you 
want to "bypass caches in order to maintain coherence", as that sounds 
contradictory to me.

If you choose to mix regular and cache-bypassing accesses to the same memory 
location without extra care, you will likely violate coherence and eventually 
face inconsistent data. The "extra care" usually consists of explicitly 
flushing the caches to update main memory before bypassing them with 
subsequent accesses.

If you only want to maintain coherence, then you have to perform "direct memory 
accesses" using the functional access mode of the regular gem5 ports. This 
results in an instantaneous access to the data you are looking for, wherever 
its most up-to-date copy resides, while updating the cache state to maintain 
coherence.

Finally, if you really want to access main memory content directly and you 
understand the coherence violations this can imply, you can connect your 
initiator to memory through an extra request port on the initiator side, bound 
to the memory response port via a small XBar such as the IOXBar. You will also 
have to rebind to that XBar the port previously connected to the memory 
response port, and connect the XBar output to memory. Don't forget to set the 
IOXBar latency parameters to zero and its width to more than a cacheline 
(likely 512 bits) in order not to introduce extra delays (it will still incur 
an extra clock cycle of delay, which is unlikely to cause any issue).

Best,
Gabriel


[gem5-users] Re: Inquiry on the gem5 communities and forums

2022-02-18 Thread Gabriel Busnot via gem5-users
Hi and welcome Jianda,

You are in the right place! Feel free to subscribe to this mailing list to get 
notified of new posts. You can also post any question related to gem5; you 
will usually get an answer within a working day or two. You can also monitor 
Jira to learn about what is going on in the background 
(https://gem5.atlassian.net/jira/software/c/projects/GEM5/issues).

Best,
Gabriel


[gem5-users] Re: Re: Does the gem5 v21.0.1.0 support to bootup with kernel 5.10 in Ruby-CHI and O3

2022-02-17 Thread Gabriel Busnot via gem5-users
Hi Liyichao,

You might be in luck! A patch fixing non-HN shared CHI cache has been pushed to 
develop yesterday. You can cherry-pick it here: 
https://gem5-review.googlesource.com/c/public/gem5/+/56810.
You can also have a look at the patch attached to the following issue: 
https://gem5.atlassian.net/browse/GEM5-1185. That patch configures a shared L2, 
but it should work about the same for a shared L3. Your diff looks OK to me, 
but it is hard to say for sure.

If this doesn't work for you, can you please file a Jira issue with detailed 
reproduction steps for the bug?

Regards,
Gabriel


[gem5-users] Re: Use of std::ostream in OutputStream class

2022-02-10 Thread Gabriel Busnot via gem5-users
Hi Scott,

This is regular use of polymorphism through virtual member functions: you 
**dynamically** allocate some subclass of std::ostream (e.g., std::ofstream) 
and store the returned pointer value in a pointer to the base class 
(std::ostream*). When you delete through this pointer, regardless of the 
static type of the variable you call delete on, the virtual destructor 
mechanism ensures that the destructor of the subclass you allocated is called.

Thus, you could call delete on _stream and have your stream properly closed.

Having said that, I would strongly advise against manual use of new and delete, 
especially when you are not the owner of the pointer you are playing with. Just 
call flush on the stream instead; this eventually synchronizes the most derived 
class through pubsync (https://en.cppreference.com/w/cpp/io/basic_ostream/flush).

Best,
Gabriel


[gem5-users] Re: Destructor for BaseCPU

2022-02-10 Thread Gabriel Busnot via gem5-users
Hi,

If I remember correctly, SimObjects are never destructed, although they have a 
virtual destructor; their memory is reclaimed by the OS at process exit. Thus, 
I would also recommend not using the destructor and preferring 
registerExitCallback() instead.

Gabriel


[gem5-users] Re: Snoop Directory Size

2022-02-08 Thread Gabriel Busnot via gem5-users
Hi Majid,

I'm no expert in the classic cache model but let me share my understanding of 
the issue.

First, did you notice there are two CoherentXBars in your system? You have the 
system XBar (system.membus) and the L1s-to-L2 XBar, and you are currently 
looking at the latter. This crossbar should be considered an implementation 
detail, as in a real system the crossbar is part of the shared L2.

But your questions remain valid, except that the L2 XBar should require only 8 
lines, since the L2 sits below the L2 XBar and is therefore outside its snoop 
domain. 16 lines should be enough for the membus. However, this would only hold 
if lines were evicted before fill requests were sent downstream. It looks like 
this is not the case: a cache first issues a read and then evicts a line, if 
needed, when it gets the data back (see, in BaseCache, 
recvTimingResp->handleFill->allocateBlock->handleEvictions->evictBlock->{writebackBlk
 | cleanEvictBlk}). This should answer (2), if I'm correct.

As for (1), I think you are right: the check should be located right at the 
allocation point to be accurate. Yet, consider it merely a safety check in case 
the directory grows to an unreasonable size and hogs host memory; it is not a 
threat to simulation correctness. You are not really supposed to fit the 
directory size to your particular cache hierarchy. Actually, I think you can't, 
as you cannot easily bound the number of outstanding fill and evict requests in 
your system. I personally measured up to 12 allocated entries in the L2 XBar 
snoop filter against 8 L1 cachelines, and up to 17 entries in the membus snoop 
filter against 16 L1 + L2 cachelines. It might reach more with a different 
benchmark. Using a "big enough" maximum directory size is the recommended 
approach.

In real systems, the directory size and associativity are carefully chosen so 
as to neither waste nor lack resources, based on the cache hierarchy 
configuration (size, inclusivity, associativity) and the coherence protocol. 
Directories are sized for functional requirements first, as opposed to caches, 
which are sized for perf/area/power considerations first. In gem5, directory 
entries are allocated as needed and deallocated in a timely manner, which is 
optimal to model the functional behavior of a directory sized never to run out 
of available entries, as in most (all?) systems I am aware of.

If you want to experiment with exotic snoop filters, you might need to 
implement them yourself. Note that the classic caches in gem5 are not meant to 
model coherence traffic accurately and mostly focus on functional coherence. 
Ruby (e.g., the CHI protocol) models coherence more accurately but does not 
allow configuring the directory size in the current implementation. If you 
don't plan on studying snoop filters, just leave the max size at "plenty 
enough" and forget about it, I guess ;)

Regards,
Gabriel


[gem5-users] Re: Read Clean Request Packets

2021-12-08 Thread Gabriel Busnot via gem5-users
Hi Aritra,

When a cache access misses, the cache in turn issues a request to the 
next-level cache or memory to fetch the line. Depending on whether the line 
needs to be read or written, and on other heuristics and policies, the cache 
will require the line it gets back to have certain attributes. Uniqueness 
(exclusivity) and cleanness (equality with memory content) are the most 
commonly used attributes.

In the case of ReadCleanReq, the missing line is requested clean, that is, up 
to date with memory. The line will thus be received as either shared-clean or 
unique-clean. A clean line is typically requested when a cache knows it is 
likely to evict it unmodified (i.e., still in a clean state). In gem5, this is 
assumed when a cache is read-only or (mostly) exclusive.

Best,
Gabriel


[gem5-users] Re: Reissuing a Load/Store from sequencer/L1 cache controller

2021-12-02 Thread Gabriel Busnot via gem5-users
Hi Vipin,

The solution will depend on the way you want to retrieve your saved messages.

If you want to retrieve them in the same order you saved them, just add an 
extra MessageBuffer to the cache machine, initialize it in the python script, 
and enqueue/dequeue the messages you want to save (hint: with gem5 v21+, you 
should be able to write out_msg := in_msg instead of copying all fields by 
hand).

If you do not store too many messages, you can also try the recycle feature of 
MessageBuffer to cycle through messages and find the one you are looking for. 
Because ruby does not support loops, you will have to hack the built-in 
controller wake-up loop. The details would be a bit too long to expose here and 
I don't have time to test my idea... but it would definitely make for a fun 
exercise ;)

You can also store inside the TBE all the information you need to reconstruct 
the message when needed.

A last approach that should work for associative lookup: define your own 
container, similar to TBE but with support for non-default-constructible types. 
This will be hacky, but sometimes ruby does not leave you the choice. Here is a 
very simple example that you can adapt to your needs:

1. Put "#include <unordered_map>" in a new file 
"src/mem/ruby/protocol/std::unordered_map.hh" (do not forget the std:: in the 
file name, as ruby will automatically include this exact file because of 2.)
2. Add

  structure(std::unordered_map, external="yes") {
      void emplace(Addr, MyMsgType);
      MyMsgType at(Addr);
      void erase(Addr);
  }
  std::unordered_map myMapName, template="<Addr, MyMsgType>";

to your machine.
3. You can now write
   myMapName.emplace(addr, in_msg);
   out_msg := myMapName.at(addr);
   myMapName.erase(addr);

This is very hacky and I don't recommend it for anything other than internal 
use, but it should work like a charm. If you need something cleaner, you will 
have to dive into ruby's code generation and how it interfaces with C++. You 
will then be able to define arbitrarily complex C++ constructs usable from 
inside ruby.

And if I am missing a more idiomatic solution, anyone please enlighten me ;)

Best,
Gabriel


[gem5-users] Re: Gem5 stuck when simulate C++ workload in SE mode

2021-11-30 Thread Gabriel Busnot via gem5-users
Hi Victor,

No, it won't, because the random number generator that you seed with seedRandom 
is independent of the one you seed inside your program. seedRandom seeds 
random_mt, declared in base/random.hh, which is not accessible from inside your 
program as of the latest version of gem5. You can easily fix that:

First, go to kern/linux/linux.cc:130 (inside devRandom()) and replace 
"random.random(0, 255)" with "random_mt.random(0, 255)".
Second, read /dev/random inside your program to seed your internal 
pseudo-random number generator. That seed will then be derived from random_mt, 
which you yourself seed using seedRandom.

As an alternative to the first step, you can define a python binding in 
python/pybind11/core.cc (or update the existing seedRandom binding) to seed the 
random generator currently used by devRandom(). This is a bit more work, but it 
guarantees that the value you get from /dev/random does not depend on what 
happens in the rest of the simulator.

Gabriel


[gem5-users] Re: Introducing randomness to gem5

2021-11-28 Thread Gabriel Busnot via gem5-users
Hi Victor,

From what I know, you might not get "true" randomness inside the simulated 
program that easily. And that is actually a good property for a simulator: its 
behavior should not change depending on the state of the outside world (e.g., 
time). I would rather get the seed from the command line and pass it using the 
"--options" option if you are using se.py or fs.py. That way, you control your 
seed and are free to use one freshly generated on your host or to reuse an 
earlier one. Being able to reuse the same seed is something you will probably 
want at some point in order to reproduce a simulation; non-deterministic 
programs are often a nightmare to debug.

Best,
Gabriel


[gem5-users] Re: Introducing randomness to gem5

2021-11-25 Thread Gabriel Busnot via gem5-users
Hi Victor,

It MAY result in different outputs, depending on which components you are using 
in gem5. As stated in the post you cited, the main random number generator, 
which is seeded by seedRandom (random_mt), is not always used in gem5, which is 
an issue. Also, not every component in gem5 supports randomization. If you are 
using Ruby, though, you should observe different outputs with different seeds.

If a component you use supports randomization but uses the wrong number 
generator, replace it with random_mt (src/base/random.hh). If you think that 
some components could add randomness but don't, you can add it yourself. 
Typically, you can add random delays to packets when ordering is not a 
functional requirement. Take a look at MessageBuffer::enqueue for an example.

Best,
Gabriel


[gem5-users] Re: 'm_stall_time' stat in MessageBuffer

2021-11-24 Thread Gabriel Busnot via gem5-users
Hi Daecheol,

I agree with you: the m_stall_time calculation does not make much sense to me 
either. It is unreliable at best if you just look at its variation from one run 
to the next. And looking at it again, I think it is even buggy since the 
default copy constructor for Message was added (m_time is not updated upon 
copy).

I think a distribution stat would be the most natural representation for 
message "stall time". The biggest drawback is that you need to specify a min 
and a max value for the distribution: everything below (resp. above) this range 
is stored in a single "below min" (resp. "above max") bucket. Min is obviously 
0 for stall times, but max is harder to define in general, especially if the 
tick frequency changes. As a rule of thumb, I would set max slightly above the 
highest value you consider acceptable. Maybe add a MessageBuffer parameter so 
that it can be specified from the python script.

Best,
Gabriel


[gem5-users] Re: Reg v21.2 release

2021-11-10 Thread Gabriel Busnot via gem5-users
Hi,

I am not aware of a roadmap for any new version on the gem5 website yet.
But you can still use the develop branch of the public gem5 repository: this is 
where the latest patches land on a day-to-day basis before making their way to 
stable upon release. I believe this branch to be very usable, as commits go 
through unit testing and code review... And if you face a bug, feel free to 
contribute a Jira issue, or even a patch that will make it into the next 
release ;).

Best,
Gabriel


[gem5-users] Re: CHI prefetcher on V21

2021-11-10 Thread Gabriel Busnot via gem5-users
Hi,

The prefetcher implementation appears unchanged and incomplete since the first 
release of the CHI protocol: in particular, the notifyPf* functions in 
CHI-cache-funcs.sm are still all empty. I am also not aware of any related open 
Jira issues.

Regards,
Gabriel


[gem5-users] Re: Expose C++ enum in SLICC

2021-11-08 Thread Gabriel Busnot via gem5-users
Hi Sampad,

I don't think you can import C++ enums into SLICC in a general and safe way. 
SLICC does not support "external enums", so any enum declared in SLICC results 
in a C++ enum being generated under the gem5::ruby namespace.

One way of safely using external enums (any namespace, scoped enums, etc.) is 
to:
   - define functions that return the enum values you want to use (one function 
per enum value, ideally)
   - declare these functions in a file included in RubySlicc_includes.hh
   - declare the enum name as an external primitive type in SLICC (see 
RubySlicc_Exports.sm for an example)
   - declare the functions returning enum values in SLICC before using them.

You won't be able to use the SLICC enum syntax, but you will remain type- and 
value-safe across C++ and SLICC.

Regards,
Gabriel


[gem5-users] Re: Using ruby_random_test for CHI protocol

2021-11-08 Thread Gabriel Busnot via gem5-users
Hi,

You basically have an address map or controller hierarchy specification issue.

CHI uses a different address-to-MachineID mapping scheme than previous 
protocols. CHI adds the concept of a "downstream machine", in the sense of 
"downstream in the memory hierarchy". When calling 
AbstractController::mapAddressToDownstreamMachine(), which is the 
hierarchy-aware version of AbstractController::mapAddressToMachine(), only the 
controllers below the current AbstractController are searched.

In your case, it seems that no controller of the required type is mapped to the 
required address in the caller's downstream hierarchy. Check the address ranges 
of your components and the calls to CHI_Node.setDownstream() in your python 
script. You can also inspect the generated .ini config file to check these 
parameters.

Regards,
Gabriel


[gem5-users] Re: Coherent NoC with Gem5

2021-10-20 Thread Gabriel Busnot via gem5-users
Hi Tung,

You have a bunch of test and trace CPUs available under 
src/cpu/{testers,trace}. The trace CPU replays traces recorded with the 
DerivO3CPU, while the testers generate synthetic traffic. If you want to replay 
your own traces, you must either convert them to gem5 traces or implement your 
own trace reader.

The elastic trace base format is pretty simple 
(https://www.gem5.org/documentation/general_docs/cpu_models/TraceCPU###Trace-file-formats),
 so converting from a different trace format should not be too difficult as 
long as you do not care too much about dependencies and the like.

Implementing your own trace reader is also a viable option if you already have 
the trace parser logic available.

Regards,
Gabriel


[gem5-users] Re: Coherent NoC with Gem5

2021-10-11 Thread Gabriel Busnot via gem5-users
Hi,

Yes, there is. Look at the CHI protocol. It is compiled by default starting 
from gem5 20.0. You can find the documentation here: 
https://www.gem5.org/documentation/general_docs/ruby/CHI/

Other protocols are available in src/mem/ruby/protocol: MI, MSI, MESI and MOESI 
in different flavors.

Regards,
Gabriel


[gem5-users] Re: Seg. Fault while "Creating a simple configuration script"

2021-10-08 Thread Gabriel Busnot via gem5-users
Hi,

1 and 2 are inoffensive warnings. To solve 3, can you please compile gem5.debug 
and provide us with the output you get from it? The call stack will then 
contain symbols, as opposed to the one you copy-pasted. Also, can you attach 
your configuration script?

Regards,
Gabriel


[gem5-users] Delta cycle

2021-09-24 Thread Gabriel Busnot via gem5-users
Hi,

Is there a way to obtain the delta-cycle semantics of SystemC using the 
existing event management features in gem5?

In other words, I would like to be able to delta-schedule an event at the 
current tick, doing something like eventManager.schedule(event, curTick(), 
delta=true), so that it triggers after all events already scheduled at the 
current tick *and* after all events that are going to be immediately scheduled 
at the current tick by calling eventManager.schedule(event, curTick(), 
delta=false).

Looking at the current event scheduling logic, it looks like the event queue 
actually behaves as a LIFO for same-tick events, which is the complete opposite 
of delta-cycle semantics. A FIFO would be somewhat closer, but still different 
from a true delta cycle, since in both cases it is still possible to end up 
with hard-to-anticipate event interleavings.

A poor workaround would be to schedule at curTick()+1, which introduces a 
negligible imprecision. However, it gets worse if the event handler then calls 
clockEdge(), as it will return the next clock edge, significantly amplifying 
the inaccuracy.

Thanks,
Gabriel


[gem5-users] Re: Simulation Frequency is always constant - 1THz

2021-09-23 Thread Gabriel Busnot via gem5-users
Hi,

Arthur's answer still stands. As he said, sim_freq is the time resolution of 
the simulator itself: it defines the points in time at which you are 'allowed' 
to schedule events. By default, you can schedule an event every 1e-12 seconds 
of simulated time (1 THz). That does not mean all components in the model take 
action at that frequency. The CPU model, for instance, uses its own frequency 
(cf. the ClockedObject class) to compute how many simulation ticks have to 
elapse before its next clockEdge().

Regards,
Gabriel


[gem5-users] Re: regarding gem5

2021-09-23 Thread Gabriel Busnot via gem5-users
Are you talking about Ruby, Garnet and the like? They are perfectly available 
at the link I gave you. git clone will provide you with gem5 v21.1.0.2, the 
very latest stable release as of today. There is no "special" public version of 
gem5 that I am aware of, anyway.

The commands I gave you are, word for word, the commands you need to build gem5 
with Garnet standalone once the dependencies are installed using apt. Follow 
these steps very carefully if you have no luck with my commands: 
https://www.gem5.org/documentation/learning_gem5/part1/building/.

Gabriel


[gem5-users] Re: regarding gem5

2021-09-22 Thread Gabriel Busnot via gem5-users
Just read the error message you get; it cannot be more explicit than that: the 
SConstruct file is missing from your gem5_gt directory. And not knowing where 
this gem5_gt directory comes from or what it contains, there is little more I 
can say.

git clone https://gem5.googlesource.com/public/gem5
cd gem5
scons build/NULL/gem5.opt PROTOCOL=Garnet_standalone -j2

This should work.

And one more time: do not send screenshots! Copy-paste the content of your 
terminal instead. And read again the long answer I gave you a few days ago 
about how to ask questions in an efficient and informative manner.


[gem5-users] Re: Running gem5.opt via GDB

2021-09-19 Thread Gabriel Busnot via gem5-users
Hi,

SIGTRAP is what you should observe when hitting a breakpoint, so no error 
there. The missing file is probably because you did not install the system 
debug-symbols package; you probably don't need it, as the bug likely happens 
before any system functions are called. Finally, the segmentation fault is an 
actual bug. I would recommend using gdb on gem5.debug instead of gem5.opt, 
though: optimization often strips out too much information for gdb to be 
usable.

Regards,
Gabriel


[gem5-users] Re: regarding ERRORS in building of X86 ISA

2021-09-08 Thread Gabriel Busnot via gem5-users
What about running scons inside the gem5 directory instead of your home 
directory? And don't run it with sudo.


[gem5-users] Re: regarding dependencies

2021-09-07 Thread Gabriel Busnot via gem5-users
One more time: this is not a dependency error! Read carefully the lines about 
 being not found: they are marked "WARNING". This is an OPTIONAL feature, 
resulting in a non-fatal warning when it is not present.

Your problem is a couple of lines above: "fatal error: ld terminated with 
signal 9 [Killed]". Again, this is symptomatic of your system RUNNING OUT OF 
PHYSICAL MEMORY. Read this and try to confirm the diagnosis: 
https://docs.memset.com/other/linux-s-oom-process-killer.

Not knowing which version of gem5 you are using, maybe you have LTO enabled 
(pre-v21.0.1 build). LTO, for link-time optimization, makes ld use A LOT more 
memory. Try adding --no-lto and see what you get.

Now some general information and advice to efficiently ask for help on forums:
1. Do your own research
2. Do your own research again
3. Do your own research one last time
If after these three steps you still haven't fixed your issue, then it is 
reasonable to come ask for help, and people will be happy to spend some of 
their own time helping you solve it. Here is what makes a good question:
4. Tell what you are trying to do.
5. Tell what you did to achieve it that did not work.
6. Copy paste (or attach if too long) the whole output you got when doing it. 
Use copy pasting for that instead of screenshots.
7. Give information about your environment: OS version, gem5 version, compiler 
version, etc.
After you got answers back:
8. Read carefully ALL answers to your question
9. Apply their recommendations after understanding what they do. Do not take 
actions you don't understand on your computer if they don't come from a trusted 
source. A forum is not a trusted source in that respect.
At this point, if you have a different problem, go back to step 1 as you might 
be able to solve this new issue yourself. If you have the same problem, jump 
back to step 5.
And if you find yourself an answer to your question:
10. Post an answer to your own question to help other people in the future.

And finally, this is the 7th thread you have opened on the forum within a 
single week: one for general help (that's OK), one because of a typo (steps 1 
to 3 would have solved this one hands down), two about docker despite my 
answering the first time (google would have solved this one for you just as 
well as it did for me; the answer is even part of the official Docker tutorial, 
as it is so common), and 3 about that same killed-program issue, which is now 
being answered the same way for the 3rd time (google would also have answered 
this one easily). Please don't spam help forums. Take the time you need to 
learn about the tools you are using; this is part of what a PhD is for and what 
is expected from you.

Have fun in your research,
Gabriel


[gem5-users] Re: first gem5 build

2021-09-05 Thread Gabriel Busnot via gem5-users
As already answered in your previous question, you are likely running out of 
memory, and Linux is killing the offending process: the gem5 link step. As 
already advised, increase your VirtualBox memory to at least 6 GB. Also open 
your browser in your native system instead of in VirtualBox, as it is a memory 
hog. If you can't increase your VirtualBox memory, consider using a swap 
partition: it will kill your performance, but at least you will get a complete 
build. And if 6 GB in your VirtualBox is not an option, I would strongly advise 
swapping your workstation altogether; 16 GB on the host is a bare minimum to 
work comfortably in a virtual machine.

Gabriel


[gem5-users] Re: Regarding permission denied

2021-08-31 Thread Gabriel Busnot via gem5-users
Hi,

I already answered a similar question from you. You need to be root or to be 
part of the docker Unix group as detailed here: 
https://www.journaldunet.fr/web-tech/developpement/1497415-comment-corriger-l-erreur-docker-got-permission-denied/.

Again, I would strongly advise that you get familiar with the tools you are 
using by reading some of the tutorials that are plentiful on the web. 
Also, the link I provided is the first answer you get from google when 
searching "docker permission denied". Please consider doing your own research 
before seeking answers here ;) Most problems in the life of a beginner 
computer scientist can be solved by a few minutes of googling.

Have fun with your research,
Gabriel


[gem5-users] Re: regarding gem5

2021-08-31 Thread Gabriel Busnot via gem5-users
Hi Sravani,

I can't find the '1s' you are talking about.

Overall, I would recommend reading a python tutorial as well as a C++ tutorial 
if you want to feel comfortable enough while reading "Learning gem5".

Dependencies are either installed using apt (or any regular package manager) or 
built with gem5 automatically.

Regards,
Gabriel


[gem5-users] Re: Regarding docker file

2021-08-31 Thread Gabriel Busnot via gem5-users
You need to be root to connect to the docker daemon. You can also add yourself 
to the docker group and reboot, if I remember correctly.

Gabriel


[gem5-users] Re: CHI, Ruby - changing cacheline size

2021-08-27 Thread Gabriel Busnot via gem5-users
Hi again ;),

Yes, Ruby should support any cacheline size (at least as long as it is a power 
of two).
And yes, you need to change the parameter O3CPU.fetchBufferSize, which defaults 
to 64 bytes. Not sure whether it has any other implications, but O3_ARM_v7a_3 
sets it to 16, for instance.

Regards,
Gabriel


[gem5-users] Re: CHI and Ruby Cache block size

2021-08-17 Thread Gabriel Busnot via gem5-users
Hi Javed,

First a note about the relationship of classes involved in the CHI model.
The L1 and L2 CHI caches are not derived from python class RubyCache 
corresponding to C++ class CacheMemory. CacheMemory is only responsible for 
managing the memory of the cache, not the protocol that goes around it. L1 and 
L2 CHI caches are 2 configurations of the same Cache_Controller class (same 
name in C++ and Python) generated by SLICC after the "Cache" state machine of 
the CHI protocol. A CacheMemory instance is then an attribute of this 
CHI_Cache_Controller class (composition in OOP).

About the block size, now. Like all other Ruby protocols released in gem5, CHI 
expects the entire RubySystem to use the same block size. For convenience, the 
block size is stored as a static member of the RubySystem 
class and accessed from there by many base components of a ruby model: Address, 
DataBlock, AbstractController, RubyPort, etc. The block size effectively is a 
constant global parameter for the whole simulation.

While convenient, this is both limiting when modeling more exotic systems and 
bad OOP practice. A refactoring of this part would be more than welcome, 
including by me ;) Not an easy job, though.

Best,
Gabriel


[gem5-users] Re: simulation error in gem5 v21.1

2021-08-13 Thread Gabriel Busnot via gem5-users
Hi Boya,

This is indeed an argparse glitch. I am not aware of a way to specify an option 
so that the next command-line argument must be treated as that option's value. 
https://stackoverflow.com/a/16175115/11350445 confirms this behavior and 
recommends using an '=' sign to specify an option's argument in that case. 
Prepending a space to the argument, as you did, is also a valid workaround.

I believe this behavior is not directly specified by argparse, as "--an-option 
'--foo bar'" will fail if --foo is a valid option and succeed if not, while 
"--an-option '--foo'" will always fail, whether --foo is a valid option or not 
(according to my very rapid testing). This might be a bug, so if you feel like 
doing a little more testing, you could file an issue on the python bug tracker.
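As a quick illustration of the '=' workaround (with a hypothetical --cmd-args 
option standing in for the gem5 option in question):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--cmd-args")  # hypothetical option taking a free-form string

# parser.parse_args(["--cmd-args", "--foo"]) would fail: argparse treats
# "--foo" as an option name, not as the value of --cmd-args.

# The '=' form binds the value to the option unambiguously.
ns = parser.parse_args(["--cmd-args=--foo bar"])
assert ns.cmd_args == "--foo bar"

# Prepending a space also works, since " --foo" no longer starts with '-'.
ns = parser.parse_args(["--cmd-args", " --foo"])
assert ns.cmd_args == " --foo"
```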

Best,
Gabriel


[gem5-users] Re: CHI - Cluster CPUs having a shared L2 cache

2021-08-10 Thread Gabriel Busnot via gem5-users
Hi Javed,

I don't have a reliable answer for you. It is possible that the current CHI 
cache implementation cannot be shared, although it would surprise me. I would 
suggest you to ping Tiago Mück who wrote this stuff to ask him about that.

On your side, you can dig into the ProtocolTrace flag output. It can be quite 
easy to read once grepped for the cacheline you want. It will tell you how you 
got into the RU state which violates the strict inclusiveness of the L2 claimed 
by the CHI_L2Controller class. You could benefit from the RubyGenerated flag as 
well to trace the actions executed by each controller. Also use --debug-start 
to reduce trace size and pipe output to gzip to compress it significantly (read 
with zcat). I have a working patch available to accelerate tracing when piping 
to gzip but I didn't have time to finalize the tests yet. You can find it here: 
https://gem5-review.googlesource.com/c/public/gem5/+/45025.

Gabriel

[gem5-users] Re: How to redirect the fopen to open a different file instead of specifed filename in the program binary given as input to gem5 simulator in SE mode

2021-07-22 Thread Gabriel Busnot via gem5-users
Hi Gogineni,

If you REALLY CAN'T modify the app, I would then replace the file(s) your app 
opens with a symlink to the file you want to open.

And if you feel in a hacky mood, you can dive into src/sim/syscall_emul.hh and 
hardcode a path override in openatFunc. Needless to say, this is a terrible 
long-term solution, but a fun exercise to get in touch with syscall emulation ;)
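For the symlink approach, a minimal sketch (file names are hypothetical):

```python
import os
import tempfile

workdir = tempfile.mkdtemp()

# The file we actually want the app to read.
real_input = os.path.join(workdir, "real_input.txt")
with open(real_input, "w") as f:
    f.write("redirected data")

# The path hardcoded in the app binary (hypothetical name).
hardcoded_path = os.path.join(workdir, "input.txt")
os.symlink(real_input, hardcoded_path)

# The app's fopen("input.txt") now transparently reads real_input.txt.
with open(hardcoded_path) as f:
    assert f.read() == "redirected data"
```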

Gabriel


[gem5-users] Re: CHI - Cluster CPUs having a shared L2 cache

2021-07-22 Thread Gabriel Busnot via gem5-users
Hi Javed,

Woops, I didn't see the split option in your first post. My bad.

I think the l2 is actually named "system.cpu0.l1i.downstream_destinations" and 
you will find it in the ini file. I think this is due to the way gem5 sets 
SimObject names. When you assign a SimObject to several object attributes 
(cpu.l2, cpu.l1i and finally cpu.l1d), it will have one of the names according 
to the complex SimObject and SimObjectVector logic. for some reason, it does 
not end up as a child of cpu0.l1d despite it being the last in the list. I am 
regularly fighting SimObject naming logic as well, that's normal ;)

Also check the warnings in the output. Some of them will warn you about 
SimObject reparenting. Sadly, SimObject name is determined by the attribute you 
set it to and you are not supposed to change it.

I would then suggest two non-tested options. You can assign the L2 controller 
to cpu.l2 after registering it as downstream_components of l1i and l1d. Let's 
hope it will set the desired name.
The other "last resort" option is to violate SimObject._name privacy and set it 
manually after the SimObject has been assigned for the last time... I would 
advise against that, though.

Whenever possible, it is actually best to assign a SimObject at the time of 
object creation and never assign it again afterwards... Not always possible, 
though. Also make use of "private" attributes (i.e., attributes with a name 
starting with '_') as much as possible. It bypasses the SimObject assignment 
logic and solves many issues.
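To illustrate the naming pitfall outside of gem5, here is a toy sketch (not 
gem5's actual implementation; class names are made up) of why '_'-prefixed 
attributes sidestep the renaming logic:

```python
class SimObjectLike:
    def __init__(self):
        self._name = None

class Parent:
    def __setattr__(self, attr, value):
        # Mimic (very loosely) SimObject logic: assigning to a public
        # attribute names/reparents the child object.
        if not attr.startswith("_") and isinstance(value, SimObjectLike):
            value._name = attr
        super().__setattr__(attr, value)

p = Parent()
obj = SimObjectLike()
p.l2 = obj        # public assignment: obj gets named "l2"
p._l2_ref = obj   # '_'-prefixed assignment: plain reference, name untouched
assert obj._name == "l2"
```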

Gabriel


[gem5-users] Re: Gem5 stuck when simulate C++ workload in SE mode

2021-07-20 Thread Gabriel Busnot via gem5-users
Indeed, gem5 is designed to be deterministic so all random number generation 
should rely on a deterministically seeded random number generator. This random 
number generator normally is 'random_mt', located in src/base/random.cc. However, 
not all random numbers generated in gem5 rely on this generator yet.

Looking at syscall emulation implementations, it looks like a dedicated random 
generator is used here for emulating a read to /dev/urandom: 
src/kern/linux/linux.cc:126. You could replace "random.random(0, 255)" 
with "random_mt.random(0, 255)" to use the main random number 
generator. Not tested but should work. Then, you can seed random_mt from your 
python script using "seedRandom(int())". seedRandom is from the 
m5.core module. I personally like to use a --seed command line option to set 
the seed more conveniently.
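A sketch of such a --seed option (the seedRandom call is shown commented out, 
since the m5 module is only importable from within gem5):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--seed", type=int, default=0,
                    help="seed for gem5's main random number generator")
args = parser.parse_args(["--seed", "42"])  # e.g., gem5 ... config.py --seed 42

# Inside a gem5 config script you would then do:
# from m5.core import seedRandom
# seedRandom(args.seed)
assert args.seed == 42
```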

Gabriel


[gem5-users] Re: CHI - Cluster CPUs having a shared L2 cache

2021-07-20 Thread Gabriel Busnot via gem5-users
Hi Javed,

addSharedL2Cache is only called on lines 465 and 470 of CHI.py and these lines 
are touched only if options.littleclust_l2cache == 'shared'.
You don't set it in the command line and the default value is 'private', which 
explains why it never gets called.

Also I suspect that on lines 475 and 481 at least, you intended to call 
addSharedL2Cache instead of addPrivL2Cache.

Maybe you could start over from the original CHI_config.py file and add the 
addSharedL2Cache to CHI_RNF to check that it works in a simpler case. There 
should really not be much to change from the addPrivL2Cache implementation to 
get a shared L2. Walk your way up from a single-cluster, single-CPU 
configuration to a single-cluster, dual-CPU one, and then to a multi-cluster, 
multi-CPU configuration. Once you are sure about what needs to be done for a 
shared L2 cache to work, you will be able to clean up your current 
implementation from all these little bugs ;) I see no reason for this not to work.

Best,
Gabriel


[gem5-users] Re: Gem5 stuck when simulate C++ workload in SE mode

2021-07-20 Thread Gabriel Busnot via gem5-users
Hi Duc,

Having the print not displayed on screen does not necessarily mean that it has 
not been executed. It could just have been stuck in the stdout buffer, which 
was not flushed because of an abnormal program termination.

If you initialize your data with truly random and unbounded data, "number = 
block.number[i] >> 25" could be as big as (2^(64-25)-1), which is... huge, even 
taking the sqrt.
If you have not already done so, you should check that cap is not too big in 
some cases and that the program takes a reasonable amount of time when running 
natively on your host. By reasonable, I mean a couple of ms for a 
less-than-a-minute simulation on gem5.
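The bound above can be checked quickly:

```python
# A 64-bit unsigned value shifted right by 25 can still be up to 2**39 - 1.
max_u64 = (1 << 64) - 1
assert (max_u64 >> 25) == (1 << 39) - 1   # 2**39 - 1, about 5.5e11
assert int((1 << 39) ** 0.5) > 700_000    # even the sqrt exceeds 7e5
```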

And as a side note, if you want to stress your caches, you can have a look at 
the MemTest cpu in gem5. It is purposely built to generate back to back memory 
accesses with controlled line sharing.

Best,
Gabriel


[gem5-users] Re: Enabling L1 and L2 prefetchers in Ruby

2021-07-16 Thread Gabriel Busnot via gem5-users
This is a complex question. CHI is an ARM protocol, which obviously works for 
the ARM ISA... at least. Does it work for x86? There are good chances that the 
answer is "yes" because the x86 memory model is more restrictive than ARM's. If 
you need an assertive answer, though, I can't give it to you. This is tricky, as 
the ISA memory model will interfere with the cache coherence protocol 
properties. You might be in trouble when running apps that make use of atomic 
instructions and memory fences. If you are familiar with memory 
consistency/coherency issues, you know what I mean. If not, this is a good 
start for answering such a question ;)

And if anyone has a guess on that... thanks in advance ;)

Gabriel


[gem5-users] Re: Enabling L1 and L2 prefetchers in Ruby

2021-07-16 Thread Gabriel Busnot via gem5-users
Indeed, only MESI_Two_Level's L1 cache has a prefetcher, my bad. From here, 
your only two options to get an L2 prefetcher are either to use MESI_Two_Level 
or to add prefetcher support to MESI_Three_Level's L1 cache, which will not be 
an easy job, I think.

CHI, which is a MOESI implementation, should support prefetching "soon". I 
don't know exactly what is missing to support it.

Gabriel


[gem5-users] Re: Enabling L1 and L2 prefetchers in Ruby

2021-07-16 Thread Gabriel Busnot via gem5-users
Hi Majid,

Which protocol are you using?
With MESI_Two_Level and MESI_Three_Level, the L1 cache has an "enable_prefetch" 
boolean parameter, just like the L0.
You can then customize the prefetcher of each cache according to the options 
available in RubyPrefetcher.py.

Best,
Gabriel


[gem5-users] Re: Question about cross cache when accessing cache, please help

2021-07-15 Thread Gabriel Busnot via gem5-users
Hi Will,

I am not exactly sure about what your final goal is. I am assuming that you 
want a coherent cacheable write issued by L1 (typically caused by a write back) 
to bypass L2 and directly hit L3. I am pretty confident saying that bypassing a 
single cache level in gem5's classic cache system is not possible. The fact 
that, by definition of a cache hierarchy, there is no port connections between 
L1 and L3 but only between L1 and L2 and between L2 and L3, is a first reason 
for this being impossible. An access is either cacheable and will hit as high 
as possible in the hierarchy, or non cacheable and will traverse the entire 
cache hierarchy to reach the targeted device. 

However, if such behavior is really what you want, you could implement it 
yourself in at least two ways. You can either
(1) add an extra port to make L1 able to communicate directly with L3 through 
the regular protocol or
(2) flag requests targeted at L3 in such a way that L2 forwards them directly.
I would probably go for (2) not to disturb the cache hierarchy structure too 
much.

Anyway, both approaches will have subtle consequences on cache coherence which 
you must handle. To be specific, what happens if L2 has a valid cacheline and 
L1 wants to directly write this line to L3 for some reason? L2 will no longer 
be valid so L3 must snoop-invalidate it. But this is not the way a cache 
"hierarchy" usually works. If you really want to experiment with exotic caching 
and coherence mechanisms, you will likely be more comfortable using Ruby.

Best,
Gabriel


[gem5-users] Re: CHI - Cluster CPUs having a private L2 cache

2021-07-12 Thread Gabriel Busnot via gem5-users
Hi Javed,

This looks fine to me, at least regarding L2s being private.
CHI_config.py:538 is the line where you instantiate a cache memory per CPU, and 
CHI_config.py:557 is the line where you instantiate the CHI ruby controller 
that makes use of that cache memory instance.

One way to check whether two python variables point to the same instance is to 
test them with "is": ((a is b) == True) means that a and b are the same 
object. (a is not b) is the syntax to test for non-identity. Just 
check that ruby_system.littleCluster._cpus[i].l2 as well as 
ruby_system.littleCluster._cpus[i].l2.cache are different for all i.
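A standalone sketch of the identity check (with a stand-in class instead of 
real cache controllers):

```python
class Cache:  # stand-in for a Ruby cache memory instance
    pass

shared = Cache()
caches_shared = [shared, shared]      # both entries reference one object
caches_private = [Cache(), Cache()]   # one object per entry

# 'is' compares object identity, '==' would compare values.
assert caches_shared[0] is caches_shared[1]
assert caches_private[0] is not caches_private[1]
```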

Best,
Gabriel


[gem5-users] Re: Custom SimObject Causes Host Machine to Freeze

2021-07-07 Thread Gabriel Busnot via gem5-users
Hi Thomas,

In addition to Jason's advice, I would like to add that if a simulation 
crashes your whole system, it is likely that you are running out of physical 
memory. Unless you are using unstable drivers on your host, which I believe you 
are not, the easiest way to unintentionally crash a system probably is to put a 
memory allocation in a hot loop and forget to free the memory.

You can also consider setting up a VM with a limited amount of memory and no 
swap to sandbox your simulation. You can snapshot that VM right before running 
your debug run to faster recover from crashes.

best,
Gabriel


[gem5-users] Re: Where does system.cpu (from se.py/fe.py scripts) exist in gem5?

2021-06-30 Thread Gabriel Busnot via gem5-users
My bad, static goes before the return type.

Gabriel


[gem5-users] Re: Where does system.cpu (from se.py/fe.py scripts) exist in gem5?

2021-06-30 Thread Gabriel Busnot via gem5-users
Hi Balls,

You can use SimObject::find for that purpose. You can get the name of 
SimObjects from m5out/config.ini.
You could also implement a search-by-type template method that returns a vector 
of matching SimObjects.
Something like:

template <class T>
static std::vector<SimObject*> findSimObjects() {
    std::vector<SimObject*> ret;
    for (auto so_ptr : simObjectList) {
        if (dynamic_cast<T*>(so_ptr))
            ret.emplace_back(so_ptr);
    }
    return ret;
}

As to where these SimObjects are created: they are all on the heap. The 
corresponding pointers all go into SimObject::SimObjectList. The SimObjects 
that are explicitly listed in a python SimObject subclass also go into the 
corresponding C++ parameter structure. You can then do whatever you want with 
these pointers in gem5. Note that SimObjects are never deleted, if my 
experiments are to be believed, and I would advise you not to try to delete 
them either ;)

Best,
Gabriel


[gem5-users] Re: [Big, Little] clusters with CHI and SE mode

2021-06-28 Thread Gabriel Busnot via gem5-users
Hi Javed,

This is an error from the dot file processing tool.
For some reason, likely a hard-coded string buffer size, it does not support 
lines longer than 16384 characters.
Luckily, you don’t need it and can ignore this error.

The second error says that one of the parameters has no set value.
In your case, this is data_channel_size in Cache_Controller.
See CHI.py line 225 to see what you need to do.

Best,
Gabriel

[gem5-users] Re: [Big, Little] clusters with CHI and SE mode

2021-06-24 Thread Gabriel Busnot via gem5-users
Hi Javed,

2- Yes, I meant to define L1ICache_big, L1ICache_little, etc., if you need 
different cache configurations. However, I didn't get that you need private 
L2s. If you refer to the big and little groups of CPUs as "clusters", then I 
believe that each cluster has a single entry point to the coherent interconnect, 
which usually is the master port of a shared cache. In that case, the CHI RN-F 
is what maps to such a concept of CPU cluster. I would recommend starting with 
private L1s and a shared L2 until things work, as this is natively supported by 
the CHI config files. Then, hack your way through the CHI config files to make 
the L2 private and add a shared L3 if needed. You will have to modify 
CHI_config.CHI_RNF.

3- create_system in CHI.py expects system to be populated with CPUs 
(system.cpu). The only thing that differs between a CPU type and another really 
is its accuracy : atomic, minor, O3, etc. The performance of the simulated CPU 
then depends on the specific parameters of that CPU but you can look at that 
later. Having said that, I would write something like:

assert(len(system.cpu) == options.num_cpus_little + options.num_cpus_big)
ruby_system.little_cluster = CHI_RNF(system.cpu[:options.num_cpus_little], ...)
ruby_system.big_cluster = CHI_RNF(system.cpu[-options.num_cpus_big:], ...)
ruby_system.little_cluster.addPrivL2Cache(...)
ruby_system.big_cluster.addPrivL2Cache(...)

Best,
gabriel


[gem5-users] Re: [Big, Little] clusters with CHI and SE mode

2021-06-21 Thread Gabriel Busnot via gem5-users
Hi Javed,

I don't think that you want to use devices.CpuCluster as it is used to manage 
classic caches while you want to use Ruby caches.

My first approach would be, using se.py as is:
1- Define two more options in CHI.py to specify the number of big (B) and 
the number of little (L) cpus from the command line
2- Define the L1ICache, L1DCache and L2DCache for each the big and the 
little cluster
3- Pass the first B cpus as a single list together with the correct caches 
to the first CHI_RNF. Assign the result to ruby_system.bigCluster.
4- Pass the last L cpus as a single list together with the correct caches 
to the second CHI_RNF. Assign the result to ruby_system.littleCluster.
5- Add the private L2 cache of the correct type to both cluster.
Keep everything else as is.

Best,
Gabriel


[gem5-users] Re: Compiling failure for original code in gem5 book

2021-06-21 Thread Gabriel Busnot via gem5-users
If you follow each step carefully starting from here, you should be good ;) The 
tutorial has been updated recently to take the latest API changes into account.

https://www.gem5.org/getting_started/

Gabriel


[gem5-users] Re: Write Buffer Configuration for Ruby

2021-06-21 Thread Gabriel Busnot via gem5-users
Hi Wang,

If by "write buffer queue" you mean the "mandatoryQueue", then you cannot 
restrict its size without risking an assert error, as the sequencer does not 
check the mandatoryQueue fullness before enqueuing.

Still, the maximum number of concurrent tag array and data array lookups can be 
controlled using the banking mechanism of the CacheMemory used in Ruby cache 
controllers. Sadly, MOESI CMP Directory does not seem to make use of this 
mechanism and adding support for it is no easy task.

You can still look at MOESI AMD and CHI to get two examples of cache banking 
support in a Ruby protocol.
Again, adding support for that to a protocol will be a hard task and adding it 
without introducing subtle bugs can be even harder.
But if you have no other choice than learning Ruby and adding this feature to 
the protocol you want to use, here are a few hints to start (based on CHI 
protocol):
1. Look for the RequestType enumeration in CHI-cache-funcs.sm. It lists the 
names associated with arbitrary resources like tag and data array banks. These 
tags are associated with a transition when the corresponding resource is required 
by this transition. E.g., when CHI needs to read the tag array during a 
transition, the TagArrayRead tag is associated with the transition (see 
CHI-cache-transitions.sm).
2. Then the checkResourceAvailable function in the same file tests for 
resource availability according to the flag passed as arguments.
3. If the latter returns true, then recordRequestType (still in 
CHI-cache-funcs.sm) will be called to record the resource usage. If 
checkResourceAvailable returns false, a "Resource stall" is triggered, the 
transition is aborted and the next input port is checked.
4. If you set a resource stall handler (search rsc_stall_handler in 
CHI-cache-ports.sm), you can customize the behavior of the controller upon 
Resource stall for each input port. You should not need this feature.

Note that while the RequestType enumeration type and the checkResourceAvailable 
and recordRequestType function names are magic names that you must conform to, 
the content of RequestType and the bodies of these two functions are yours to 
choose. 
This is the "fully customized" version of the check_allocate mechanism, if you 
know it. If not, then check it first to get in touch with the ResourceStall 
mechanism.
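The check/record mechanism can be sketched outside of SLICC as follows (a toy 
Python model, not gem5 code; resource names and port limits are illustrative):

```python
from collections import Counter

# Hypothetical per-cycle limits on concurrent bank accesses.
BANK_PORTS = {"TagArrayRead": 2, "DataArrayRead": 1}
used = Counter()  # accesses already recorded in the current cycle

def check_resource_available(kind):
    """Analogue of SLICC's checkResourceAvailable: may a transition fire?"""
    return used[kind] < BANK_PORTS[kind]

def record_request_type(kind):
    """Analogue of SLICC's recordRequestType: book the resource for this cycle."""
    assert check_resource_available(kind)
    used[kind] += 1

record_request_type("TagArrayRead")
record_request_type("TagArrayRead")
# A third tag lookup this cycle would trigger a "Resource stall".
assert not check_resource_available("TagArrayRead")
assert check_resource_available("DataArrayRead")
```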

Good luck with your experiments,
Gabriel


[gem5-users] Re: Compiling failure for original code in gem5 book

2021-06-21 Thread Gabriel Busnot via gem5-users
It looks like you are not using the latest version of the tutorial source code.
Where did you get the tutorial source code from?

I would recommend checking out the tag v21.0.0.0 and starting from the code in 
src/learning_gem5/part2.
gem5 API has recently changed and any file older than a couple of months is 
likely incompatible with the latest changes.

Gabriel


[gem5-users] Re: Compiling failure for original code in gem5 book

2021-06-21 Thread Gabriel Busnot via gem5-users
Hi Qishen,

Can you first check that gem5 compiles in your environment before adding your 
own code?

Then, I would recommend double-checking your SConscript file against 
src/learning_gem5/part2/SConscript.

BTW: at line 248 of simple_memobj.cc: you forgot const ;)

Best,
Gabriel


[gem5-users] Re: CHI and caches

2021-06-18 Thread Gabriel Busnot via gem5-users
Hi,

O3_ARM_v7a_3 comes with predefined cache configurations that correspond to this 
specific CPU: O3_ARM_v7a_DCache, O3_ARM_v7a_ICache, etc.
However, these caches are effectively used only if CacheConfig.config_cache() 
is called. This does not happen if --ruby is used with the fs.py and se.py 
files.

Best,
Gabriel


[gem5-users] Re: Functional read not implemented

2021-06-18 Thread Gabriel Busnot via gem5-users
Well, you have likely set up a nasty time bomb here.

What you are basically doing is assuming that
  - Valid data contained in the network is always present in a controller as 
well
  - Or the backing store data (memory) is up to date.
Only in that case can you forget about the data in the network, as long as you 
look for data in all controllers.
I believe this property is not true for most protocols, including CHI and the 
one I am working on for a living.

However, it won't crash until the only place where valid data is present is the 
network AND the corrupted data you get back is critical enough to cause a 
visible bug. You could then get away with this for weeks, months or even 
years... Until the bug kicks in. And good luck finding that it comes from a bad 
functional read at that point. 

I would personally:
  1. duplicate all functionalWrite methods in the garnet directory
  2. rename these duplicates functionalRead (and return bool instead of 
uint32_t)
  3. replace calls to functionalWrite in their bodies with calls to 
functionalRead on the same component.
It shouldn't take more than an hour to duplicate the 35 occurrences of 
functionalWrite in the garnet directory ;).
You can then verify this is correct with ruby_mem_test.py using --num-cpu=8 and 
--functional=50 or so. I recommend running that over night to see if it detects 
any data corruption. If it passes, you should be good to go. If it doesn't... 
Welcome to ruby debugging hell.

Cheers,
Gabriel


[gem5-users] Re: Functional read not implemented

2021-06-18 Thread Gabriel Busnot via gem5-users
BTW: the standard CHI.py configuration file does not seem to support Garnet 
anyway.
You might want to use SimpleNetwork if you want CHI to work or change the 
protocol if you want Garnet to work.
I don't have much experience with these other networks and protocols so I can't 
help you much on this.

Gabriel


[gem5-users] Re: Functional read not implemented

2021-06-18 Thread Gabriel Busnot via gem5-users
Hi Vedika,

That's bad luck and slightly surprising to me that functional reads are not 
implemented in Garnet.
Because you use the SE mode, functional accesses are absolutely necessary for 
syscall emulation, both functional reads and writes.

Now, why does Garnet only support functional writes? According to this post 
(https://gem5.atlassian.net/browse/GEM5-604?focusedCommentId=11781), functional 
accesses are also used in FS mode but I believe only functional writes. That 
would explain why Garnet only supports them: it is a detailed network model so 
it does not make a lot of sense to use it in SE mode which is not as accurate 
as FS.

That being said, I see no technical reason for not supporting functional reads 
in Garnet. If you want to try, you can get inspiration from 
GarnetNetwork::functionalWrite to implement GarnetNetwork::functionalRead. It 
will likely consist in walking your way down to each component in the network 
that might contain a chunk of data, i.e. a Ruby message. I believe that 
implementing and calling functionalRead wherever functionalWrite is currently 
called in Garnet will cut it. Finally, note that depending on whether you use 
CHI or another protocol, you will need two versions of functional read. With 
CHI, the default protocol, you must override Network::functionalRead(Packet 
*pkt, WriteMask& mask). With other protocols, you must override 
Network::functionalRead(Packet *pkt). Mostly copy pasting involved in both 
cases.

You can also use fs.py and you should be fine ;)

Also note that --caches and --l2cache are inhibited by --ruby, as you cannot 
use the classic caches (--caches and --l2cache) and Ruby (--ruby) at the same 
time.

Good luck with your experiments,
Gabriel


[gem5-users] Re: Run full system

2021-06-17 Thread Gabriel Busnot via gem5-users
Hi Zhang,

I have not tested but perhaps using one of the --cpu-type options listed in the 
error message would fix your issue:

"(choose from 'O3_ARM_v7a_3', 'TimingSimpleCPU', 'ex5_big', 'DerivO3CPU', 
'TraceCPU')"

O3_ARM_v7a_3 or DerivO3CPU are the closest options to "O3CPU".

Best,
Gabriel


[gem5-users] Re: System() and RubySystem()

2021-06-16 Thread Gabriel Busnot via gem5-users
Hi Javed,

Please, see my comments inline.

Best,
Gabriel

Javed Osmany wrote:
> Hello
> 
> Trying to understand the following:
> 
> So in example config scripts I see the following:
> 
> system = System() // Is this then instantiating the default overall 
> system ??
This is pretty much it, yes.
> 
> Now, I understand that there are two types of memory system, namely classic 
> and Ruby.
Yes.
> 
> And then if "-ruby" is specified on the command line, the script Ruby.py is
> called.
> Within Ruby.py, the function create_system() is called and in Ruby.py (lines 
> 193 - 194), we
> have the code
> 
>system.ruby = Ruby.System() // Is this then instantiating the Ruby 
> memory system
> within the overall System() ??
Here it gets a bit more technical, but you are right.
Let me detail a little bit more what is going on.
BTW, I'm not sure which version of gem5 you have, but I don't have a dot between 
Ruby and System, and I believe that you should not have one either.

system, which is an instance of the System python class, is what is called a 
SimObject.
The SimObject python class, which System inherits from, gives it special 
capabilities.
The most important one is the ability to build a SimObject tree by assigning 
SimObjects to other SimObjects' attributes.
For instance, with A, B, C and D being SimObject instances, one can write:
A.b = B; B.c = C; A.d = D // Builds the SimObject tree [A->B->C ; A->D]

The SimObject tree that you have before calling m5.instantiate defines the 
order in which the C++ version of each SimObject will be constructed.
You basically go from the leaves to the root of the SimObject tree so that all 
parameters are valid in each C++ SimObject constructor.
Then, the init method of each C++ SimObject is called in an unspecified order 
to perform extra operations that cannot take place in the constructor.

In the present case, an instance of the SimObject "RubySystem" is being 
assigned to the ruby attribute of system (which also is a SimObject).
system initially has no ruby attribute, but because we are assigning a 
SimObject, magic happens behind the scenes and a new attribute "ruby" is 
dynamically added. Notice that you could use any attribute name instead of ruby.
In a nutshell, yes, our new RubySystem is now "physically" part of the overall 
System.

>ruby= system.ruby // This is then just an alias ??
Yes, this is a local alias: system.ruby could be used instead of ruby in the 
rest of the function.
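These mechanics can be illustrated with a plain-Python sketch. This only mimics SimObject behavior; the Node class, post_order walk, and all names are made up for illustration, not real gem5 code:

```python
# Plain-Python sketch of SimObject-style tree building: assigning an
# object to an attribute also registers it as a child in the tree.

class Node:
    def __init__(self, name):
        self.name = name
        self._children = {}

    def __setattr__(self, attr, value):
        # Mimic the SimObject "magic": a Node assigned to an attribute
        # becomes a child of this Node in the tree.
        if isinstance(value, Node):
            self.__dict__.setdefault("_children", {})[attr] = value
        super().__setattr__(attr, value)

def post_order(node):
    """gem5 constructs C++ SimObjects roughly leaves-first, as here."""
    names = []
    for child in node._children.values():
        names += post_order(child)
    names.append(node.name)
    return names

system = Node("system")
system.ruby = Node("ruby")       # attribute "ruby" is added dynamically
system.ruby.network = Node("network")
ruby = system.ruby               # a local alias: same object, not a copy

print(ruby is system.ruby)       # True
print(post_order(system))        # ['network', 'ruby', 'system']
```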
> 
> 
> Thanks in advance.
> J.Osmany


[gem5-users] Re: Fwd: Compiling Problem

2021-06-03 Thread Gabriel Busnot via gem5-users
If I may add, I believe the YouTube version of the tutorial exhibits the same 
issue that was resolved in 
https://gem5.atlassian.net/browse/GEM5-988?atlOrigin=eyJpIjoiMWEzZTZkMmQ4MjJjNGYxM2I1NGM3MDJjZDIzM2RhNzAiLCJwIjoiaiJ9.

I suggest checking the text version of the tutorial that has been updated to 
take the latest API into account here : 
https://www.gem5.org/documentation/learning_gem5/introduction/.

This question seems related too: 
https://lists.gem5.org/archives/list/gem5-users@gem5.org/thread/EJGUEDU4O3DV2VIEMA4UU62TSD2JTIJ4/#PHENIBBDDAUWLLXFFPY4PCW57INMZ52T

Gabriel


[gem5-users] Re: event can't trigger transition from L0 to L1

2021-06-03 Thread Gabriel Busnot via gem5-users
Hi,

"Resource stall" means that you are running out of something needed to perform 
the transition right now.
That something can be room in the TBE table, a bank in a cache tag or data 
array, space in a message buffer, etc.
All these things are grouped under the concept of resources.

Before a transition takes place, SLICC introduces checks for all the resources 
consumed by the upcoming transition.
Specifically, an enqueue block triggers a check on the available space in the 
buffer being enqueued to. check_allocate(SOMETHING) will test the return value 
of SOMETHING.areNSlotsAvailable(...). Finally, a "transition(...) {RESOURCE_TAG} 
{...}" (notice the extra braces with RESOURCE_TAG inside) will test the return 
value of checkResourceAvailable(RESOURCE_TAG, address), which you implement 
yourself in your machine definition. The latter is typically for more advanced 
protocols.

If one of the required resources is lacking (the corresponding test fails), 
then the transition will not take place.
It will still be logged in the protocol trace but with the Resource Stall flag 
at the end to indicate that the transition did not take place.
The controller then schedules a wake-up at the next cycle to retry the 
transition and then jumps to the next in_port.

In your case, the transition requires two resources to take place: a slot in the 
TBE table for the i_allocateTBE action and an entry in the requestNetwork 
buffer for the a_issueGETS action. You are likely running out of one of these 
two resources.
Sadly, SLICC will not tell you which one... Luckily, because this is perfectly 
normal, you probably don't need to know unless you are hacking the protocol and 
suspect this is a bug.

If you scroll down (possibly a few hundred lines), you should see the 
transition being performed at some point (no resource stall at the end).
I recommend grepping on the line address to remove some noise ;)
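If you prefer scripting that filtering step, a trivial Python sketch could look like the following; note the trace lines below are made-up examples, not real gem5 protocol-trace output:

```python
# Filter a protocol trace for one line address so resource-stalled
# retries of a transition are easy to follow.

def filter_trace(lines, addr):
    """Keep only trace lines mentioning the given line address."""
    return [ln for ln in lines if addr in ln]

def completed(lines, addr):
    # A transition completed once a matching line carries no stall flag.
    return any(addr in ln and "Resource Stall" not in ln for ln in lines)

trace = [
    "1000: L1Cache-0 Load I>IS [0x4c0] Resource Stall",
    "1001: L1Cache-0 Load I>IS [0x4c0]",
    "1001: L1Cache-1 Store I>IM [0x500]",
]

for ln in filter_trace(trace, "0x4c0"):
    print(ln)
```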

Gabriel


[gem5-users] Re: Running CHI protocol configurations

2021-06-03 Thread Gabriel Busnot via gem5-users
Javed Osmany wrote:
> Hello Gabriel
> 
> Thank you for your answers.
> 
> To address the points you have raised:
> 
> >  > [GB] First, downstream_destination is the set of
> > possible destinations for this component. It does not mean that it will 
> > actually
> > communicate with all of them. It depends on the rest of your configuration.
> >  
> The command line I am using to run the simple test being:
> 
> ./build/ARM/gem5.opt configs/example/se.py --ruby --topology=Pt2Pt 
> --num-cpus=4
> --num-dirs=2 --num-l3caches=2 --cmd=tests/test-progs/hello/bin/arm/linux/hello
> 
> So this is making use of the default CHI.py and CHI_config.py
> 
> >  > Second, you don’t specify explicitly which snf a
> > given hnf communicates with. Instead, each hnf is responsible for a given 
> > [set of] address
> > ranges and each snf is also responsible for a [set of] address ranges. 
> > There is no need
> > for hnf and snf address ranges to correspond.
> >  
> As mentioned above, I am using the default configs. Therefore, where could I 
> find the
> above information (ie which snf a given hnf communicates with). I could not 
> see it in
> config.ini.
An hnf will pick the snf that is mapped to the address it is targeting. Make 
sure the hnf and the snf you want to communicate with each other have the same 
address range in the ini file and you should be good.

If you want to check how interleavings are defined, please look at 
CHI_HNF.createAddrRanges() in CHI_config.py for hnf interleavings and 
setup_memory_controllers() in Ruby.py.
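As a simplified illustration of that interleaving (this is not the actual CHI_HNF.createAddrRanges() code; BLOCK_BITS and NUM_HNF are assumed values matching the example configuration):

```python
# With hnfs interleaved on 64-byte blocks, the hnf owning an address is
# the block index modulo the number of hnfs.

BLOCK_BITS = 6   # 64-byte cache lines
NUM_HNF = 2

def home_hnf(addr, num_hnf=NUM_HNF):
    """Index of the hnf responsible for this address."""
    return (addr >> BLOCK_BITS) % num_hnf

# Consecutive 64-byte blocks alternate between hnf0 and hnf1.
print([home_hnf(a) for a in (0x000, 0x040, 0x080, 0x0C0)])  # [0, 1, 0, 1]
```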

> 
> 
> >  > However, if you setup an hnf and a snf to be
> > mapped to the same address ranges, they will exclusively communicate with 
> > each other.
> >  
> That makes sense.
> 
> 
> >  > With the default CHI.py file, I believe this
> > corresponds to configurations where --num-dirs and num-l3caches are equal.
> >  
> Yes. I tried using different values for –num-dirs and num-l3caches and this 
> then generated
> a run time error.
This does not seem normal for "reasonable" values of num-dirs and 
num-l3caches. Maybe you could report the error in another thread or on the gem5 
Jira.
> 
> 
> Best regards
> JO
> From: Gabriel Busnot [mailto:gabriel.bus...@arteris.com]
> Sent: 02 June 2021 15:44
> To: gem5 users mailing list gem5-users(a)gem5.org
> Cc: Javed Osmany javed.osmany(a)huawei.com
> Subject: RE: Running CHI protocol configurations
> 
> Hi Javed,
> 
> Answers inline.
> 
> Best,
> Gabriel
> 
> From: Javed Osmany via gem5-users
> mailto:gem5-users@gem5.org>>
> Sent: 02 June 2021 16:21
> To: gem5 users mailing list
> mailto:gem5-users@gem5.org>>
> Cc: Javed Osmany mailto:javed.osm...@huawei.com>>
> Subject: [gem5-users] Running CHI protocol configurations
> 
> 
> [EXTERNAL EMAIL]
> Hello
> 
> Have generated an ARM ISA gem5.opt executable, where the PROTOCOL CHI.
> 
> Running the simple “Hello World” program on a config of [4 RNFs, 2 HNFs, 2 
> SNFs] and
> looking at the config.ini file, there are a few things I don’t understand.
> 
> The command I use being:
> 
> ./build/ARM/gem5.opt configs/example/se.py --ruby --topology=Pt2Pt 
> --num-cpus=4
> --num-dirs=2 --num-l3caches=2 --cmd=tests/test-progs/hello/bin/arm/linux/hello
> 
> From config.ini, we have
> 
> [system]
> type=System
> children=clk_domain cpu0 cpu1 cpu2 cpu3 cpu_clk_domain cpu_voltage_domain 
> dvfs_handler
> mem_ctrls0 mem_ctrls1 redirect_paths0 redirect_paths1 redirect_paths2 ruby 
> sys_port_proxy
> voltage_domain workload
>   :
> mem_mode=timing
> mem_ranges=0:536870912 <== Memory is 512 MBytes
> memories=system.mem_ctrls0.dram system.mem_ctrls1.dram
> 
> [system.ruby]
> type=RubySystem
> children=clk_domain hnf0 hnf1 network power_state rnf0 rnf1 rnf2 rnf3 snf0 
> snf1 <== So
> have instantiated the 4 x RNFs, 2 x HNFs, 2 x SNFs
> access_backing_store=false
> all_instructions=false
> block_size_bytes=64
> clk_domain=system.ruby.clk_domain
> eventq_index=0
> hot_lines=false
> memory_size_bits=48
> num_of_sequencers=4
> number_of_virtual_networks=4
> phys_mem=Null
> power_model=
> power_state=system.ruby.power_state
> randomization=false
> system=system
> 
> [system.ruby.hnf0.cntrl]
> type=Cache_Controller
> children=cache datIn datOut mandatoryQueue power_state prefetchQueue 
> replTriggerQueue
> reqIn reqOut reqRdy retryTriggerQueue rspIn rspOut snpIn snpOut snpRdy 
> triggerQueue
> addr_ranges=0:536870912:0:64 <== What does this mean?? (range is 0: 512 
> Mbytes. What
> does :0:64 imply??) (Similar query for system.ruby.hnf1.cntrl.addr_range)
> 
> [GB] This is an interleaved address range. In that particular case, hnf are 
> interleaved on
> 64 bytes blocks and there are as many interleavings as there are hnf (2 in 
> your case).
> 0:64 means that it is the interleaving with ID 0 and 64 is the base 10 
> representation of
> the interleaving mask (single mask in that case). Please look carefully at 
> AddrRange in
> src/python/m5/params.py for more details about address interleaving in gem5.
>   

[gem5-users] Re: Running CHI protocol configurations

2021-06-02 Thread Gabriel Busnot via gem5-users
For some reason, answering from Outlook yielded the weird automatic answer 
above.
Here is my answer, with inline comments below.



Hello

Have generated an ARM ISA gem5.opt executable, where the PROTOCOL CHI.

Running the simple “Hello World” program on a config of [4 RNFs, 2 HNFs, 2 
SNFs] and looking at the config.ini file, there are a few things I don’t 
understand.

The command I use being: 

./build/ARM/gem5.opt configs/example/se.py --ruby --topology=Pt2Pt --num-cpus=4 
--num-dirs=2 --num-l3caches=2 --cmd=tests/test-progs/hello/bin/arm/linux/hello

From config.ini, we have

[system]
type=System
children=clk_domain cpu0 cpu1 cpu2 cpu3 cpu_clk_domain cpu_voltage_domain 
dvfs_handler mem_ctrls0 mem_ctrls1 redirect_paths0 redirect_paths1 
redirect_paths2 ruby sys_port_proxy voltage_domain workload
  :
mem_mode=timing
mem_ranges=0:536870912  Memory is 512 MBytes
memories=system.mem_ctrls0.dram system.mem_ctrls1.dram

[system.ruby]
type=RubySystem
children=clk_domain hnf0 hnf1 network power_state rnf0 rnf1 rnf2 rnf3 snf0 snf1 
 So have instantiated the 4 x RNFs, 2 x HNFs, 2 x SNFs
access_backing_store=false
all_instructions=false
block_size_bytes=64
clk_domain=system.ruby.clk_domain
eventq_index=0
hot_lines=false
memory_size_bits=48
num_of_sequencers=4
number_of_virtual_networks=4
phys_mem=Null
power_model=
power_state=system.ruby.power_state
randomization=false
system=system

[system.ruby.hnf0.cntrl]
type=Cache_Controller
children=cache datIn datOut mandatoryQueue power_state prefetchQueue 
replTriggerQueue reqIn reqOut reqRdy retryTriggerQueue rspIn rspOut snpIn 
snpOut snpRdy triggerQueue
addr_ranges=0:536870912:0:64  What does this mean?? (range is 0: 512 Mbytes. 
What does :0:64 imply??) (Similar query for system.ruby.hnf1.cntrl.addr_range)

[GB] This is an interleaved address range. In that particular case, hnfs are 
interleaved on 64-byte blocks and there are as many interleavings as there are 
hnfs (2 in your case). 0:64 means that it is the interleaving with ID 0, and 64 
is the base-10 representation of the interleaving mask (a single mask in that 
case). Please look carefully at AddrRange in src/python/m5/params.py for more 
details about address interleaving in gem5.
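As a hedged sketch of the matching rule, assuming the usual gem5 scheme where each interleaving mask contributes one XOR-folded match bit compared against the interleaving ID, a range like 0:536870912:0:64 could be decoded as:

```python
# Sketch of interleaved-range matching: each mask yields one match bit
# (the XOR of the masked address bits); an address belongs to the range
# when the assembled bits equal the interleaving ID (intlv_match).

def matches(addr, start, end, intlv_match, masks):
    """Does this address belong to the interleaved range?"""
    if not (start <= addr < end):
        return False
    value = 0
    for i, mask in enumerate(masks):
        bit = bin(addr & mask).count("1") & 1  # XOR-reduce masked bits
        value |= bit << i
    return value == intlv_match

# Decoding 0:536870912:0:64 -> this range owns even 64-byte blocks
# (ID 0); the sibling range 0:536870912:1:64 owns the odd blocks.
print(matches(0x000, 0, 536870912, 0, [64]))  # True
print(matches(0x040, 0, 536870912, 0, [64]))  # False
```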
  :
downstream_destinations=system.ruby.snf0.cntrl system.ruby.snf1.cntrl  If I 
wanted a configuration where hnf0.cntrl only communicated to 
system.ruby.snf0.cntrl, would I need to generate a custom version of 
CHI_configs.py or do I need to provide a custom version of CHI.py and 
CHI_configs.py?
[GB] First, downstream_destination is the set of possible destinations for this 
component. It does not mean that it will actually communicate with all of them. 
It depends on the rest of your configuration.
Second, you don’t specify explicitly which snf a given hnf communicates with. 
Instead, each hnf is responsible for a given [set of] address ranges and each 
snf is also responsible for a [set of] address ranges. There is no need for hnf 
and snf address ranges to correspond.
However, if you setup an hnf and a snf to be mapped to the same address ranges, 
they will exclusively communicate with each other.
With the default CHI.py file, I believe this corresponds to configurations 
where --num-dirs and num-l3caches are equal.
-   num-dirs corresponds to the number of memory interfaces that historically 
were directories in Ruby but are snfs in CHI
-   num-l3caches corresponds to the number of hnfs that contain both a 
system-level cache and a directory… It's a bit confusing but is a 
consequence of legacy parameter naming in gem5 python.

[system.ruby.hnf0.cntrl.cache]
type=RubyCache
children=replacement_policy
assoc=16
block_size=0  Why is the block size 0 ??
   :

[GB] 0 means the default Ruby block size (64 bytes). You can find such 
information in the .py file containing the parameter descriptions for a given 
SimObject subclass. In that case, src/mem/ruby/structures/RubyCache.py and 
src/mem/ruby/system/RubySystem.py.
Thanks in advance
JO

[gem5-users] Re: Running CHI protocol configurations

2021-06-02 Thread Gabriel Busnot via gem5-users
To reduce fraud or cyber-crime risks, Arteris IP will never instruct you to 
change wire payment instructions or bank account details using email or fax. If 
you receive such an email or fax appearing to be from Arteris IP, please call 
your Arteris IP’s contact to verify. Please do not verify by email.

[gem5-users] Re: ruby_mem_test

2021-05-10 Thread Gabriel Busnot via gem5-users
Hi,

The tester got an unexpected value that triggered this panic.
Basically, the tester did not get back the latest value it wrote to the 
mentioned address.
This can have countless causes.

Which protocol are you using?
If you are using your own protocol, then you likely have a bug. Good luck with 
that ;)
If you are using a provided protocol, are you also using the provided python 
run-scripts?

You can start by checking the PROTOCOL entry in build/variables/NULL.
I would recommend testing first with the CHI protocol to validate your 
environment.

Cheers,
Gabriel