On Wed, 20 May 2015, Joel Hestness wrote:
On May 12, 2015, 3:55 p.m., Nilay Vaish wrote:
You need to provide an example of what this change is trying to achieve.
Personally, if I wanted buffers to have a size, I would want that size
to be configurable in Python.
Brad Beckmann wrote:
Would it be helpful if we described an example in the patch description?
This is obviously an improvement in functionality that people may want to use.
I agree that eventually we would want to do this in Python. I'll let Mark
comment on that.
Mark Wilkening wrote:
This change allows precise control of specific individual buffer sizes in
a controller. Because these buffers are specific to a particular coherence
protocol (dependent on your gem5 build), it seems appropriate to set these
sizes in the SLICC file. The feature is also optional: if it is not used, the
default sizes for these buffers are used instead. In the future we could
potentially set the sizes in Python by identifying buffers by virtual network
ID and network direction (in/out), but that would require tight coordination
with the protocol definition to be used properly.
With this patch you can set the size directly in SLICC, as follows:
MessageBuffer mandatoryQueue, ordered="false", buffer_size=32
MessageBuffer protocolSpecificQueue, network="out", virtual_network=4, buffer_size=64
Joel Hestness wrote:
While I'm glad that we aim to add backpressure through Ruby, I agree with
Nilay: this doesn't seem like the right way to parameterize buffer depth.
We already expose the buffers in the Python code in order to connect them
to the network. It would be easier and far more flexible to make
MessageBuffers into SimObjects and allow the parameter to be set in Python.
You wouldn't even need an init function (as Nilay suggests); you could just
pass the parameters to the C++ constructor during SimObject instantiation.
Such an implementation would avoid changes to SLICC, and this patch would
need to be reverted.
In addition to the usability issues, debugging performance bottlenecks
that result from buffer sizing in the protocol files would likely be very
difficult. There are obvious problems with statically sizing buffers in the
protocol files: a given sizing will only make sense for particular system
organizations and will cause hard-to-diagnose bandwidth bottlenecks in others.
Increasing the buffer size in the *.sm files and recompiling may fix this for
your desired system, but to avoid recompiling for every different system
organization, you'd end up with statically-defined, effectively infinite
buffers again. Thus, the only *right* way to use this patch is to pass a
parameter to the machine definition which is used for buffer sizing. Might as
well pass the parameter directly to the message buffer in Python.
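Joel's point can be illustrated with a toy model (plain Python with hypothetical names; this is not gem5's actual MessageBuffer or SimObject API): when the buffer depth is a constructor parameter supplied by the configuration script, each system organization can size its buffers without touching or recompiling the protocol files.

```python
# Toy illustration of configuration-time buffer sizing (NOT gem5 code).
# A bounded buffer exerts backpressure by rejecting enqueues when full;
# its depth comes from the config script, not from a recompiled .sm file.
from collections import deque

class ToyMessageBuffer:
    def __init__(self, buffer_size=0):
        # buffer_size == 0 models the "effectively infinite" default.
        self.buffer_size = buffer_size
        self.queue = deque()

    def are_n_slots_available(self, n):
        return self.buffer_size == 0 or len(self.queue) + n <= self.buffer_size

    def enqueue(self, msg):
        if not self.are_n_slots_available(1):
            return False  # backpressure: the sender must stall and retry
        self.queue.append(msg)
        return True

    def dequeue(self):
        return self.queue.popleft()

# Per-system sizing lives in the config script, so no recompile is needed:
config = {"mandatoryQueue": 32, "responseToL1Cache": 64}
buffers = {name: ToyMessageBuffer(size) for name, size in config.items()}

mq = buffers["mandatoryQueue"]
accepted = sum(mq.enqueue(i) for i in range(40))
print(accepted)  # only 32 of the 40 enqueues succeed
```

Under this scheme, switching system organizations means editing the config dictionary, not the .sm files.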
Mark Wilkening wrote:
I tend to agree with Jason that Python configurability of message buffer
sizes would be a good feature, but I think it is not a trivial amount of
effort and should be addressed in a future patch as needed. Whether message
buffers are elevated to SimObjects or the sizing is parameterized in the
network or the controllers, there will need to be changes to how message
buffers are created and managed in the network, the controllers, and SLICC.
The non-trivial amount of work required to provide Python configurability
of buffer sizes is comparable to the non-trivial debugging that just the
FIRST user will face when hunting a performance bottleneck caused by a
buffer size hard-coded in a .sm file. Worse, providing Python
configurability in a future patch will require reverting this patch AND
fixing all protocols that define buffer sizes in .sm files. It is in the
gem5 community's interest to do this correctly now.
Joel, let's not push AMD to do this in Python. I don't think AMD
would be willing to go that way, and my personal belief is that it would
take several hours of work to get it right, more so since somebody
would first need to understand how the buffers are created. Please just take
it from me that we should put Python configurability on the back burner.
But since I do not want the compiler to be cluttered, I am posting some
code below that AMD should try to model instead of what they are
doing now. The code below will compile if we somehow add the init
function defined in the .sm file to the init() function that StateMachine.py
defines.
Mark, I sincerely suggest that you take the time to do this. I think it would
not take a lot of time. Moreover, we would later be able to shorten or
remove the init() function that StateMachine.py defines, since all the
initialization would be done in the .sm file. This would also give us a
measure of Python configurability.
--
Nilay
diff --git a/src/mem/protocol/MESI_Two_Level-L1cache.sm b/src/mem/protocol/MESI_Two_Level-L1cache.sm
--- a/src/mem/protocol/MESI_Two_Level-L1cache.sm
+++ b/src/mem/protocol/MESI_Two_Level-L1cache.sm
@@ -161,6 +161,9 @@
void unset_tbe();
void wakeUpBuffers(Address a);
void profileMsgDelay(int virtualNetworkType, Cycles c);
+ void init() {
+ responseToL1Cache.resize(100);
+ }
// inclusive cache returns L1 entries only
Entry getCacheEntry(Address addr), return_by_pointer="yes" {
diff --git a/src/mem/protocol/RubySlicc_Types.sm b/src/mem/protocol/RubySlicc_Types.sm
--- a/src/mem/protocol/RubySlicc_Types.sm
+++ b/src/mem/protocol/RubySlicc_Types.sm
@@ -35,7 +35,10 @@
// undefined declaration error.
//
-external_type(MessageBuffer, buffer="yes", inport="yes", outport="yes");
+structure (MessageBuffer, external = "yes", buffer="yes", inport="yes", outport="yes") {
+ void resize(int);
+}
+
external_type(OutPort, primitive="yes");
external_type(Scalar, primitive="yes");
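The diff above adds a resize() primitive and calls it from a per-machine init(). The same idea can be sketched as a toy Python model (hypothetical names, not SLICC's generated code): the controller's init() applies buffer sizes, and if the size is read from a machine parameter rather than hard-coded, the Python configurability Nilay mentions falls out.

```python
# Toy model (hypothetical names, NOT SLICC's generated code) of the
# init()-based approach: the controller's init() applies buffer sizes,
# and because the size is a parameter, a config script could set it.
class ToyBuffer:
    def __init__(self):
        self.capacity = 0  # 0 models the unbounded default

    def resize(self, n):
        self.capacity = n

class ToyL1Controller:
    def __init__(self, response_buffer_size=0):
        # response_buffer_size stands in for a machine parameter that
        # would be visible to the Python configuration layer.
        self.response_to_l1_cache = ToyBuffer()
        self.response_buffer_size = response_buffer_size
        self.init()

    def init(self):
        # Mirrors the .sm snippet above: responseToL1Cache.resize(100);
        if self.response_buffer_size:
            self.response_to_l1_cache.resize(self.response_buffer_size)

ctrl = ToyL1Controller(response_buffer_size=100)
print(ctrl.response_to_l1_cache.capacity)  # 100
```

With a hard-coded resize(100) the value is fixed at protocol-compile time; routing it through a parameter, as sketched here, is what keeps the sizing configurable per system.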
_______________________________________________
gem5-dev mailing list
[email protected]
http://m5sim.org/mailman/listinfo/gem5-dev