I agree with Ali; any message under a cache line in length should be just fine. In fact, in some networks all messages might have to be a cache line in length.
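The cache-line constraint above amounts to a simple check before handing a message to the bus: it must be small enough, and it must not straddle a line boundary. A minimal sketch, assuming a 64-byte line and a hypothetical `fitsInCacheLine` helper (illustration only, not actual M5 code):

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical constant; in M5 the line size is a configurable parameter.
const std::size_t CacheLineBytes = 64;

// A message is safe to send as a single access if it lies entirely within
// one cache line: nonzero, no larger than a line, and the first and last
// bytes fall in the same line.
bool fitsInCacheLine(std::size_t addr, std::size_t size)
{
    if (size == 0 || size > CacheLineBytes)
        return false;
    return (addr / CacheLineBytes) == ((addr + size - 1) / CacheLineBytes);
}
```

A 16-byte APIC message at a line-aligned address passes, while the same 16 bytes starting at offset 56 would span two lines and fail the check.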
Nate

On Fri, Sep 5, 2008 at 9:07 AM, Ali Saidi <[EMAIL PROTECTED]> wrote:
> I don't think there should be any problem with these working.
>
> Ali
>
> On Sep 5, 2008, at 2:48 AM, Gabe Black wrote:
>
>> More concretely, if you look in appendix F of the third Intel manual, it
>> shows the formats of the messages the APICs would send each other over
>> the special-purpose APIC bus. I'm not intending to implement one since
>> they've used the system bus instead for quite a while, but that at least
>> shows the quantity of information going back and forth. There's also an
>> arbitration ID which goes away when you use the system bus, so these
>> messages can probably be simpler than what's in the appendix. I'd guess
>> around 64 to 128 bits per message.
>>
>> Gabe
>>
>> [EMAIL PROTECTED] wrote:
>>> The accesses are supposed to be atomic, I'm pretty sure. I think
>>> they're basically just special messages passed around on the bus, which
>>> I'm approximating with reads and writes. I set aside a page for those
>>> accesses because it was a convenient size and I already needed a second
>>> page for access to the local APIC's configuration space, but I don't
>>> expect anywhere near that much data to actually go into these messages.
>>> It should just be a handful of bytes. What might be best would be to
>>> create some new type/command for packets that represent interrupt
>>> messages, but I initially shied away from that because it might be hard
>>> to implement properly, could make all the devices more complex since
>>> they would have to handle or at least ignore those messages, and adds
>>> clutter and complexity to the way the memory system works. I'm pretty
>>> open to suggestions if there's some way to implement things that seems
>>> obviously best.
>>>
>>> Gabe
>>>
>>> Quoting Steve Reinhardt <[EMAIL PROTECTED]>:
>>>
>>>> The memory system doesn't do any segmentation.
>>>> It will choke if you try to do a single access that spans cache blocks
>>>> in cached memory. For uncached physical accesses, I don't know if there
>>>> are any hard limits or not, but performance would get unrealistic if
>>>> you tie up a bus while a full page of data traverses it.
>>>>
>>>> There are ports that provide readBlob/writeBlob calls, though they're
>>>> really only intended for functional accesses (like program loading,
>>>> syscall emulation, etc.).
>>>>
>>>> I don't know enough detail about the APIC accesses to comment on the
>>>> atomicity issues. I'd expect that it's designed so that you just do
>>>> atomic accesses.
>>>>
>>>> Steve
>>>>
>>>> On Thu, Sep 4, 2008 at 9:23 AM, Gabe Black <[EMAIL PROTECTED]> wrote:
>>>>
>>>>> I'm close to the point of sending messages between APICs, and I was
>>>>> thinking about the semantics of how I actually want to send the
>>>>> message. I need to know the specifics of a few properties of the
>>>>> memory system in order to be sure it will work. First, if I send a
>>>>> bigger message, is it possible it'll be split up before getting to its
>>>>> destination? Second, if it is split up, is there any guarantee of
>>>>> ordering? Third, is there any easy socket-style mechanism to send a
>>>>> bunch of bytes to one location (a bunch of small, sequential writes
>>>>> seems fairly clunky), or would I want to write into a buffer and then
>>>>> somehow signal it was all in there? I've allocated a page of memory
>>>>> space for each APIC so they have a space to send each other messages.
>>>>> I suppose a fourth concern, if the buffer approach is used, is that
>>>>> multiple APICs could compete to write into the same portion of the
>>>>> APIC's page unless it was further divided into regions for each
>>>>> sender.
>>>>>
>>>>> Gabe
>>>>> _______________________________________________
>>>>> m5-dev mailing list
>>>>> m5-dev@m5sim.org
>>>>> http://m5sim.org/mailman/listinfo/m5-dev
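The "new type/command for packets that represent interrupt messages" Gabe floats above could look roughly like the sketch below. Every name here is a hypothetical illustration, not M5's actual packet API; the point is that a device that isn't an APIC can recognize and ignore the new command with one check, which addresses the worry about complicating every device model:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical command set; M5's real memory commands are far richer.
enum class MsgCmd { ReadReq, WriteReq, IntrMessage };

// A bus message. For IntrMessage the payload carries the handful of bytes
// (64-128 bits, per the estimate in the thread) encoding the interrupt.
struct BusMessage {
    MsgCmd cmd;
    uint64_t addr;
    std::vector<uint8_t> payload;

    bool isInterrupt() const { return cmd == MsgCmd::IntrMessage; }
};

// A non-APIC device sinks interrupt messages untouched and only models
// ordinary reads and writes, so the extra command adds almost no complexity.
bool deviceHandles(const BusMessage &m)
{
    return !m.isInterrupt();
}
```

With this shape, the page-per-APIC buffer (and the fourth concern about senders competing for regions of it) goes away entirely, since the interrupt payload rides inside the packet rather than in addressable memory.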