What's in that table has to do with memory consistency and lock-free
operations (and it also doesn't look entirely correct, since the
library already supports a fair amount of that; perhaps the entries
are really about completeness across platforms, or about interactions
with compiler optimization of the kind Hans Boehm talks about).
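
For concreteness, this is roughly the kind of thing the concurrency
rows of that table cover (just an illustrative sketch; whether a given
type is actually lock free depends on the platform and library build):

    // illustrative sketch only -- C++11 atomics
    #include <atomic>

    std::atomic<int> counter(0);

    bool bump_and_check()
    {
        counter.fetch_add(1, std::memory_order_relaxed);  // atomic increment
        return counter.is_lock_free();  // answer is platform-dependent
    }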

The thread stuff is not in gcc itself; it's in libstdc++.  You can
find a similar status table for libstdc++ here:
http://gcc.gnu.org/onlinedocs/libstdc++/manual/status.html.  That
said, I don't know which gcc versions the various pieces first showed
up in.

I would highly recommend using the C++11 threading stuff.  It is way
more convenient than bare pthreads (I've been using it a lot).  On
Linux, it simply wraps pthreads for the most part (and pthreads mostly
uses futexes, so it's really fast).  The promises/futures are very
handy for higher-level operations, the scoped locks make using mutexes
much easier, etc.
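
To give a feel for it, here's a minimal sketch (nothing gem5-specific,
and the names are made up purely for illustration):

    // minimal sketch: std::thread, scoped locking, promise/future
    #include <future>
    #include <iostream>
    #include <mutex>
    #include <thread>

    std::mutex cout_mutex;

    void worker(std::promise<int> result)
    {
        {
            std::lock_guard<std::mutex> lock(cout_mutex);  // released at scope exit
            std::cout << "hello from the worker thread\n";
        }
        result.set_value(42);  // fulfill the promise
    }

    int main()
    {
        std::promise<int> p;
        std::future<int> f = p.get_future();
        std::thread t(worker, std::move(p));  // on linux this wraps pthread_create
        int value = f.get();                  // blocks until the worker sets it
        t.join();
        return value == 42 ? 0 : 1;
    }

The lock_guard and the future take care of the cleanup and
synchronization you would otherwise write by hand with
pthread_mutex_lock/unlock and condition variables.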

The real question is what our minimum gcc version is now, and whether
the threading support is in that minimum version.  Grepping on zizzer
indicates that mutexes, condition variables, and threads are available
in 4.4; promises/futures are available in 4.5.
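
If we do end up needing to support both, something like this could
gate the promise/future-based code (HAVE_STD_FUTURE is just a name I
made up for the sketch, not something that exists today):

    // hypothetical guard: promises/futures need gcc 4.5, the rest is in 4.4
    #if defined(__GNUC__) && \
        (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 5))
    #  define HAVE_STD_FUTURE 1
    #else
    #  define HAVE_STD_FUTURE 0
    #endif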

  Nate


On Tue, Feb 5, 2013 at 12:56 PM, Steve Reinhardt <ste...@gmail.com> wrote:
> Good find... sadly it does look like trying to use the C++11 threads
> support would be premature.  Oh well.
>
> Steve
>
>
> On Tue, Feb 5, 2013 at 12:22 PM, Nilay <ni...@cs.wisc.edu> wrote:
>
>> I was not aware that C++11 includes support for threads. I just looked at
>> the developments that have taken place in GCC and Glibc. Here is the link
>> that lists the status of different features --
>> http://gcc.gnu.org/projects/cxx0x.html
>>
>> After taking a look at the entries under the heading Concurrency, I am of
>> the opinion that we should go with pthreads for the time being.
>>
>> --
>> Nilay
>>
>> On Tue, February 5, 2013 1:50 pm, Steve Reinhardt wrote:
>> > Another question is what threading model to use... my original patch (and
>> > thus Nilay's current code) uses pthreads, but if it's practical to use the
>> > C++11 stuff instead, I'd be in favor of switching to that.
>> >
>> > Steve
>> >
>> >
>> > On Tue, Feb 5, 2013 at 11:47 AM, Steve Reinhardt <ste...@gmail.com> wrote:
>> >
>> >> I agree that single-threaded simulations should not be paying any
>> >> significant overhead.  It would be nice to make this a runtime choice,
>> >> though.  For example, a lot of the multi-threading overhead is in the
>> >> periodic synchronization events; if we know we're in single-threaded
>> >> mode, we just never schedule any of those, so there's no overhead and
>> >> no ifdef required either.
>> >>
>> >> Some of the other things might be a little trickier; for example, for
>> >> things like the etherlink or bus that require internal locking, perhaps
>> >> we want the multithreaded version to be a subclass; then, if we know all
>> >> the communicating objects are on the same thread, we just instantiate
>> >> the base class that doesn't have the locking in it.
>> >>
>> >> For DPRINTFs we can probably just leave the locks in; they're expensive
>> >> enough anyway that the extra overhead of taking a lock on a
>> >> single-threaded system is probably negligible.
>> >>
>> >> Thoughts?  Any other specific cases where this approach wouldn't work?
>> >>
>> >> Steve
>> >>
>> >>
>> >> On Tue, Feb 5, 2013 at 11:30 AM, Nilay <ni...@cs.wisc.edu> wrote:
>> >>
>> >>> We need to decide whether the code for multi-threading the simulation
>> >>> is always compiled or is optional. I am right now leaning towards it
>> >>> being optional. All the code required for multi-threading should be
>> >>> protected by an #ifdef macro, so that the performance impact on
>> >>> single-threaded simulations is kept to a minimum.
>> >>>
>> >>> --
>> >>> Nilay
>> >>>
>> >>> On Mon, February 4, 2013 10:11 am, Steve Reinhardt wrote:
>> >>> > I agree, this seems like a good time to clean things up and get them
>> >>> > committed, and ideally create a regression test too.
>> >>> >
>> >>> > My preference for the next step after that is to start doing some
>> >>> > intra-node parallelization, which would mean doing to some internal
>> >>> > interconnect what was done to the etherlink.  This would be either the
>> >>> > Bus object for the classic memory system or something else (maybe
>> >>> > MessageBuffer?) for Ruby.
>> >>> >
>> >>> > Steve
>> >>> >
>> >>> >
>> >>> > On Mon, Feb 4, 2013 at 6:32 AM, Ali Saidi <sa...@umich.edu> wrote:
>> >>> >
>> >>> >>
>> >>> >>
>> >>> >> Hi Nilay,
>> >>> >>
>> >>> >> That is awesome! If I understand correctly, you have two systems
>> >>> >> communicating over an ethernet link. I would suggest that this might
>> >>> >> be a good time for a checkpoint: getting the patches into good shape,
>> >>> >> deciding on a threading model, and getting the first pass committed.
>> >>> >>
>> >>> >> Thanks,
>> >>> >>
>> >>> >> Ali
>> >>> >>
>> >>>
>> >>
>> >>
>>
>>
>> --
>> Nilay
>>
_______________________________________________
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev
