When I did some googling on this earlier today, it was apparent that at some point in the past vector<bool> was typically specialized to provide bit-packed storage. That meant it didn't behave like a normal vector, since you couldn't get a reference to an individual element (as Nate indicated), and there were debates about whether that was a bug or a feature. It wasn't clear what the resolution was (whether vector<bool> still gets specialized by default or not), but it does appear that the creation of bitset was a side effect of that debate.
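For concreteness, here's a minimal sketch of what the bit-packed specialization means in practice, assuming a library that implements it (none of this is m5 code):

    #include <bitset>
    #include <iostream>
    #include <vector>

    int main()
    {
        std::vector<bool> vb(8, false);
        vb[3] = true;           // assignment goes through a proxy object
        // bool *p = &vb[0];    // does not compile with the bit-packed
        //                      // specialization: vb[0] is a proxy
        //                      // (std::vector<bool>::reference), not a bool&

        std::bitset<8> bs;      // size is fixed at compile time
        bs.set(3);

        std::cout << vb[3] << " " << bs.test(3) << std::endl;  // prints "1 1"
        return 0;
    }

So whether or not a given library still ships the specialization by default, bitset gives you the packed behavior explicitly.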
Bottom line is still that we should be using bitset wherever it makes sense.

Steve

On Tue, Sep 23, 2008 at 5:09 PM, nathan binkert <[EMAIL PROTECTED]> wrote:
> I think vector<bool> is indeed a vector of bools because &x[0] is
> supposed to return a bool * that you can mess with. I think that's
> why bitset exists. I'd be happy to see us move to bitset. The size
> of the bitset is easy to get since the code is autogenerated anyway.
>
> Nate
>
> On Tue, Sep 23, 2008 at 2:16 PM, <[EMAIL PROTECTED]> wrote:
> > I generally agree, except that I'd be a bit surprised (but not shocked)
> > if vector<bool> was specialized to use bits. I'll check into this more
> > at some point, but my vote would be to go with bitset since it sounds
> > like that would get us where we'd want to be.
> >
> > Gabe
> >
> > Quoting Steve Reinhardt <[EMAIL PROTECTED]>:
> >
> >> I thought vector<bool> was supposed to be a space-optimized bit vector.
> >>
> >> I think there are two things going on here:
> >>
> >> 1. It's an STL type, which means that the implementation is probably a
> >> nightmare of layered abstractions that, in theory, the compiler can
> >> figure out and flatten to an efficient piece of code in the common case.
> >> If you're single-stepping through the debug version, I'm not surprised
> >> that it's a mess, but I also would not be surprised if in the opt
> >> version it boils down to roughly the equivalent of the optimized code
> >> you're proposing.
> >>
> >> 2. We probably should be using std::bitset rather than std::vector<bool>
> >> when possible... the former should be faster since it's non-resizable
> >> and thus might have less bounds-checking code. I've been trying to use
> >> bitset for this type of thing everywhere I can in m5 (packet flags), and
> >> it looks like Kevin is using it in a few places in o3 (though he uses
> >> vector<bool> also... not sure if that's intentional). Plus, even if
> >> vector<bool> is no longer space-optimized, I'm sure that bitset is.
> >>
> >> In any case, we definitely don't want to write a one-off piece of code
> >> just for trace flags. If someone can absolutely prove that neither
> >> vector<bool> nor bitset is adequate when compiled with optimization, we
> >> can consider writing a replacement class for all our bit vectors, but I
> >> highly doubt that this is the case.
> >>
> >> Steve
> >>
> >> On Tue, Sep 23, 2008 at 11:33 AM, Ali Saidi <[EMAIL PROTECTED]> wrote:
> >>
> >> > You're talking about replacing "return flags[t];" with a
> >> > space-optimized bit vector? I imagine it would help performance some,
> >> > if for no other reason than that the trace flags would fit in a single
> >> > cache block rather than spanning multiple blocks as they do now.
> >> >
> >> > Ali
> >> >
> >> > On Sep 23, 2008, at 2:42 AM, Gabe Black wrote:
> >> >
> >> > > I just finished stepping through some code having to do with PCs in
> >> > > the simple CPU, and I noticed that not printing DPRINTFs is actually
> >> > > a fairly involved process, considering that you're not actually
> >> > > doing anything. Part of the issue, I think, is that whether or not a
> >> > > trace flag is on is stored in a vector of bools. Since the size of
> >> > > the vector won't change often (ever?), would it make sense to just
> >> > > make it a char [] and use something like the following?
> >> > >
> >> > > flags[t >> 3] & (1 << (t & 7));
> >> > >
> >> > > I realize when you've got tracing on you're not going for blazing
> >> > > speed in the first place, but if it's easy to tighten it up a bit,
> >> > > that's probably a good idea. The other possibility is that it's
> >> > > actually not doing a whole lot, but calling through a bunch of
> >> > > functions that gdb stops at one at a time. That would look like a
> >> > > lot of work to someone stepping through with gdb, but it could be
> >> > > just as fast.
> >> > >
> >> > > Gabe
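For what it's worth, here's a small sketch of what the flag check could look like either way (the names NumFlags, rawFlags, checkFlagRaw, and checkFlag are hypothetical, for illustration only, and not the actual m5 interface):

    #include <bitset>

    // Hypothetical flag count; in m5 this value would come from the
    // autogenerated trace-flag code.
    const int NumFlags = 256;

    // Option 1: hand-rolled packing into a char array, as proposed above.
    // The byte index is t >> 3 and the bit index within that byte is t & 7.
    char rawFlags[(NumFlags + 7) >> 3];

    inline bool
    checkFlagRaw(int t)
    {
        return rawFlags[t >> 3] & (1 << (t & 7));
    }

    // Option 2: std::bitset, which packs bits much the same way but keeps
    // the existing "return flags[t];" style.  operator[] is not
    // bounds-checked; test() is, and throws std::out_of_range on a bad index.
    std::bitset<NumFlags> flags;

    inline bool
    checkFlag(int t)
    {
        return flags[t];
    }

Either way the whole flag set ends up packed into a few bytes, which is the cache-footprint win Ali describes above.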
_______________________________________________ m5-dev mailing list [email protected] http://m5sim.org/mailman/listinfo/m5-dev
