I think I probably started using vector<bool> but wound up using bitset eventually. Likely anything I used vector<bool> for could easily be cleaned up to use bitset instead. It's probably not going to make a difference performance-wise where I used it, but at least it would be more consistent.
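For reference, that cleanup is mostly mechanical. Here's a minimal sketch assuming a flag count fixed at compile time (NumFlags, traceFlags, and flagOn are made-up names for illustration, not what m5 actually uses):

```cpp
#include <bitset>
#include <cstddef>

// Sketch of the vector<bool> -> bitset cleanup. NumFlags, traceFlags,
// and flagOn are hypothetical names, not m5's actual identifiers.
const std::size_t NumFlags = 128;

// Size is fixed at compile time, so there is no capacity bookkeeping
// or reallocation logic, unlike vector<bool>.
std::bitset<NumFlags> traceFlags;

bool flagOn(std::size_t t)
{
    // test() is bounds-checked; operator[] would skip the check.
    return traceFlags.test(t);
}
```

The call sites barely change, since bitset's operator[] and test() mirror vector<bool>'s indexing.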
Regarding Gabe, this probably doesn't help clear anything up, but:

http://www.sgi.com/tech/stl/bit_vector.html

    Description

    A bit_vector is essentially a vector<bool>: it is a Sequence that has
    the same interface as vector. The main difference is that bit_vector
    is optimized for space efficiency. A vector always requires at least
    one byte per element, but a bit_vector only requires one bit per
    element.

    *Warning*: The name bit_vector will be removed in a future release of
    the STL. The only reason that bit_vector is a separate class, instead
    of a template specialization of vector<bool>, is that this would
    require partial specialization of templates. On compilers that
    support partial specialization, bit_vector is a specialization of
    vector<bool>. The name bit_vector is a typedef. This typedef is not
    defined in the C++ standard, and is retained only for backward
    compatibility.

[EMAIL PROTECTED] wrote:
> I generally agree, except that I'd be a bit surprised (but not shocked)
> if vector<bool> was specialized to use bits. I'll check into this more
> at some point, but my vote would be to go with bitset since it sounds
> like that would get us where we'd want to be.
>
> Gabe
>
> Quoting Steve Reinhardt <[EMAIL PROTECTED]>:
>
>> I thought vector<bool> was supposed to be a space-optimized bit
>> vector.
>>
>> I think there are two things going on here:
>>
>> 1. It's an STL type, which means that the implementation probably is
>> a nightmare of layered abstractions that in theory the compiler can
>> figure out and flatten to an efficient piece of code in the common
>> case. If you're single-stepping through the debug version, I'm not
>> surprised that it's a mess, but I also would not be surprised if in
>> the opt version it boils down to roughly the equivalent of the
>> optimized code you're proposing.
>>
>> 2.
>> We probably should be using std::bitset rather than std::vector<bool>
>> when possible... the former should be faster since it's non-resizable
>> and thus might have less bounds-checking code. I've been trying to
>> use bitset for this type of thing everywhere I can in m5 (packet
>> flags), and it looks like Kevin is using it in a few places in o3
>> (though he uses vector<bool> also... not sure if that's intentional).
>> Plus even if vector<bool> is no longer space-optimized, I'm sure that
>> bitset is.
>>
>> In any case, we definitely don't want to write a one-off piece of
>> code just for trace flags. If someone can absolutely prove that
>> neither vector<bool> nor bitset is adequate when compiled with
>> optimization, we can consider writing a replacement class for all our
>> bit vectors, but I highly doubt that this is the case.
>>
>> Steve
>>
>> On Tue, Sep 23, 2008 at 11:33 AM, Ali Saidi <[EMAIL PROTECTED]> wrote:
>>
>>> You're talking about replacing "return flags[t];" with a
>>> space-optimized bit vector? I imagine it would help performance
>>> some, if for no other reason than that the trace flags would fit in
>>> a single cache block rather than spanning multiple as they do now.
>>>
>>> Ali
>>>
>>> On Sep 23, 2008, at 2:42 AM, Gabe Black wrote:
>>>
>>>> I just finished stepping through some code having to do with PCs in
>>>> the simple CPU, and I noticed that not printing DPRINTFs is
>>>> actually a fairly involved process, considering that you're not
>>>> actually doing anything. Part of the issue, I think, is that
>>>> whether or not a trace flag is on is stored in a vector of bools.
>>>> Since the size of the vector won't change often (ever?), would it
>>>> make sense to just make it a char[] and use something like the
>>>> following?
>>>>
>>>> flags[t >> 3] & (1 << (t & 7));
>>>>
>>>> I realize when you've got tracing on you're not going for blazing
>>>> speed in the first place, but if it's easy to tighten it up a bit,
>>>> that's probably a good idea. The other possibility is that it's
>>>> actually not doing a whole lot, but it calls through a bunch of
>>>> functions that gdb stops at one at a time. That would look like a
>>>> lot of work to someone stepping through with gdb but could be just
>>>> as fast.
>>>>
>>>> Gabe
>>>> _______________________________________________
>>>> m5-dev mailing list
>>>> [email protected]
>>>> http://m5sim.org/mailman/listinfo/m5-dev
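The expression quoted above packs one flag per bit into a byte array. A minimal self-contained sketch of that approach (MaxFlags, flags, setFlag, and flagOn are hypothetical names, not m5's actual identifiers); note that the bit index within a byte must be masked with 7, since a byte holds eight bits, whereas masking with 3 would only reach bits 0-3:

```cpp
#include <cstddef>

// Hypothetical sketch of the hand-rolled char[] flag array being
// discussed; MaxFlags, flags, setFlag, and flagOn are made-up names.
const std::size_t MaxFlags = 256;
unsigned char flags[MaxFlags / 8]; // zero-initialized at file scope

void setFlag(std::size_t t)
{
    // Byte index is t >> 3 (divide by 8); bit index within the byte is
    // t & 7, since a byte holds eight bits. Masking with 3 would only
    // reach bits 0-3 and alias flags whose indices differ in bit 2.
    flags[t >> 3] |= (unsigned char)(1 << (t & 7));
}

bool flagOn(std::size_t t)
{
    return (flags[t >> 3] & (1 << (t & 7))) != 0;
}
```

In practice this is essentially what a decent std::bitset implementation does internally, which is part of the argument in the thread against writing a one-off replacement.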
