Nothing really comes to mind. 1.3.0 is ancient (circa 2000?) and a lot
has changed since then, so it's hard to say in general. Normally,
we would need a test case demonstrating the problem, ideally
against 1.3.0, although I'm not sure how to even build it anymore. The
gprof output for both versions would be useful as well.

But wait, you mentioned _C_depth(). I think the problem might
be easier to solve: _C_depth() is a debugging function that gets
called (in debug mode only) from insert() and erase() to verify
that the tree stays balanced. The function runs in O(N*log(N)),
so it does have a significant cost, but it should never be called
when debugging is disabled (i.e., when _RWSTDDEBUG is not
#defined).
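To make the cost concrete, here is a minimal sketch of the kind of debug-only depth check described above. The node layout and function name are invented for illustration; they are not the actual SourcePro source:

```cpp
#include <algorithm>
#include <cstddef>

// Hypothetical node type; a real red-black tree node also carries
// color, a parent pointer, and the key/value.
struct Node {
    Node* left  = nullptr;
    Node* right = nullptr;
};

// Recursively computes the depth of the subtree rooted at p.
// This walk alone is O(N); re-verifying balance on every
// insert()/erase() is what drives the overall cost toward O(N*log(N)).
std::size_t depth(const Node* p) {
    if (!p)
        return 0;
    return 1 + std::max(depth(p->left), depth(p->right));
}
```

Running a check like this on every insert() or erase() turns each O(log(N)) tree operation into an O(N) one, which is consistent with a slowdown from seconds to minutes on a large tree.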

If calling the function even in debug mode is causing problems
for the customer, the fix should be straightforward: guard the
calls to it with a new macro to give users the ability to
disable it even when _RWSTDDEBUG is #defined. If this is
something the customer would be interested in, have them open
an issue for it in Jira.
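A sketch of what such a guard could look like. Note that _RWSTD_NO_DEPTH_CHECKS and _C_verify_depth() are invented names for illustration, not existing SourcePro identifiers:

```cpp
// The check is compiled in only when debugging is on AND the user
// has not explicitly opted out via the (invented) opt-out macro.
#if defined(_RWSTDDEBUG) && !defined(_RWSTD_NO_DEPTH_CHECKS)
#  define _RWSTD_VERIFY_DEPTH(t) (t)._C_verify_depth()
#else
#  define _RWSTD_VERIFY_DEPTH(t) ((void)0)   // expands to nothing
#endif

// Toy stand-in for the tree; records whether the check ran.
struct Tree {
    bool verified = false;
    // Stand-in for the O(N*log(N)) balance verification.
    void _C_verify_depth() { verified = true; }
    void insert() {
        // ... real insertion work would go here ...
        _RWSTD_VERIFY_DEPTH(*this);
    }
};
```

Defining _RWSTD_NO_DEPTH_CHECKS on the compiler command line would then skip the check even in a debug build, without changing behavior for existing debug-mode users.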

Martin

Jeremy Dean wrote:
Dear Rogue Wave Support team,
One of our customers ran into a problem when updating from Rogue Wave C++ Standard Library Limits Release v1.3.0 to RogueWave/SourcePro C++ Core v9: performance got much worse.
Processing that used to take 10 seconds now takes 15 minutes.
The customer says that the following function is the bottleneck:
__rw::__rb_tree(...)::_C_depth(...)
The customer investigated this with a profiling tool called gprof. What could be the cause?
Best Regards,
Yasuhiro
Jeremy Dean
Rogue Wave Software, Inc.
Technical Support
Phone: 303-545-3205 -- 1-800-404-4767
E-mail: [EMAIL PROTECTED]
Web: http://www.roguewave.com/support
Knowledge Base entries: http://www.roguewave.com/kbdocs/search.html
View issues online at: http://www.roguewave.com/youraccount/login/
