https://sourceware.org/bugzilla/show_bug.cgi?id=29993
Nick Clifton <nickc at redhat dot com> changed:
            What             |Removed                          |Added
----------------------------------------------------------------------------
        Assignee             |unassigned at sourceware dot org |nickc at redhat dot com
              CC             |                                 |nickc at redhat dot com
          Status             |NEW                              |ASSIGNED
--- Comment #3 from Nick Clifton <nickc at redhat dot com> ---
(In reply to Frank Ch. Eigler from comment #0)
Hi Frank,
> objcopy.c's merge code seems to be responsible.  There is a
> doubly nested loop over the notes, resulting in O(n**2) complexity.
Not quite...
> 2406 for (pnote = pnotes; pnote < pnotes_end; pnote ++)
> 2407 {
> [...]
> 2426 /* Rule 2: Check to see if there is an identical previous note. */
> 2427 for (iter = 0, back = pnote - 1; back >= pnotes; back --)
> 2428 {
> 2429 if (is_deleted_note (back))
> 2430 continue;
But a few lines further on there is:
  /* Don't scan too far back however.  */
  if (iter ++ > 16)
    {
      /* FIXME: Not sure if this can ever be triggered.  */
      merge_debug ("ITERATION LIMIT REACHED\n");
      break;
    }
So the inner loop only executes a maximum of 16 times per outer iteration.
Well, it inspects 16 non-deleted notes; if there are lots of deletions then
the loop will keep scanning for longer.  But there is also another
optimization:
  /* Our sorting function should have placed all identically
     attributed notes together, so if we see a note of a different
     attribute type stop searching.  */
  if (back->note.namesz != pnote->note.namesz
      || memcmp (back->note.namedata,
                 pnote->note.namedata, pnote->note.namesz) != 0)
    break;
So once the scan reaches a different kind of note it will stop searching.
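
To put a number on that caveat, here is a stripped-down model of the scan
(not the real objcopy code, just a toy with a simplified note type): the
iteration limit only counts live notes, so a long run of already-deleted
notes of the same attribute is still walked over one entry at a time.

  #include <stdio.h>
  #include <string.h>

  /* Simplified stand-in for objcopy's merged note - illustrative only.  */
  typedef struct
  {
    const char *name;     /* Stands in for note.namedata / namesz.  */
    int         deleted;
  } toy_note;

  /* Count how many array entries the backwards scan visits for one note.
     Mirrors the structure of the quoted loop: deleted notes are skipped
     without advancing the iteration counter, and a name change stops
     the scan.  */
  static int
  scan_back (toy_note *notes, toy_note *pnote)
  {
    int iter = 0, visited = 0;
    toy_note *back;

    for (back = pnote - 1; back >= notes; back --)
      {
        visited ++;
        if (back->deleted)
          continue;                       /* Does not bump iter.  */
        if (iter ++ > 16)
          break;                          /* Iteration limit.  */
        if (strcmp (back->name, pnote->name) != 0)
          break;                          /* Different attribute: stop.  */
        /* ... the actual duplicate comparison would go here ...  */
      }
    return visited;
  }

  int
  main (void)
  {
    enum { N = 10000 };
    static toy_note notes[N];
    int i;

    /* Worst-ish case: same attribute throughout, almost all deleted.  */
    for (i = 0; i < N; i++)
      {
        notes[i].name = "example";
        notes[i].deleted = (i % 100 != 0);
      }
    printf ("entries visited for the last note: %d\n",
            scan_back (notes, &notes[N - 1]));
    return 0;
  }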
> Please consider improving the algorithm's performance on this sort of large
> dataset.
Do you have any suggestions?  I will investigate this myself, but if you have
any ideas I would love to hear them.
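
For what it's worth, the first thing that springs to mind is keying Rule 2
off a hash of each note's contents rather than scanning backwards, which
would make the duplicate check roughly linear.  A very rough sketch of the
idea (purely illustrative: the structure, field names and helpers below are
made up for the example and are not objcopy's):

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  /* Illustrative stand-in for a merged note - not objcopy's structure.  */
  typedef struct
  {
    const char *namedata;  size_t namesz;
    const char *descdata;  size_t descsz;
    int deleted;
  } toy_note;

  /* djb2 over the name and desc bytes - any reasonable hash would do.  */
  static size_t
  note_hash (const toy_note *n)
  {
    size_t h = 5381, i;
    for (i = 0; i < n->namesz; i++)
      h = h * 33 + (unsigned char) n->namedata[i];
    for (i = 0; i < n->descsz; i++)
      h = h * 33 + (unsigned char) n->descdata[i];
    return h;
  }

  static int
  note_eq (const toy_note *a, const toy_note *b)
  {
    return a->namesz == b->namesz
      && a->descsz == b->descsz
      && memcmp (a->namedata, b->namedata, a->namesz) == 0
      && memcmp (a->descdata, b->descdata, a->descsz) == 0;
  }

  /* Delete later duplicates in a single pass: for each live note, look it
     up in an open-addressed table of pointers to previously seen notes;
     if an identical note is already there, mark the current one deleted.  */
  static void
  merge_duplicates (toy_note *notes, size_t count)
  {
    size_t nslots = count * 2 + 1, i;
    toy_note **table = calloc (nslots, sizeof *table);
    if (table == NULL)
      return;

    for (i = 0; i < count; i++)
      {
        size_t slot;
        if (notes[i].deleted)
          continue;
        slot = note_hash (&notes[i]) % nslots;
        while (table[slot] != NULL && ! note_eq (table[slot], &notes[i]))
          slot = (slot + 1) % nslots;      /* Linear probing.  */
        if (table[slot] != NULL)
          notes[i].deleted = 1;            /* Identical earlier note exists.  */
        else
          table[slot] = &notes[i];
      }
    free (table);
  }

  int
  main (void)
  {
    toy_note notes[] = {
      { "example", 8, "\x01", 1, 0 },
      { "example", 8, "\x01", 1, 0 },   /* Identical to the first.  */
      { "example", 8, "\x02", 1, 0 },   /* Same name, different desc.  */
    };
    int i;

    merge_duplicates (notes, 3);
    for (i = 0; i < 3; i++)
      printf ("note %d: %s\n", i, notes[i].deleted ? "deleted" : "kept");
    return 0;
  }

The table only stores pointers to notes that survive, so the cost is one
pointer per live note; whether that is acceptable for the sizes you are
seeing is another question.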
Cheers
Nick
--
You are receiving this mail because:
You are on the CC list for the bug.