On 2013-01-25 14:11:39 +0900, Michael Paquier wrote:
> On Thu, Jan 24, 2013 at 3:41 AM, Andres Freund <and...@2ndquadrant.com>wrote:
> 
> > I think the usage of list_append_unique_oids in
> > ReindexRelationsConcurrently might get too expensive in larger
> > schemas. It's O(n^2) in the current usage, and schemas with lots of
> > relations/indexes aren't unlikely candidates for this feature.
> > The easiest solution probably is to use a hashtable.
> >
> I just had a look at the hashtable APIs and I do not think they are well
> suited to building the list of unique index OIDs that need to be built
> concurrently. They would be more useful for mapping the indexOids to
> something else, like the concurrent Oids, but even then I think the code
> would be more readable if left as is.

It sure isn't optimal, but it should do the trick if you use the
hash_seq API to iterate over the hash afterwards. And you could use the
same table to map each index OID to its respective lock et al.
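Very roughly, something along these lines is what I am thinking of (just
a sketch, not code from your patch; the entry struct and function names
are made up, and oid_hash + HASH_FUNCTION is the usual dynahash idiom
for Oid keys):

#include "postgres.h"
#include "utils/hsearch.h"

/* hypothetical entry: the Oid key must come first */
typedef struct ConcurrentIndexEntry
{
    Oid         indexOid;       /* hash key */
    /* per-index payload could go here, e.g. lock information */
} ConcurrentIndexEntry;

static HTAB *
create_index_oid_table(void)
{
    HASHCTL     ctl;

    MemSet(&ctl, 0, sizeof(ctl));
    ctl.keysize = sizeof(Oid);
    ctl.entrysize = sizeof(ConcurrentIndexEntry);
    ctl.hash = oid_hash;

    return hash_create("concurrent index OIDs", 256, &ctl,
                       HASH_ELEM | HASH_FUNCTION);
}

static void
add_index_oid(HTAB *htab, Oid indexOid)
{
    bool        found;

    /* HASH_ENTER simply returns the existing entry for a duplicate OID */
    (void) hash_search(htab, &indexOid, HASH_ENTER, &found);
}

static void
process_index_oids(HTAB *htab)
{
    HASH_SEQ_STATUS status;
    ConcurrentIndexEntry *entry;

    hash_seq_init(&status, htab);
    while ((entry = (ConcurrentIndexEntry *) hash_seq_search(&status)) != NULL)
    {
        /* reindex entry->indexOid concurrently here */
    }
}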

If you prefer other ways to implement it, I guess the other easy
solution is to append the values without checking for duplicates and
then sort & remove the duplicates at the end. That probably ends up
being slightly more code, but I am not sure.
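Again just a sketch with made-up names, to show what I mean by sort &
deduplicate:

#include "postgres.h"

/* qsort comparator for plain Oids */
static int
oid_qsort_cmp(const void *a, const void *b)
{
    Oid         oa = *(const Oid *) a;
    Oid         ob = *(const Oid *) b;

    if (oa < ob)
        return -1;
    if (oa > ob)
        return 1;
    return 0;
}

/*
 * Sort oids[] in place and squeeze out duplicates in one pass;
 * returns the number of unique entries left at the front of the array.
 */
static int
unique_oids(Oid *oids, int noids)
{
    int         last = 0;
    int         i;

    if (noids <= 1)
        return noids;

    qsort(oids, noids, sizeof(Oid), oid_qsort_cmp);

    for (i = 1; i < noids; i++)
    {
        if (oids[i] != oids[last])
            oids[++last] = oids[i];
    }
    return last + 1;
}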

I don't think we can leave the quadratic part in there as-is.

Greetings,

Andres Freund

-- 
 Andres Freund                     http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

