Thanks for the tips. What constitutes a "large" work_mem? My server has 61GB of memory, and work_mem is currently set to nearly all of that. What percentage of total memory should I lower it to so it won't impact GIN updates?
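For context, work_mem is allocated per sort/hash operation per backend, not once per server, so with many connections a 61GB setting can exhaust memory many times over. A conservative sketch of saner settings (the values are illustrative starting points, not tuned recommendations):

```
# postgresql.conf sketch for a ~61GB server (illustrative values; tune per workload)
work_mem = 64MB              # per sort/hash operation per backend, NOT per server
maintenance_work_mem = 1GB   # used by VACUUM and index builds
shared_buffers = 15GB        # a common starting point is ~25% of RAM
```

A session that genuinely needs more for one big query can raise it locally with `SET work_mem = '512MB';` rather than inflating the global setting.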
On Mon, Mar 21, 2016 at 2:27 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Chris Spencer <chriss...@gmail.com> writes:
> > I have 5 GIN indexes on a PG 9.3 table containing about 50 million records.
> > Each index covers a "group" of common records like billing address,
> > shipping address, contact names, etc.
>
> > When first created, the indexes work miracles in speeding up the full text
> > search of these fields. However, I'm running into some serious maintenance
> > headaches.
>
> > After I insert a few thousand new records, the indexes seem to have no
> > effect. A query that might normally take 1 second now takes 5 minutes.
>
> Probably the problem is that most of the new index entries are still
> sitting in the index's pending list and haven't been merged to the main
> search tree. Try VACUUM'ing the table after a bulk insert. (I think
> there are other ways of triggering a GIN pending-list cleanup, but
> I don't recall which ones were available in 9.3.)
>
> Worthy of note here is that the max pending list size is governed by
> work_mem, so a large work_mem can make this effect worse. (We got
> smart and made it an independent setting, but not till 9.5.)
>
> regards, tom lane
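The maintenance options described above can be sketched as follows (the table and index names are placeholders for your own):

```sql
-- Flush the GIN pending list into the main tree after a bulk load
VACUUM my_table;

-- Alternatively, disable the pending list for an index entirely
-- (the fastupdate option is available in 9.3); individual inserts
-- get slower, but entries are indexed immediately:
ALTER INDEX my_table_billing_gin SET (fastupdate = off);

-- On 9.5+, cap the pending list independently of work_mem
-- (this is the separate setting mentioned above; value is in kB):
ALTER INDEX my_table_billing_gin SET (gin_pending_list_limit = 4096);
```

With `fastupdate = off`, queries stay fast after inserts at the cost of slower write throughput; with the default pending-list behavior, periodic VACUUMs keep the list from growing to the work_mem cap.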