Re: [PERFORM] GiST indexes and concurrency (tsearch2)

2005-02-09 Thread Marinos J. Yannikos
Tom Lane wrote: You might try the attached patch (which I just applied to HEAD). It cuts down the number of acquisitions of the BufMgrLock by merging adjacent bufmgr calls during a GIST index search. [...] Thanks - I applied it successfully against 8.0.0, but it didn't seem to have a noticeable

Re: [PERFORM] GiST indexes and concurrency (tsearch2)

2005-02-09 Thread Marinos J. Yannikos
Tom Lane wrote: I'm not completely convinced that you're seeing the same thing, but if you're seeing a whole lot of semops then it could well be. I'm seeing ~280 semops/second with spinlocks enabled and ~80k semops/second (4 mil. for 100 queries) with --disable-spinlocks, which increases total

Re: [PERFORM] GiST indexes and concurrency (tsearch2)

2005-02-03 Thread Marinos J. Yannikos
Oleg Bartunov wrote: On Thu, 3 Feb 2005, Marinos J. Yannikos wrote: concurrent access to GiST indexes isn't possible at the moment. I [...] there should be no problem with READ access. OK, thanks everyone (perhaps it would make sense to clarify this in the manual). I'm willing to see some

Re: [PERFORM] GiST indexes and concurrency (tsearch2)

2005-02-03 Thread Marinos J. Yannikos
Oleg Bartunov wrote: Marinos, what if you construct an apachebench-free script and see if the issue still exists. There could be many issues not connected to postgresql and tsearch2. Yes, the problem persists - I wrote a small perl script that forks 10 child processes and executes the

Re: [PERFORM] GiST indexes and concurrency (tsearch2)

2005-02-03 Thread Marinos J. Yannikos
Tom Lane wrote: What's the platform exactly (hardware and OS)? Hardware: http://www.appro.com/product/server_1142h.asp - SCSI version, 2 x 146GB 10k rpm disks in software RAID-1 - 32GB RAM OS: Linux 2.6.10-rc3, x86_64, debian GNU/Linux distribution - CONFIG_K8_NUMA is currently turned off (no

[PERFORM] GiST indexes and concurrency (tsearch2)

2005-02-02 Thread Marinos J. Yannikos
Hi, according to http://www.postgresql.org/docs/8.0/interactive/limitations.html , concurrent access to GiST indexes isn't possible at the moment. I haven't read the thesis mentioned there, but I presume that concurrent read access is also impossible. Is there any workaround for this, esp. if

Re: [PERFORM] optimization ideas for frequent, large(ish) updates

2004-02-15 Thread Marinos J. Yannikos
Jeff Trout wrote: Remember that it is going to allocate 800MB per sort. It is not "you can allocate up to 800MB, so if you need 1 meg, use one meg." Some queries may end up having a few sort steps. I didn't know that it always allocates the full amount of memory specified in the configuration
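The per-sort allocation issue above is usually handled by keeping the global setting modest and raising it only in the session that needs it. A sketch using the 7.4-era parameter name (`sort_mem`, later renamed `work_mem`); the UPDATE target is a placeholder:

```sql
-- postgresql.conf keeps a modest default (value in kB):
--   sort_mem = 8192
-- The session running the large update raises it for itself only:
SET sort_mem = 819200;            -- ~800MB per sort step, this session only
UPDATE big_update_table SET ...;  -- placeholder for the actual statement
RESET sort_mem;                   -- back to the configured default
```

Because the setting applies per sort step rather than per backend, a query with several sorts can transiently use a multiple of this figure.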

Re: [PERFORM] optimization ideas for frequent, large(ish) updates

2004-02-14 Thread Marinos J. Yannikos
Josh Berkus wrote: 800MB for sort mem? Are you sure you typed that correctly? You must be counting on not having a lot of concurrent queries. It sure will speed up index updating, though! 800MB is correct, yes... There are usually only 10-30 postgres processes active (imagine 5-10 people

[PERFORM] (partial?) indexes, LIKE and NULL

2004-01-27 Thread Marinos J. Yannikos
Hi, with the following table:

      Table "public.foo"
 Column | Type | Modifiers
--------+------+-----------
 t      | text |
Indexes: "a" btree (t)

Shouldn't queries that use ... where t like '%something%' benefit from "a" when t is NULL in almost all cases, since the query planner could use
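The partial-index idea the question is driving at can be sketched as follows (the index name `a_notnull` is hypothetical; note that a plain btree cannot serve a leading-wildcard LIKE in any case):

```sql
-- Index only the non-NULL rows, keeping the index tiny when t is
-- NULL in almost all cases:
CREATE INDEX a_notnull ON foo (t) WHERE t IS NOT NULL;

-- LIKE '%something%' has no fixed prefix, so a btree cannot satisfy it
-- directly; any benefit comes from the planner being able to restrict
-- the candidate rows to the non-NULL subset:
SELECT * FROM foo WHERE t IS NOT NULL AND t LIKE '%something%';
```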

Re: [PERFORM] why do optimizer parameters have to be set manually?

2003-12-19 Thread Marinos J. Yannikos
Tom Lane wrote: No, they are not that easy to determine. In particular I think the idea of automatically feeding back error measurements is hopeless, because you cannot tell which parameters are wrong. Isn't it just a matter of solving an equation system with n variables (n being the number of
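The "equation system" the question alludes to can be written out as a least-squares fit. This is an illustrative formulation only, not anything the planner actually does; $p_i$ are the cost parameters and $c_i(q)$ the per-query operation counts the planner's model multiplies them by:

```latex
% Estimated runtime of query q as a linear combination of the unknown
% cost parameters p_1..p_n (random_page_cost, cpu_tuple_cost, ...):
\hat{t}_q = \sum_{i=1}^{n} p_i \, c_i(q)

% Given measured runtimes t_q for m \ge n queries, fit the parameters:
\min_{p_1,\dots,p_n} \; \sum_{q=1}^{m} \Bigl( t_q - \sum_{i=1}^{n} p_i \, c_i(q) \Bigr)^2
```

The objection quoted above amounts to saying the system is badly conditioned in practice: typical workloads exercise similar mixes of the $c_i$, so an observed error cannot be attributed to any particular parameter.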

[PERFORM] why do optimizer parameters have to be set manually?

2003-12-18 Thread Marinos J. Yannikos
Hi, it seems to me that the optimizer parameters (like random_page_cost etc.) could easily be calculated and adjusted dynamically by the DB backend based on the planner's cost estimates and actual run times for different queries. Perhaps the developers could comment on that? I'm not sure how
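For context, a sketch of what the manual procedure looks like (7.4-era parameter names; the SELECT is a placeholder):

```sql
SET random_page_cost = 2.0;         -- default 4.0; lower on well-cached disks
SET effective_cache_size = 100000;  -- in 8kB disk pages (~800MB here)
EXPLAIN ANALYZE SELECT ...;         -- placeholder query: compare the
                                    -- estimated cost against the actual
                                    -- runtime, then iterate by hand
```

The tuning loop the question proposes to automate is exactly this compare-and-adjust cycle.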