Hmm ... I wonder whether this means that the current work on
parallelizing I/O (the posix_fadvise patch in particular) is a dead
end. Because what that is basically going to do is expend more CPU
to improve I/O efficiency. If you believe this thesis then that's
not the road we want to go
Well, when select count(1) reads pages slower than my disk, it's 16x+ slower
than my RAM. Until one can demonstrate that the system can even read pages
in RAM faster than what disks will do next year, it doesn't matter much that
RAM is faster. It does matter that RAM is faster for sorts,
You guys are right. I tried 'Miller' and it gave me the same result. Is there
any way to tune this so that for the common last names, the query run time
doesn't jump from 1s to 300s?
Thanks for the help!
Can you send the output of EXPLAIN ANALYZE for both cases?
...Robert
--
Sent via
On Tue, Dec 9, 2008 at 1:11 PM, Joshua D. Drake j...@commandprompt.com wrote:
Those intel SSDs sound compelling. I've been waiting for SSDs to get
competitive price and performance wise for a while, and when the
intels came out and I read the first benchmarks I immediately began
scheming.
On Tue, Dec 16, 2008 at 2:32 PM, Jaime Casanova
jcasa...@systemguards.com.ec wrote:
we have some bad queries (developers are working on that), some of
them run in 17 secs and that is the average, but when analyzing logs i
found that from time to time some of them took up to 3 mins (and it
is the exact same query)... i think it will be removed later today because
it's a bad query anyway... but my fear is that something like this happens
even with good ones...
maybe checkpoints could be the problem?
i have 8.3.5 and configured checkpoint_timeout to 15 minutes,
checkpoint_segments to 6 and
as you see, explain analyze says it will execute in 175.952ms, and
because of network transfer of data, executing this from pgadmin on
another machine it runs for 17s... but from time to time pgFouine
shows up to 345.11 sec
Well, 86000 rows is not enough to give PostgreSQL a headache, even on
On Tue, Dec 30, 2008 at 4:05 AM, Laszlo Nagy gand...@shopzeus.com wrote:
We have several tables where the index size is much bigger than the table
size.
You'll usually get index bloat in roughly the same measure that you
get table bloat. If you always (auto)vacuum regularly, then the
amount of
On Tue, Dec 30, 2008 at 5:41 AM, Anton Bogdanovitch
poison@gmail.com wrote:
I have to insert rows to table with 95% primary key unique_violation.
If you're inserting a lot of rows at once, I think you're probably
better off loading all of the data into a side table that does not
have a
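A sketch of that side-table approach (table names and the file path are hypothetical): bulk-load into a staging table with no primary key or indexes, then copy over only the rows that would not collide.

```sql
-- Staging table with the same columns as the target, but no PK or indexes.
CREATE TABLE staging (LIKE target INCLUDING DEFAULTS);
COPY staging FROM '/tmp/data.csv' WITH CSV;

-- Insert only rows whose key is not already present, deduplicating the
-- staging data itself at the same time.
INSERT INTO target
SELECT DISTINCT ON (id) *
FROM staging s
WHERE NOT EXISTS (SELECT 1 FROM target t WHERE t.id = s.id);

DROP TABLE staging;
```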
On Wed, Dec 31, 2008 at 11:01 AM, Alvaro Herrera
alvhe...@commandprompt.com wrote:
The point of a prepared statement IMHO is to do the planning only once.
There's necessarily a tradeoff between that and having a plan that's
perfectly adapted to specific parameter values.
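For illustration (table and parameter names hypothetical): the plan is built once at PREPARE time, before any concrete parameter value is seen, so every EXECUTE reuses that one generic plan.

```sql
PREPARE find_orders (integer) AS
    SELECT * FROM orders WHERE customer_id = $1;

-- Both executions reuse the same plan, even if one value is far more
-- common in the table than the other.
EXECUTE find_orders(42);
EXECUTE find_orders(7);

DEALLOCATE find_orders;
```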
I think it has been
While looking at a complex query that is being poorly planned by
PostgreSQL 8.2.9, I discovered that any non-trivial CASE...WHEN
expression seems to produce a selectivity estimate of 0.005. This
also happens on HEAD.
psql (8.4devel)
Type "help" for help.
head=# create table tenk (c) as select
On Mon, Jan 5, 2009 at 11:40 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
While looking at a complex query that is being poorly planned by
PostgreSQL 8.2.9, I discovered that any non-trivial CASE...WHEN
expression seems to produce a selectivity estimate
Where you *will* have some major OS risk is with testing-level software
or bleeding edge Linux distros like Fedora. Quite frankly, I don't
know why people run Fedora servers -- if it's Red Hat compatibility you
want, there's CentOS.
I've had no stability problems with Fedora. The worst
On Sun, Jan 18, 2009 at 10:30 PM, Chris dmag...@gmail.com wrote:
Hi all,
I have a view that looks like this:
SELECT
CASE
WHEN r.assetid IS NULL THEN p.assetid
ELSE r.assetid
END AS assetid,
CASE
WHEN r.userid IS NULL THEN p.userid
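Incidentally, a CASE of that shape can be written more compactly with COALESCE, which has the same semantics when the test is IS NULL:

```sql
-- The IS NULL test with a fallback column...
CASE WHEN r.assetid IS NULL THEN p.assetid ELSE r.assetid END AS assetid
-- ...is equivalent to:
COALESCE(r.assetid, p.assetid) AS assetid
```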
I tried work_mem and maintenance_work_mem but it does not seem to make much
difference yet. Admittedly I had set it to 100M and 80M, so after reading a
little bit more I have found that I could easily set it to several GBs. But
I am not sure those are the correct config parameters to use for
Is that how it works for an index as well? I just found out that I have an
index that is 35GB, and the table is 85GB. ( I will look into the index, it
works fine, but an index that is almost one third of the size of the table,
seems a little bit strange. )
So if it works the same way and
I'm having a problem with a query that takes more or less 3.2 seconds to be
executed.
This query uses a view which encapsulates some calculations (in order to
avoid duplicating these calculations in several places in the project).
In order to keep that post readable, I've put the view
Is there any tweaks to force pgsql to use index on description?
Even if you could force it to use the index, it wouldn't make the
query run faster.
As others have pointed out, what you really need is a different kind of index...
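One such "different kind of index" (a sketch; table and column names are hypothetical) is a full-text index, since a plain btree cannot serve a substring or word search on description:

```sql
-- GIN index over the tsvector of the column (built-in since 8.3):
CREATE INDEX items_description_fts
    ON items USING gin (to_tsvector('english', description));

-- The query must use the same expression to be able to use the index:
SELECT * FROM items
WHERE to_tsvector('english', description) @@ to_tsquery('english', 'widget');
```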
...Robert
My question: with that kind of volume and the underlying aggregation
functions (by product id, dates, possibly IP addresses or at least
countries of origin..) will PG ever be a good choice? Or should I be
looking at some other kind of tools? I wonder if OLAP tools would be
overkill for
On Thu, Jan 29, 2009 at 10:58 AM, anders.blaaga...@nordea.com wrote:
Hi,
If I have a view like:
create view X as (
select x from A
union all
select x from B)
and do
select max(x) from X
I get a plan like:
Aggregate
Append
Seq Scan on A
Seq Scan on B
If A and B are
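If the planner does not push the aggregate below the Append, a manual rewrite (sketch) lets each branch compute its own max, and use its own index on x if one exists:

```sql
SELECT max(x) FROM (
    SELECT max(x) AS x FROM A
    UNION ALL
    SELECT max(x) AS x FROM B
) AS per_branch;
```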
On Thu, Jan 29, 2009 at 3:21 PM, henk de wit henk53...@hotmail.com wrote:
Hi,
When I restore a database dump on PostgreSQL 8.3
that's approximately 130GB in size, it takes about 1 hour, and I noticed index
creation makes up the bulk of that time. I'm using a very fast I/O subsystem
(16
On Thu, Jan 29, 2009 at 3:15 PM, Subbiah Stalin-XCGF84
ssubb...@motorola.com wrote:
I'm in the process of tuning a query that does a sort on a huge dataset.
With work_mem set to 2M, i see the sort operation spilling to disk writing
up to 430MB and then returning the first 500 rows. Our query is of
On Mon, Feb 2, 2009 at 1:17 PM, Brian Cox brian@ca.com wrote:
I'm using 8.3.5. Table ts_defects has 48M rows. Through psql: delete from
ts_defects;
Result: out of memory/Can't allocate size: 32
I then did 10 or so deletes to get rid of the rows. Afterwards, inserts into
or queries on this
On Mon, Feb 2, 2009 at 3:01 PM, Brian Cox brian@ca.com wrote:
How much memory do you have in your machine? What is work_mem set to?
32G; work_mem=64M
Hmm. Well then I'm not sure why you're running out of memory, that
seems like a bug. Taking a long time, I understand. Crashing, not so
It's the pending trigger list. He's got two trigger events per row,
which at 40 bytes apiece would approach 4GB of memory. Apparently
it's a 32-bit build of Postgres, so he's running out of process address
space.
There's a TODO item to spill that list to disk when it gets too large,
but
On Tue, Feb 3, 2009 at 4:17 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Alvaro Herrera alvhe...@commandprompt.com writes:
Robert Haas escribió:
Have you ever given any thought to whether it would be possible to
implement referential integrity constraints with statement-level
triggers instead
On Wed, Feb 4, 2009 at 7:35 AM, Gregory Stark st...@enterprisedb.com wrote:
Robert Haas robertmh...@gmail.com writes:
That's good if you're deleting most or all of the parent table, but
what if you're deleting 100,000 values from a 10,000,000 row table?
In that case maybe I'm better off
On Fri, Feb 6, 2009 at 11:14 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Mario Splivalo mario.spliv...@megafon.hr writes:
Besides PK and uq-constraint indices I have this index:
CREATE INDEX transactions_idx__client_data ON transactions
USING btree (transaction_client_id,
On Fri, Feb 6, 2009 at 12:41 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
Robert Haas robertmh...@gmail.com wrote:
What's weird about this example is that when he sets enable_seqscan to
off, the bitmap index scan plan is actually substantially faster, even
though it in fact does scan
Just guessing here, but what values are you using for
join_collapse_limit and from_collapse_limit, and what happens if you
make them much bigger (like 100)?
...Robert
effective_cache_size
This is just a hint to tell the planner how much cache will generally be
available.
ok, but available for what?
The documentation on these parameters is really very good.
http://www.postgresql.org/docs/8.3/interactive/runtime-config-query.html
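For example (the value is illustrative, not a recommendation), in postgresql.conf:

```
# A planner hint only: an estimate of how much of the database the OS
# filesystem cache plus shared_buffers can hold. Nothing is allocated.
effective_cache_size = 4GB
```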
On Mon, Feb 9, 2009 at 10:44 AM, justin jus...@emproshunts.com wrote:
Matthew Wakeling wrote:
On Sat, 7 Feb 2009, justin wrote:
In big databases a checkpoint could get very large before the time had
elapsed, and if the server crashed all that work would be rolled back.
No. Once you commit a
On Tue, Feb 10, 2009 at 5:31 AM, Rajesh Kumar Mallah
mallah.raj...@gmail.com wrote:
I have a query in which two huge tables (A,B) are joined using an indexed
column and a search is made on tsvector on some column on B. Very limited
rows of B are expected to match the query on tsvector column.
On Tue, Feb 17, 2009 at 11:17 AM, Alexander Gorban
alex.gor...@gmail.com wrote:
Hi,
I have table containing bytea and text columns. It is my storage for
image files and it's labels. Labels can be 'original' and 'thumbnail'.
I've C-function defined in *.so library and corresponding declaration
On Tue, Feb 17, 2009 at 12:46 PM, Alexander Gorban
alex.gor...@gmail.com wrote:
On Tue, 17/02/2009 at 12:24 -0500, Robert Haas wrote:
On Tue, Feb 17, 2009 at 11:17 AM, Alexander Gorban
alex.gor...@gmail.com wrote:
Hi,
I have table containing bytea and text columns. It is my storage
First of all, you need to do some research on the benchmark kit itself,
rather than blindly downloading and using one. BenchmarkSQL has significant
bugs in it which affect the result. I can say that authoritatively as I
worked on/with it for quite a while. Don't trust any result that comes
On Fri, Feb 20, 2009 at 4:34 PM, Jonah H. Harris jonah.har...@gmail.com wrote:
On Fri, Feb 20, 2009 at 3:40 PM, Merlin Moncure mmonc...@gmail.com wrote:
ISTM you are the one throwing out unsubstantiated assertions without
data to back it up. The OP ran a benchmark, showed hardware/configs, and
On Sat, Feb 21, 2009 at 10:41 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Scott Carey sc...@richrelevance.com writes:
Are there any optimizations planned for the case where columns are
defined as NOT NULL?
We might get around to recognizing that case as an antijoin sometime.
It's nontrivial
On Mon, Feb 23, 2009 at 7:26 AM, Kouber Saparev kou...@saparev.com wrote:
Now, recently I have altered some of the default parameters in order to get
as much as possible out of the hardware - 12 GB of RAM, 8 processors. So, I
guess I have done something wrong, thus the planner is taking that
Can you please elaborate a bit?
I thought that A0.Prop would ignore the composite index created on the
columns subj and prop but this does not seem to be the case.
Yeah, I think you're barking up the wrong tree here. I think Tom had
the correct diagnosis - what do you get from show work_mem?
Just start up psql and type:
show work_mem;
(You could look in the config file too I suppose.)
...Robert
On Wed, Feb 25, 2009 at 1:53 PM, Farhan Husain russ...@gmail.com wrote:
On Wed, Feb 25, 2009 at 12:49 PM, Robert Haas robertmh...@gmail.com wrote:
You still haven't answered
Please note that this (1792MB) is the highest that I could set for work_mem.
Yeah, that's almost certainly part of your problem.
You need to make that number MUCH smaller. You probably want a value
like 1MB or 5MB or maybe if you have really a lot of memory 20MB.
That's insanely high.
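Since work_mem is allocated per sort or hash operation, per backend, a safer pattern (sketch) is a small global default, raised only in the session that actually needs it:

```sql
SET work_mem = '256MB';   -- affects only the current session
-- ... run the one big sort or report here ...
RESET work_mem;           -- back to the configured default
```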
On Wed, Feb 25, 2009 at 3:44 PM, Farhan Husain russ...@gmail.com wrote:
Initially, it was the default value (32MB). Later I played with that value
thinking that it might improve the performance. But all the values resulted
in same amount of time.
Well, if you set it back to what we consider to
On Wed, Feb 25, 2009 at 4:32 PM, Farhan Husain russ...@gmail.com wrote:
On Wed, Feb 25, 2009 at 3:30 PM, Robert Haas robertmh...@gmail.com wrote:
On Wed, Feb 25, 2009 at 3:44 PM, Farhan Husain russ...@gmail.com wrote:
Initially, it was the default value (32MB). Later I played with that
value
shared_buffers = 32MB # min 128kB or
max_connections*16kB
That's REALLY small for pgsql. Assuming your machine has at least 1G
of ram, I'd set it to 128M to 256M as a minimum.
As I wrote in a previous email, I had the value set to 1792MB (the highest I
could set) and
Here is the latest output:
ingentadb=# EXPLAIN ANALYZE select A0.Subj, A2.Obj From jena_g1t1_stmt A0,
jena_g1t1_stmt A1, jena_g1t1_stmt A2 Where
A0.Prop='Uv::http://prismstandard.org/namespaces/1.2/basic/isPartOf' AND
On Fri, Feb 27, 2009 at 3:18 PM, Alexander Staubo a...@bengler.no wrote:
On Sun, Feb 15, 2009 at 5:45 PM, Alexander Staubo a...@bengler.no wrote:
On Sun, Feb 15, 2009 at 5:29 AM, David Wilson david.t.wil...@gmail.com
wrote:
On Sat, Feb 14, 2009 at 5:25 PM, Alexander Staubo a...@bengler.no
On Sat, Feb 28, 2009 at 9:51 PM, Brian Cox brian@ca.com wrote:
Actually, they're all deadlocked. The question is why?
Here's a brief background. The ts_defects table is partitioned by occurrence
date; each partition contains the rows for 1 day. When the data gets old
enough, the partition
On Sat, Feb 28, 2009 at 11:20 AM, Alexander Staubo a...@bengler.no wrote:
On Fri, Feb 27, 2009 at 11:54 PM, Robert Haas robertmh...@gmail.com wrote:
The problem here is that the planner estimates the cost of a Limit
plan node by adding up (1) the startup cost of the underlying plan
node
On Mon, Mar 2, 2009 at 1:22 PM, Brian Cox brian@ca.com wrote:
As you can see there are only 3 transactions and 1 starts 1 hour after
the drop begins. I'm still trying to figure out how to interpret the
pg_locks output, but (presumably) you/others on this forum have more
experience at this
On Tue, Mar 3, 2009 at 12:05 PM, Sebastjan Trepca tre...@gmail.com wrote:
Hey,
I have a table that links content together and it currently holds
about 17 million records. Typical query is a join with a content table and
link table:
noovo-new=# explain analyze SELECT core_accessor.id,
On Tue, Mar 3, 2009 at 12:20 PM, Sebastjan Trepca tre...@gmail.com wrote:
But it's already attached in the first mail or am I missing something?
If you don't see it, check this: http://pastebin.com/d71b996d0
Woops, sorry, I thought you had sent plain EXPLAIN. I see it now.
The lowest level
On Tue, Mar 3, 2009 at 2:16 PM, Sebastjan Trepca tre...@gmail.com wrote:
Maybe this is useful, I removed the JOIN and it uses other
index(core_accessor_date_idx indexes (date_posted, nooximity)), but
it's still hardly any better:
noovo-new=# explain analyze SELECT * FROM core_accessor WHERE
On Wed, Mar 4, 2009 at 6:20 PM, Aaron Guyon battlem...@gmail.com wrote:
On Tue, Mar 3, 2009 at 5:34 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Comparisons on
numerics aren't terribly fast though (in either release). I wonder
whether you could change the key columns to int or bigint.
I changed
On Thu, Mar 5, 2009 at 10:20 AM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
Robert Haas robertmh...@gmail.com wrote:
Well, it looks like the problem is that 8.3 is not using the index
idx_bundle_content_bundle_id. But I don't know why that should be
happening, unless there's a problem
On Thu, Mar 5, 2009 at 1:57 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
Aaron Guyon battlem...@gmail.com wrote:
8.3.3: 1195 ms
8.2.12: 611 ms
Could you send the non-commented lines from the postgresql.conf files
from both installations?
If feasible, update to the latest bug-fix
On Thu, Mar 5, 2009 at 1:30 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Jonathan Hseu vom...@vomjom.net writes:
Sort  (cost=11684028.44..11761274.94 rows=30898601 width=40)
  Sort Key: time
  ->  Bitmap Heap Scan on ticks  (cost=715657.57..6995196.08 rows=30898601 width=40)
        Recheck
It's worth ruling out given that even if the likelihood is small, the fix is
easy. However, I don’t see the throughput drop from peak as more
concurrency is added that is the hallmark of this problem — usually with a
lot of context switching and a sudden increase in CPU use per transaction.
On Fri, Mar 13, 2009 at 10:06 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Gregory Stark st...@enterprisedb.com writes:
Tom Lane t...@sss.pgh.pa.us writes:
Ugh. So apparently, we actually need to special-case Solaris to not
believe that posix_fadvise works, or we'll waste cycles uselessly
calling
On Wed, Mar 18, 2009 at 1:43 PM, Scott Carey sc...@richrelevance.com wrote:
It's worth ruling out given that even if the likelihood is small, the fix is
easy. However, I don't see the throughput drop from peak as more
concurrency is added that is the hallmark of this problem usually with a
Actually the patch I submitted shows no overhead from what I have seen and I
think it is useful depending on workloads where it can be turned on even on
production.
Well, unless I'm misunderstanding something, waking all waiters every
time could lead to arbitrarily long delays for writers on
On Thu, Mar 19, 2009 at 5:43 PM, Scott Carey sc...@richrelevance.com wrote:
Well, unless I'm misunderstanding something, waking all waiters every
time could lead to arbitrarily long delays for writers on mostly
read-only workloads... and by arbitrarily long, we mean to say
potentially just
On Fri, Mar 20, 2009 at 1:16 PM, Anne Rosset aros...@collab.net wrote:
Richard Huxton wrote:
Anne Rosset wrote:
EXPLAIN ANALYZE
SELECT
audit_change.id AS id,
audit_change.audit_entry_id AS auditEntryId,
audit_entry.object_id AS objectId,
audit_change.property_name AS
On Fri, Mar 20, 2009 at 4:29 PM, Anne Rosset aros...@collab.net wrote:
Alvaro Herrera wrote:
Robert Haas escribió:
Something is wrong here. How can setting enable_seqscan to off result
in a plan with a far lower estimated cost than the original plan? If
the planner thought the non-seq-scan
On Fri, Mar 20, 2009 at 7:39 PM, Jignesh K. Shah j.k.s...@sun.com wrote:
Alvaro Herrera wrote:
So Simon's correct.
And perhaps this explains why Jignesh is measuring an improvement on his
benchmark. Perhaps an useful experiment would be to turn this behavior
off and compare performance.
On Mon, Mar 23, 2009 at 1:08 PM, Anne Rosset aros...@collab.net wrote:
enable_nestloop = off
That may be the source of your problem. Generally setting enable_* to
off is a debugging tool, not something you ever want to do in
production.
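A typical diagnostic pattern (sketch; table names hypothetical) keeps such a setting scoped to one session:

```sql
SET enable_nestloop = off;                         -- this session only
EXPLAIN ANALYZE SELECT * FROM t1 JOIN t2 USING (id);  -- compare the alternate plan
RESET enable_nestloop;                             -- back to the default
```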
...Robert
On Tue, Mar 24, 2009 at 11:43 PM, Bryan Murphy bmurphy1...@gmail.com wrote:
Looking through our configuration one more time, I see that at some
point I set random_page_cost to 2.0, but I don't see any other changes
to query planner settings from their default values.
You don't by any chance
On Mon, Mar 30, 2009 at 1:50 PM, d...@sidhe.org wrote:
I'm running a 64-bit build of Postgres 8.3.5 on AIX 5.3, and have a really
strange, annoying transient problem with one particular query stalling.
The symptom here is that when this query is made with X or more records in
a temp table
On Mon, Mar 30, 2009 at 4:02 PM, d...@sidhe.org wrote:
On Mon, Mar 30, 2009 at 1:42 PM, d...@sidhe.org wrote:
On Mon, Mar 30, 2009 at 12:42 PM, d...@sidhe.org wrote:
Arguably in this case the actual query should run faster than the
EXPLAIN
ANALYZE version, since the cache is hot. (Though
On Wed, Apr 1, 2009 at 5:10 AM, Mahu Vasile m...@vrancart.ro wrote:
tcp_keepalives_count = 1 # TCP_KEEPCNT;
This might not be what you want.
http://www.postgresql.org/docs/8.3/static/runtime-config-connection.html
Presumably you'd like to wait more than 1 second before declaring
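For reference (values are illustrative, not recommendations), the three keepalive settings work together:

```
tcp_keepalives_idle = 60       # seconds of inactivity before the first probe
tcp_keepalives_interval = 10   # seconds between unanswered probes
tcp_keepalives_count = 5       # unanswered probes before dropping the connection
```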
Can I try again? :)
How hard would it be to teach the planner about preserving uniqueness of
relations in subqueries?
And using that information to remove unnecessary self joins on unique sets?
I can try to rewrite some queries to test it on real data for how much
gain it would provide.
I
On Fri, Apr 3, 2009 at 9:32 AM, Matthew Wakeling matt...@flymine.org wrote:
I'm writing a plpgsql function that effectively does a merge join on the
results of two queries. Now, it appears that I cannot read the results of
two queries as streams in plpgsql, so I need to copy the contents of one
On Mon, Apr 6, 2009 at 2:54 AM, roopasatish roopaben...@gmail.com wrote:
I have an issue with the add foreign key constraint which goes for waiting
and locks other queries as well.
ALTER TABLE ONLY holding_positions ADD CONSTRAINT
holding_positions_stock_id_fkey FOREIGN KEY (stock_id)
On Sun, Apr 5, 2009 at 11:35 PM, Rainer Mager rai...@vanten.com wrote:
-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Rainer Mager rai...@vanten.com writes:
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Hmm ... it's pretty unusual to see the index fetch portion of a
On Mon, Apr 6, 2009 at 8:52 AM, Matthew Wakeling matt...@flymine.org wrote:
On Fri, 3 Apr 2009, Simon Riggs wrote:
On Fri, 2009-04-03 at 10:04 -0400, Tom Lane wrote:
Matthew Wakeling matt...@flymine.org writes:
On Fri, 3 Apr 2009, Robert Haas wrote:
Why not just use SQL to do the join
On Wed, Apr 8, 2009 at 9:42 AM, Marinos Yannikos m...@geizhals.at wrote:
It seems that ANALYZE does not really sample text column values as much as
it could. We have some very bad query plans resulting from this:
...
->  Bitmap Index Scan on m_pkey  (cost=0.00..28.61 rows=102 width=0)
2009/4/16 Matthew Wakeling matt...@flymine.org:
On Thu, 16 Apr 2009, Grzegorz Jaśkiewicz wrote:
On Thu, Apr 16, 2009 at 11:37 AM, Matthew Wakeling matt...@flymine.org
wrote:
I have a query that is executed really badly by Postgres. It is a nine
table
join, where two of the tables are
On Thu, Apr 16, 2009 at 9:49 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Merlin Moncure mmonc...@gmail.com writes:
On Thu, Apr 16, 2009 at 8:05 AM, Matthew Wakeling matt...@flymine.org
wrote:
That solves the problem. So, a view is treated as a subquery then?
no...the view is simply inlined into
On Mon, Apr 20, 2009 at 9:55 AM, Rafael Domiciano
rafael.domici...@gmail.com wrote:
Hello People,
I have started work to review the sqls of our internal software.
Lots of the problems are about sql logic, or joins with unnecessary tables,
and so on.
But software has lot of sql with date,
On Tue, Apr 21, 2009 at 8:12 PM, da...@lang.hm wrote:
Using prepared queries, at least if you use PQexecPrepared or
PQexecParams, also reduces the work required on the client to build the
whole string, and the parsing overhead on the database side to pull it
apart again. That's where the
On Wed, May 6, 2009 at 6:08 PM, Simon Riggs si...@2ndquadrant.com wrote:
Agreed. Perhaps I should say then that the syntax needs to express the
requirements of the planner/executor behaviour, rather than being the
main aspect of the feature, as some have suggested.
Agreed.
Hopefully, notions
On Thu, May 7, 2009 at 10:52 PM, da...@lang.hm wrote:
Hopefully, notions of partitioning won't be directly tied to chunking of
data for parallel query access. Most queries access recent data and
hence only a single partition (or stripe), so partitioning and
parallelism and frequently exactly
IOW, I don't think you should need to partition if all you want is
load balancing. Partitioning should be for isolation, and load
balancing should happen when appropriate, whether there is
partitioning involved or not.
actually, I will contradict myself slightly.
[...]
however, trying to
On Tue, May 12, 2009 at 8:59 AM, Dimitri dimitrik...@gmail.com wrote:
Wait wait, currently I'm playing the stress scenario, so there are
only 256 sessions max, but think time is zero (full stress). The scenario
with 1600 users is to test how solid the database is just keeping a huge
amount of users,
On Tue, May 12, 2009 at 1:00 PM, Dimitri dimitrik...@gmail.com wrote:
On MySQL there is no changes if I set the number of sessions in the
config file to 400 or to 2000 - for 2000 it'll just allocate more
memory.
I don't care whether the setting affects the speed of MySQL. I want
to know if it
On Tue, May 12, 2009 at 4:24 PM, Simon Riggs si...@2ndquadrant.com wrote:
On Tue, 2009-05-12 at 15:52 -0400, Robert Haas wrote:
On Tue, May 12, 2009 at 12:49 PM, Tom Lane t...@sss.pgh.pa.us wrote:
1. There is no (portable) way to pass the connection from the postmaster
to another pre
On Tue, May 12, 2009 at 11:22 AM, Dimitri dimitrik...@gmail.com wrote:
Robert, what I'm testing now is 256 users max. The workload is growing
progressively from 1, 2, 4, 8 ... to 256 users. Of course the Max
throughput is reached on the number of users equal to 2 * number of
cores, but what's
On Tue, May 12, 2009 at 12:49 PM, Tom Lane t...@sss.pgh.pa.us wrote:
1. There is no (portable) way to pass the connection from the postmaster
to another pre-existing process.
[Googles.] It's not obvious to me that SCM_RIGHTS is non-portable,
and Windows has an API call WSADuplicateSocket()
On Tue, May 12, 2009 at 11:18 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Matthew Wakeling matt...@flymine.org writes:
On Tue, 12 May 2009, Simon Riggs wrote:
No, we spawn then authenticate.
But you still have a single thread doing the accept() and spawn. At some
point (maybe not now, but in the
On May 19, 2009, at 7:36 AM, Simon Riggs si...@2ndquadrant.com wrote:
On Tue, 2009-05-19 at 12:17 +0100, Matthew Wakeling wrote:
Yes, Postgres has been missing the boat on this one for a while. +1
on
requesting this feature.
That's an optimizer feature.
Speaking of avoiding large sorts,
On Tue, May 19, 2009 at 6:49 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Simon Riggs si...@2ndquadrant.com writes:
On Tue, 2009-05-19 at 08:58 -0400, Tom Lane wrote:
Nonsense. The planner might think some other plan is cheaper, but
it definitely knows how to do this, and has since at least 8.1.
On Wed, May 20, 2009 at 4:11 AM, Simon Riggs si...@2ndquadrant.com wrote:
The Hash node is fully executed before we start pulling rows through the
Hash Join node. So the Hash Join node will know at execution time
whether or not it will continue to maintain sorted order. So we put the
Sort node
On Wed, May 20, 2009 at 12:22 PM, Kobby Dapaah kdap...@gmail.com wrote:
I just upgraded from a
2xIntel Xeon-Harpertown 5450-Quadcore,16 GB,Redhat EL 5.1-64
To
2xIntel Xeon-Nehalem 5570-Quadcore,36GB,Redhat EL 5.3-64
Any advice on how I'll get the best of this server?
This is what I
On Thu, May 21, 2009 at 8:59 AM, Matthew Wakeling matt...@flymine.org wrote:
On Thu, 21 May 2009, Linos wrote:
i have to buy a new server and with the (small) budget i have, i have
to select one of these two options:
-4 sas 146gb 15k rpm raid10.
-8 sas 146gb 10k rpm raid10.
It depends
On Thu, May 21, 2009 at 5:41 PM, Scott Marlowe scott.marl...@gmail.com wrote:
On Thu, May 21, 2009 at 2:29 PM, Robert Haas robertmh...@gmail.com wrote:
On Thu, May 21, 2009 at 8:59 AM, Matthew Wakeling matt...@flymine.org
wrote:
On Thu, 21 May 2009, Linos wrote:
i have to buy a new
When the query plan takes a wrong path, is it possible that it is because
statistics have not been run or updated?
Yes. If you are not using autovacuum, you need to ANALYZE regularly,
or bad things will happen to you.
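The manual forms, for reference (the table name is hypothetical):

```sql
ANALYZE;           -- refresh planner statistics for every table in the database
ANALYZE orders;    -- or just one table
```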
...Robert
On Fri, May 29, 2009 at 5:57 PM, Anne Rosset aros...@collab.net wrote:
Robert Haas wrote:
On Thu, May 28, 2009 at 6:46 PM, Anne Rosset aros...@collab.net wrote:
->  Index Scan using item_pk on item  (cost=0.00..176865.31 rows=97498 width=88
On Mon, Jun 1, 2009 at 2:14 PM, Anne Rosset aros...@collab.net wrote:
The table has 468173 rows and the value for default_statistics_target is
750.
Anne
Hi Robert,
we did a vacuum analyze and the results are the same.
Here are the results of the queries :
SELECT SUM(1) FROM item WHERE
On Mon, Jun 1, 2009 at 4:53 PM, Anne Rosset aros...@collab.net wrote:
On Mon, Jun 1, 2009 at 2:14 PM, Anne Rosset aros...@collab.net wrote:
SELECT SUM(1) FROM item WHERE is_deleted = 'f';  sum = 1824592 (1 row)
SELECT SUM(1) FROM item WHERE folder_id = 'tracker3641
On Mon, Jun 1, 2009 at 8:35 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
S Arvind arvindw...@gmail.com wrote:
The reason why we need it manually is, we don't want any
performance drop in our production hours. So we figured out the
working time with the least usage, and the most frequently used tables