Tobias Brox wrote:
[Madison Kelly - Thu at 10:25:07AM -0500]
Will the priority of the script pass down to the pgsql queries it calls?
I figured (likely incorrectly) that because the queries were executed by
the PostgreSQL server, the queries ran with the server's priority.
I think you are right,
Scott Marlowe wrote:
nope, the priorities don't pass down. you connect via a client lib to
the server, which spawns a backend process that does the work for you.
The backend process inherits its priority from the postmaster that
spawns it, and they all run at the same priority.
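Since priorities do not propagate from the client to the backend, the usual workaround is to renice the backend process itself on the server. A hedged sketch (the column name varies by release, and `renice` must be run as the OS user owning the backend):

```sql
-- Sketch: the client's nice level never reaches the backend, so to
-- lower one query's priority you must renice the backend process itself.
-- Find the backend PID on the server (column is "procpid" on 8.1-era
-- releases, "pid" on later ones):
SELECT procpid, current_query FROM pg_stat_activity;
-- then, at the shell on the server host, as the postgres OS user:
--   renice +10 -p <backend pid>
```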
Shoot, but
Andreas Kostyrka wrote:
On Thursday, 2006-11-02, at 09:41 -0600, Scott Marlowe wrote:
Sometimes it's the simple solutions that work best. :) Welcome to the
world of pgsql, btw...
OTOH, there are also non-simple solutions to this, which might make
sense anyway: Install slony, and run
[Madison Kelly - Mon at 08:10:12AM -0500]
to run, which puts it into your drawback section. The server in
question is also almost always under some sort of load, too.
A great tip and one I am sure to make use of later, thanks!
I must have been sleepy, listing up cons vs drawbacks ;-)
Anyway, the
Though I've read recent threads, I'm unsure if any matches my case.
We have 2 tables: revisions and revisions_active. revisions contains
117707 rows, revisions_active 17827 rows.
DDL: http://hannes.imos.net/ddl.sql.txt
Joining the 2 tables without an additional condition seems ok for me
[Madison Kelly - Mon at 08:48:19AM -0500]
Ah, sorry, long single queries is what you meant.
No - long running single transactions :-) If it's only read-only
queries, one will probably benefit by having one transaction for every
query.
Hannes Dorbath wrote:
Though it should only have to join a few rows it seems to scan all rows.
What makes you think that's the case?
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
On 06.11.2006 15:13, Heikki Linnakangas wrote:
Hannes Dorbath wrote:
Though it should only have to join a few rows it seems to scan all rows.
What makes you think that's the case?
Sorry, not all rows, but 80753. It's not clear to me why this number is
so high with LIMIT 10.
--
Regards,
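To see where a row count like 80753 comes from, EXPLAIN ANALYZE on the query is the standard tool. A sketch (the join below is hypothetical, since the actual query is only in the linked DDL):

```sql
-- Hypothetical query shape; the per-node "actual ... rows=..." output
-- of EXPLAIN ANALYZE shows how many rows each plan node really touched,
-- which can far exceed the LIMIT when a sort or join cannot stop early:
EXPLAIN ANALYZE
SELECT r.*
  FROM revisions r
  JOIN revisions_active ra ON ra.revision_id = r.revision_id  -- assumed key
 LIMIT 10;
```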
Tobias Brox wrote:
[Madison Kelly - Mon at 08:48:19AM -0500]
Ah, sorry, long single queries is what you meant.
No - long running single transactions :-) If it's only read-only
queries, one will probably benefit by having one transaction for every
query.
In this case, what happens is one
Heikki Linnakangas [EMAIL PROTECTED] writes:
Hannes Dorbath wrote:
Though it should only have to join a few rows it seems to scan all rows.
What makes you think that's the case?
What it looks like to me is that the range of keys present in
pk_revisions_active corresponds to just the upper
On Fri, 2006-11-03, at 3:12:14 -0800, Drew Wilson wrote the following:
I have 700 lines of non-performant pgSQL code that I'd like to
profile to see what's going on.
What's the best way to profile stored procedures?
RAISE NOTICE; you can print the actual time within a transaction
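A minimal sketch of that technique (the function and step names are made up). Note that now() is frozen at transaction start, so clock_timestamp() is what shows elapsed time inside a transaction:

```sql
CREATE OR REPLACE FUNCTION profile_demo() RETURNS void AS $$
DECLARE
    t0 timestamptz := clock_timestamp();  -- unlike now(), this advances
BEGIN
    PERFORM pg_sleep(0.1);                -- stand-in for the slow step
    RAISE NOTICE 'step 1 took %', clock_timestamp() - t0;
END;
$$ LANGUAGE plpgsql;
```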
Hi,
Sorry, but this message was already posted some days ago!
Thank you!
Carlos
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On behalf of Carlos H. Reimer
Sent: Wednesday, November 1, 2006 03:23
To: pgsql-performance@postgresql.org
Subject:
I'm having a spot of problem with our storage device vendor. Read
performance (as measured by both bonnie++ and hdparm -t) is abysmal
(~14Mbyte/sec), and we're trying to get them to fix it. Unfortunately,
they're using the fact that bonnie++ is an open source benchmark to
weasel out of doing
Select count(*) from table-twice-size-of-ram
Divide the number of pages in the table times the page size (normally 8KB)
by the query time and you have your net disk rate.
- Luke
Msg is shrt cuz m on ma treo
-Original Message-
From: Brian Hurt [mailto:[EMAIL PROTECTED]
Sent:
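Luke's method is plain arithmetic; a sketch with made-up numbers (the table size and timing below are hypothetical):

```python
# Net sequential read rate from timing "SELECT count(*)" on a table
# roughly twice the size of RAM. All figures here are hypothetical.
pages = 2_097_152          # table size in 8 KB pages (16 GB total)
page_size = 8192           # PostgreSQL's default block size, in bytes
query_seconds = 120.0      # measured count(*) runtime

mb_per_sec = pages * page_size / query_seconds / (1024 * 1024)
print(f"{mb_per_sec:.1f} MB/s")  # → 136.5 MB/s
```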
On 11/6/06, Brian Hurt [EMAIL PROTECTED] wrote:
I'm having a spot of problem with our storage device vendor. Read
performance (as measured by both bonnie++ and hdparm -t) is abysmal
(~14Mbyte/sec), and we're trying to get them to fix it. Unfortunately,
they're using the fact that bonnie++ is
On Mon, 2006-11-06 at 15:09, Merlin Moncure wrote:
On 11/6/06, Brian Hurt [EMAIL PROTECTED] wrote:
I'm having a spot of problem with our storage device vendor. Read
performance (as measured by both bonnie++ and hdparm -t) is abysmal
(~14Mbyte/sec), and we're trying to get them to fix it.
Andreas Kostyrka wrote:
The solution for us has been twofold:
upgrade to the newest PG version available at the time while we waited
for our new Opteron-based DB hardware to arrive.
Do you remember the exact Pg version?
--
Cosimo
On Oct 31, 2006, at 8:29 PM, Tom Lane wrote:
John Major [EMAIL PROTECTED] writes:
My problem is, I often need to execute searches of tables like these
which find All features within a range.
Ie: select FeatureID from SIMPLE_TABLE where
FeatureChromosomeName like
'chrX' and StartPosition
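The quoted query is cut off, but the usual answer to range searches like this is a composite index. A hedged sketch using the column names from the quote (the index name and position range are made up; `LIKE 'chrX'` with no wildcard is just an equality test, so plain `=` is clearer and indexable):

```sql
CREATE INDEX simple_table_chrom_start_idx
    ON SIMPLE_TABLE (FeatureChromosomeName, StartPosition);

SELECT FeatureID
  FROM SIMPLE_TABLE
 WHERE FeatureChromosomeName = 'chrX'              -- equality, not LIKE
   AND StartPosition BETWEEN 1000000 AND 2000000;  -- hypothetical range
```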
Brian Hurt wrote:
I'm having a spot of problem with our storage device vendor. Read
performance (as measured by both bonnie++ and hdparm -t) is abysmal
(~14Mbyte/sec), and we're trying to get them to fix it. Unfortunately,
they're using the fact that bonnie++ is an open source benchmark to