The spec does require a FROM clause in SELECT (at least as of SQL99, did
not check SQL2003). However, it's clearly mighty useful to allow FROM
to be omitted for simple compute-this-scalar-result problems. You
should respond to the Oracle guy that SELECT whatever FROM dual is not
in the
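For what it's worth, the two styles side by side (a minimal sketch; PostgreSQL accepts the FROM-less form, while a user-created dual emulates Oracle's built-in one-row table):

```sql
-- PostgreSQL: FROM may be omitted for a scalar computation
SELECT 2 + 2 AS answer;

-- Oracle-style: create your own one-row dual and select from it
CREATE TABLE dual (dummy char(1));
INSERT INTO dual VALUES ('X');
SELECT 2 + 2 AS answer FROM dual;
```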
hi
i'm waiting for new server to arrive.
for the server i will get hp msa1000, with 14 discs (72g, 15000 rpm).
what kind of partitioning you suggest, to get maximum performance?
for system things i will have separate discs, so whole array is only for postgresql.
data processing is oltp, but with large
Hi Hubert,
hubert depesz lubaczewski schrieb:
hi
i'm waiting for new server to arrive.
for the server i will get hp msa1000, with 14 discs (72g, 15000 rpm).
what kind of partitioning you suggest, to get maximum performance?
for system things i will have separate discs, so whole array is only
Hi,
here's my problem:
# explain analyze select * from mxstrpartsbg where szam =
round(80*random())::integer;
QUERY PLAN
-
Seq
On Jun 14 12:53, Böszörményi Zoltán wrote:
# explain analyze select * from mxstrpartsbg where szam =
round(80*random())::integer;
AFAIK, you can use something like this:
SELECT * FROM mxstrpartsbg
WHERE szam = (SELECT round(80*random())::integer OFFSET 0);
This will prevent calculation of
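The difference shows up directly in EXPLAIN (a sketch using the table and column names from the thread; actual plans will vary with your data and indexes):

```sql
-- Per-row evaluation: random() is VOLATILE, so the planner cannot
-- treat the comparison value as a constant and falls back to a scan.
EXPLAIN ANALYZE
SELECT * FROM mxstrpartsbg WHERE szam = round(80*random())::integer;

-- Single evaluation: the scalar subquery (with OFFSET 0 as an
-- optimization fence) is computed once, so an index on szam can be used.
EXPLAIN ANALYZE
SELECT * FROM mxstrpartsbg
WHERE szam = (SELECT round(80*random())::integer OFFSET 0);
```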
On 6/14/06, Sven Geisler [EMAIL PROTECTED] wrote:
You should configure your discs to RAID 10 volumes.
You should set up a separate volume for WAL.
A volume for an additional table space may also be useful.
In your case I would do 2 partitions:
1. RAID 10 with 8 discs for general data
raid 10 is of course
Hi Hubert,
hubert depesz lubaczewski schrieb:
On 6/14/06, *Sven Geisler* [EMAIL PROTECTED] wrote:
You should configure your discs to RAID 10 volumes.
You should set up a separate volume for WAL.
A volume for an additional table space may also be useful.
In
On Tue, Jun 13, 2006 at 09:50:49PM -0400, Tom Lane wrote:
Jim C. Nasby [EMAIL PROTECTED] writes:
On Tue, Jun 13, 2006 at 06:04:42PM -0400, Tom Lane wrote:
It'd depend on the context, possibly, but it's easy to show that the
current planner does fold now() - interval_constant when making
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
Greg Stark
Sent: Tuesday, June 13, 2006 11:16 PM
Subject: Re: [PERFORM] OT - select + must have from - sql
standard syntax?
[SNIP]
Well you could always create a dual, it was always just a
raid 10 is of course not questionable. but are you sure that it will
work faster than for example:
2 discs (raid 1) for xlog
6 discs (raid 10) for tables
6 discs (raid 10) for indices?
This depends on your application. Do you have a lot of disc reads?
Anyhow, I would put the xlog
Hi Richard,
Richard Broersma Jr schrieb:
This depends on your application. Do you have a lot of disc reads?
Anyhow, I would put the xlog always to a RAID 10 volume because most of
the I/O for update and inserts is going to the xlog.
4 discs xlog
6 discs tables
4 discs tables2
I have a
On 6/13/2006 at 9:13 PM, Tom Lane [EMAIL PROTECTED] wrote:
Read what you wrote, and rethink...
Hah. Yes, I understand the irony of that statement, but the point is
that the value of the variable won't change during query execution.
If you're desperate you can construct a query string with
Dave, Joshua, Scott (and all),
Thanks for your feedback; while I do appreciate it, I did not intend to turn
this discussion into buy this instead... I whole-heartedly agree with
you. Joshua, you made the best comment: it is a business decision for
the client. I don't agree with it, but I understand it.
-- this is the third time I've tried sending this and I never saw it get
through to the list. Sorry if multiple copies show up.
Hi all,
I've been lurking using the web archives for a while and haven't found
an answer that seems to answer my questions about pg_dump.
We have a 206GB data
On Wed, Jun 14, 2006 at 04:32:23PM +0200, Sven Geisler wrote:
Hi Richard,
Richard Broersma Jr schrieb:
This depends on your application. Do you have a lot of disc reads?
Anyhow, I would put the xlog always to a RAID 10 volume because most of
the I/O for update and inserts is going to the
On Wed, Jun 14, 2006 at 01:30:10PM +0200, Böszörményi Zoltán wrote:
Replacing random() with a true constant gives me index scan
even if it's hidden inside other function calls. E.g.:
The database has no choice but to compute random() for every row; it's
marked VOLATILE.
--
Jim C. Nasby, Sr.
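The volatility classification Jim refers to is visible in the system catalog (a sketch; the noted behavior is for stock PostgreSQL, where random() is VOLATILE and now() is STABLE):

```sql
-- Each function's volatility is recorded in pg_proc:
--   'v' = VOLATILE  (re-evaluated for every row),
--   's' = STABLE    (constant within one statement),
--   'i' = IMMUTABLE (foldable at plan time).
SELECT proname, provolatile
FROM pg_proc
WHERE proname IN ('random', 'now');
-- random() is 'v', which is why the planner must compute it per row;
-- now() is 's', which is why now() - interval_constant can be folded
-- when making estimates.
```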
John E. Vincent [EMAIL PROTECTED] writes:
I've watched the backup process and I/O is not a problem. Memory isn't a
problem either. It seems that we're CPU bound but NOT in I/O wait.
Is it the pg_dump process, or the connected backend, that's chewing the
bulk of the CPU time? (This should be
On Wed, 2006-06-14 at 09:47, John E. Vincent wrote:
-- this is the third time I've tried sending this and I never saw it get
through to the list. Sorry if multiple copies show up.
Hi all,
BUNCHES SNIPPED
work_mem = 1048576 ( I know this is high but you should see some of our
sorts and
Tom Lane wrote:
Mischa Sandberg [EMAIL PROTECTED] writes:
Tom Lane wrote:
Does Solaris have any call that allows locking a shmem segment in RAM?
Yes, mlock(). But want to understand what's going on before patching.
Sure, but testing it with mlock() might help you understand what's going
On Tue, 2006-06-13 at 15:11, Dave Page wrote:
-Original Message-
And how old are the 2600's now?
Anyhoo, I'm not saying the current machines are excellent performers or
anything, but there are good business reasons to run them if you don't
need to squeeze out every last pony.
On 6/14/06, Scott Marlowe [EMAIL PROTECTED] wrote:
On Wed, 2006-06-14 at 09:47, John E. Vincent wrote:
-- this is the third time I've tried sending this and I never saw it get through to the list. Sorry if multiple copies show up.
Hi all,
BUNCHES SNIPPED
work_mem = 1048576 ( I know this is high
On Wed, June 14, 2006 1:04 pm, John Vincent wrote:
I know it is but that's what we need for some of our queries. Our ETL
tool (informatica) and BI tool (actuate) won't let us set those things as
part of our jobs. We need it for those purposes. We have some really nasty
queries that will be
Out of curiosity, does anyone have any idea what the ratio of actual datasize to backup size is if I use the custom format with -Z 0 compression or the tar format? Thanks.
On 6/14/06, Scott Marlowe [EMAIL PROTECTED] wrote:
On Wed, 2006-06-14 at 09:47, John E. Vincent wrote:
-- this is the third
Title: RE: [PERFORM] Which processor runs better for Postgresql?
From: Scott Marlowe [mailto:[EMAIL PROTECTED]]
Sent: 14 June 2006 18:04
To: Dave Page
Cc: Joshua D. Drake; [EMAIL PROTECTED]; pgsql-performance@postgresql.org
Subject: RE: [PERFORM] Which processor runs better
I have a box with an app and postgresql on it. Hardware includes two 2.8 GHz
Xeons with 512KB cache, 4 GB of memory, and 6 SCSI disks in a software
RAID 5 on a Trustix 2.2 with a 2.6.15.3 kernel. The data and indexes are on the
RAID array while the tx log is on disk
with the OS. All is well.
The one
On Wed, 2006-06-14 at 13:43, Dave Page wrote:
Just thought I'd point you to Dell's forums.
http://forums.us.dell.com/supportforums/board?board.id=pes_linuxpage=1
wherein you'll find plenty of folks who have problems with
-Original Message-
From: Scott Marlowe [mailto:[EMAIL PROTECTED]
Sent: 14 June 2006 20:52
To: Dave Page
Cc: Joshua D. Drake; [EMAIL PROTECTED];
pgsql-performance@postgresql.org
Subject: RE: [PERFORM] Which processor runs better for Postgresql?
Yeah, We've got a mix of 2650
jody brownell [EMAIL PROTECTED] writes:
27116 postgres 15 0 1515m 901m 91m S 0.0 22.9 18:33.96 postgres: qradar
qradar :::x.x.x.x(51149) idle
This looks like a memory leak, but you haven't provided enough info to
let someone else reproduce it. Can you log what your application is
On Wed, 2006-06-14 at 12:04, John Vincent wrote:
On 6/14/06, Scott Marlowe [EMAIL PROTECTED] wrote:
On Wed, 2006-06-14 at 09:47, John E. Vincent wrote:
-- this is the third time I've tried sending this and I
never saw it get
through to the list. Sorry if
How long does gzip take to compress this backup?
On Wed, 2006-06-14 at 15:59, John Vincent wrote:
Okay I did another test dumping using the uncompressed backup on the
system unloaded and the time dropped down to 8m for the backup.
There's still the size issue to contend with but as I said,
On Wed, Jun 14, 2006 at 02:11:19PM -0400, John Vincent wrote:
Out of curiosity, does anyone have any idea what the ratio of actual
datasize to backup size is if I use the custom format with -Z 0 compression
or the tar format?
-Z 0 should mean no compression.
Something you can try is piping
time gzip -6 claDW_PGSQL.test.bak
real    3m4.360s
user    1m22.090s
sys     0m6.050s
Which is still less time than it would take to do a compressed pg_dump.
On 6/14/06, Scott Marlowe [EMAIL PROTECTED] wrote:
How long does gzip take to compress this backup?
On Wed, 2006-06-14 at 15:59, John Vincent wrote: Okay
On 6/14/06, Jim C. Nasby [EMAIL PROTECTED] wrote:
On Wed, Jun 14, 2006 at 02:11:19PM -0400, John Vincent wrote:
Out of curiosity, does anyone have any idea what the ratio of actual datasize to backup size is if I use the custom format with -Z 0 compression or the tar format?
-Z 0 should mean no
On Wed, Jun 14, 2006 at 05:18:14PM -0400, John Vincent wrote:
On 6/14/06, Jim C. Nasby [EMAIL PROTECTED] wrote:
On Wed, Jun 14, 2006 at 02:11:19PM -0400, John Vincent wrote:
Out of curiosity, does anyone have any idea what the ratio of actual
datasize to backup size is if I use the custom
Folks,
First off, you'll be glad to know that I've persuaded two of the Sun
performance engineers to join this list soon. So you should be able to
get more definitive answers to these questions.
Second, 7.4 still did linear scanning of shared_buffers as part of LRU and
for other activities.
All,
So I thought I'd pose this question:
If I have a pg database attached to a powervault (PV) with just an
off-the-shelf SCSI card I generally want fsync on to prevent data
corruption in case the PV should lose power.
However, if I have it attached to a NetApp that ensures data writes
On Wed, 14 Jun 2006 14:48:04 -0700
Dan Gorman [EMAIL PROTECTED] wrote:
If I have a pg database attached to a powervault (PV) with just an
off-the-shelf SCSI card I generally want fsync on to prevent data
corruption in case the PV should lose power.
However, if I have it attached to a
No. You need fsync on in order to force the data to get TO the NetApp
at the right time. With fsync off, the data gets cached in the
operating system.
-- Mark Lewis
On Wed, 2006-06-14 at 14:48 -0700, Dan Gorman wrote:
All,
So I thought I'd pose this question:
If I have a pg database
Would someone please find the answer to Tom's last question?
---
Tom Lane wrote:
I wrote:
Looking at this, I wonder if there isn't a bug or at least an
inefficiency in 8.1. The KeysEqual short circuit tests are still
On Jun 13, 2006, at 8:50 PM, Tom Lane wrote:
Jim C. Nasby [EMAIL PROTECTED] writes:
On Tue, Jun 13, 2006 at 06:04:42PM -0400, Tom Lane wrote:
It'd depend on the context, possibly, but it's easy to show that the
current planner does fold now() - interval_constant when making
estimates. Simple
Jim Nasby [EMAIL PROTECTED] writes:
On Jun 13, 2006, at 8:50 PM, Tom Lane wrote:
Hmm ... worksforme. Could you provide a complete test case?
decibel=# create table date_test(d date not null, i int not null);
[etc]
Not sure what you are driving at. The estimates are clearly not
defaults
Mark Lewis [EMAIL PROTECTED] writes:
On Wed, 2006-06-14 at 14:48 -0700, Dan Gorman wrote:
However, if I have it attached to a NetApp that ensures data writes
to via the NVRAM can I safely turn fsync off to gain additional
performance?
No. You need fsync on in order to force the
Jim C. Nasby wrote:
On Wed, Jun 14, 2006 at 01:30:10PM +0200, Böszörményi Zoltán wrote:
Replacing random() with a true constant gives me index scan
even if it's hidden inside other function calls. E.g.:
The database has no choice but to compute random() for every row; it's
marked
On 14 Jun 2006 23:33:53 -0400, Greg Stark [EMAIL PROTECTED] wrote:
In fact the benefit of the NVRAM is precisely that it makes sure you *don't*
have any reason to turn fsync off. It should make the fsync essentially free.
Having run PostgreSQL on a NetApp with input from NetApp, this is
That makes sense. Speaking of NetApp, we're using the 3050C with 4 FC
shelfs. Any generic advice other than the NetApp (their NFS oracle
tuning options)
that might be useful? (e.g. turning off snapshots)
Regards,
Dan Gorman
On Jun 14, 2006, at 10:14 PM, Jonah H. Harris wrote:
On 14 Jun
On 6/15/06, Dan Gorman [EMAIL PROTECTED] wrote:
shelfs. Any generic advice other than the NetApp (their NFS oracle
tuning options) that might be useful? (e.g. turning off snapshots)
I was using PostgreSQL on a 980c, but feature-wise they're probably
pretty close.
What type of application are
On 6/15/06, Jonah H. Harris [EMAIL PROTECTED] wrote:
On 6/15/06, Dan Gorman [EMAIL PROTECTED] wrote:
shelfs. Any generic advice other than the NetApp (their NFS oracle
tuning options) that might be useful? (e.g. turning off snapshots)
I was using PostgreSQL on a 980c, but feature-wise they're