On Thu, 9 Oct 2003, Gaetano Mendola wrote:
Andriy Tkachuk wrote:
On Wed, 8 Oct 2003, Tom Lane wrote:
Andriy Tkachuk [EMAIL PROTECTED] writes:
Secondly: calc_total() is an immutable function:
but it seems that it's not cached within one session:
It's not supposed to be.
but it's
On Thu, 9 Oct 2003, Dror Matalon wrote:
On Thu, Oct 09, 2003 at 07:07:00PM -0400, Greg Stark wrote:
Dror Matalon [EMAIL PROTECTED] writes:
Actually, what finally solved the problem is repeating the
dtstamp last_viewed
in the sub select
That will at least convince the
On Thu, 9 Oct 2003, Seth Ladd wrote:
Hello,
I am running 7.3.2 RPMs on RH9, on a celeron 1.7 w/ 1gig ram.
I have a table that has 6.9 million rows, 2 columns, and an index on
each column. When I run:
SELECT DISTINCT column1 FROM table
It is very, very slow (10-15 min to complete).
Is there any way to speed this up, or is that DISTINCT going to keep
hounding me?
I checked the mailing list, and didn't see anything like this.
Any tips or hints would be greatly appreciated. Thanks for your help!
Seth
Try group by instead. I think this is an old bug; it's fixed in
7.3.2.
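For readers hitting the same slowdown, the equivalence of the two forms can be sketched in a few lines. Note the sqlite3 module here is only a stand-in so the sketch is self-contained (the original report was against PostgreSQL 7.3); the point is that both queries return the same result set, even when a planner picks different strategies for them:

```python
import sqlite3

# Tiny in-memory stand-in for the 6.9M-row, 2-column table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (column1 INTEGER, column2 INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(i % 3, i) for i in range(9)])

distinct = conn.execute(
    "SELECT DISTINCT column1 FROM t ORDER BY column1").fetchall()
grouped = conn.execute(
    "SELECT column1 FROM t GROUP BY column1 ORDER BY column1").fetchall()

# Same distinct values either way; only the execution plan differs.
assert distinct == grouped == [(0,), (1,), (2,)]
```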
Thanks for the tip, I'll give this a shot soon. I am curious, your
example above does not use GROUP BY yet you have an INDEX SCAN. I am
using a similar query, yet I get a full table scan. I wonder how they
are different?
Please send us the results of EXPLAIN ANALYZE on the query. The EXPLAIN
On Fri, 10 Oct 2003, Seth Ladd wrote:
Is there any way to speed this up, or is that DISTINCT going to keep
hounding me?
I checked the mailing list, and didn't see anything like this.
Any tips or hints would be greatly appreciated. Thanks for your help!
Seth
Try group by
On 9 Oct 2003, Greg Stark wrote:
I don't quite follow your #2, so I can only comment on the above idea of using
an LVM snapshot. If you have the hardware and the LVM-fu to be able to do this
properly, I would recommend it.
Just to be a bit clearer, in case it was my wording:
Method #2 is nearly
I took this approach at a former company in designing a dynamic
e-commerce system. This kept the addition of new products from
requiring an alteration of the schema. With an ORB manager and cache
control, the performance was not significantly affected, but the automatic
extensibility and the ease
Heya Guys n Gals,
Having been following the thread go for a script! / ex: PostgreSQL vs.
MySQL, I thought I would throw something together in Perl. My current issue
is that I only have access to a RH Linux box and so cannot make it
cross-platform on my own :-(. Anyhow, please find it attached.
- Original Message -
From: Nick Barr [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Friday, October 10, 2003 1:35 PM
Subject: go for a script! / ex: PostgreSQL vs. MySQL
I will also post it on my website, and as I develop it further new
versions will appear there.
David Griffiths wrote:
This is a timely thread for myself, as I'm in the middle of testing
both databases as an Oracle replacement.
As of this moment, I know more about MySQL (tuning, setup, features)
than I do about Postgres. Not because I like MySQL more, but because
1) the MySQL docs are
On Thu, 9 Oct 2003, Bruce Momjian wrote:
scott.marlowe wrote:
I was testing to get some idea of how to improve the speed of pgbench
with IDE drives and the write caching turned off in Linux (i.e. hdparm -W0
/dev/hdx).
The only parameter that seems to make a noticeable difference
Greg Stark writes:
Call it a wishlist bug. The problem is it would be a hard feature to
implement properly. And none of the people paid to work on postgres
by various companies seem to have this on their to-do lists. So
don't expect it in the near future.
We are using Postgres heavily, and we
Hi,
One other small question ... Is PostgreSQL scalable?
I mean ... is it possible to have two servers, one rack of disks connected to
the 2 servers to get access in same time to the same database ?
Other point is there any possibilties to make servers clusters with PostgreSQL
? If no
On Fri, 10 Oct 2003, Hervé Piedvache wrote:
If I do a SQL request ... does PostgreSQL use one or more processors?
Nope, just one processor; this is because PG is process-based, not thread-based.
However, if you opened 4 connections and each issued a SQL statement, all 4
processors would
On Fri, 10 Oct 2003, Hervé Piedvache wrote:
A simple question about PostgreSQL ... I have a quad-processor Pentium Xeon
... If I do a SQL request ... does PostgreSQL use one or more processors?
Each connection becomes a process, and each process runs on one processor.
So,
Hervé Piedvache wrote:
Hi,
A simple question about PostgreSQL ... I have a quad-processor Pentium Xeon ...
If I do a SQL request ... does PostgreSQL use one or more processors?
PostgreSQL uses one processor per connection. If you have 4 simultaneous
connections, you'll use all four
On Fri, Oct 10, 2003 at 12:42:04PM -0400, Bill Moran wrote:
4) It simply isn't practical to expect a single query to
execute on multiple processors simultaneously.
Do you know of any RDBMS that actually will execute a single query
on multiple processors?
Yes, DB2 will do this if
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Bill Moran
Sent: Friday, October 10, 2003 12:42 PM
To: Hervé Piedvache
Cc: Postgresql Performance
Subject: Re: [PERFORM] One or more processor ?
Hervé Piedvache wrote:
Hi,
A simple question
scott.marlowe wrote:
On Thu, 9 Oct 2003, Bruce Momjian wrote:
scott.marlowe wrote:
I was testing to get some idea of how to improve the speed of pgbench
with IDE drives and the write caching turned off in Linux (i.e. hdparm -W0
/dev/hdx).
The only parameter that seems to
johnn wrote:
On Fri, Oct 10, 2003 at 12:42:04PM -0400, Bill Moran wrote:
4) It simply isn't practical to expect a single query to
execute on multiple processors simultaneously.
Do you know of any RDBMS that actually will execute a single query
on multiple processors?
Yes, DB2 will do this
Dror,
Ouch. I just double checked and you're right. Is this considered a bug,
or just an implementation issue?
It's an implementation issue, which may be fixed by 7.5 but not sooner.
Basically, the free ability of PostgreSQL users to define their own
aggregates limits our ability to define
Nick,
Having been following the thread on go for a script! / ex: PostgreSQL vs.
MySQL. I thought I would throw something together in Perl.
Cool! Would you be willing to work with me so that I can inject some of my
knowledge of .conf tuning?
--
Josh Berkus
Aglio Database Solutions
San
Herve'
One other small question ... Is PostgreSQL scalable?
Given that we have several members of our community with 2TB databases, and
one entity with a 32TB database, I'd say yes.
I mean ... is it possible to have two servers, one rack of disks connected
to the 2 servers to get
On Fri, 10 Oct 2003, Josh Berkus wrote:
Bruce,
Yes. If you were doing multiple WAL writes before transaction fsync,
you would be fsyncing every write, rather than doing two writes and
fsync'ing them both. I wonder if larger transactions would find
open_sync slower?
Want me to
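For anyone wanting to reproduce the benchmark under discussion, the setting lives in postgresql.conf; a minimal sketch of the two variants being compared (option names as used in 7.3/7.4):

```
# postgresql.conf -- WAL sync method under test
wal_sync_method = fsync        # write the WAL, then fsync() it
#wal_sync_method = open_sync   # open the WAL file with O_SYNC instead
```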
On Fri, Oct 10, 2003 at 10:32:32AM -0700, Josh Berkus wrote:
Dror,
Ouch. I just double checked and you're right. Is this considered a bug,
or just an implementation issue?
It's an implementation issue, which may be fixed by 7.5 but not sooner.
Basically, the free ability of PostgreSQL
On Fri, 2003-10-10 at 10:41, Josh Berkus wrote:
Herve'
One other small question ... Is PostgreSQL scalable?
Given that we have several members of our community with 2TB databases, and
one entity with a 32TB database, I'd say yes.
It depends on what is meant by scalable. In terms
Josh Berkus wrote:
Bruce,
Yes. If you were doing multiple WAL writes before transaction fsync,
you would be fsyncing every write, rather than doing two writes and
fsync'ing them both. I wonder if larger transactions would find
open_sync slower?
Want me to test? I've got an
SC == Sean Chittenden [EMAIL PROTECTED] writes:
SC patches to extract info from their OS so that initdb can make useful
SC decisions. Or, lastly, does anyone think that this should be in a
SC different, external program? -sc
Well, there should definitely be a way to run a get current best
NB == Nick Barr [EMAIL PROTECTED] writes:
NB So far:
NB shared_buffers = 1/16th of total memory
NB effective_cache_size = 80% of the supposed kernel cache.
Please take into account the blocksize compiled into PG, too...
--
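The rules of thumb above, plus Vivek's point about the compiled-in block size, amount to a few lines of arithmetic. A sketch, assuming the default 8192-byte BLCKSZ (a compile-time constant that may differ on a custom build) and hypothetical memory figures:

```python
BLOCK_SIZE = 8192  # PostgreSQL's default BLCKSZ; a compile-time constant

def suggest_conf(total_mem_bytes, kernel_cache_bytes):
    """Turn the thread's rules of thumb into .conf values.

    shared_buffers and effective_cache_size are expressed in blocks
    of BLOCK_SIZE bytes, which is why the compiled-in block size matters.
    """
    shared_buffers = (total_mem_bytes // 16) // BLOCK_SIZE
    effective_cache_size = int(kernel_cache_bytes * 0.80) // BLOCK_SIZE
    return {"shared_buffers": shared_buffers,
            "effective_cache_size": effective_cache_size}

# Example: 1 GiB of RAM, of which the kernel caches ~512 MiB.
# 1 GiB / 16 = 64 MiB = 8192 blocks; 80% of 512 MiB = 52428 blocks.
conf = suggest_conf(1 << 30, 512 << 20)
```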
Vivek,
NB shared_buffers = 1/16th of total memory
NB effective_cache_size = 80% of the supposed kernel cache.
Please take into account the blocksize compiled into PG, too...
We can't change the blocksize in a script that only does the .conf file. Or
are you suggesting something else?
--
Chris,
If I do a SQL request ... does PostgreSQL use one or more processors?
Just one processor.
For one query, yes. For multiple queries, PostgreSQL will use multiple
processors, and that's what he's concerned about given his earlier posts.
--
Josh Berkus
Aglio Database Solutions
San
Bruce,
I would be interested to see if wal_sync_method = fsync is slower than
wal_sync_method = open_sync. How often are we doing more than one write
before an fsync anyway?
OK. I'll see if I can get to it around my other stuff I have to do this
weekend.
--
Josh Berkus
Aglio Database
BM == Bruce Momjian [EMAIL PROTECTED] writes:
Sounds reasonable to me. Are there many / any scenarios where a plain
fsync would be faster than open_sync?
BM Yes. If you were doing multiple WAL writes before transaction fsync,
BM you would be fsyncing every write, rather than doing two
JB == Josh Berkus [EMAIL PROTECTED] writes:
JB Vivek,
NB shared_buffers = 1/16th of total memory
NB effective_cache_size = 80% of the supposed kernel cache.
Please take into account the blocksize compiled into PG, too...
JB We can't change the blocksize in a script that only does the .conf
Vivek Khera wrote:
BM == Bruce Momjian [EMAIL PROTECTED] writes:
Sounds reasonable to me. Are there many / any scenarios where a plain
fsync would be faster than open_sync?
BM Yes. If you were doing multiple WAL writes before transaction fsync,
BM you would be fsyncing every write,
- Original Message -
From: Ron Johnson
On Fri, 2003-10-10 at 16:04, David Busby wrote:
List,
I'm creating this multi company POS database.
My inventory table looks like (all items are unique):
id,category_id,invoice_id,x,y,z,gid,uid
I have a primary key on id, and then
NB So far:
NB shared_buffers = 1/16th of total memory
NB effective_cache_size = 80% of the supposed kernel cache.
Please take into account the blocksize compiled into PG, too...
Would anyone object to a patch that exports the blocksize via a
readonly GUC? Too many tunables are page
On Fri, 10 Oct 2003, Josh Berkus wrote:
Bruce,
Yes. If you were doing multiple WAL writes before transaction fsync,
you would be fsyncing every write, rather than doing two writes and
fsync'ing them both. I wonder if larger transactions would find
open_sync slower?
Want me to
On Fri, 10 Oct 2003, David Busby wrote:
- Original Message -
From: Ron Johnson
On Fri, 2003-10-10 at 16:04, David Busby wrote:
List,
I'm creating this multi company POS database.
My inventory table looks like (all items are unique):
Sean Chittenden wrote:
NB So far:
NB shared_buffers = 1/16th of total memory
NB effective_cache_size = 80% of the supposed kernel cache.
Please take into account the blocksize compiled into PG, too...
Would anyone object to a patch that exports the blocksize via a
readonly GUC?
On Fri, 2003-10-10 at 18:59, Sean Chittenden wrote:
NB So far:
NB shared_buffers = 1/16th of total memory
NB effective_cache_size = 80% of the supposed kernel cache.
Please take into account the blocksize compiled into PG, too...
Would anyone object to a patch that exports the
List,
I'm creating this multi company POS database.
My inventory table looks like (all items are unique):
id,category_id,invoice_id,x,y,z,gid,uid
I have a primary key on id, and then foreign keys on category_id and
invoice_id.
GID is the group ID of the company, UID is the company's user,
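The schema as described might be sketched as follows. The column types and the meaning of x/y/z are my guesses (the post only lists column names), and sqlite3 merely stands in for PostgreSQL so the sketch runs anywhere; the `PRAGMA foreign_keys` line is a sqlite-ism, since PostgreSQL enforces foreign keys by default:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # sqlite needs this opt-in

# Types and x/y/z are hypothetical -- the post only gives column names.
conn.executescript("""
CREATE TABLE category (id INTEGER PRIMARY KEY);
CREATE TABLE invoice  (id INTEGER PRIMARY KEY);
CREATE TABLE inventory (
    id          INTEGER PRIMARY KEY,           -- all items are unique
    category_id INTEGER REFERENCES category(id),
    invoice_id  INTEGER REFERENCES invoice(id),
    x TEXT, y TEXT, z TEXT,
    gid INTEGER,  -- group ID of the company
    uid INTEGER   -- the company's user
);
""")

conn.execute("INSERT INTO category VALUES (1)")
conn.execute("INSERT INTO invoice VALUES (10)")
conn.execute("INSERT INTO inventory VALUES (100, 1, 10, 'a', 'b', 'c', 5, 7)")
```

With the foreign keys in place, an insert referencing a nonexistent category or invoice is rejected, which is the integrity guarantee the poster is buying with those constraints.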