"Jignesh K. Shah" <[EMAIL PROTECTED]> writes:
> Many times PostgreSQL has many things based on assumption that it will
> run on Linux and it is left to Solaris to emulate that behavior.
Au contraire --- PG tries its best to be OS-agnostic. I've personally
resisted people trying to optimize it b
"patrick keshishian" <[EMAIL PROTECTED]> writes:
> My dev box is much slower hardware than the customer's
> server. Even with that difference I expected to be able to
> pg_restore the database within one day. But no.
Seems a bit odd. Can you narrow down more closely which step of the
restore is
Chris <[EMAIL PROTECTED]> writes:
> Francisco Reyes wrote:
>> Doing my first write heavy database.
>> What settings will help improve inserts?
> Only a handful of connections, but each doing up to 30 inserts/second.
> If you can, use copy instead:
> http://www.postgresql.org/docs/8.1/interactive
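A sketch of the difference Chris is pointing at (the table and the data are invented for illustration):

```sql
-- Hypothetical table, purely illustrative.
CREATE TABLE measurements (ts timestamp, sensor_id int, reading numeric);

-- Row-at-a-time inserts: one round trip and one parse per row.
INSERT INTO measurements VALUES (now(), 1, 42.0);

-- COPY streams many rows in a single command, typically much faster:
COPY measurements (ts, sensor_id, reading) FROM stdin;
2006-04-12 17:00:00	1	42.0
2006-04-12 17:00:01	2	43.5
\.
```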
I am thinking the most flexible solution would be to get a dual Opteron
machine, and initially do both data loading and queries on the same
machine. When the load gets too high, buy a second machine and set it
up as a Slony slave and run your queries on that, and do the data loads
on the original
Bruce,
Hard to answer that... People like me who know and love PostgreSQL and
Solaris find this an opportunity to make their favorite database work
best on their favorite operating system.
Many times PostgreSQL has many things based on assumption that it will
run on Linux and it is left to Solaris to emulate that behavior.
Jim C. Nasby wrote:
No, I don't agree with this. Too many people waste time designing for
"what if..." scenarios that never happen. You don't want to be dumb and
design something that locks out a foreseeable and likely future need, but
referential integrity doesn't meet this criterion. There
Greetings,
I have 395M pg_dump from a PostgreSQL 7.4.2 database.
This dump is from one of our customer's servers. There is
a web-based administration UI which has been reported to
be extremely slow and unusable.
To see what's going on with their data I have grabbed a
copy of their nightly pg_dump.
Francisco Reyes wrote:
Doing my first write heavy database.
What settings will help improve inserts?
Only a handful of connections, but each doing up to 30 inserts/second.
Plan to have 2 to 3 clients which most of the time will not run at the
same time, but occasionally it's possible two of them
Doing my first write heavy database.
What settings will help improve inserts?
Only a handful of connections, but each doing up to 30 inserts/second.
Plan to have 2 to 3 clients which most of the time will not run at the
same time, but occasionally it's possible two of them may bump into each
other
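One setting-independent win for this kind of workload: batch the inserts into explicit transactions, so each client pays one WAL flush per batch rather than one per row. A minimal sketch (table and columns are invented):

```sql
-- Invented table; the point is the explicit transaction block.
CREATE TABLE readings (client_id int, val numeric, ts timestamp);

BEGIN;
INSERT INTO readings VALUES (1, 10.5, now());
INSERT INTO readings VALUES (1, 11.2, now());
-- ... up to a batch of 30 or so ...
COMMIT;  -- one WAL flush for the whole batch instead of one per row
```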
On Wed, Apr 12, 2006 at 05:32:32PM -0700, Sriram Dandapani wrote:
> Hi
>
> When I update a table that has 20 columns and the where clause includes
> 16 of the columns (this is a data warehousing type update on aggregate
> fields),
>
> The bitmap scan is not used by the optimizer. The table is indexed on 3
> of the 20 fields.
"Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> Why are AFTER COMMIT triggers impossible?
What happens if such a trigger gets an error? You can't un-commit.
regards, tom lane
Hi
When I update a table that has 20 columns and the where
clause includes 16 of the columns (this is a data warehousing type update on
aggregate fields),
The bitmap scan is not used by the optimizer. The table is
indexed on 3 of the 20 fields. The update takes really long to finish (on a
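A first diagnostic step for an update like this is to look at the plan the optimizer actually picks. A hedged sketch, with an invented table and columns standing in for Sriram's:

```sql
-- Illustrative only: check which scan type the planner chooses for
-- the update, and whether the indexed columns appear in the filter.
EXPLAIN ANALYZE
UPDATE agg_facts
   SET total_amt = total_amt + 100
 WHERE region_id  = 3       -- indexed
   AND period_id  = 200604  -- indexed
   AND channel_id = 7;      -- indexed; the other 13 filters omitted here
```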
On Wed, Apr 12, 2006 at 04:03:43PM -0700, Josh Berkus wrote:
> PFC,
>
> > It would be nice to have ON COMMIT triggers for this use.
> >
> > However you can emulate ON COMMIT triggers with a modification of the
> > memcache update process :
>
> Well, I'm back in touch with the GORDA project
PFC,
> It would be nice to have ON COMMIT triggers for this use.
>
> However you can emulate ON COMMIT triggers with a modification of the
> memcache update process :
Well, I'm back in touch with the GORDA project so possibly we can have
BEFORE COMMIT triggers after all.
BTW, it's i
Adding -performance back in.

-----Original Message-----
From: Oscar Picasso [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, April 12, 2006 5:51 PM
To: Jim Nasby
Subject: Re: [PERFORM] Better index strategy for many fields with few values

I would like to try it. However in another post I adde
People,
> Lately I find people are not so receptive to VxFS, and Sun is promoting
> ZFS, and we don't have a reasonable near term option for Raw IO in
> Postgres, so we need to work to find a reasonable path for Solaris users
> IMO. The long delays in ZFS production haven't helped us there, as th
Oscar,
On 4/10/06 9:58 AM, "Oscar Picasso" <[EMAIL PROTECTED]> wrote:
> - My items table:
> code int -- can take one of 100 values
> property varchar(250) -- can take one of 5000 values
> param01 char(10) -- can take one of 10 values
> param02 char(10) -- can take one of 10 values
Bruce,
On 4/12/06 12:56 PM, "Bruce Momjian" wrote:
> It is hard to imagine why people spend so much time modifying Sun
> machines run with acceptable performance when non-Sun operating systems
> work fine without such hurdles.
There are a lot of Solaris customers that we support and that we'd l
On Wed, Apr 12, 2006 at 10:36:28AM -0700, Craig A. James wrote:
> Jim C. Nasby wrote:
> >>1. You have only one application that modifies the data. (Otherwise, you
> >>have to duplicate the rules across many applications, leading to a
> >>code-maintenance nightmare).
> >
> >You forgot something:
On Wed, Apr 12, 2006 at 02:59:32PM +0200, Markus Schaber wrote:
> > I was thinking about using a multicolumn index, but I have read that
> > we should limit multicolumn indexes to at most 2 or 3 columns.
>
> Yes, that's true, the index overhead gets too high.
>
> > I was also thinking about abou
Luke Lonergan wrote:
> Alvaro,
>
> On 4/5/06 2:48 PM, "Alvaro Herrera" <[EMAIL PROTECTED]> wrote:
>
> > This essentially means stopping all bgwriter activity, thereby deferring
> > all I/O until checkpoint. Was this considered? With
> > checkpoint_segments to 128, it wouldn't surprise me that t
Jim C. Nasby wrote:
1. You have only one application that modifies the data. (Otherwise, you
have to duplicate the rules across many applications, leading to a
code-maintenance nightmare).
You forgot something:
1a: You know that there will never, ever, ever, ever, be any other
application th
On Wed, Apr 12, 2006 at 07:45:17AM -0700, Craig A. James wrote:
> All good advice, but... there are no absolutes in this world.
> Application-enforced referential integrity makes sense if (and probably
> ONLY if):
>
> 1. You have only one application that modifies the data. (Otherwise, you
>
On Wed, Apr 12, 2006 at 09:22:52AM +0200, PFC wrote:
>
> >> I think this is an old question, but I want to know if it really is
> >>well worth to not create some foreign keys and deal with the referential
> >>integrity at application-level?
>
> Trust me : do it in the application an
On Wed, 2006-04-12 at 09:49, Rodrigo Sakai wrote:
> Thanks for all help!! But my problem is with performance, I agree with all
> of you, the RI must be maintained by the database, because a bunch of
> reasons that everyone knows!
> But, I'm dealing with a very huge database that serves more than 200
What kinds of operations are made slow by foreign key checks ? Is it :
- Simple checks on INSERT ?
- Simple checks on UPDATE ?
- Cascaded deletes ?
- Cascaded updates ?
- Locks ?
- Something else ?
Foreign keys are to ensure that the value
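One knob worth mentioning for the checking/locking cost: FK constraints can be declared deferrable, so the checks fire at commit instead of per statement. A minimal sketch with invented tables:

```sql
CREATE TABLE customers (id int PRIMARY KEY);

CREATE TABLE orders (
    id       int PRIMARY KEY,
    customer int REFERENCES customers (id)
                 DEFERRABLE INITIALLY DEFERRED  -- checked at COMMIT
);

BEGIN;
-- Rows can be inserted in any convenient order inside the
-- transaction; the FK checks run once, at commit time:
INSERT INTO orders VALUES (10, 1);
INSERT INTO customers VALUES (1);
COMMIT;
```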
On Apr 12, 2006, at 23:49 , Rodrigo Sakai wrote:
Thanks for all help!! But my problem is with performance, I agree
with all
of you, the RI must be maintained by the database, because a bunch of
reasons that everyone knows!
You've gotten a variety of good advice from a number of people. Fo
On 4/11/06, Rodrigo Sakai <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
> I think this is an old question, but I want to know if it really is well
> worth to not create some foreign keys and deal with the referential integrity
> at application-level?
> Specifically, the system we are developing is
Thanks for all help!! But my problem is with performance, I agree with all
of you, the RI must be maintained by the database, because a bunch of
reasons that everyone knows!
But, I'm dealing with a very huge database that serves more than 200
clients at the same time, and because of it, each
I think this is an old question, but I want to know if it really
is well worth to not create some foreign keys and deal with the
referential integrity at application-level?
Trust me : do it in the application and you'll enter a world of
hurt. I'm doing it with some mysql apps, and
Hi, Michael,
Hi, Rodrigo,
Michael Glaesemann wrote:
> If I had to choose between one or the other, I'd leave all referential
> integrity in the database and deal with the errors thrown when
> referential integrity is violated in the application. PostgreSQL is
> designed to handle these kinds o
Thanks Mark,

> Hmm - that first query needs to do a sort, so you might want to
> experiment with the sort_mem parameter. Could you show us output from
> explain analyze for both the above queries?

Not too concerned about the sort, more about the query performance with
seq scan as the table size inc
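If one does want to chase the sort, Mark's sort_mem suggestion can be tried per session without touching postgresql.conf. A sketch (the query itself is invented):

```sql
-- sort_mem is in kB, per sort operation, per backend
-- (renamed work_mem from 8.0 on); raise it for this session only:
SET sort_mem = 65536;

-- then re-check the plan for the slow query, e.g.:
EXPLAIN ANALYZE
SELECT customer_id, count(*)
  FROM orders
 GROUP BY customer_id
 ORDER BY count(*) DESC;
```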
Hi, Oscar,
Oscar Picasso wrote:
> [ all the 20 paramXX columns are used in the query}
> How can I optimize this kind of query?
PostgreSQL 8.1 has so-called bitmap index scans, which can combine
several index scans before actually accessing the data.
So I think it's best to create an index on e
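A sketch of what that could look like against Oscar's schema (index names are invented, and whether the planner actually chooses a BitmapAnd depends on the statistics):

```sql
-- One small index per parameter column; 8.1 can combine them with
-- a BitmapAnd instead of needing one wide 20-column index.
CREATE INDEX items_param01_idx ON items (param01);
CREATE INDEX items_param02_idx ON items (param02);
-- ... one per paramXX column actually used in queries.

-- A query filtering on several params can then use several indexes:
-- SELECT * FROM items WHERE param01 = 'a' AND param02 = 'b';
```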
I think this is an old question, but I want to know if it really is
well worth to not create some foreign keys and deal with the referential
integrity at application-level?
Trust me : do it in the application and you'll enter a world of hurt. I'm
doing it with some mysql apps, and i
It would be nice to have ON COMMIT triggers for this use.
However you can emulate ON COMMIT triggers with a modification of the
memcache update process :
- A standard trigger sends the data to update to memcache
- The trigger also sends the PID
- Instead of b
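A hedged sketch of only the queueing step PFC describes; everything here is invented (table names, the key column, the queue layout), and the actual push to memcache would live in a separate post-commit process:

```sql
-- Invented tables, illustrative only.
CREATE TABLE items (id int PRIMARY KEY, payload text);
CREATE TABLE memcache_queue (backend_pid int, tbl text, key int);

-- Row trigger records the change plus the backend PID, so a
-- separate process can push it to memcache after commit:
CREATE OR REPLACE FUNCTION queue_for_memcache() RETURNS trigger AS $$
BEGIN
    INSERT INTO memcache_queue
        VALUES (pg_backend_pid(), TG_RELNAME, NEW.id);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER items_to_memcache
    AFTER INSERT OR UPDATE ON items
    FOR EACH ROW EXECUTE PROCEDURE queue_for_memcache();
```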