On Thu, Jun 26, 2003 at 03:17:12AM -0400, [EMAIL PROTECTED] wrote:
I have a table with 13 fields. Is that
too many fields for one table?
Mark
Thirteen? No way. I've got you beat with 21:
Pfft! Is *that* all? I've got a table with 116 fields. Very soon we'll be
upgrading to 7.3 and we
On Thu, 26 Jun 2003, u15074 wrote:
I have a small test program (using libpq) inserting a lot of data into the
database. Each command inserts a small large object (about 5k) into the database
and inserts one row into a table that references the large object oid.
I repeat this 100,000 times.
On Thu, Jun 26, 2003 at 03:17:12AM -0400, [EMAIL PROTECTED]
wrote:
I have a table with 13 fields. Is that
too many fields for one table?
Mark
Thirteen? No way. I've got you beat with 21:
Pfft! Is *that* all? I've got a table with 116 fields.
I *knew* a number of these responses
On 26 Jun 2003 at 3:44, [EMAIL PROTECTED] wrote:
On Thu, Jun 26, 2003 at 03:17:12AM -0400, [EMAIL PROTECTED]
wrote:
I have a table with 13 fields. Is that
too many fields for one table?
Mark
Thirteen? No way. I've got you beat with 21:
Pfft! Is *that* all? I've got a table with
Could be that the database is checkpointing then. Try experimenting with:
checkpoint_segments
checkpoint_timeout
Might be worth playing with:
wal_buffers
as well
regards
Mark
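For reference, the settings Mark names can be sketched as a postgresql.conf fragment. The values below are illustrative starting points only, not recommendations; the right numbers depend on write volume and disk speed:

```
# postgresql.conf -- illustrative values only
checkpoint_segments = 8     # default 3; more segments spread checkpoints out
checkpoint_timeout = 600    # seconds between forced checkpoints (default 300)
wal_buffers = 16            # disk-page buffers for WAL (default 8)
```

After changing these, restart the postmaster and watch whether the pause interval moves with checkpoint_timeout; if it does not, the pauses are probably not checkpoint-related.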
u15074 wrote:
The performance is ok and stays constant over the whole time. But I have the
following effect: from
// check if entry already exists
SELECT COUNT(*) FROM tablename WHERE [cond]
..
if ($count > 0)
UPDATE
else
INSERT
but this will double the hit to the database server, because
for every
operation I need to do SELECT COUNT(*) first. The data itself
is not a lot,
and the condition
Hello!
I have a question regarding foreign keys and general garbage collection
of data... If anyone could provide assistance, it'd be much appreciated!
Basically, we have a fairly complex database, with many tables
(customers, etc) that need to reference addresses that are contained in
a generic
On Thu, Jun 26, 2003 at 01:02:06 -0400,
Mike Mascari [EMAIL PROTECTED] wrote:
Hi,
Just a quick question, not unrelated to my
previous question, which I don't think will
get answered. I have a table with 13 fields.
Is that too many fields for one table?
Mathematically, a binary
On Thu, Jun 26, 2003 at 08:33:10 +0100,
Peter Childs [EMAIL PROTECTED] wrote:
On Thu, 26 Jun 2003, u15074 wrote:
Need to be run after deletes and updates (an update is actually a delete
and an insert) if you do it with verbose on. Look at the vac number if it
Just to avoid confusion, the
On Thu, Jun 26, 2003 at 12:00:07 +0100,
Matt Browne [EMAIL PROTECTED] wrote:
Other tables also reference records in the address table, using a
similar sort of scheme.
I have foreign keys set up so that if, for example, a record in customer
is deleted, the corresponding records in the
I have foreign keys set up so that if, for example, a record
in customer
is deleted, the corresponding records in the customer_addresses table
are also removed. However, I can't find a way of ensuring
records in the
address table are deleted too, given that lots of different
tables will
Matt Browne wrote:
Hello!
I have a question regarding foreign keys and general garbage collection
of data... If anyone could provide assistance, it'd be much appreciated!
Basically, we have a fairly complex database, with many tables
(customers, etc) that need to reference addresses that are
Hello again -
This problem has now been resolved, using triggers.
A big thank you to everyone who responded! I'd buy you all a beer if...
Er... This list was a bar.
Cheers!
--
Matt Browne [EMAIL PROTECTED]
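For archive readers: a minimal sketch of the kind of trigger that resolves this, assuming a hypothetical address(id) table and a customer_addresses(address_id) link table (names invented here, not Matt's actual schema; a real version would also check the other link tables):

```
-- Hypothetical schema; adjust names to taste.
CREATE FUNCTION gc_address() RETURNS trigger AS '
BEGIN
    -- drop the address only if no link row still points at it
    DELETE FROM address a
    WHERE a.id = OLD.address_id
      AND NOT EXISTS (SELECT 1 FROM customer_addresses ca
                      WHERE ca.address_id = a.id);
    RETURN OLD;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER gc_address_trig AFTER DELETE ON customer_addresses
    FOR EACH ROW EXECUTE PROCEDURE gc_address();
```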
---(end of broadcast)---
TIP 4: Don't
That's why I resorted to lock table, select, then insert/update. You have
to block all the reads of other processes that are considering an insert.
This is not great for performance, but I was certain it would work, unlike
the race-vulnerable suggestions (are people here actually using those?
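The lock-based pattern described above, sketched in SQL (table and column names are placeholders):

```
BEGIN;
-- SHARE ROW EXCLUSIVE blocks other writers (and other holders of this
-- lock), but still allows plain readers.
LOCK TABLE tablename IN SHARE ROW EXCLUSIVE MODE;
SELECT COUNT(*) FROM tablename WHERE key = 'abc';
-- application logic: if count > 0 ...
UPDATE tablename SET val = 'new' WHERE key = 'abc';
-- ... else ...
-- INSERT INTO tablename (key, val) VALUES ('abc', 'new');
COMMIT;   -- the lock is released here
```

The point of taking the lock before the SELECT is that no other session can slip an INSERT in between the existence check and the write.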
The original developers didn't really have a concept of storing different
info in different tables.
That kind of stuff drives me nuts. Where do people get their CS degrees?
It took me less than 2 days to teach our ARTISTS how to construct
fully-normalized tables (it's a long story as to why I
Mark Kirkwood [EMAIL PROTECTED] writes:
Could be that the database is checkpointing then.
Or the system 'syncer' process woke up and wrote a bunch of pages.
If the interval between pauses doesn't vary when you change
checkpoint_timeout and checkpoint_segments, then I'd blame the syncer
(or
On Thursday 26 Jun 2003 1:40 pm, Rich Shepard wrote:
Matt Browne wrote:
Basically, we have a fairly complex database, with many tables
(customers, etc) that need to reference addresses that are contained in
a generic address table.
So:
customer_addresses [table]
Lincoln Yeoh [EMAIL PROTECTED] writes:
(Related: I also suggested arbitrary user locks years back, but I wasn't
able to implement them.)
Don't we have 'em already? See contrib/userlock/.
regards, tom lane
Maksim Likharev [EMAIL PROTECTED] writes:
basically I'm complaining that PG does not do what I told it to do or
was hoping to do.
Okay, now I get the point: you want to prevent the pt sub-select from
being flattened into the outer query.
7.3.1 through 7.3.3 will actually do what you want (they
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
The quick answer is no: you cannot specifically match a certain column
and not match a certain column at the same time.
You are already creating the SQL statements manually, so why not just create
different ones for each situation?
if
On Wed, 25 Jun 2003 15:29:20 -0400, Andrew Sullivan
[EMAIL PROTECTED] wrote:
On Sat, Jun 14, 2003 at 07:33:19PM +0200, Sven Köhler wrote:
select * should reflect the cosmetic order of the columns.
Why? You asked for everything, and specified no order.
AFAICS it's a matter of standard
Hello,
I have a table with a bunch of records already inserted. When I want to add or remove
fields from the table, what I've done is produce an sql file, add the fields to the create
table directive, and add the fields manually to each record to be inserted. Then I do
an
\i db_name
which destroys
I have a table with a bunch of records already inserted. When
I want to add or remove fields from the table, what I've done is
produce an sql file, add the fields to the create table
directive, and add the fields manually to each record to be
inserted. Then I do an
\i db_name
which
Just wondering (I don't use or intend to use plpython), but why does it
need to be marked untrusted if the rexec code has been corrected.
Bruce Momjian wrote:
Patch applied. Thanks.
---
Kevin Jacobs wrote:
Because rexec was considered so broken by Guido that it was removed in its
entirety. Hopefully some day it will be readded in a more secure form,
and then the plpython module can be reinstated as a trusted language.
On Thu, 26 Jun 2003, DeJuan Jackson wrote:
Just wondering (I don't use or
DeJuan Jackson [EMAIL PROTECTED] writes:
Just wondering (I don't use or intend to use plpython), but why does it
need to be marked untrusted if the rexec code has been corrected.
Now that the rexec code is gone, it MUST be marked untrusted --- this is
not a question for debate. Installing it
On Thu, 2003-06-26 at 11:59, Tom Lane wrote:
DeJuan Jackson [EMAIL PROTECTED] writes:
Just wondering (I don't use or intend to use plpython), but why does it
need to be marked untrusted if the rexec code has been corrected.
Now that the rexec code is gone, it MUST be marked untrusted ---
On Thursday 26 June 2003 18:34, Benjamin Jury wrote:
(...)
You can use ALTER TABLE.
ALTER TABLE table ADD [column] column type
Can also rename columns, add constraints, change to NOT NULL, etc.
However you cannot currently remove a column...
Oh yes you can:
ALTER TABLE tbl DROP COLUMN
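Putting the thread's answers together as a sketch (hypothetical table t; DROP COLUMN is new in 7.3):

```
ALTER TABLE t ADD COLUMN notes text;              -- all recent versions
ALTER TABLE t RENAME COLUMN notes TO comments;
ALTER TABLE t DROP COLUMN comments;               -- 7.3 and later
-- constraints can be adjusted too, e.g. (7.3 and later, and only once
-- every existing row has a non-null value in that column):
-- ALTER TABLE t ALTER COLUMN some_col SET NOT NULL;
```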
A while back I posted a message saying that I got MS
Access to connect to PostgreSQL via pgsqlODBC while
all of them were running together in Linux via
Crossover Office 2.0.
Although initial test results were very hopeful, I'm
finding that this setup is not reliable. There have
been many cases
We are trying to do an all tables dump using the 7.3.3 pg_dump, but
are getting no love. The pg_dump command which worked before, in 7.2,
no longer returns any tables:
pg_dump -t * dbname
Is this by design, or by accident?
Paul
--
Paul Ramsey
Refractions
Ron Johnson [EMAIL PROTECTED] writes:
In what version is rexec removed? v2.3? If so, then there are
many people with Python 2.2 and even 2.1 who could still use
trusted PlPython.
Only if they don't mind being exposed to well-publicized security holes.
If we continued to support the
Ron Johnson [EMAIL PROTECTED] writes:
On Thu, 2003-06-26 at 11:59, Tom Lane wrote:
Now that the rexec code is gone, it MUST be marked untrusted --- this is
not a question for debate. Installing it as trusted would be a security
hole.
In what version is rexec removed? v2.3? If so,
--- Joe Conway [EMAIL PROTECTED] wrote:
Just for info, there is a (C language) crosstab
function in
contrib/tablefunc.
The one in 7.3 has a serious limitation (the source
query must ensure
that for each row in the crosstab, there is a record
representing each
column of the crosstab,
We have one main table for our users, as well as several related
tables
that contain user information. We would like to set up triggers
so
that the following occurs:
1. If someone deletes a user from the user table, it deletes all
the
occurences of the user's information in all the
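Declarative foreign keys can do this kind of cascading cleanup without hand-written triggers; a sketch with invented names:

```
CREATE TABLE users (id integer PRIMARY KEY);
CREATE TABLE user_prefs (
    user_id integer REFERENCES users (id) ON DELETE CASCADE,
    pref    text
);
-- DELETE FROM users WHERE id = 1;
-- now also removes every matching user_prefs row
```

Triggers are still needed for cleanup rules that a plain foreign key cannot express, such as deleting a shared row only when its last referrer goes away.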
On Thursday 26 June 2003 16:31, Roderick A. Anderson wrote:
I tried to build new RPMs this morning from the 7.3.3-1PGDG src RPM
because I didn't want tcl or python support. I've done this before with
7.2.1 or 7.3.1. I modified the spec file and the build went OK but when I
tried to install
On Thu, 26 Jun 2003, Lamar Owen wrote:
{/usr/src/redhat/RPMS/i386}# rpm -Uvh --test *.rpm
error: Failed dependencies:
perl(Pg) is needed by postgresql-contrib-7.3.3-1PGDG
Argh. That's supposed to be fixed; apparently I did something wrong. Install
it with --nodeps for now, while
Lamar Owen [EMAIL PROTECTED] writes:
If you want to help troubleshoot, look at the
filter-requires-perl-Pg.sh script (Source16) and see where it needs
to be invoked
But ... contrib *depends* on perl (see contrib/rserv for
instance). Maybe contrib is too generic. What about splitting it
In 7.1.3 I can use
select timestamp(date '2001-01-01', time '00:00');
but in 7.2.4 I get
db1=# select timestamp(date '2001-01-01', time '00:00');
ERROR: parser: parse error at or near date
I get the same error when using actual columns in actual tables. Both
PostgreSQL versions are
On Thursday 26 June 2003 18:34, Manuel Sugawara wrote:
Lamar Owen [EMAIL PROTECTED] writes:
If you want to help troubleshoot, look at the
filter-requires-perl-Pg.sh script (Source16) and see where it needs
to be invoked
But ... contrib *depends* on perl (see contrib/rserv for
On Friday 27 June 2003 00:47, Holger Marzen wrote:
In 7.1.3 I can use
select timestamp(date '2001-01-01', time '00:00');
but in 7.2.4 I get
db1=# select timestamp(date '2001-01-01', time '00:00');
ERROR: parser: parse error at or near date
changed in 7.2, see:
On Fri, 27 Jun 2003, Holger Marzen wrote:
In 7.1.3 I can use
select timestamp(date '2001-01-01', time '00:00');
but in 7.2.4 I get
db1=# select timestamp(date '2001-01-01', time '00:00');
ERROR: parser: parse error at or near date
To be closer to the standard, timestamp() became a
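A workaround that behaves the same in old and new versions is to add the date and the time, which yields a timestamp:

```
SELECT date '2001-01-01' + time '00:00';
```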
This is a rather ill-defined enquiry. The main reason behind it is to
find out if I'm going down the right path as a Postgres newbie.
I have a set of complex selects that I'm worried about from the point of
view of 1) can't these be made simpler 2) performance 3) describing the
selects to other
On 26 Jun 2003, Manuel Sugawara wrote:
Lamar Owen [EMAIL PROTECTED] writes:
If you want to help troubleshoot, look at the
filter-requires-perl-Pg.sh script (Source16) and see where it needs
to be invoked
But ... contrib *depends* on perl (see contrib/rserv for
instance). Maybe
ALTER TABLE ADD COLUMN
On Thu, Jun 26, 2003 at 12:42:18PM -0400, MT wrote:
Hello,
I have a table with a bunch of records already inserted. When I want to add or
remove fields from the table, what I've done is produce an sql file, add the fields to the
create table directive, and add the fields
Subject: column to row
Hi,
Please help...
How can I transfer the column data to become row data?
Thanks,
Milet
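If the goal is to turn columns into rows (the reverse of the crosstab discussed elsewhere in this digest), a plain UNION ALL does it; a sketch with an invented table t(id, jan, feb):

```
-- unpivot: each source column becomes one output row per id
SELECT id, 'jan' AS month, jan AS value FROM t
UNION ALL
SELECT id, 'feb', feb FROM t
ORDER BY id, month;
```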
Thanks Tom,
works, have to test performance
-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Thursday, June 26, 2003 7:36 AM
To: Maksim Likharev
Cc: GENERAL
Subject: Re: [GENERAL] Query plan question
Maksim Likharev [EMAIL PROTECTED] writes:
basically I
Jay O'Connor [EMAIL PROTECTED] writes:
I want to delete a bunch of procs from the database because I've stopped
using some and changed the arguments on others. Is it sufficient to do
something like
DELETE FROM pg_proc WHERE proname IN ();
Or do I have to use DROP FUNCTION to clean
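For the record, deleting rows from pg_proc by hand is unsafe: it bypasses dependency tracking and can leave dangling references in other system catalogs. DROP FUNCTION is the supported route, and it needs the exact argument types to pick the right overload (my_proc and its signature are invented here):

```
DROP FUNCTION my_proc(integer, text);
```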
[EMAIL PROTECTED] writes:
As long as we are playing whose is biggest, I have one with 900+
attributes (normalized) but there is a big warning - if you have a
query that returns hundreds of columns it will be very, very slow.
Is the SELECT * the only circumstance? That is, if you specify a
Added to TODO:
* Improve performance for queries with many columns
We already have an item for tables with many columns.
---
Tom Lane wrote:
[EMAIL PROTECTED] writes:
As long as we are playing whose is biggest,
Bruce Momjian [EMAIL PROTECTED] writes:
Added to TODO:
* Improve performance for queries with many columns
We already have an item for tables with many columns.
That one's a duplicate then.
regards, tom lane
Is the issue of many columns in a tuple the same issue as a SELECT
having many columns?
---
Tom Lane wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
Added to TODO:
* Improve performance for queries with many columns
Bruce Momjian [EMAIL PROTECTED] writes:
Is the issue of many columns in a tuple the same issue as a SELECT
having many columns?
I believe all the same inefficiencies need to be fixed whichever
way you look at it. Probably many columns in SELECT is the more
accurate description though.
On Thu, 26 Jun 2003, Stephan Szabo wrote:
On Fri, 27 Jun 2003, Holger Marzen wrote:
In 7.1.3 I can use
select timestamp(date '2001-01-01', time '00:00');
but in 7.2.4 I get
db1=# select timestamp(date '2001-01-01', time '00:00');
ERROR: parser: parse error at or near date