I have a simple benchmark which runs too slow on a 100M row table, and
I am not sure what my next step is to make it faster.
It's a simple setup, part of a larger system. There are three data
tables, each with a BIGINT id and a data column of different types.
There is a fourth table with BIGINT
[EMAIL PROTECTED] writes:
I have a simple benchmark which runs too slow on a 100M row table, and
I am not sure what my next step is to make it faster.
The EXPLAIN ANALYZE you showed ran in 32 msec, which ought to be fast
enough for anyone on that size table. You need to show us data on the
Title: RE: [GENERAL] postmaster going down on its own
hi,
i noticed the script, and at places it says
LOG: received fast shutdown request
2006-04-10 10:25:05 IST %LOG: aborting any active transactions
2006-04-10 10:25:05 IST %idle FATAL: terminating connection due to administrator
Bjørn T Johansen wrote:
Is there an easier way to create the same trigger for n tables or do
I need to run create trigger n times?
You'll need to run CREATE TRIGGER several times I'm afraid. You might be
able to wrap this in a function if you'd like though.
--
Richard Huxton
Archonet
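Richard's wrap-it-in-a-function idea can be sketched as a loop over table names. This assumes a newer PostgreSQL than the thread's (DO blocks and FOREACH need 9.1+), and the table names and the trigger function audit_trg_fn() are hypothetical:

```sql
-- Create the same BEFORE INSERT trigger on several tables in one go.
-- Hypothetical names throughout; audit_trg_fn() must already exist.
DO $$
DECLARE
    t text;
BEGIN
    FOREACH t IN ARRAY ARRAY['orders', 'invoices', 'payments'] LOOP
        EXECUTE format(
            'CREATE TRIGGER %I BEFORE INSERT ON %I
             FOR EACH ROW EXECUTE PROCEDURE audit_trg_fn()',
            'audit_' || t, t);
    END LOOP;
END
$$;
```

On the 8.x versions current at the time, the same EXECUTE loop could live in an ordinary PL/pgSQL function instead of a DO block.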
Andrus wrote:
I have multi company database where each company is stored in different
schema.
When I incrementally create new companies and add data to them, the ANALYZE command
takes a lot of time: every time it analyzes the previous companies' data also.
How to run ANALYZE command for a single
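ANALYZE accepts an optional (schema-qualified) table name, so a load script can target just the schema of the newly created company instead of the whole database. The schema and table names below are hypothetical:

```sql
-- Analyze only the tables of the company schema that was just loaded.
ANALYZE company_42.customers;
ANALYZE company_42.orders;

-- It can even be restricted to specific columns:
ANALYZE company_42.customers (last_order_date);
```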
In my PostgreSQL 8.0.6 db I have a table in which the key fields are a
date and a code.
Now I need to load data from a text file (or a
temporary table) that either should update OR should be insert-ed into
that table.
Is there any ready-to-use, contributed function allowing
this 1-shot
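The thread predates it, but on PostgreSQL 9.5+ this one-shot "update or insert" is a single statement, provided a unique constraint exists on the key columns. Table and column names here are hypothetical:

```sql
-- Requires a unique constraint/index on (pdate, code).
INSERT INTO prices (pdate, code, value)
VALUES ('2006-04-10', 'ABC', 42.0)
ON CONFLICT (pdate, code)
DO UPDATE SET value = EXCLUDED.value;
```

On the 8.x releases of the time, the usual workaround was a PL/pgSQL function that tries the UPDATE first and falls back to INSERT.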
Gavin Hamill wrote:
On Sun, 09 Apr 2006 17:00:14 +0100
Simon Riggs [EMAIL PROTECTED] wrote:
Sniff the live log for SELECT statements (plus their live durations),
Wow, how wonderfully low-tech - hence it's right up my street :) Yay,
some tail + psql fun coming up!
Be careful though -
nearly ready to use
replace (int4, text, timestamp) with your field types; its convention: first param is primary key
replace
    update bcachekredbetr set betreuer=$2, letztespeicherung=$3
    where id_p=$1;
with the appropriate update (where clause references primary key)
CREATE OR REPLACE FUNCTION
Ottavio Campana wrote:
I need to document the database I develop so that other people can
easily understand how it works.
I particularly want to document the stored procedures. By now I've used
a javadoc style to document them. I can't use tools like doxygen on them
but it is always better
On Sun, Apr 09, 2006 at 02:48:33PM -0700, Chris Travers wrote:
Tyler MacDonald wrote:
Martijn van Oosterhout kleptog@svana.org wrote:
I'd call that the short term solution, with the long term solution
being to finally convince the right people to remove that clause from
OpenSSL's
Hi List
I'm currently playing with SSL support in PostgreSQL and have a few
questions:
SSL in general seems to work fine, but the client does not seem to
perform any server verification (Hostname or CA). Is support for this
planned?
Client Authentication seems to work as well, but there
On Sat, 08 Apr 2006 14:04:28 +0200, Philipp Ott wrote:
(..snippetysnip..)
I just wanted to know - I would like to have universal binaries of libpq
and psql to deploy.
Currently 8.1.3 compiles and runs just fine on OSX 10.4.6 + XCode 2.2.1,
but generates binaries just for the current host
Hi all,
I'm trying to restore one of our production databases on our development
system, but restore locks itself out.
The symptoms: restoring goes fine up to a certain point. Reaching that
point the database is idle, and apparently waiting on a lock. Server
load is minimal.
As this is a
Hi,
I have a query optimization problem and I have failed to find the part
of the Postgres docs that explains the meaning of the loops value in
the EXPLAIN ANALYSE output.
For example, my EXPLAIN ANALYSE output contains the following line:
Unique (cost=9775.21..10015.32 rows=1 width=8)
Aaron Bingham wrote:
Unique (cost=9775.21..10015.32 rows=1 width=8) (actual
time=264.889..264.889 rows=1 loops=791)
Does that mean that the entire operation took 264.889 ms, or that a
single iteration took that long? The time for the entire query would
suggest the latter interpretation
On Mon, Apr 10, 2006 at 04:08:01PM +0100, Richard Huxton wrote:
Aaron Bingham wrote:
Unique (cost=9775.21..10015.32 rows=1 width=8) (actual
time=264.889..264.889 rows=1 loops=791)
Does that mean that the entire operation took 264.889 ms, or that a
single iteration took that long? The
Alban Hertroys [EMAIL PROTECTED] writes:
postgres 15092 0.0 0.3 43692 12924 ? D14:11 0:00 postgres:
postgres vh3_live [local] INSERT
This process is not blocked on a lock: it's waiting for disk I/O.
Thoughts that come to mind include (1) it's going fine and you're not
patient
Aaron Bingham [EMAIL PROTECTED] writes:
For example, my EXPLAIN ANALYSE output contains the following line:
Unique (cost=9775.21..10015.32 rows=1 width=8) (actual
time=264.889..264.889 rows=1 loops=791)
Does that mean that the entire operation took 264.889 ms, or that a
single iteration
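As documented for EXPLAIN ANALYZE, when loops > 1 the "actual time" figure is an average per loop, so the node's total contribution is time multiplied by loops. Applied to the Unique line quoted above:

```python
# EXPLAIN ANALYZE reports per-loop averages when loops > 1; the node's
# total time contribution is actual_time * loops (figures from the post).
per_loop_ms = 264.889
loops = 791
total_ms = per_loop_ms * loops
print(round(total_ms, 3))  # milliseconds spent in this node overall
```

So the node accounts for roughly 209.5 seconds in total, not 265 ms.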
Tom Lane wrote:
Alban Hertroys [EMAIL PROTECTED] writes:
postgres 15092 0.0 0.3 43692 12924 ? D14:11 0:00 postgres:
postgres vh3_live [local] INSERT
This process is not blocked on a lock: it's waiting for disk I/O.
Thoughts that come to mind include (1) it's going fine and
Alban Hertroys wrote:
If your sysadmin wants to use 7.4.7 rather than 7.4.latest, he
needs swift application of a cluestick. I'll grant that there
might be application-compatibility reasons to stay on 7.4.*, but
not to avoid being up to date in that release series. See
I implemented wal archiving and it seems to be working. The segments
are being copied by the shell script, and in the pg_log file I see
this line:
LOG: archived transaction log file 0001001D0096
But the file is still in the pg_xlog directory. In the documentation I
read that it
On April 10, 2006 09:28 am, Just Someone [EMAIL PROTECTED] wrote:
I implemented wal archiving and it seems to be working. The segments
are being copied by the shell script, and in the pg_log file I see
this line:
LOG: archived transaction log file 0001001D0096
But the file is
Alan Hodgson [EMAIL PROTECTED] writes:
On April 10, 2006 09:28 am, Just Someone [EMAIL PROTECTED] wrote:
So my question is: Is what I'm seeing meaning the WAL archiving is
working? Or should I expect the file to be deleted?
If the file is showing up in your archive target location, it's
Martijn van Oosterhout kleptog@svana.org wrote:
Well, it's a Debian problem that possibly applies to Linux distributors
in general. Here is a good write up:
http://www.gnome.org/~markmc/openssl-and-the-gpl.html
The issue is that while anybody else can take advantage of the
components
HI,
I would like to implement a search by distance in my application.
Something like (pseudo SQL):
select * from users where users.location is less than 15 miles from chicago.
Any documentation on how to implement that?
I guess I also need a database of the cities' coordinates. Where could I find
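The core of such a search is a great-circle distance. A minimal haversine sketch, with approximate city coordinates made up for illustration:

```python
import math

EARTH_RADIUS_MILES = 3958.8

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two (lat, lon) points in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_MILES * math.asin(math.sqrt(a))

# Approximate coordinates, for illustration only.
chicago = (41.8781, -87.6298)
milwaukee = (43.0389, -87.9065)
d = haversine_miles(*chicago, *milwaukee)  # roughly 80 miles
```

Inside PostgreSQL the contrib modules cube/earthdistance (or PostGIS) provide this kind of distance operator directly.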
On 2006-04-10, Alban Hertroys [EMAIL PROTECTED] wrote:
Tom Lane wrote:
Alban Hertroys [EMAIL PROTECTED] writes:
postgres 15092 0.0 0.3 43692 12924 ? D14:11 0:00 postgres:
postgres vh3_live [local] INSERT
This process is not blocked on a lock: it's waiting for disk I/O.
On Sun, Apr 09, 2006 at 06:22:40PM -0400, Tom Lane wrote:
Frank [EMAIL PROTECTED] writes:
I have a perl script running as a daemon. It's using DBD::Pg (1.43) to
connect to my Postgres server (8.0.7) running on the same box and talking
over a socket. When I start the server, it runs fine
Hi All,
I have a clipid field in my table. In my application I take
Max(clipid) and insert into the table by incrementing that max
value. So there is a problem when 2 users click the submit button at the
same time: we are getting the same value. How do I solve the problem
Hi,
I'm interested in using postgresql_autodoc to document my postgres databases
on Linux, but I'm unable to find any information on how to install the
program or use it. Can someone point me in the right direction?
I've downloaded postgresql_autodoc-1.25.tar.gz from
-Original Message-
From: Tyler MacDonald[EMAIL PROTECTED]
Sent: 10/04/06 21:08:29
To: Chris Travers[EMAIL PROTECTED], Tom Lane[EMAIL PROTECTED], Chris
Travers[EMAIL PROTECTED], Chris Travers[EMAIL PROTECTED], Scott
Marlowe[EMAIL PROTECTED], Douglas McNaught[EMAIL PROTECTED],
Hi
I'm trying to write an aggregate collect_distinct(int8) which puts all
distinct values into an array. My first try was defining an aggregate
collect using array_append, and doing select collect(distinct field) ..,
but this is quite slow - probably because distinct sorts the values, instead
of
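For reference, PostgreSQL 8.4+ has a built-in aggregate for this, and the hand-rolled aggregate the poster describes can be declared as below (table and column names are hypothetical):

```sql
-- Built in since 8.4:
SELECT array_agg(DISTINCT field) FROM tbl;

-- The hand-rolled equivalent in the spirit of the post:
CREATE AGGREGATE collect (anyelement) (
    sfunc    = array_append,
    stype    = anyarray,
    initcond = '{}'
);
SELECT collect(DISTINCT field) FROM tbl;
```

As the poster notes, DISTINCT inside an aggregate call forces a sort, which is where the slowness comes from.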
Tom Lane wrote:
Ed L. [EMAIL PROTECTED] writes:
With our set of 4 DBs, that amounts to once every 40 minutes for
the given database. I see LOG: autovacuum: processing
database xyz in the log, but I do not see any analyze/vacuum
commands being issued at all (does it log when it
Hi, how can I tell in which order different triggers on the same table are fired? Let me explain:
I have three triggers for table A, all of them intended for before
insert on the table. In ASA I can tell the DB in which order I want
the triggers to fire; is there an equivalent for
On Mon, Apr 10, 2006 at 02:51:30AM -0400, Tom Lane wrote:
[EMAIL PROTECTED] writes:
I have a simple benchmark which runs too slow on a 100M row table, and
I am not sure what my next step is to make it faster.
The EXPLAIN ANALYZE you showed ran in 32 msec, which ought to be fast
enough for
On Mon, 2006-04-10 at 07:02, [EMAIL PROTECTED] wrote:
Hi All,
I have a clipid field in my table. In my application I take
Max(clipid) and insert into the table by incrementing that max
value. So there is a problem when 2 users click the submit button at the
same time: we are getting
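The race disappears if the database hands out the ids itself via a sequence, since nextval() never returns the same value to two sessions. A sketch with hypothetical table and column names:

```sql
CREATE SEQUENCE clipid_seq;

-- Each INSERT draws its own id; two concurrent submits get distinct values.
INSERT INTO clips (clipid, title)
VALUES (nextval('clipid_seq'), 'demo clip');
```

Declaring the column as `serial` sets up the same thing automatically.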
Hi, how can I tell in which order different triggers on the same table are fired? Let me explain:
I have three triggers for table A, all of them intended for before
insert on the table. In ASA I can tell the DB in which order I want the
triggers to fire; is there an equivalent for
On Monday 10 April 2006 05:09 pm, Hugo saith:
Hi, how can I tell in which order different triggers on the same table are fired? Let me explain:
I have three triggers for table A, all of them intended for before
insert on the table. In ASA I can tell the DB in which order I want the
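For what it's worth, PostgreSQL fires multiple triggers for the same event on the same table in alphabetical order of trigger name, so a naming convention pins the order. Trigger and function names below are hypothetical:

```sql
-- Fired first (name sorts earlier):
CREATE TRIGGER a10_validate BEFORE INSERT ON table_a
    FOR EACH ROW EXECUTE PROCEDURE validate_fn();

-- Fired second:
CREATE TRIGGER a20_derive BEFORE INSERT ON table_a
    FOR EACH ROW EXECUTE PROCEDURE derive_fn();
```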
Martijn van Oosterhout wrote:
On Wed, Mar 29, 2006 at 02:57:44PM +0400, Ivan Zolotukhin wrote:
So let's make such analysis here within PostgreSQL community! There
were a lot of talks about XML support in Postgres, but they did not
lead to any steady work in
Oscar Picasso wrote:
HI,
I would like to implement a search by distance in my application.
Something like (pseudo sql):
select * from users
where users.location is less than 15 miles from chicago.
Any documentation on how to implement that?
I guess I also need a database of the cities
Dave Page dpage@vale-housing.co.uk wrote:
GnuTLS is LGPL, which isn't quite as liberal as postgresql's
license, but should still be ubiquitous enough to be worthwhile.
The note on the fsf directory (http://directory.fsf.org/gnutls.html) is a
little off-putting:
The program is
What is the best way to find the latest partial WAL file?
Based on my tests, using the mtime isn't 100% accurate, as if a
pg_start_backup/pg_stop_backup() operation is run, the .backup file
created might be newer than the last WAL file. It also seems that the
WAL file related to the backup is
Just Someone [EMAIL PROTECTED] writes:
Is there a way to discover what is the real current WAL file?
If you sort first by mtime and second by file name you should find the
right one in all cases, ie, take the latest mtime among the
properly-named files, breaking ties by taking the higher
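Tom's rule can be written as a key function: latest mtime wins, and the higher filename breaks ties. The (name, mtime) pairs below are made up for illustration:

```python
# Pick the "current" WAL file: max mtime, ties broken by higher filename.
def current_wal(files):
    """files: iterable of (name, mtime) pairs; returns the best candidate name."""
    return max(files, key=lambda f: (f[1], f[0]))[0]

wal = [
    ("000000010000001D00000095", 1000),
    ("000000010000001D00000096", 1000),  # mtime tie -> higher name wins
    ("000000010000001D00000094", 900),
]
```

In practice the pairs would come from os.scandir() over pg_xlog, keeping only the properly named 24-hex-digit segment files.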
Florian G. Pflug [EMAIL PROTECTED] writes:
Using perl, and a perl-hash, was even slower, so I wrote my own C functions
(actually C++), which use an STL hash_set to filter out duplicates.
This makes me fairly nervous, because what's going to ensure that the
memory used by the hash_set is reclaimed?
Tom Lane wrote:
Florian G. Pflug [EMAIL PROTECTED] writes:
Using perl, and a perl-hash, was even slower, so I wrote my own C functions
(actually C++), which use an STL hash_set to filter out duplicates.
This makes me fairly nervous, because what's going to ensure that the
memory used by the
Terry Lee Tucker wrote:
On Monday 10 April 2006 05:09 pm, Hugo saith:
Hi, how can I tell in which order different triggers on the same table are fired? Let me explain:
I have three triggers for table A, all of them intended for before
insert on the table. In ASA I can tell the DB in
Florian G. Pflug [EMAIL PROTECTED] writes:
hash_set can be told to use a user-defined allocator class, which in turn
can use palloc/pfree, with an appropriate memory context. I'm not
really sure what the appropriate context is, as using CurrentMemoryContext
leads to strange crashes. For now,
Hi Tom,
If you sort first by mtime and second by file name you should find the
right one in all cases, ie, take the latest mtime among the
properly-named files, breaking ties by taking the higher filename.
It'd probably be better if we had a function to report this, but
you can get along
Thanks for your reply, Martijn. And see below.
On Sun, Apr 09, 2006 at 12:07:50PM -0800, Wei Wei wrote:
I try to understand how the DT information is stored/presented
in PG. In the application, the data is reported as
Sun Apr 09 12:40:52 PDT 2006 - Pacific Standard Time
But, in the
On Apr 10, 2006, at 6:24 PM, Tom Lane wrote:
Just Someone [EMAIL PROTECTED] writes:
Is there a way to discover what is the real current WAL file?
If you sort first by mtime and second by file name you should find the
right one in all cases, ie, take the latest mtime among the
properly-named
I'm looking for a way to calculate a hashcode or fingerprint for a row /
tuple of arbitrary width. The goal is to be able to store, in an audit
table, a fingerprint which could be compared against a later fingerprint
to detect changes (this application does not call for cryptographic
level
Wei Wei [EMAIL PROTECTED] writes:
In the application, the data is reported as
Sun Apr 09 12:40:52 PDT 2006 - Pacific Standard Time
But, in the DB, it is stated as
2006-04-09 14:40:53.093-07
It doesn't seem right to me. Both are on the same box and the
date column is with time zone.
Randall Lucas [EMAIL PROTECTED] writes:
I'm looking for a way to calculate a hashcode or fingerprint for a row /
tuple of arbitrary width. The goal is to be able to store, in an audit
table, a fingerprint which could be compared against a later fingerprint
to detect changes (this application
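One simple shape for such a fingerprint, sketched outside the database: hash a canonical serialization of the column values, with a separator so adjacent values can't run together. This is an illustrative assumption, not the poster's or Tom's method:

```python
import hashlib

def row_fingerprint(row):
    """row: sequence of column values in a fixed column order; returns a hex digest."""
    # repr() plus an unlikely separator keeps ('ab','c') distinct from ('a','bc').
    payload = "\x1f".join(repr(v) for v in row)
    return hashlib.md5(payload.encode("utf-8")).hexdigest()

fp = row_fingerprint((1, "a", None))
```

On newer PostgreSQL versions a similar effect is available in SQL as md5(ROW(t.*)::text), with the same caveat that the hash is only for change detection, not cryptographic integrity.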
Thanks for your response, Tom.
And please see the below.
In the application, the data is reported as
Sun Apr 09 12:40:52 PDT 2006 - Pacific Standard Time
But, in the DB, it is stated as
2006-04-09 14:40:53.093-07
It doesn't seem right to me. Both are on the same box and the
Wei Wei [EMAIL PROTECTED] writes:
A returned value of select now() is correct, but the TZ is -7 where
the TZ of the OS is set to Pacific Daylight Time.
That sounds like PG is working like it's supposed to.
The application is
written with Java. And I use the Java API Calendar class to