On Tue, 3 Jun 2008, Roberts, Jon [EMAIL PROTECTED] writes:
PostgreSQL does not have either a shared disk or shared nothing
architecture.
But there are some workarounds for these obstacles:
- Using pgpool[1], sequoia[2], or similar tools[3] you can simulate a
shared nothing architecture.
-
Yes, we are copying from pg_xlog. By doing so we let the WAL-segments fill up
(not using timeout) and we are able to recover within a 10 minute interval.
Could it be that this copy operation is causing the problem?
Per
-Original Message-
From: Magnus Hagander [mailto:[EMAIL PROTECTED]
Roberts, Jon wrote:
He's talking about having the raw database files on a file server (e.g.
an SMB share). DBs like Firebird and SQLite can handle this way of
accessing the data using their embedded engines.
Active-active, multiple server databases are either a shared nothing or
a shared disk
Hi!
Yes, almost certainly. Windows has major issues with more than one
process opening the same file, so it's very likely that this is your
issue. The only way you can safely get the file off the system without
affecting the running PostgreSQL instance is to use a Volume Shadow
Copy snapshot.
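For completeness, the standard way to get segments out safely is to let the server copy them itself via archive_command instead of reading pg_xlog directly; a minimal sketch (the archive directory is hypothetical, and archive_timeout matches the 10-minute recovery window mentioned above):

```
# postgresql.conf -- a minimal sketch; the archive path is hypothetical.
# %p expands to the path of the segment to archive, %f to its file name.
archive_mode = on
archive_command = 'copy "%p" "C:\\wal_archive\\%f"'
archive_timeout = 600   # force a segment switch at least every 10 minutes
```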
On Wed, Jun 4, 2008 at 12:32 AM, Guy Rouillier [EMAIL PROTECTED] wrote:
Roberts, Jon wrote:
He's talking about having the raw database files on a file server (e.g.
an SMB share). DBs like Firebird and SQLite can handle this way of
accessing the data using their embedded engines.
Active-active,
Hi,
I have two feature proposals for psql:
1. After executing some set of commands from a file via \e foo.sql,
pressing C-p or Up brings back the executed commands instead of
\e foo.sql. Shouldn't psql bring back \e foo.sql?
2. It would be really neat to be able to issue
\et regex - Edit
Hi, and thanks for the replies!
OK. I think we will reconsider this. The backup procedure was set up a few
years ago. I have personally made several point in time recoveries using this
technique (for testing purposes), and it works. But I guess an undesirable
side-effect is a recovery failure
I have implemented partitioning using inheritance following the proposed
solution here (using trigger):
http://www.postgresql.org/docs/8.3/interactive/ddl-partitioning.html
My problem is that when my Hibernate application inserts to the master
table, postgres returns 0 rows affected, which
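For reference, the trigger shape from that documentation page looks roughly like this (table and function names hypothetical). Because the trigger function returns NULL, the original INSERT against the master table inserts nothing, so the command tag reports 0 rows affected, which is exactly what Hibernate checks:

```sql
-- A sketch of the manual's trigger-based redirection (names hypothetical).
CREATE OR REPLACE FUNCTION master_insert_trigger() RETURNS trigger AS $$
BEGIN
    INSERT INTO child_2008 VALUES (NEW.*);
    RETURN NULL;  -- cancels the insert into the master table itself
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER insert_master_trigger
    BEFORE INSERT ON master
    FOR EACH ROW EXECUTE PROCEDURE master_insert_trigger();
```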
Volkan YAZICI wrote:
2. It would be really neat to be able to issue
\et regex - Edit table. (Create script of table will get dumped.)
\et+ regex - Edit table with dependents. (With create script of
INDEXes, triggers, etc.)
How do you intend to use it?
Albretch Mueller wrote:
import/export the data into/out of PG, so you will be essentially
duplicating the data and having to synch it. This is exactly what I am
trying to avoid, I would like for PG to handle the data right from the
data feed
You could write a set-returning function that reads
On Wed, 04 Jun 2008, Klint Gore [EMAIL PROTECTED] writes:
postgres=# create table bar (foo int);
CREATE TABLE
postgres=# \e
ERROR: relation bar already exists
postgres=#
Not exactly like that; consider placing a \c new_copy line in your
script file.
\ef regex - Edit function.
Per Lauvås wrote:
Hi, and thanks for the replies!
OK. I think we will reconsider this. The backup procedure was set up
a few years ago. I have personally made several point in time
recoveries using this technique (for testing purposes), and it works.
But I guess an undesirable side-effect
-Original Message-
From: [EMAIL PROTECTED] [mailto:pgsql-general-
[EMAIL PROTECTED] On Behalf Of Guy Rouillier
Sent: Wednesday, June 04, 2008 1:33 AM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] does postgresql works on distributed systems?
Roberts, Jon wrote:
He's
Rather than try to document my experiences here in an email thread,
I created a wiki page to document the database upgrade process in general
for MediaWiki. Please look it over and edit it as needed:
On Wed, Jun 4, 2008 at 11:34 AM, Volkan YAZICI [EMAIL PROTECTED] wrote:
I'm planning to make a survey regarding PostgreSQL performance on
OpenSSI. There are some obstacles mostly caused by shared-memory
architecture of PostgreSQL, but that claim is -- AFAIK -- totally
theoretical. There
When I try to run pg_restore (as the postgres superuser), it appears to
freeze after printing the following:
pg_restore -U yours_truly -d somedb /path/to/somedb.dmp
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 24; 1255 365299855
Hi,
I log into my database, then I run the following select statement: SELECT *
FROM pg_tables; This lists the
following:
schemaname | tablename | tableowner | tablespace
pg_catalog | pg_authid | postgres | pg_global
pg_catalog | pg_database | postgres | pg_global
public |
Can someone explain exception handling in PostgreSQL to me? I am
not understanding the scope of an exception block. The exact confusion
that I am facing is as follows:
I have a procedure as follows
BEGIN
EXECUTE an update statement
EXECUTE an insert statement
EXCEPTION WHEN
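Filled out, a function of that shape might look like this (table and error condition are hypothetical). The key scoping rule: an EXCEPTION clause only catches errors raised by statements inside its own BEGIN ... END block, and entering the block starts a subtransaction:

```sql
-- A sketch, assuming a table t(id int primary key, val int).
CREATE OR REPLACE FUNCTION demo() RETURNS void AS $$
BEGIN
    UPDATE t SET val = val + 1 WHERE id = 1;
    INSERT INTO t (id, val) VALUES (1, 0);
EXCEPTION WHEN unique_violation THEN
    -- when the INSERT raises unique_violation, everything done since
    -- this block's BEGIN -- including the UPDATE -- is rolled back
    -- before the handler runs; the function then returns normally
    RAISE NOTICE 'row already present';
END;
$$ LANGUAGE plpgsql;
```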
On Wed, 4 Jun 2008, sam wrote:
Can someone explain exception handling in PostgreSQL to me? I am
not understanding the scope of an exception block. The exact confusion
that I am facing is as follows:
I have a procedure as follows
BEGIN
EXECUTE an update statement
EXECUTE an insert
Greetings!
A customer has two servers, one hosting a PostgreSQL database and our main
application, and the other, the model server, hosting a couple of other
applications that run rarely but require a few minutes and most of the
processor when they do run. The model server is a leased machine, and
We have a function which has been throwing a key violation. We're
tracking the purchase history for a limited number of unique items,
and flagging the most recent purchase with purchase_status = 1.
Stripped down, here's the schema and the function:
CREATE TABLE purchases (
purchase_id
This is my first 'real' script, one that verifies proper format for a
user-entered date string.
Once that is done I want the script to return the UNIX time.
I plan on invoking this script on a psql connection (via .psqlrc), so
that I can call it from the command line.
Here's what I have at
Michael Glaesemann [EMAIL PROTECTED] writes:
We have a function which has been throwing a key violation. We're
tracking the purchase history for a limited number of unique items,
and flagging the most recent purchase with purchase_status = 1.
Um, the function is ensuring that there is no
On Jun 4, 2008, at 5:39 PM, Tom Lane wrote:
I think you've anonymized the example into nonsense :-(.
Now that I've provided yet another example of the perils of not
providing the exact code, I've tried to patch it (below, for those of
you willing to give me a second chance).
However,
On Wed, 4 Jun 2008, Ralph Smith wrote:
-- ==
good_date := to_date(year||'-'||month||'-'||day , 'YYYY-MM-DD') ;
RAISE NOTICE 'good_date = %',good_date ;
Usecs := EXTRACT(EPOCH FROM TIMESTAMP good_date) ;
END ;
QUERY: SELECT
CREATE OR REPLACE FUNCTION
purchase(IN in_item_id integer,
IN in_purchased_by bigint,
IN in_purchase_price integer)
RETURNS VOID AS
$BODY$
BEGIN
-- some selects
UPDATE purchases
SET purchase_status = 0
WHERE item_id = in_item_id
AND
On Jun 4, 2008, at 2:56 PM, Stephan Szabo wrote:
On Wed, 4 Jun 2008, Ralph Smith wrote:
-- ==
good_date := to_date(year||'-'||month||'-'||day , 'YYYY-MM-DD') ;
RAISE NOTICE 'good_date = %',good_date ;
Usecs := EXTRACT(EPOCH FROM TIMESTAMP
On Jun 4, 2008, at 6:00 PM, Jeff Davis wrote:
Some other concurrent transaction could still
insert something after the UPDATE but before the INSERT, so the unique
constraint violation can still occur.
Yes, I saw the comment. I'm guessing I'm missing something wrt
transaction isolation
On Wed, 4 Jun 2008, Ralph Smith wrote:
date_string := to_date(year||'-'||month||'-'||day , 'YYYY-MM-DD') ;
RAISE INFO 'date_string = %', date_string ;
good_date := to_timestamp(date_string, 'YYYY-MM-DD') ;
RAISE INFO 'good_date = %', good_date ;
This seems like a lot of extra
Michael Glaesemann [EMAIL PROTECTED] writes:
However, my point regarding the example in the docs still holds. Why
is the exception block necessary? Doesn't wrapping the statements in a
function ensure the unique_violation couldn't occur?
Well, the point of that example is to deal correctly
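For context, the manual's merge_db example is a retry loop; the exception block is what lets it survive a concurrent insert that lands between its UPDATE and its INSERT:

```sql
-- The manual's merge_db example, lightly abridged;
-- it assumes a table db(a int primary key, b text).
CREATE FUNCTION merge_db(key INT, data TEXT) RETURNS VOID AS $$
BEGIN
    LOOP
        -- first try to update the key
        UPDATE db SET b = data WHERE a = key;
        IF found THEN
            RETURN;
        END IF;
        -- not there, so try to insert; if someone else inserts the
        -- same key concurrently, we get a unique-key failure
        BEGIN
            INSERT INTO db(a, b) VALUES (key, data);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- do nothing, and loop to try the UPDATE again
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```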
I have a query that takes 2 sec if I run it from a freshly restored
dump. If I run a full vacuum on the database it then takes 30 seconds.
Would someone please comment as to why I would see a 15x slow down by
only vacuuming the DB? Reindexing does not help, and a full vacuum was
run just
Same problem, see below
On Jun 4, 2008, at 3:30 PM, Stephan Szabo wrote:
On Wed, 4 Jun 2008, Ralph Smith wrote:
date_string := to_date(year||'-'||month||'-'||day , 'YYYY-MM-DD') ;
RAISE INFO 'date_string = %', date_string ;
good_date :=
On Wed, 2008-06-04 at 17:02 -0500, Jason Long wrote:
I have a query that takes 2 sec if I run it from a freshly restored
dump. If I run a full vacuum on the database it then takes 30 seconds.
If you run it a second time after the vacuum full?
Joshua D. Drake
Try doing what was suggested?
Greg Williamson
Senior DBA
DigitalGlobe
I do believe I did.
I tried with and w/o the DATE word in the EXTRACT statement.
Without the DATE I get:
-
airburst=# select usecs_from_date('2008-06-04');
INFO: good_date = 2008-06-04
ERROR: function pg_catalog.date_part(unknown, unknown) is not
unique
LINE 1:
On Wed, 4 Jun 2008, Ralph Smith wrote:
-- ==
good_date := to_date(year||'-'||month||'-'||day , 'YYYY-MM-DD') ;
RAISE INFO 'good_date = %', good_date ;
UsecsD := EXTRACT(EPOCH FROM DATE 'good_date') ;
You want something like:
UsecsD :=
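One working form (a guess at where that suggestion was headed, using the thread's variable names): cast the variable instead of using the DATE 'literal' syntax, which only accepts a quoted constant, not a variable name:

```sql
-- good_date already holds a date value, so cast it rather than
-- quoting the variable name (quoting produced the ambiguous
-- date_part(unknown, unknown) call seen in the error):
UsecsD := EXTRACT(EPOCH FROM good_date::timestamp);
```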
Jason Long [EMAIL PROTECTED] writes:
I have a query that takes 2 sec if I run it from a freshly restored
dump. If I run a full vacuum on the database it then takes 30 seconds.
Would someone please comment as to why I would see a 15x slow down by
only vacuuming the DB?
EXPLAIN ANALYZE of
Yes.
This is what I do.
1. Vacuum full
2. Reindex force
This is done every night
My users complained about a report being slow.
I grabbed a dump and restored it to my development machine.
The query worked just fine, but not on the production server.
I did a vacuum full and then reran the
Just post the explain analyze output here on the list. There's lots
of folks here who can read it.
On Wed, Jun 4, 2008 at 5:49 PM, Jason Long
[EMAIL PROTECTED] wrote:
Yes.
This is what I do.
1. Vacuum full
2. Reindex force
This is done every night
My users complained about a report being
On Tue, Jun 3, 2008 at 7:38 AM, [EMAIL PROTECTED] wrote:
I have implemented partitioning using inheritance following the proposed
solution here (using trigger):
http://www.postgresql.org/docs/8.3/interactive/ddl-partitioning.html
My problem is that when my Hibernate application inserts to
I've tried SO MANY variations of w/ and w/o the apostrophes, but
apparently not just the right one.
As you knew and I doubted, it NOW WORKS!
Whew!
You can bet that I'm keeping this snippet of code handy.
Thank you very much,
Ralph
==
On Jun 4, 2008, at 4:34
I am resending this in plain text as it was rejected. What is the best
way to include this kind of output on the mailing list?
I would greatly appreciate any advice on how to read the following
output from EXPLAIN ANALYZE.
When running the query the first time from a fresh restore it takes
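On the mechanics of including the output: psql can write query results straight to a file with \o, which keeps the plan formatting intact for pasting as plain text (file name and query are placeholders):

```sql
\o plan.txt
EXPLAIN ANALYZE SELECT 1;   -- substitute the slow report query
\o
```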
On Wed, 2008-06-04 at 18:21 -0400, Michael Glaesemann wrote:
Yes, I saw the comment. I'm guessing I'm missing something wrt
transaction isolation level or locking. Would I need to use
SERIALIZABLE or some kind of locking? Is the function in the example
any different than the following
Has anyone written, or does anyone know of, a Zenoss plugin that can be
used in the same way as MySQLMonitor?
I can use the nagios one but was hoping to get some of the same sort of
performance metrics as the mysql one provides.
--
Jason Long wrote:
I would greatly appreciate any advice on how to read the following
output from EXPLAIN ANALYZE.
When running the query the first time from a fresh restore it takes
about 55 sec.
On a second run is take about 2 sec.
After the vacuum it takes about 36 sec no matter how
I just bought a new Ubuntu laptop so that I could try out and hopefully
offer support for a few of my favorite pgfoundry projects.
Would anyone be able to give any direction on what I need to do to get
past this error?
***
My first attempt with building from