Re: [GENERAL] Duplicated entries are not ignored even if a "do instead nothing" rule is added.

2012-01-04 Thread Daniele Varrazzo
On Wed, Jan 4, 2012 at 8:09 PM, Alban Hertroys  wrote:

> On 4 Jan 2012, at 3:03, 邓尧 wrote:

>> True, I don't need transactions, nor do I want them, but psycopg2 creates
>> transactions for me automatically :-(
>
> Well, if psycopg didn't, Postgres would wrap each statement in a transaction 
> anyway. But that doesn't matter.

FYI, you can have psycopg in autocommit mode if you need that:
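A minimal sketch of what that looks like from Python 3 + psycopg2 (the DSN and
the account/home names are placeholders following the example elsewhere in the
thread; psycopg2 2.4.2 and later expose an autocommit attribute, older releases
use set_isolation_level instead):

import psycopg2
import psycopg2.extensions

conn = psycopg2.connect("dbname=test user=postgres")  # placeholder DSN

# psycopg2 >= 2.4.2: every statement is committed as soon as it runs.
conn.autocommit = True
# Older psycopg2 releases: the equivalent setting is
# conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)

cur = conn.cursor()
cur.execute("insert into account(home) values (%s)", ("http://example.org/u1",))
# no conn.commit() needed in autocommit mode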

-- Daniele


Re: [GENERAL] Duplicated entries are not ignored even if a "do instead nothing" rule is added.

2012-01-04 Thread Alban Hertroys
You accidentally clicked "Reply" instead of "Reply-all" ;)

On 4 Jan 2012, at 3:03, 邓尧 wrote:

> On Tue, Jan 3, 2012 at 3:42 PM, Alban Hertroys  wrote:
>> On 3 Jan 2012, at 5:20, 邓尧 wrote:
>> 
>> > Hi,
>> >
>> > I'm new to pgsql; I need to do something like the "INSERT IGNORE" in 
>> > MySQL. After some searching I got a solution, which is adding a "do 
>> > instead nothing" rule to the corresponding table, but it fails sometimes.
>> 
>> Yeah, if a concurrent transaction tries to create the same record, one of 
>> the transactions is going to find that it already exists on transaction 
>> commit. An INSERT-rule is not going to protect you against that.
>>  
> I modified the code to just commit after every insertion. I haven't seen this 
> kind of error for a few hours. It might hurt performance, but for my 
> application the database isn't the bottleneck.
>  
> 
>> > The table and the rule is created with the following sql statements:
>> >
>> > create sequence ACCOUNT_ID_SEQ;
>> > create table ACCOUNT (
>> >   ID bigint primary key default nextval('ACCOUNT_ID_SEQ'),
>> >   HOME char(255) not null,
>> >   NAME char(255)
>> > );
>> > create unique index on ACCOUNT(HOME);
>> > create index on ACCOUNT(NAME);
>> 
>> It seems to me that account(home) is actually the PK - do you really need 
>> the artificial id column? That is a matter of personal preference; wars are 
>> waged over artificial vs natural keys. People here will usually tell you to 
>> use what fits the problem best; both sides have benefits and drawbacks ;)
>> 
>> Another problem you'll have is that char columns are padded up to their full 
>> size with spaces - you'll end up trimming every value in your client 
>> applications. You probably want varchar(255) or, perhaps better, text. The 
>> latter also rids you of that 255 length limit from MySQL.
>> 
> Thanks for the advice, I actually ran into a bug because of the padded spaces 
> ;p
>  
> 
>> > There are about 20 clients do the following insertion (no UPDATE, some of 
>> > them might DELETE):
>> > begin transaction:
>> > insert into ACCOUNT(HOME) values (v1);
>> > insert into ACCOUNT(HOME) values (v2);
>> > ...
>> > commit;
>> >
>> > Sometimes I got the error says the unique constraint "account_home_idx" is 
>> > violated.
>> >
>> > Any suggestions?
>> 
>> I assume you're talking about parallel inserts from a multi-process tool for 
>> importing this data?
> Yes, I'm running a specific crawler that crawls a Twitter-like web site. Python 3 
> + psycopg2 is my programming platform.
>> 
>> 
>> First of all, especially if you're inserting a lot of data like this, see if 
>> you can use COPY FROM STDIN instead. That loads the whole transaction 
>> contents in one go, which is a lot more efficient than thousands of 
>> sequential inserts. As it's a single statement that way, you don't even need 
>> to wrap it in a transaction anymore - you'll get an implicit transaction per 
>> single statement, which is in this case exactly what you want for this 
>> single COPY statement.
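(A rough sketch of that COPY route in Python 3 + psycopg2, loading from an
in-memory buffer; the DSN, the URLs and the account/home names simply follow the
example above and are not code from the thread.)

import io
import psycopg2

conn = psycopg2.connect("dbname=test user=postgres")  # placeholder DSN
cur = conn.cursor()

# One line per row; with a single column there is nothing to separate.
rows = ["http://example.org/u1", "http://example.org/u2"]
buf = io.StringIO("".join(r + "\n" for r in rows))

# Load the whole batch in one COPY statement (one implicit transaction).
cur.copy_from(buf, "account", columns=("home",))
conn.commit()

Keep in mind that one duplicate HOME value anywhere in the batch still aborts the
whole COPY, so de-duplicate first (or COPY into a staging table and insert from
there).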
> True, I don't need transactions, nor do I want them, but psycopg2 creates 
> transactions for me automatically :-(

Well, if psycopg didn't, Postgres would wrap each statement in a transaction 
anyway. But that doesn't matter.

The problem was that you were verifying whether you were running into a 
conflict in one transaction (say 'A') before a potentially conflicting commit 
in a parallel transaction (say 'B').

So while you were verifying in 'A', transaction 'B' wasn't committed yet and 
therefore the conflicting row from 'B' wasn't visible to 'A'. When you then 
tried to commit 'A', the database found there was a conflicting record: namely 
the one from transaction 'B' that was just committed.
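
For the archive, a minimal sketch of the workaround 邓尧 settled on above - commit
after every insert and skip rows a concurrent client already created - in Python 3
+ psycopg2; the DSN and values are placeholders, not code from the thread:

import psycopg2

conn = psycopg2.connect("dbname=test user=postgres")  # placeholder DSN
cur = conn.cursor()

def insert_ignore(home):
    """Try to insert one row; skip it if it already exists."""
    try:
        cur.execute("insert into account(home) values (%s)", (home,))
        conn.commit()
    except psycopg2.IntegrityError:
        conn.rollback()  # the unique index on account(home) rejected the row

for home in ("http://example.org/u1", "http://example.org/u1"):
    insert_ignore(home)

(PostgreSQL 9.5 later added INSERT ... ON CONFLICT DO NOTHING, which does this
server-side, but that option did not exist for the 9.x versions discussed here.)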

Alban Hertroys

P.S.: Normally I would have trimmed this mail down to the bottom part of the 
conversation, but as the original mail didn't make it to the list I decided to 
keep all of it.

--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.



Re: [GENERAL] help... lost database after upgrade from 9.0 to 9.1

2012-01-04 Thread Adrian Klaver
On Wednesday, January 04, 2012 9:46:42 am you wrote:
> On Wed, Jan 04, 2012 at 07:41:32AM -0800, Adrian Klaver wrote:
> Hello!
> 
> > Define scrambled backup.

I am CCing the list so more eyes can see this.

> 
> well, disks on both sides had block loss without me noticing,
> so the backups were on the wrong partitions, and most of them are now
> part of lost+found if not lost completely...
> 
> my luck that both disks gave up at more or less the same time, and
> more or less silently

Alright, so really scrambled.

> 
> > On original machine verify that the locations and files in above command
> > exist: /var/lib/postgresql/9.0/main
> > /var/log/postgresql/postgresql-9.0-main.log
> > /etc/postgresql/9.0/main/postgresql.conf
> 
> all three ok!

So when you are running pg_ctlcluster 9.0 main start, what user are you running 
as?

Have you tried to start the 9.0 cluster directly as the postgres user? Something 
like:

/usr/lib/postgresql/9.0/bin/pg_ctl start -D /var/lib/postgresql/9.0/main \
 -l /var/log/postgresql/postgresql-9.0-main.log -s \
 -o '-c config_file=/etc/postgresql/9.0/main/postgresql.conf'


> 
> > Get the 9.0 server running.
> 
> heh :D
> that's what I didn't manage to do

-- 
Adrian Klaver
adrian.kla...@gmail.com


[GENERAL] PGAdmin3 for 9.1 and CentOS-5_x86_64

2012-01-04 Thread James B. Byrne

I cannot seem to find a copy of pgadmin3 built for 9.1 on
the x86_64 platform for the CentOS-5 (RHEL5) distro.   I
can find a pgadmin3 v1.14.1 rpm package for rhel6.x86_64
but none for rhel5.x86_64.

I sent a message off to the maintainer last week but with
the holidays no doubt it is sitting in his inbox.

Does anyone here know if there is a reason why CentOS-5 has
the i386 package built for pgadmin3 v1.14.1 but not the
x86_64?  Or does it exist and I am simply not finding it?

-- 
***  E-Mail is NOT a SECURE channel  ***
James B. Byrne        mailto:byrn...@harte-lyne.ca
Harte & Lyne Limited  http://www.harte-lyne.ca
9 Brockley Drive  vox: +1 905 561 1241
Hamilton, Ontario fax: +1 905 561 0757
Canada  L8E 3C3



Re: [GENERAL] help... lost database after upgrade from 9.0 to 9.1

2012-01-04 Thread Adrian Klaver
On Wednesday, January 04, 2012 6:50:25 am Bruno Boettcher wrote:
> Hello!
> 
> just made a stupid move... upgraded a working system without
> checking whether the backup was ok
> 
> so I end up with a Debian system upgraded to 9.1 without
> converting the database, and a scrambled backup which is totally
> unusable

Define scrambled backup.

> 
> I tried to start the old tree with
> pg_ctlcluster 9.0 main start
> Error: could not exec   start -D /var/lib/postgresql/9.0/main -l
> /var/log/postgresql/postgresql-9.0-main.log -s -o  -c
> config_file="/etc/postgresql/9.0/main/postgresql.conf" :

On original machine verify that the locations and files in above command exist:
/var/lib/postgresql/9.0/main
/var/log/postgresql/postgresql-9.0-main.log
/etc/postgresql/9.0/main/postgresql.conf


> 
> 
> so I tried to copy the old 9.0 tree to a machine with a still working 9.0
> postgres, but it stops with
> Starting PostgreSQL 9.0 database server: mainError: could not exec
> /usr/lib/postgresql/9.0/bin/pg_ctl /usr/lib/postgresql/9.0/bin/pg_ctl
> start -D /var/lib/postgresql/9.0/main -l
> /var/log/postgresql/postgresql-9.0-main.log -s -o -c
> config_file="/etc/postgresql/9.0/main/postgresql.conf" : ... failed!
>  failed!

This is more of the same as above. For now let's concentrate on the original 
install.

> 
> 
>  so what can I do to extract the data from that tree and feed it into the
>  9.1 tree?

Get the 9.0 server running.

> 
>  thanks in advance!

-- 
Adrian Klaver
adrian.kla...@gmail.com


[GENERAL] delayed post: Subject: help... lost database after upgrade from 9.0 to 9.1

2012-01-04 Thread bboett
Hello,

  got a delay notice for my post because I apparently sent it from the
  wrong machine; I tried (hopefully) to resend it from the right one

-- 
ciao bboett
==
bbo...@adlp.org
http://inforezo.u-strasbg.fr/~bboett/
===


[GENERAL] help... lost database after upgrade from 9.0 to 9.1

2012-01-04 Thread Bruno Boettcher
Hello!

just made a stupid move... upgraded a working system without
checking whether the backup was ok

so I end up with a Debian system upgraded to 9.1 without
converting the database, and a scrambled backup which is totally
unusable

I tried to start the old tree with 
pg_ctlcluster 9.0 main start
Error: could not exec   start -D /var/lib/postgresql/9.0/main -l 
/var/log/postgresql/postgresql-9.0-main.log -s -o  -c 
config_file="/etc/postgresql/9.0/main/postgresql.conf" : 


so I tried to copy the old 9.0 tree to a machine with a still working 9.0
postgres, but it stops with 
Starting PostgreSQL 9.0 database server: mainError: could not exec
/usr/lib/postgresql/9.0/bin/pg_ctl /usr/lib/postgresql/9.0/bin/pg_ctl
start -D /var/lib/postgresql/9.0/main -l
/var/log/postgresql/postgresql-9.0-main.log -s -o -c
config_file="/etc/postgresql/9.0/main/postgresql.conf" : ... failed!
 failed!


 so what can I do to extract the data from that tree and feed it into the
 9.1 tree?

 thanks in advance!
-- 
ciao bboett
==
bbo...@adlp.org
http://inforezo.u-strasbg.fr/~bboett/
===


Re: [GENERAL] Postgresql 9.1 logging

2012-01-04 Thread Andreas Kretschmer
Birta Levente  wrote:

> Hi all
>
> I use postgresql 9.1.2 on centos 6.2 and I want to use pgfouine, but
> #011, #015, ... characters appear in my log file and pgfouine can't
> handle them.
> Can I configure the server or rsyslog to log without these characters, or
> do I need to filter them separately?
>
> thanks
> Levi
>
> the log section from my configuration file:

Set lc_messages = 'C' in your postgresql.conf and restart/reload the server.
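
If the escapes still show up after that, a rough fallback sketch (separate from
the lc_messages suggestion, and only an assumption about your setup) is to strip
syslog's octal escapes for tab (#011) and carriage return (#015) before the file
reaches pgfouine; rsyslog can also be told not to escape control characters at
all ($EscapeControlCharactersOnReceive off):

import sys

# Undo the octal escapes rsyslog inserts for control characters:
# #011 is a tab, #015 is a carriage return.
for line in sys.stdin:
    sys.stdout.write(line.replace("#011", "\t").replace("#015", ""))

Run it as, say, python3 strip_escapes.py < postgresql-Mon.log > clean.log and
point pgfouine at clean.log (the script and file names are just examples).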


Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
"If I was god, I would recompile penguin with --enable-fly."   (unknown)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°


[GENERAL] Postgresql 9.1 logging

2012-01-04 Thread Birta Levente

Hi all

I use postgresql 9.1.2 on centos 6.2 and I want to use pgfouine, but 
#011, #015, ... characters appear in my log file and pgfouine can't 
handle them.
Can I configure the server or rsyslog to log without these characters, or 
do I need to filter them separately?


thanks
Levi

the log section from my configuration file:

log_destination = 'syslog'
logging_collector = on
log_directory = 'pg_log'
log_filename = 'postgresql-%a.log'
log_truncate_on_rotation = on
log_rotation_age = 0
log_rotation_size = 0
syslog_facility = 'LOCAL0'
syslog_ident = 'postgres'
silent_mode = on
client_min_messages = log
log_min_messages = warning
log_min_error_statement = info
log_min_duration_statement = 0
#debug_print_parse = off
#debug_print_rewritten = off
#debug_print_plan = off
#debug_pretty_print = on
log_checkpoints = on
#log_connections = off
#log_disconnections = off
log_duration = off
#log_error_verbosity = default
#log_hostname = off
log_line_prefix = 'user=%u,db=%d,host=%h '
#log_lock_waits = off
log_statement = 'none'
log_temp_files = 4096
#log_timezone = unknown




[GENERAL] Streaming replication with automatic failover?

2012-01-04 Thread Saurabh
Hi all,

After a discussion
(https://groups.google.com/group/pgsql.performance/browse_thread/thread/98ff8dd854be0b10/a26f49b4f4c60012?hl=en#a26f49b4f4c60012),
we have decided to stick with streaming replication. But now the question is how
to implement automatic failover, either with some application or with a trigger
file. If an application, then which one? And if a trigger file, then how do I
use it?

Thanks,
Saurabh Gupta
