[GENERAL] How to join materialized view to child tables

2007-07-10 Thread Postgres User
Hi, I have a quasi materialized view that's maintained by INS, UPD, and DEL triggers on several child tables. The tables involved have different structures, but I needed a single view for selecting records based on a few common fields. This approach is much faster than querying the separate
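The pattern described above (a "quasi materialized view" kept current by insert, update, and delete triggers on each child table) can be sketched roughly as below. All table and column names are illustrative guesses, not from the original post; the trigger would be repeated for each child table:

```sql
-- Hypothetical sketch of a trigger-maintained summary table.
CREATE TABLE common_view (
    source_table text NOT NULL,
    source_id    integer NOT NULL,
    name         text,
    created_at   timestamp,
    PRIMARY KEY (source_table, source_id)
);

CREATE OR REPLACE FUNCTION sync_common_view() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        DELETE FROM common_view
         WHERE source_table = TG_TABLE_NAME AND source_id = OLD.id;
        RETURN OLD;
    ELSIF TG_OP = 'UPDATE' THEN
        UPDATE common_view
           SET name = NEW.name, created_at = NEW.created_at
         WHERE source_table = TG_TABLE_NAME AND source_id = NEW.id;
    ELSE  -- INSERT
        INSERT INTO common_view (source_table, source_id, name, created_at)
        VALUES (TG_TABLE_NAME, NEW.id, NEW.name, NEW.created_at);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- One trigger per child table, e.g. for a hypothetical child_a:
CREATE TRIGGER child_a_sync
AFTER INSERT OR UPDATE OR DELETE ON child_a
FOR EACH ROW EXECUTE PROCEDURE sync_common_view();
```

Selecting on the few common fields then hits one compact, indexable table instead of a UNION over differently-shaped children.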

Re: [GENERAL] Duplicate Unique Key constraint error

2007-07-10 Thread Ron St-Pierre
Harpreet Dhaliwal wrote: Hi, I keep getting this duplicate unique key constraint error for my primary key even though I'm not inserting anything duplicate. It even inserts the records properly, but my console throws this error and I'm not sure what it is all about. Corruption of my Primary

Re: [GENERAL] Postgres 8.2 binary for ubuntu 6.10?

2007-07-10 Thread Hannes Dorbath
On 10.07.2007 03:09, novnov wrote: I have postgres 8.1 installed on ubuntu 6.10 via the synaptic package manager. I would like to install 8.2, but it's not offered in the list. I think 8.2 is offered on 7.x ubuntu, and I wonder if 8.2 will be offered on 6.10? Probably the recommendation will be to

[GENERAL] pgpass.conf

2007-07-10 Thread Ashish Karalkar
Hello All, I am trying to run a script to create a database from a batch program and don't want to supply the password every time. So I tried to set up the pgpass.conf file. The file is kept in user profile/application data, i.e. C:\Documents and Settings\postgres\Application Data\postgresql\pgpass.conf file
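For reference, pgpass.conf holds one colon-separated entry per line; all values below are placeholders:

```
# %APPDATA%\postgresql\pgpass.conf (on Unix: ~/.pgpass)
# Format: hostname:port:database:username:password
localhost:5432:mydb:postgres:secret
# '*' is a wildcard matching any value in that field:
*:5432:*:postgres:secret
```

The host field must match what the client actually connects to (e.g. `localhost` vs `127.0.0.1`), which comes up later in this thread.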

Re : [GENERAL] Postgres 8.2 binary for ubuntu 6.10?

2007-07-10 Thread Laurent ROCHE
Hi, I am not moving from 6.10 to anything else for now. Ubuntu 6.10 LTS is Long Term Support. So for a server that's what I want: everything working better and better (via updates) and no major changes! Always getting the latest version is definitely asking for trouble. I don't need the

Re: [GENERAL] plpgsql equivalent to plperl $_SHARED and plpythonu global dictionary GD?

2007-07-10 Thread hubert depesz lubaczewski
On 7/9/07, Zlatko Matic [EMAIL PROTECTED] wrote: Does plpgsql have something equivalent to plperl $_SHARED or the plpythonu global dictionary GD? No, but you can use some table to emulate this. Or a temp table. depesz -- http://www.depesz.com/ - nowy, lepszy depesz
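A minimal sketch of the temp-table workaround depesz suggests, with illustrative names. A temp table is private to the session, so it behaves like per-session shared state across plpgsql calls:

```sql
-- Session-local key/value store emulating plperl's $_SHARED.
CREATE TEMP TABLE session_vars (name text PRIMARY KEY, value text);

CREATE OR REPLACE FUNCTION set_var(p_name text, p_value text) RETURNS void AS $$
BEGIN
    UPDATE session_vars SET value = p_value WHERE name = p_name;
    IF NOT FOUND THEN
        INSERT INTO session_vars VALUES (p_name, p_value);
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION get_var(p_name text) RETURNS text AS $$
DECLARE
    v text;
BEGIN
    SELECT value INTO v FROM session_vars WHERE name = p_name;
    RETURN v;  -- NULL if the variable was never set
END;
$$ LANGUAGE plpgsql;
```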

Re: [GENERAL] Postgres 8.2 binary for ubuntu 6.10?

2007-07-10 Thread Jan Muszynski
On 10 Jul 2007 at 9:13, Hannes Dorbath wrote: On 10.07.2007 03:09, novnov wrote: I have postgres 8.1 installed on ubuntu 6.10 via the synaptic package manager. I would like to install 8.2, but it's not offered in the list. I think 8.2 is offered on 7.x ubuntu, and I wonder if 8.2 will be offered on

Re: Re : [GENERAL] Postgres 8.2 binary for ubuntu 6.10?

2007-07-10 Thread Mario Guenterberg
On Tue, Jul 10, 2007 at 01:13:07AM -0700, Laurent ROCHE wrote: Hi, I am not moving from 6.10 to anything else for now. Ubuntu 6.10 LTS is Long Term Support. So for a server that's what I want: everything working better and better (via updates) and no major changes! Always getting the

Re: [GENERAL] Postgres 8.2 binary for ubuntu 6.10?

2007-07-10 Thread Dimitri Fontaine
On Tuesday 10 July 2007, novnov wrote: I have postgres 8.1 installed on ubuntu 6.10 via the synaptic package manager. I would like to install 8.2, but it's not offered in the list. I think 8.2 is offered on 7.x ubuntu, and I wonder if 8.2 will be offered on 6.10? Probably the recommendation

Re: [GENERAL] pgpass.conf

2007-07-10 Thread Raymond O'Donnell
On 10/07/2007 08:47, Ashish Karalkar wrote: Still the batch asks for the password! I am just not getting why it's not reading the password from the pgpass file. Probably a silly question, but if you're using the createdb utility in the batch file, have you inadvertently included the -W option? -

Re: [GENERAL] Vacuum issue

2007-07-10 Thread Dimitri Fontaine
On Monday 09 July 2007, Gregory Stark wrote: The output of VACUUM VERBOSE can be hard to interpret; if you want help adjusting the fsm settings, send it here. Using pgfouine, one gets easy-to-read reports: http://pgfouine.projects.postgresql.org/vacuum.html

Re: [GENERAL] pgpass.conf

2007-07-10 Thread Ashish Karalkar
- Original Message - From: Raymond O'Donnell [EMAIL PROTECTED] To: Ashish Karalkar [EMAIL PROTECTED] Cc: pgsql-general@postgresql.org Sent: Tuesday, July 10, 2007 3:51 PM Subject: Re: [GENERAL] pgpass.conf On 10/07/2007 08:47, Ashish Karalkar wrote: Still the batch asks for the

Re: [GENERAL] pgpass.conf

2007-07-10 Thread Raymond O'Donnell
On 10/07/2007 11:28, Ashish Karalkar wrote: I have set this up successfully on Red Hat Linux but I am stuck on Windows XP Professional. Is there anything else to do? I'm not a guru, but maybe it's a permissions problem on the pgpass file? Ray.

Re: [GENERAL] pgpass.conf

2007-07-10 Thread Dave Page
Ashish Karalkar wrote: Hello All, I am trying to run a script to create a database from a batch program and don't want to supply the password every time. So I tried to set up the pgpass.conf file. The file is kept in user profile/application data i.e C:\Documents and Settings\postgres\Application

Re: [GENERAL] pgpass.conf

2007-07-10 Thread Ashish Karalkar
- Original Message - From: Dave Page [EMAIL PROTECTED] To: Ashish Karalkar [EMAIL PROTECTED] Cc: pgsql-general@postgresql.org Sent: Tuesday, July 10, 2007 4:25 PM Subject: Re: [GENERAL] pgpass.conf Ashish Karalkar wrote: Hello All, I am trying to run a script to create a database

Re: [GENERAL] pgpass.conf

2007-07-10 Thread Magnus Hagander
On Tue, Jul 10, 2007 at 04:34:56PM +0530, Ashish Karalkar wrote: Hello All, I am trying to run a script to create a database from a batch program and don't want to supply the password every time. So I tried to set up the pgpass.conf file. The file is kept in user profile/application data i.e

Re: [GENERAL] pgpass.conf

2007-07-10 Thread Dave Page
Ashish Karalkar wrote: The batch file is run under the postgres user, and the owner of the pgpass.conf file is also postgres. As far as I know, permission checking is not done on Windows anyway; the owner is the same, so I don't think there is a permissions problem. OK - have you tried 127.0.0.1

Re: [GENERAL] catalog location

2007-07-10 Thread John DeSoi
On Jul 7, 2007, at 8:16 AM, Carmen Martinez wrote: Please, I need to know where the catalog tables (pg_class, pg_attrdef...) are located in the postgresql rdbms. Because I can not see them in the pgAdminII interface, like other tables or objects. And I can not find any reference about

[GENERAL] vacuumdb: PANIC: corrupted item pointer

2007-07-10 Thread Alain Peyrat
Hello, System: Red Hat Linux 4 64bits running postgres-7.4.16 (production) Initial problem: # pg_dump -O dbname -Ft -f /tmp/database.tar pg_dump: query to get table columns failed: ERROR:  invalid memory alloc request size 9000688640 After some research, it seems to be related

Re: [GENERAL] russian case-insensitive regexp search not working

2007-07-10 Thread Karsten Hilbert
On Tue, Jul 10, 2007 at 08:40:24AM +0400, alexander lunyov wrote: Just to clarify: lower() on both sides of a comparison should still work as expected on multibyte encodings? It's been suggested here before. lower() on both sides also does not work in my case; it still searches for

Re: [GENERAL] plpgsql equivalent to plperl $_SHARED and plpythonu global dictionary GD?

2007-07-10 Thread Zlatko Matic
Hello. OK. I created a new table that holds information about rows inserted/updated in a transaction. I realized that the AFTER row-level trigger always fires before the AFTER statement-level trigger. Therefore I can use the row-level trigger to populate the auxiliary table which holds information about

[GENERAL] free scheduled import utility

2007-07-10 Thread Zlatko Matic
Hello. Is there any free program/utility for batch imports from .csv files, that can be easily scheduled for daily inserts of data to PostgreSQL tables? Regards, Zlatko ---(end of broadcast)--- TIP 5: don't forget to increase your free space

Re: [GENERAL] russian case-insensitive regexp search not working

2007-07-10 Thread alexander lunyov
Karsten Hilbert wrote: Just to clarify: lower() on both sides of a comparison should still work as expected on multibyte encodings? It's been suggested here before. lower() on both sides also does not work in my case; it still searches for case-sensitive data. The string in this example has

Re: [GENERAL] free scheduled import utility

2007-07-10 Thread Reid Thompson
On Tue, 2007-07-10 at 14:32 +0200, Zlatko Matic wrote: Hello. Is there any free program/utility for batch imports from .csv files, that can be easily scheduled for daily inserts of data to PostgreSQL tables? Regards, Zlatko ---(end of

Re: [GENERAL] free scheduled import utility

2007-07-10 Thread Dimitri Fontaine
On Tuesday 10 July 2007, Zlatko Matic wrote: Is there any free program/utility for batch imports from .csv files that can be easily scheduled for daily inserts of data to PostgreSQL tables? COPY itself would do the job, but you can also use pgloader:
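A minimal COPY example for a daily CSV load; the table, columns, and file path are placeholders:

```sql
-- Server-side COPY: the file must be readable by the server process,
-- and COPY FROM a file requires superuser. From psql, \copy does the
-- same thing client-side. CSV HEADER skips the file's first line.
COPY my_table (id, name, amount)
FROM '/data/daily.csv'
WITH CSV HEADER;
```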

Re: [GENERAL] free scheduled import utility

2007-07-10 Thread A. Kretschmer
On Tue, 10.07.2007 at 14:32:58 +0200, Zlatko Matic wrote the following: Hello. Is there any free program/utility for batch imports from .csv files, that can be easily scheduled for daily inserts of data to PostgreSQL tables? Regards, You can use the scheduler from your OS. For Unix-like
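Combining the OS scheduler with COPY, a hypothetical crontab entry for a daily 02:00 load might look like this (database, table, and path are illustrative):

```
# m h dom mon dow  command
0 2 * * * psql -d mydb -c "\copy my_table from '/data/daily.csv' with csv"
```

With a pgpass entry in place (see the pgpass.conf thread above), the job runs without an interactive password prompt.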

[GENERAL] TOAST, large objects and ACIDity

2007-07-10 Thread Benoit Mathieu
Hi all, I want to use postgres to store data and large files, typically audio files from 100 KB to 20 MB. For those files, I just need to store and retrieve them in an ACID way. (I don't need search, substring, or other functionality.) I saw Postgres offers at least 2 methods: bytea

Re: [GENERAL] Postgres 8.2 binary for ubuntu 6.10?

2007-07-10 Thread Leonel
On 7/9/07, novnov [EMAIL PROTECTED] wrote: I have postgres 8.1 installed on ubuntu 6.10 via the synaptic package manager. I would like to install 8.2, but it's not offered in the list. I think 8.2 is offered on 7.x ubuntu, and I wonder if 8.2 will be offered on 6.10? Probably the recommendation will

Re: [GENERAL] vacuumdb: PANIC: corrupted item pointer

2007-07-10 Thread Richard Huxton
Alain Peyrat wrote: Hello, System: Red Hat Linux 4 64bits running postgres-7.4.16 (production) Initial problem: # pg_dump -O dbname -Ft -f /tmp/database.tar pg_dump: query to get table columns failed: ERROR: invalid memory alloc request size 9000688640 After some research, it

Re: [GENERAL] TOAST, large objects and ACIDity

2007-07-10 Thread Alexander Staubo
On 7/10/07, Benoit Mathieu [EMAIL PROTECTED] wrote: I saw postgres offers at least 2 method : bytea column with TOAST, or large objects API. From the documentation: All large objects are placed in a single system table called pg_largeobject. PostgreSQL also supports a storage system called

Re: [GENERAL] russian case-insensitive regexp search not working

2007-07-10 Thread Tom Lane
alexander lunyov [EMAIL PROTECTED] writes: With this i just wanted to say that lower() doesn't work at all on russian unicode characters, In that case you're using the wrong locale (ie, not russian unicode). Check show lc_ctype. Or [ checks back in thread... ] maybe you're using the wrong

[GENERAL] Vacuum Stalling

2007-07-10 Thread Brad Nicholson
Version 7.4.12 AIX 5.3 Scenario - a large table was not being vacuumed correctly; there are now ~15 million dead tuples that account for approximately 20%-25% of the table. Vacuum appears to be stalling - it ran for approximately 10 hours before I killed it. I hooked up to the process with gdb and

Re: [GENERAL] TOAST, large objects and ACIDity

2007-07-10 Thread Tomasz Ostrowski
On Tue, 10 Jul 2007, Alexander Staubo wrote: My take: Stick with TOAST unless you need fast random access. TOAST is faster, more consistently supported (eg., in Slony) and easier to work with. Toasted bytea columns have some other disadvantages also: 1. It is impossible to create its value
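For the store-and-retrieve-only case the thread describes, a bytea column can be as simple as the sketch below. Names are illustrative, and the hex literal is just a stand-in for real file contents, which an application would normally pass as a bound parameter:

```sql
-- Whole-value storage in bytea: the payload travels in one
-- INSERT/SELECT, which is what makes it transactional (ACID) but also
-- what rules out streaming or random access within the value.
CREATE TABLE audio_files (
    id       serial PRIMARY KEY,
    filename text NOT NULL,
    data     bytea NOT NULL
);

INSERT INTO audio_files (filename, data)
VALUES ('clip.wav', decode('52494646', 'hex'));  -- stand-in bytes (a RIFF prefix)

SELECT filename, length(data) FROM audio_files;
```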

[GENERAL] Problems with linkage

2007-07-10 Thread Kevin martins
Hello, I am new to using C in PostgreSQL. My problem is that when I compile my program code, it generates the following error message: fu01.o(.idata$3+0xc): undefined reference to `libpostgres_a_iname'nmth00.o(.idata$4+0x0): undefined reference to `_nm__SPI_processed'collect2: ld

Re: [GENERAL] Vacuum Stalling

2007-07-10 Thread Tom Lane
Brad Nicholson [EMAIL PROTECTED] writes: Scenario - a large table was not being vacuumed correctly, there now ~ 15 million dead tuples that account for approximately 20%-25% of the table. Vacuum appears to be stalling - ran for approximately 10 hours before I killed it. I hooked up to the

Re: [GENERAL] Vacuum Stalling

2007-07-10 Thread Brad Nicholson
On Tue, 2007-07-10 at 11:19 -0400, Tom Lane wrote: Oh, I forgot to mention --- you did check that vacuum_mem is set to a pretty high value, no? Else you might be doing a lot more btbulkdelete scans than you need to. regards, tom lane What would you define as high for

Re: [GENERAL] Vacuum Stalling

2007-07-10 Thread Tom Lane
Oh, I forgot to mention --- you did check that vacuum_mem is set to a pretty high value, no? Else you might be doing a lot more btbulkdelete scans than you need to. regards, tom lane ---(end of broadcast)--- TIP 4: Have you

Re: [GENERAL] Vacuum Stalling

2007-07-10 Thread Tom Lane
Brad Nicholson [EMAIL PROTECTED] writes: On Tue, 2007-07-10 at 11:19 -0400, Tom Lane wrote: Oh, I forgot to mention --- you did check that vacuum_mem is set to a pretty high value, no? Else you might be doing a lot more btbulkdelete scans than you need to. What would you define as high for

Re: [GENERAL] Vacuum Stalling

2007-07-10 Thread Brad Nicholson
On Tue, 2007-07-10 at 11:31 -0400, Tom Lane wrote: Brad Nicholson [EMAIL PROTECTED] writes: On Tue, 2007-07-10 at 11:19 -0400, Tom Lane wrote: Oh, I forgot to mention --- you did check that vacuum_mem is set to a pretty high value, no? Else you might be doing a lot more btbulkdelete

Re: [GENERAL] Vacuum Stalling

2007-07-10 Thread Tom Lane
Brad Nicholson [EMAIL PROTECTED] writes: On Tue, 2007-07-10 at 11:31 -0400, Tom Lane wrote: How big is this index again? Not sure which one it's working on - there are 6 of them each are ~ 2.5GB OK, about 300K pages each ... so even assuming the worst case that each page requires a physical

Re: [GENERAL] Vacuum Stalling

2007-07-10 Thread Pavel Stehule
Hello, I have a similar problem with vacuum on 8.1. I have a 256M table. pgstattuple reports 128M free. I stopped vacuum after 1 hour (maintenance_work_mem = 160M). I had no more time. Regards Pavel Stehule 2007/7/10, Tom Lane [EMAIL PROTECTED]: Brad Nicholson [EMAIL PROTECTED] writes: On Tue,

Re: [GENERAL] PostGreSQL Replication

2007-07-10 Thread Andrew Sullivan
On Sat, Jul 07, 2007 at 05:16:56AM -0700, Gabriele wrote: Let's have a server which feed data to multiple slaves, usually using direct online connections. Now, we may want to allow those client to sync the data to a local replica, work offline and then resync the data back to the server. Which

Re: [GENERAL] Vacuum Stalling

2007-07-10 Thread Pavel Stehule
Hello, I have a similar problem with vacuum on 8.1. I have a 256M table. pgstattuple reports 128M free. I stopped vacuum after 1 hour (maintenance_work_mem = 160M). I had no more time. I tested it on 8.3 with random data. Vacuum from 190M to 94M needed 30 sec. It's much better. It isn't 100%

Re: [GENERAL] Nested Transactions in PL/pgSQL

2007-07-10 Thread Nykolyn, Andrew
-Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Alvaro Herrera Sent: Friday, July 06, 2007 9:49 AM To: Nykolyn, Andrew Cc: John DeSoi; pgsql-general@postgresql.org Subject: Re: [GENERAL] Nested Transactions in PL/pgSQL Nykolyn, Andrew wrote: My real

Re: [GENERAL] vacuumdb: PANIC: corrupted item pointer

2007-07-10 Thread Tom Lane
Richard Huxton [EMAIL PROTECTED] writes: Alain Peyrat wrote: Initial problem: # pg_dump -O dbname -Ft -f /tmp/database.tar pg_dump: query to get table columns failed: ERROR: invalid memory alloc request size 9000688640 After some research, it seems to be related to a corruption of the

Re: [GENERAL] Postgres 8.2 binary for ubuntu 6.10?

2007-07-10 Thread novnov
Thanks all of you. It does seem like the backport is the way to go. So now I have 8.2 and some new postgres/linux newb questions. I can safely remove 8.1 after moving data using synaptic, ie 8.2 shouldn't be dependent on 8.1 at all? I don't understand how postgres is installed with these

Re: [GENERAL] PostGreSQL Replication

2007-07-10 Thread Guido Neitzer
On 07.07.2007, at 06:16, Gabriele wrote: Let's have a server which feed data to multiple slaves, usually using direct online connections. Now, we may want to allow those client to sync the data to a local replica, work offline and then resync the data back to the server. Which is the easiest

Re: [GENERAL] vacuumdb: PANIC: corrupted item pointer

2007-07-10 Thread Richard Huxton
Tom Lane wrote: Richard Huxton [EMAIL PROTECTED] writes: Alain Peyrat wrote: Initial problem: # pg_dump -O dbname -Ft -f /tmp/database.tar pg_dump: query to get table columns failed: ERROR: invalid memory alloc request size 9000688640 After some research, it seems to be related to a

Re: [GENERAL] Duplicate Unique Key constraint error

2007-07-10 Thread Harpreet Dhaliwal
my primary key is neither SERIAL nor a SEQUENCE. CONSTRAINT pk_dig PRIMARY KEY (dig_id) This is the clause that I have for my primary key in the create table script. thanks, ~Harpreet On 7/10/07, Ron St-Pierre [EMAIL PROTECTED] wrote: Harpreet Dhaliwal wrote: Hi, I keep getting this

Re: [GENERAL] Hyper-Trading

2007-07-10 Thread Adrian von Bidder
On Saturday 07 July 2007 11.34:04 Евгений Кононов wrote: Hello! How do I force Postgres to use all virtual processors with Hyper-Threading enabled? If your operating system is able to schedule the threads/processes across CPUs, PostgreSQL will use them. Often, the limit is disk, not CPU, so

Re: [GENERAL] PostGreSQL Replication

2007-07-10 Thread Adrian von Bidder
On Saturday 07 July 2007 14.16:56 Gabriele wrote: I know this is a delicate topic which must be approached cautiously. Let's have a server which feed data to multiple slaves, usually using direct online connections. Now, we may want to allow those client to sync the data to a local replica,

Re: [GENERAL] vacuumdb: PANIC: corrupted item pointer

2007-07-10 Thread Tom Lane
Richard Huxton [EMAIL PROTECTED] writes: Tom Lane wrote: FWIW, a look in the source code shows that the 'corrupted item pointer' message comes only from PageIndexTupleDelete, so that indicates a damaged index which should be fixable by reindexing. Tom - could it be damage to a shared

Re: [GENERAL] Postgres 8.2 binary for ubuntu 6.10?

2007-07-10 Thread Mario Guenterberg
On Tue, Jul 10, 2007 at 10:50:39AM -0700, novnov wrote: Thanks all of you. It does seem like the backport is the way to go. So now I have 8.2 and some new postgres/linux newb questions. I can safely remove 8.1 after moving data using synaptic, ie 8.2 shouldn't be dependent on 8.1 at

Re: [GENERAL] Duplicate Unique Key constraint error

2007-07-10 Thread Harpreet Dhaliwal
I finally figured out the actual problem, PHEW. It's something like two different transactions seeing the same snapshot of the database. Transaction 1 started, saw max(dig_id) = 30 and inserted new dig_id=31. Now when Transaction 2 started and read max(dig_id) it was still 30, and by

Re: [GENERAL] Duplicate Unique Key constraint error

2007-07-10 Thread Michael Glaesemann
On Jul 10, 2007, at 13:22 , Harpreet Dhaliwal wrote: Transaction 1 started, saw max(dig_id) = 30 and inserted new dig_id=31. Now the time when Transaction 2 started and read max(dig_id) it was still 30 and by the time it tried to insert 31, 31 was already inserted by Transaction 1 and

[GENERAL] Adjacency Lists vs Nested Sets

2007-07-10 Thread Matthew Hixson
Does Postgres have any native support for hierarchical data storage? I'm familiar with the Adjacency List technique, but am trying to determine whether or not Nested Sets would make sense for our application or not. I understand that Nested Sets might be better for high read

Re: [GENERAL] Adjacency Lists vs Nested Sets

2007-07-10 Thread Richard Huxton
Matthew Hixson wrote: Does Postgres have any native support for hierarchical data storage? I'm familiar with the Adjacency List technique, but am trying to determine whether or not Nested Sets would make sense for our application or not. I understand that Nested Sets might be better for

Re: [GENERAL] Duplicate Unique Key constraint error

2007-07-10 Thread Tom Lane
Harpreet Dhaliwal [EMAIL PROTECTED] writes: Transaction 1 started, saw max(dig_id) = 30 and inserted new dig_id=31. Now the time when Transaction 2 started and read max(dig_id) it was still 30 and by the time it tried to insert 31, 31 was already inserted by Transaction 1 and hence the unique

Re: [GENERAL] Adjacency Lists vs Nested Sets

2007-07-10 Thread Michael Glaesemann
On Jul 10, 2007, at 13:51 , Richard Huxton wrote: Matthew Hixson wrote: Does Postgres have any native support for hierarchical data storage? I'm familiar with the Adjacency List technique, but am trying to determine whether or not Nested Sets would make sense for our application or not.
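At the time of this thread, PostgreSQL's closest thing to native hierarchy support was connectby() in contrib/tablefunc; recursive queries (WITH RECURSIVE, added in 8.4) arrived later. A sketch of the adjacency-list approach with a recursive query, using illustrative names:

```sql
-- Adjacency list: each row points at its parent.
CREATE TABLE category (
    id        integer PRIMARY KEY,
    parent_id integer REFERENCES category(id),
    name      text
);

-- Walk the subtree rooted at id = 1.
WITH RECURSIVE subtree AS (
    SELECT id, parent_id, name, 1 AS depth
    FROM category
    WHERE id = 1                        -- root of the subtree
  UNION ALL
    SELECT c.id, c.parent_id, c.name, s.depth + 1
    FROM category c
    JOIN subtree s ON c.parent_id = s.id
)
SELECT * FROM subtree ORDER BY depth;
```

Nested sets trade this per-query recursion for cheap subtree reads at the cost of expensive inserts/moves, which is why the thread frames it as a read-heavy vs write-heavy decision.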

Re: [GENERAL] Duplicate Unique Key constraint error

2007-07-10 Thread Harpreet Dhaliwal
Thanks a lot for all your suggestions, gentlemen. I changed it to a SERIAL column and all the pain has been automatically alleviated :) Thanks a ton. ~Harpreet On 7/10/07, Tom Lane [EMAIL PROTECTED] wrote: Harpreet Dhaliwal [EMAIL PROTECTED] writes: Transaction 1 started, saw max(dig_id) = 30
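The fix the thread converges on, sketched with illustrative names: let a SERIAL column (backed by a sequence) assign dig_id instead of computing max(dig_id)+1 in the application. Sequences hand out distinct values even to concurrent transactions, so the race disappears:

```sql
-- Two concurrent transactions both saw max(dig_id) = 30 and both tried
-- to insert 31. A sequence never gives the same value twice:
CREATE TABLE dig (
    dig_id  serial,
    payload text,
    CONSTRAINT pk_dig PRIMARY KEY (dig_id)
);

-- Omit dig_id and let the sequence assign it:
INSERT INTO dig (payload) VALUES ('first');
INSERT INTO dig (payload) VALUES ('second');
```

The trade-off, noted elsewhere on the list, is that sequences can leave gaps when transactions roll back; they guarantee uniqueness, not gaplessness.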

Re: [GENERAL] vacuumdb: PANIC: corrupted item pointer

2007-07-10 Thread AlJeux
Richard Huxton wrote: Alain wrote: Hello, System: Red Hat Linux 4 64bits running postgres-7.4.16 (production) Initial problem: # pg_dump -O dbname -Ft -f /tmp/database.tar pg_dump: query to get table columns failed: ERROR: invalid memory alloc request size 9000688640 After

[GENERAL] Implementing 2 different transactions in a PL/Perl function

2007-07-10 Thread Jasbinder Singh Bali
Hi, How can I have two different transactions in a plperlu function? My purpose is as follows: Transaction 1 does a series of inserts in tbl_abc. Transaction 2 updates some columns in tbl_abc, fetching records from some other table. I basically want 2 independent transactions in my function

[GENERAL] Am I missing something about the output of pg_stop_backup()?

2007-07-10 Thread Ben
So, I'm working on a script that does PITR and basing it off the one here: http://archives.postgresql.org/pgsql-admin/2006-03/msg00337.php (BTW, thanks for posting that, Rajesh.) My frustration comes from the output format of pg_stop_backup(). Specifically, it outputs a string like this:

Re: [GENERAL] Implementing 2 different transactions in a PL/Perl function

2007-07-10 Thread Richard Huxton
Jasbinder Singh Bali wrote: Hi, How can I have two different transactions in a plperlu function? My purpose is as follows: Transaction 1 does a series of inserts in tbl_abc. Transaction 2 updates some columns in tbl_abc, fetching records from some other table. You'll have to connect back

Re: [GENERAL] Implementing 2 different transactions in a PL/Perl function

2007-07-10 Thread Michael Glaesemann
On Jul 10, 2007, at 14:41 , Jasbinder Singh Bali wrote: I basically want 2 independent transactions in my function so that 1 commits as soon as it is done and 2 doesn't depend on it at all. If they're truly independent, I'd write them as two separate functions., possibly calling both of

Re: [GENERAL] Implementing 2 different transactions in a PL/Perl function

2007-07-10 Thread Jasbinder Singh Bali
You mean to say keep using spi_exec as long as I want everything in the same transaction, and at the point where I want a separate transaction, use DBI? On 7/10/07, Richard Huxton [EMAIL PROTECTED] wrote: Jasbinder Singh Bali wrote: Hi, How can I have two different transactions in a plperlu

Re: [GENERAL] Hyper-Trading

2007-07-10 Thread Andrej Ricnik-Bay
On 7/10/07, Евгений Кононов [EMAIL PROTECTED] wrote: Здравствуйте (Hello), Andrej. Privet ;) ... not that I speak any Russian, really. ARB What OS are you using, and what's hyper-trading? Hyper threading ARB by any chance? That's the OS's responsibility, not the database's. I use Fedora Core 5,

Re: [GENERAL] Implementing 2 different transactions in a PL/Perl function

2007-07-10 Thread Richard Huxton
Jasbinder Singh Bali wrote: You mean to say keep using spi_exec till I want everything in the same transaction and the point where I want a separate transaction, use DBI ? Yes - if you have two functions A,B then do everything as normal in each, except you call function B using dblink() from
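A sketch of the dblink approach Richard describes (requires contrib/dblink; the connection string and statement are placeholders). A dblink call runs on its own connection, so whatever it executes commits independently of the calling transaction:

```sql
-- Executed from inside function A's transaction: this INSERT commits
-- on its own separate connection, like an autonomous transaction.
SELECT dblink_exec(
    'dbname=mydb',
    'INSERT INTO tbl_abc (col) VALUES (1)'
);
-- The surrounding transaction can still roll back afterwards without
-- undoing the insert made through dblink.
```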

[GENERAL] exit code -1073741819

2007-07-10 Thread Shuo Liu
Hi, All, I'm working on a GIS project using PostgreSQL and PostGIS. In the project I need to find locations of about 12K addresses (the process is referred to as geocoding). I wrote some script to perform this task by calling a procedure tiger_geocoding that is provided by PostGIS. My script

Re: [GENERAL] Am I missing something about the output of pg_stop_backup()?

2007-07-10 Thread Richard Huxton
Ben wrote: So, I'm working on a script that does PITR and basing it off the one here: http://archives.postgresql.org/pgsql-admin/2006-03/msg00337.php (BTW, thanks for posting that, Rajesh.) My frustration comes from the output format of pg_stop_backup(). Specifically, it outputs a string

Re: [GENERAL] vacuumdb: PANIC: corrupted item pointer

2007-07-10 Thread Richard Huxton
AlJeux wrote: Richard Huxton wrote: 1. Have you had crashes or other hardware problems recently? No crash, but we changed our server (which seems to be the cause). The first try used a file-system copy to reduce downtime, as both were the same 7.4.x version, but the result was not working (maybe

Re: [GENERAL] Am I missing something about the output of pg_stop_backup()?

2007-07-10 Thread Ben
On Tue, 10 Jul 2007, Richard Huxton wrote: Have you looked in the backup history file: http://www.postgresql.org/docs/8.2/static/continuous-archiving.html#BACKUP-BASE-BACKUP The backup history file is just a small text file. It contains the label string you gave to pg_start_backup, as well

Re: [GENERAL] Hyper-Trading

2007-07-10 Thread Andrew Sullivan
On Tue, Jul 10, 2007 at 08:09:11PM +0200, Adrian von Bidder wrote: If your operating system is able to schedule the threads/processes across CPUs, PostgreSQL will use them. But notice that hyperthreading imposes its own overhead. I've not seen evidence that enabling hyperthreading actually

Re: [GENERAL] Am I missing something about the output of pg_stop_backup()?

2007-07-10 Thread Greg Smith
On Tue, 10 Jul 2007, Ben wrote: The backup history file is just a small text file. It contains the label string you gave to pg_start_backup, as well as the starting and ending times and WAL segments of the backup. For instance, in the case when the backup history file from the previous

Re: [GENERAL] Hyper-Trading

2007-07-10 Thread Andrej Ricnik-Bay
On 7/11/07, Andrew Sullivan [EMAIL PROTECTED] wrote: On Tue, Jul 10, 2007 at 08:09:11PM +0200, Adrian von Bidder wrote: If your operating system is able to schedule the threads/processes across CPUs, PostgreSQL will use them. But notice that hyperthreading imposes its own overhead. I've

Re: [GENERAL] Hyper-Trading

2007-07-10 Thread Tom Lane
Andrej Ricnik-Bay [EMAIL PROTECTED] writes: On 7/11/07, Andrew Sullivan [EMAIL PROTECTED] wrote: But notice that hyperthreading imposes its own overhead. I've not seen evidence that enabling hyperthreading actually helps, although I may have overlooked a couple of cases. Have a look at

Re: [GENERAL] vacuumdb: PANIC: corrupted item pointer

2007-07-10 Thread Tom Lane
Richard Huxton [EMAIL PROTECTED] writes: The first try used a file-system copy to reduce downtime, as both were the same 7.4.x version, but the result was not working (maybe related to the 32-bit to 64-bit architecture change), so I finally dropped the db and performed a dump/restore. I think the

Re: [GENERAL] exit code -1073741819

2007-07-10 Thread Tom Lane
Shuo Liu [EMAIL PROTECTED] writes: The log shows the following message: CurTransactionContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used ExecutorState: 122880 total in 4 blocks; 1912 free (9 chunks); 120968 used ExprContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used

Re: [GENERAL] Hyper-Trading

2007-07-10 Thread Andrej Ricnik-Bay
On 7/11/07, Tom Lane [EMAIL PROTECTED] wrote: Conventional wisdom around here has been that HT doesn't help database performance, and that IBM link might provide a hint as to why: the only item for which they show a large loss in performance is disk I/O. Ooops. Thanks Tom, great summary. How

Re: [GENERAL] exit code -1073741819

2007-07-10 Thread Shuo Liu
Hi, Tom, Thanks for the reply. I'll try to provide as much information as I can. ExecutorState: 122880 total in 4 blocks; 1912 free (9 chunks); 120968 used ExprContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used ExprContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used

Re: [GENERAL] exit code -1073741819

2007-07-10 Thread Shuo Liu
Hi, Tom, One more question. I'm new to PostgreSQL and not an expert in debugging. After checking the manual, I think I need to turn on the following parameters in order to generate debug info. Do you think doing so would give us what we need to pinpoint the problem? debug_assertions

Re: [GENERAL] exit code -1073741819

2007-07-10 Thread Tom Lane
Shuo Liu [EMAIL PROTECTED] writes: TopMemoryContext: 11550912 total in 1377 blocks; 123560 free (833 chunks); 11427352 used Whoa ... that is a whole lot more data than I'm used to seeing in TopMemoryContext. How many stats dump lines are there exactly (from here to the crash report)? If

Re: [GENERAL] exit code -1073741819

2007-07-10 Thread Tom Lane
Shuo Liu [EMAIL PROTECTED] writes: One more question. I'm new to PostgreSQL and not an expert in debugging. After checking the manual, I think I need to turn on the following parameters in order to generate debug info. Do you think doing so would give us what we need to pinpoint the

Re: [GENERAL] exit code -1073741819

2007-07-10 Thread Shuo Liu
Whoa ... that is a whole lot more data than I'm used to seeing in TopMemoryContext. How many stats dump lines are there exactly (from here to the crash report)? OK, I didn't know that was a surprise. There are about 600 stats dump lines in between. The spatial database that the script is

Re: [GENERAL] exit code -1073741819

2007-07-10 Thread Shuo Liu
OK, so maybe it's dependent on the size of the table. Try generating a test case by loading up just your schema + functions + a lot of dummy entries generated by script. Is the data proprietary? If not, maybe you could arrange to send me a dump off-list. A short test-case script would be

Re: [GENERAL] exit code -1073741819

2007-07-10 Thread Tom Lane
Shuo Liu [EMAIL PROTECTED] writes: That's what I was planning to do. I'll generate a table with dummy entries. I think we may try to use the smaller base table first. Once I can reproduce the problem I'll dump the database into a file and send it to you. Is there a server that I can upload