Re: [GENERAL] autovacuum task got cancelled

2013-11-02 Thread Gary Fu

On 11/01/13 03:23, Kevin Grittner wrote:

Sergey Konoplev <gray...@gmail.com> wrote:


As far as I know, the application programs do not make any
specific lock on the 'file' table.  I'm not sure if it is caused
by the pgpool or something else.

[...]


2013-10-31 18:01:30 UTC LOG:  sending cancel to blocking autovacuum PID 8614
2013-10-31 18:01:30 UTC DETAIL:  Process 8677 waits for ShareRowExclusiveLock on relation 11959608 of database 596746.
2013-10-31 18:01:30 UTC STATEMENT:  LOCK TABLE file IN SHARE ROW EXCLUSIVE MODE
2013-10-31 18:01:30 UTC ERROR:  canceling autovacuum task
2013-10-31 18:01:30 UTC CONTEXT:  automatic vacuum of table sd3ops1.public.file

From the release notes for 9.0.12:

Fix performance problems with autovacuum truncation in busy
workloads (Jan Wieck)

I don't think the problem described here has anything to do with
that.  It looks to me like there is an explicit LOCK TABLE
statement being executed for a mode which conflicts with a normal
vacuum or analyze, even without truncation.  The cited change
*avoids* this sort of cancellation for the truncation phase, so it
is not getting that far.
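Kevin's point can be checked against the table-level lock conflict matrix in the PostgreSQL documentation. The sketch below (mine, not from the thread) encodes just the two relevant rows: autovacuum's VACUUM holds SHARE UPDATE EXCLUSIVE, which the requested SHARE ROW EXCLUSIVE mode conflicts with, so the blocked LOCK TABLE is what triggers the autovacuum cancel.

```python
# Conflict pairs below are copied from the table-level lock conflict
# table in the PostgreSQL docs (only the two modes relevant here).
CONFLICTS = {
    "SHARE UPDATE EXCLUSIVE": {
        "SHARE UPDATE EXCLUSIVE", "SHARE", "SHARE ROW EXCLUSIVE",
        "EXCLUSIVE", "ACCESS EXCLUSIVE",
    },
    "SHARE ROW EXCLUSIVE": {
        "ROW EXCLUSIVE", "SHARE UPDATE EXCLUSIVE", "SHARE",
        "SHARE ROW EXCLUSIVE", "EXCLUSIVE", "ACCESS EXCLUSIVE",
    },
}

def conflicts(held: str, requested: str) -> bool:
    """True if a table lock already held blocks the requested mode."""
    return requested in CONFLICTS.get(held, set())

# autovacuum holds SHARE UPDATE EXCLUSIVE on "file", so
# LOCK TABLE file IN SHARE ROW EXCLUSIVE MODE must wait...
assert conflicts("SHARE UPDATE EXCLUSIVE", "SHARE ROW EXCLUSIVE")
# ...whereas a plain SELECT's ACCESS SHARE would not.
assert not conflicts("SHARE UPDATE EXCLUSIVE", "ACCESS SHARE")
```

This is why an ordinary read/write workload never cancels autovacuum, but pgpool's explicit `LOCK TABLE ... IN SHARE ROW EXCLUSIVE MODE` (its `insert_lock` mechanism) does.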

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Thanks for all the replies.  I'm now pretty sure it is pgpool,
since I searched the pgpool source code and found those strings.
Also, I have the pgpool configuration 'insert_lock' on (the default),
but without applying the 'insert_lock.sql' as pgpool suggests.

However, I don't know why it did not happen before.  By the way,
I think Kevin is right, since the problem also happened on our test
instance, which runs PostgreSQL 9.2.4.

For pgpool: does anyone know whether I can apply the 'insert_lock.sql'
while pgpool is still running?  (Maybe I should ask this in the pgpool groups.)

Thanks,
Gary



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] autovacuum task got cancelled

2013-10-31 Thread Gary Fu

Hello,

I'm running an application (with programs in Perl) through pgpool 3.1 
in replication mode to two PostgreSQL servers (version 9.0.13).  
Recently, I noticed that the following messages repeatedly showed up in 
the postgres log files.  As far as I know, the application programs do 
not take any explicit lock on the 'file' table.  I'm not sure whether 
it is caused by pgpool or something else.


Thanks for any help in advance.
Gary

2013-10-31 17:58:56 UTC DETAIL:  Process 8580 waits for ShareRowExclusiveLock on relation 11959608 of database 596746.
2013-10-31 17:58:56 UTC STATEMENT:  LOCK TABLE file IN SHARE ROW EXCLUSIVE MODE

2013-10-31 17:58:56 UTC ERROR:  canceling autovacuum task
2013-10-31 17:58:56 UTC CONTEXT:  automatic vacuum of table sd3ops1.public.file

2013-10-31 18:01:30 UTC LOG:  sending cancel to blocking autovacuum PID 8614
2013-10-31 18:01:30 UTC DETAIL:  Process 8677 waits for ShareRowExclusiveLock on relation 11959608 of database 596746.
2013-10-31 18:01:30 UTC STATEMENT:  LOCK TABLE file IN SHARE ROW EXCLUSIVE MODE

2013-10-31 18:01:30 UTC ERROR:  canceling autovacuum task
2013-10-31 18:01:30 UTC CONTEXT:  automatic vacuum of table sd3ops1.public.file
2013-10-31 18:01:49 UTC LOG:  could not receive data from client: Connection reset by peer

2013-10-31 18:01:49 UTC LOG:  unexpected EOF within message length word
2013-10-31 18:02:04 UTC LOG:  sending cancel to blocking autovacuum PID 8753
2013-10-31 18:02:04 UTC DETAIL:  Process 8777 waits for ShareRowExclusiveLock on relation 11959608 of database 596746.
2013-10-31 18:02:04 UTC STATEMENT:  LOCK TABLE file IN SHARE ROW EXCLUSIVE MODE

2013-10-31 18:02:04 UTC ERROR:  canceling autovacuum task
2013-10-31 18:02:04 UTC CONTEXT:  automatic vacuum of table sd3ops1.public.file

2013-10-31 18:03:09 UTC LOG:  sending cancel to blocking autovacuum PID 8782
2013-10-31 18:03:09 UTC DETAIL:  Process 8806 waits for ShareRowExclusiveLock on relation 11959608 of database 596746.
2013-10-31 18:03:09 UTC STATEMENT:  LOCK TABLE file IN SHARE ROW EXCLUSIVE MODE

2013-10-31 18:03:09 UTC ERROR:  canceling autovacuum task
2013-10-31 18:03:09 UTC CONTEXT:  automatic vacuum of table sd3ops1.public.file

2013-10-31 18:04:04 UTC LOG:  sending cancel to blocking autovacuum PID 8810
2013-10-31 18:04:04 UTC DETAIL:  Process 8395 waits for ShareRowExclusiveLock on relation 11959608 of database 596746.
2013-10-31 18:04:04 UTC STATEMENT:  LOCK TABLE file IN SHARE ROW EXCLUSIVE MODE






Re: [GENERAL] slow building index and reference after Sybase to Pg

2011-03-01 Thread Gary Fu

 On 02/28/11 19:30, Andres Freund wrote:

Hi,

On Wednesday 23 February 2011 19:31:58 Gary Fu wrote:

I'm testing on converting a big Sybase db to Pg.  It took about 45 hours
to convert all sybase tables (bcp) to Pg (copy) without index and
reference.  After that I built the index (one by one, sequentially) and
it took about 25 hours and then I started to add the references (one by
one), however, it has been more than 30 hours and still has no sign of
finishing. I wonder, is there any suggestion that may speed up the index
and reference building (on Pg).

I think some additional information would be useful:
* pg version
* kernel version
* distribution


Andres


Here are the information :

modaps_lads=> show server_version;
 server_version
----------------
 9.0.1

9:58am 32 gfu@moddblads:/dump/gfu uname -a
Linux moddblads 2.6.18-194.17.1.el5 #1 SMP Wed Sep 29 12:50:31 EDT 2010 
x86_64 x86_64 x86_64 GNU/Linux


9:58am 34 gfu@moddblads:/dump/gfu cat /proc/version
Linux version 2.6.18-194.17.1.el5 (mockbu...@builder10.centos.org) (gcc 
version 4.1.2 20080704 (Red Hat 4.1.2-48)) #1 SMP Wed Sep 29 12:50:31 
EDT 2010


Also, we have RAID 10 with 600 GB, 15,000 RPM SAS drives.

Another question: why does building a reference lock the referenced 
table against reads?  I mean, why can't I build two references at the 
same time against the same referenced table?  Doesn't the reference 
build just read?


Thanks,
Gary





Re: [GENERAL] slow building index and reference after Sybase to Pg

2011-02-28 Thread Gary Fu

On 02/25/11 18:11, Gary Fu wrote:

On 02/25/11 17:22, Andy Colson wrote:

On 2/23/2011 12:31 PM, Gary Fu wrote:

Hi,

I'm testing on converting a big Sybase db to Pg. It took about 45 hours
to convert all sybase tables (bcp) to Pg (copy) without index and
reference. After that I built the index (one by one, sequentially) and
it took about 25 hours and then I started to add the references (one by
one), however, it has been more than 30 hours and still has no sign of
finishing. I wonder, is there any suggestion that may speed up the index
and reference building (on Pg).

Thanks,
Gary



In addition to Tom's answer, disable fsync for a bit.

( http://www.postgresql.org/docs/9.0/static/non-durability.html )

Also, why one at a time? Are you IO bound? If you are IO bound then
ok, but otherwise try a few at a time. (I mean COPY, create index, and
add constraint)

While this was going on, did you view vmstat? Did you look at PG's log?


-Andy


Thanks for your information. Here is more information about my situation:

Below is the listing of the time for creating the references after we
ported the tables and built the indexes with the following configuration
info:

wal_buffers = 8MB
checkpoint_segments = 30
effective_cache_size = 21GB
maintenance_work_mem = 1GB
fsync = on

5.301638 min FK_FILE_REF_FILETYPE
7.250384 min FK_PGE_REF_PGE_DEF
15.024702 min FK_FILESONDISKLOST_REF_FILE
21.143256 min FK_FILEEXPORTED_REF_FILE
22.404361 min FK_PGE_INPUTFILE_REF_PGE
23.439486 min FK_FMC_METFILEID_REF_FILE
24.942795 min FK_FM_ARCHIVESET_REF_FMC
33.286959 min FK_PGE_LOGFILE_PCF_REF_FILE
46.875006 min FK_FILEMETA_NV_REF_FMC
51.223537 min FK_FM_BJ_REF_FMC
52.603217 min FK_FM_L1L2_REF_FMC
73.314357 min FK_FM_L3L4T_REF_FMC
76.118838 min FK_FMC_REF_PGE
89.317196 min FK_FMC_REF_FM_ALL
248.595640 min FK_EMS_FILES_REF_FILE
258.633713 min FK_EXPORT_FILES_REF_FILE
269.605100 min FK_FILESONDISK_REF_FILE
299.187822 min FK_FILEREQHF_REF_FILE
331.076144 min FK_FILESNOTON_REF_FILE
334.494474 min FK_FM_ALL_REF_FILE
608.402847 min FK_PGE_INPUTFILE_REF_FILE


We changed to the following configuration and tried to rebuild some of
the references, with worse results:

wal_buffers = 16MB
checkpoint_segments = 256
effective_cache_size = 30GB
maintenance_work_mem = 2GB
fsync = on


75 min FK_FM_L1L2_REF_FMC (52 min previous)
311 min FK_EXPORT_FILES_REF_FILE (258 min previous)
still running FK_FM_ALL_REF_FILE


We are also going to run two references in parallel at a time to see what happens.

Also, after that we are going to try Andy's suggestion to set fsync = off.

By the way, I just ran vmstat -n 1 with the following results (while
building the reference FK_FM_ALL_REF_FILE). However, I don't know how
to interpret it.

6:02pm 116 gfu@moddblads:/dump/gfu vmstat -n 1
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 1 556 137340 70280 48446004 0 0 10 24 0 0 0 0 100 0 0
0 1 556 132896 70296 48449828 0 0 4212 328 1655 1115 0 0 96 4 0
0 1 556 140768 70296 48442580 0 0 4240 28 1585 956 0 0 96 4 0
0 1 556 132368 70296 48451308 0 0 8424 0 1573 820 0 0 96 4 0
0 1 556 130800 70272 48452784 0 0 13536 0 1589 755 1 0 96 3 0
0 1 556 136148 70264 48447920 0 0 6344 0 1611 1082 0 0 96 4 0
0 1 556 132368 70280 48451416 0 0 3960 376 1492 829 0 0 96 4 0
0 1 556 135784 70284 48448180 0 0 8240 0 1856 957 0 0 96 4 0
0 1 556 139092 70288 48444668 0 0 8700 0 1828 917 1 0 96 4 0
0 1 556 134052 70292 48449608 0 0 5076 0 1566 880 0 0 96 4 0
0 1 556 140460 70276 48443636 0 0 5536 0 1509 724 0 0 96 4 0
0 1 556 131648 70300 48452340 0 0 8616 336 1598 826 0 0 96 4 0
1 0 556 135524 70284 48448112 0 0 8004 0 1588 836 0 0 96 4 0
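As a side note (my reading, not from the thread), the vmstat rows above can be decoded mechanically. The CPU columns show the box roughly 96% idle with about 4% I/O wait and one task blocked on I/O, which is consistent with a single I/O-bound backend doing the FK validation:

```python
# Parse one representative row of the `vmstat -n 1` output above and
# read off the CPU columns.  Field order follows vmstat's header:
# r b swpd free buff cache si so bi bo in cs us sy id wa st
fields = "r b swpd free buff cache si so bi bo in cs us sy id wa st".split()
row = "0 1 556 132896 70296 48449828 0 0 4212 328 1655 1115 0 0 96 4 0"
sample = dict(zip(fields, map(int, row.split())))

# ~96% idle, ~4% I/O wait, one task blocked on I/O (b=1), and a steady
# read stream (bi): consistent with one process that is I/O-bound.
assert sample["id"] == 96 and sample["wa"] == 4
assert sample["b"] == 1 and sample["bi"] > 0
```

With 24 CPUs, 4% wait on one core's worth of work is about what a single sequential FK check waiting on random reads would look like.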


Any other suggestions I should try?

By the way, as far as I know, Sybase does not check the reference for
each record when creating the reference. Is there a way for Pg to do the
same?

Thanks,
Gary



Here are more test results on the 3rd reference, and on running the 2nd 
and 3rd references in parallel:


 325 mins on FK_FM_ALL_REF_FILE   (334 mins previous)

parallel results on 2nd and 3rd references: (much worse on 2nd ref)

   610 mins on FK_EXPORT_FILES_REF_FILE   (??? almost double the time)
   340 mins on FK_FM_ALL_REF_FILE

There are more than 250 million records in table FileMeta_All and 80 
million records in table Export_Files, which reference the 280 
million records of the File table via index FileId.


Here is some information on my system:

1:07pm 20 gfu@moddblads:/dump/gfu free -m   (48 GB of memory)
 total   used   free sharedbuffers cached
Mem: 48036  47867168  0294  46960

1:08pm 21 gfu@moddblads:/dump/gfu hinv
Total CPU's: 24
 Intel(R) Xeon(R) CPU   X5660  @ 2.80GHz  2793 MHZ
 Cache Size:  12288 KB

4:19pm 23 gfu@moddblads:/dump/gfu df
Filesystem   1K-blocks  Used Available Use% Mounted on
/dev/sda1  8123168   5682344   2021532  74% /
/dev/sdb1

Re: [GENERAL] slow building index and reference after Sybase to Pg

2011-02-28 Thread Gary Fu

 On 02/28/11 17:32, Andy Colson wrote:




Here are the more test results on the 3rd ref and parallel on 2nd and
3rd references:

325 mins on FK_FM_ALL_REF_FILE (334 mins previous)

parallel results on 2nd and 3rd references: (much worse on 2nd ref)





Here is some information on my system:

1:07pm 20 gfu@moddblads:/dump/gfu free -m (48 GB of memory)
total used free shared buffers cached
Mem: 48036 47867 168 0 294 46960

1:08pm 21 gfu@moddblads:/dump/gfu hinv
Total CPU's: 24
Intel(R) Xeon(R) CPU X5660 @ 2.80GHz 2793 MHZ
Cache Size: 12288 KB



Huh.  Dunno.  Ah, how about locks?

While it's building a reference, look at pg_locks (select * from 
pg_locks).


Looking at the build times:
 610 mins on FK_EXPORT_FILES_REF_FILE (??? almost double the time)
 340 mins on FK_FM_ALL_REF_FILE

looks like the first one ran and the second waited for it to complete 
before it ran.


Are those two touching the same tables?

-Andy

There are no other processes running on the db when the reference is 
rebuilt.


Yes, they both reference the same table, and the 1st one must lock the 
table before the 2nd one can run.


Gary



Re: [GENERAL] slow building index and reference after Sybase to Pg

2011-02-28 Thread Gary Fu

 On 02/28/11 17:56, Andres Freund wrote:

Hi,

On Saturday, February 26, 2011 12:11:19 AM Gary Fu wrote:

wal_buffers = 16MB

sensible

checkpoint_segments = 256

A setting that high seems unlikely to be beneficial... I suggest you configure
log_checkpoints to monitor this.


effective_cache_size = 30GB

Not likely to matter in this case.


maintenance_work_mem = 2GB

1GB is the max value taking effect.

I would also suggest setting
wal_sync_method=fdatasync

Already set this.

You haven't configured shared_buffers at all? I would suggest setting it to 2GB
or such.


It is set to 12GB.

Do you recommend setting fsync to off?  That requires restarting the server.

Thanks,
Gary



[GENERAL] slow building index and reference after Sybase to Pg

2011-02-25 Thread Gary Fu

Hi,

I'm testing converting a big Sybase db to Pg.  It took about 45 hours 
to convert all Sybase tables (bcp) to Pg (copy) without indexes and 
references.  After that I built the indexes (one by one, sequentially), 
which took about 25 hours, and then I started to add the references (one 
by one); however, it has been more than 30 hours with still no sign of 
finishing.  Are there any suggestions that may speed up the index and 
reference building (on Pg)?


Thanks,
Gary



Re: [GENERAL] slow building index and reference after Sybase to Pg

2011-02-25 Thread Gary Fu

 On 02/25/11 17:22, Andy Colson wrote:

On 2/23/2011 12:31 PM, Gary Fu wrote:

Hi,

I'm testing on converting a big Sybase db to Pg. It took about 45 hours
to convert all sybase tables (bcp) to Pg (copy) without index and
reference. After that I built the index (one by one, sequentially) and
it took about 25 hours and then I started to add the references (one by
one), however, it has been more than 30 hours and still has no sign of
finishing. I wonder, is there any suggestion that may speed up the index
and reference building (on Pg).

Thanks,
Gary



In addition to Tom's answer, disable fsync for a bit.

( http://www.postgresql.org/docs/9.0/static/non-durability.html )

Also, why one at a time?  Are you IO bound?  If you are IO bound then 
ok, but otherwise try a few at a time.  (I mean COPY, create index, 
and add constraint)


While this was going on, did you view vmstat?  Did you look at PG's log?


-Andy


Thanks for your information.  Here is more information about my situation:

Below is the listing of the time for creating the references after we 
ported the tables and built the indexes with the following configuration 
info:


wal_buffers = 8MB
checkpoint_segments = 30
effective_cache_size = 21GB
maintenance_work_mem = 1GB
fsync = on

  5.301638 min   FK_FILE_REF_FILETYPE
  7.250384 min   FK_PGE_REF_PGE_DEF
 15.024702 min   FK_FILESONDISKLOST_REF_FILE
 21.143256 min   FK_FILEEXPORTED_REF_FILE
 22.404361 min   FK_PGE_INPUTFILE_REF_PGE
 23.439486 min   FK_FMC_METFILEID_REF_FILE
 24.942795 min   FK_FM_ARCHIVESET_REF_FMC
 33.286959 min   FK_PGE_LOGFILE_PCF_REF_FILE
 46.875006 min   FK_FILEMETA_NV_REF_FMC
 51.223537 min   FK_FM_BJ_REF_FMC
 52.603217 min   FK_FM_L1L2_REF_FMC
 73.314357 min   FK_FM_L3L4T_REF_FMC
 76.118838 min   FK_FMC_REF_PGE
 89.317196 min   FK_FMC_REF_FM_ALL
248.595640 min   FK_EMS_FILES_REF_FILE
258.633713 min   FK_EXPORT_FILES_REF_FILE
269.605100 min   FK_FILESONDISK_REF_FILE
299.187822 min   FK_FILEREQHF_REF_FILE
331.076144 min   FK_FILESNOTON_REF_FILE
334.494474 min   FK_FM_ALL_REF_FILE
608.402847 min   FK_PGE_INPUTFILE_REF_FILE


We changed to the following configuration and tried to rebuild some of the 
references, with worse results:

wal_buffers = 16MB
checkpoint_segments = 256
effective_cache_size = 30GB
maintenance_work_mem = 2GB
fsync = on


 75 min          FK_FM_L1L2_REF_FMC        (52 min previous)
311 min          FK_EXPORT_FILES_REF_FILE  (258 min previous)
still running    FK_FM_ALL_REF_FILE


We are also going to run two references in parallel at a time to see what happens.

Also, after that we are going to try Andy's suggestion to set fsync = off.

By the way, I just ran vmstat -n 1 with the following results (while 
building the reference FK_FM_ALL_REF_FILE).  However, I don't know how 
to interpret it.

6:02pm 116 gfu@moddblads:/dump/gfu  vmstat -n 1
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff    cache   si   so    bi    bo   in   cs us sy id wa st
 0  1    556 137340  70280 48446004    0    0    10    24    0    0  0  0 100  0  0
 0  1    556 132896  70296 48449828    0    0  4212   328 1655 1115  0  0  96  4  0
 0  1    556 140768  70296 48442580    0    0  4240    28 1585  956  0  0  96  4  0
 0  1    556 132368  70296 48451308    0    0  8424     0 1573  820  0  0  96  4  0
 0  1    556 130800  70272 48452784    0    0 13536     0 1589  755  1  0  96  3  0
 0  1    556 136148  70264 48447920    0    0  6344     0 1611 1082  0  0  96  4  0
 0  1    556 132368  70280 48451416    0    0  3960   376 1492  829  0  0  96  4  0
 0  1    556 135784  70284 48448180    0    0  8240     0 1856  957  0  0  96  4  0
 0  1    556 139092  70288 48444668    0    0  8700     0 1828  917  1  0  96  4  0
 0  1    556 134052  70292 48449608    0    0  5076     0 1566  880  0  0  96  4  0
 0  1    556 140460  70276 48443636    0    0  5536     0 1509  724  0  0  96  4  0
 0  1    556 131648  70300 48452340    0    0  8616   336 1598  826  0  0  96  4  0
 1  0    556 135524  70284 48448112    0    0  8004     0 1588  836  0  0  96  4  0


Any other suggestions I should try?

By the way, as far as I know, Sybase does not check the reference for
each record when creating the reference.  Is there a way for Pg to do the
same?

Thanks,
Gary





[GENERAL] what's the message: CONTEXT: PL/pgSQL function proc_sub_b line 7 at assignment

2010-12-29 Thread Gary Fu

Hi,

Why do I get the message in the Subject when I run 'select proc_sub_b()' 
in psql?  If I comment out the RAISE statement in proc_sub_a, then I 
don't see the message.


Thanks,
Gary

int_admin.modaps_int=> select proc_sub_b();
INFO:  id=11
CONTEXT:  PL/pgSQL function proc_sub_b line 7 at assignment
 proc_sub_b
------------
          0
(1 row)


CREATE OR REPLACE FUNCTION proc_sub_a(
  v_id  int
) RETURNS INTEGER AS $$
DECLARE
BEGIN

  RAISE INFO 'id=%', v_id;

  return 1;

end;
$$ LANGUAGE plpgsql;


CREATE OR REPLACE FUNCTION proc_sub_b(
) RETURNS INTEGER AS $$
DECLARE
  v_ret int;
BEGIN

  v_ret := proc_sub_a(11);

  return 0;

end;
$$ LANGUAGE plpgsql;



Re: [GENERAL] escape character for 'psql -c ' command

2010-11-16 Thread Gary Fu

On 11/15/10 21:10, Derrick Rice wrote:

Short answer: for simple commands, you can use shell-escaping of a
double-quoted string.

psql -c "\\copy \"Table\" from 'text file'"



This works on sh, but when I tried this syntax on tcsh, it failed:

11:38am 72 g...@modular:~/sybase2postgres psql -c "\\copy \"Table\" from 
'text file'"

Unmatched ".

How do I make it work on tcsh?

Thanks,
Gary




Note: double \\ is intentional. You need to escape the backslash, which
normally escapes other special characters, like $ and . Watch out for
other special characters though, which is why I prefer the long answer...


Long answer: A *nix shell will concatenate string literals that
immediately follow each other, even when they aren't the same type
(single quoted or double quoted).  So the following:

"abc"'def'"hij"  (reads: double quote, abc, double quote, single quote,
def, single quote, double quote, hij, double quote)

is "abc" + 'def' + "hij" or abcdefhij to the shell

So if you have a single-quoted string, to insert a single quote you (1)
stop the single quoted string (2) start a double-quoted string (3) write
a single quote as the content of the double-quoted string (4) stop the
double-quoted string (5) restart the single quoted string.  All without
any spaces (unless they are inside either the double or single quoted
strings as part of your content).

You can obviously insert 2 consecutive single quotes within a single
double-quoted string - or any characters... just be aware you are in
double-quotes now, so you need to escape special characters or go back
to single quotes.
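The concatenation rule can be checked without a live shell using Python's shlex module, which implements POSIX shell word-splitting (a side demonstration of the rule, not from the original email):

```python
import shlex

# shlex.split follows POSIX quoting rules, so it shows how the shell
# would tokenize these strings.

# "abc"'def'"hij" is one word to the shell:
assert shlex.split('"abc"\'def\'"hij"') == ["abcdefhij"]

# Embedding a single quote inside single-quoted content: stop the
# single quotes, emit a double-quoted ', then restart the single quotes.
assert shlex.split("'text '\"'\"'file'\"'\"''") == ["text 'file'"]
```

The second assertion is exactly the five-step recipe above: the shell sees `'text '` + `"'"` + `'file'` + `"'"` + `''` and glues them into the single word `text 'file'`.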

Your example:

psql -c '\copy "Table" from '"'"'text file'"'"''

Derrick

On Mon, Nov 15, 2010 at 6:17 PM, Gary Fu <g...@sigmaspace.com> wrote:

Hi,

How do I escape both " and ' to be used in the 'psql -c' command?
For example, how to make the psql command {\copy "Table" from
'txt_file'} to be used in psql with the -c option (psql -c)?
The "Table" has to be double quoted here.

Thanks,
Gary








[GENERAL] escape character for 'psql -c ' command

2010-11-15 Thread Gary Fu

Hi,

How do I escape both " and ' to be used in the 'psql -c' command?
For example, how to make the psql command {\copy "Table" from 
'txt_file'} to be used in psql with the -c option (psql -c)?

The "Table" has to be double quoted here.

Thanks,
Gary



Re: [GENERAL] select sql slow inside function

2010-09-15 Thread Gary Fu

On 09/15/2010 02:28 AM, Sergey Konoplev wrote:

Hi,

On 15 September 2010 01:56, Gary Fu <g...@sigmaspace.com> wrote:

I have a function proc_TaskComplete that inserts a record to table
TaskHistory and then calls another function proc_ExportTaskComplete, that
will retrieve (select) the record just inserted based on an index column
(TaskId) in that table TaskHistory.  I noticed that the select sql (inside
proc_ExportTaskComplete) will take 3 seconds.  Under normal condition (psql)
the select sql is fast enough with the index. Can anyone explain why and how
to fix the problem ?
My postgresql version is 8.4.4


Could you please provide a use-case?




Thanks,
Gary

--

We found the problem: the type used in the 2nd function for the key 
column (the argument passed in from the main function) was numeric(12) 
instead of the T_TaskId type, even though T_TaskId is defined as 
numeric(12) not null.

Thanks,
Gary



Re: [GENERAL] select sql slow inside function

2010-09-15 Thread Gary Fu

On 09/15/2010 09:46 AM, Gary Fu wrote:

On 09/15/2010 02:28 AM, Sergey Konoplev wrote:

Hi,

On 15 September 2010 01:56, Gary Fu <g...@sigmaspace.com> wrote:

I have a function proc_TaskComplete that inserts a record to table
TaskHistory and then calls another function proc_ExportTaskComplete,
that
will retrieve (select) the record just inserted based on an index column
(TaskId) in that table TaskHistory. I noticed that the select sql
(inside
proc_ExportTaskComplete) will take 3 seconds. Under normal condition
(psql)
the select sql is fast enough with the index. Can anyone explain why
and how
to fix the problem ?
My postgresql version is 8.4.4


Could you please provide a use-case?




Thanks,
Gary

--


We found the problem: the type used in the 2nd function for the key
column (the argument passed in from the main function) was numeric(12)
instead of the T_TaskId type, even though T_TaskId is defined as
numeric(12) not null.

Thanks,
Gary


Sorry, I made a mistake again: T_TaskId is Integer, not Numeric(12), 
and I think that due to the type difference a table scan is used 
instead of the index.


Thanks,
Gary



[GENERAL] select sql slow inside function

2010-09-14 Thread Gary Fu

Hi,

I have a function proc_TaskComplete that inserts a record to table 
TaskHistory and then calls another function proc_ExportTaskComplete, 
that will retrieve (select) the record just inserted based on an index 
column (TaskId) in that table TaskHistory.  I noticed that the select 
sql (inside proc_ExportTaskComplete) will take 3 seconds.  Under normal 
condition (psql) the select sql is fast enough with the index. Can 
anyone explain why and how to fix the problem ?

My postgresql version is 8.4.4

Thanks,
Gary



[GENERAL] Why DBI (DBD::Pg) takes so much memory ?

2010-08-12 Thread Gary Fu

Hi,

I have a simple Perl program (see below) with a DBI call to connect to the 
PostgreSQL db on a CentOS system.  I don't know why the top command 
shows it taking more than 110m (under the VIRT column).  I tried newer 
DBI and DBD versions and still got the same result.  However, I tried it on 
a host with Mandriva, and there it takes only about 8m.  Is there anything 
special on CentOS, or are some of the libraries not linked or built 
correctly?


Thanks,
Gary

- OS Version
$ cat /proc/version
Linux version 2.6.18-194.8.1.el5 (mockbu...@builder10.centos.org) (gcc 
version 4.1.2 20080704 (Red Hat 4.1.2-48)) #1 SMP Thu Jul 1 19:04:48 EDT 
2010


- DBI  DBD::Pg versions
$ perl -e 'use DBI; print "$DBI::VERSION\n"'
1.607
2:46pm 675 sd3d...@sd3dev1:~/tmp$ perl -e 'use DBD::Pg; print "$DBD::Pg::VERSION\n"'
2.11.5

- libpq version
$ ldd /usr/lib64/perl5/vendor_perl/5.8.8/x86_64-linux-thread-multi/auto/DBD/Pg/Pg.so
        libpq.so.4 => /usr/lib64/libpq.so.4 (0x2b8adedb7000)
        libm.so.6 => /lib64/libm.so.6 (0x2b8adefd9000)
        libc.so.6 => /lib64/libc.so.6 (0x2b8adf25c000)
        libssl.so.6 => /lib64/libssl.so.6 (0x2b8adf5b4000)
        libcrypto.so.6 => /lib64/libcrypto.so.6 (0x2b8adf80)
        libkrb5.so.3 => /usr/lib64/libkrb5.so.3 (0x2b8adfb51000)
        libcrypt.so.1 => /lib64/libcrypt.so.1 (0x2b8adfde7000)
        libresolv.so.2 => /lib64/libresolv.so.2 (0x2b8ae001f000)
        libnsl.so.1 => /lib64/libnsl.so.1 (0x2b8ae0234000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x2b8ae044d000)
        /lib64/ld-linux-x86-64.so.2 (0x0039ffa0)
        libgssapi_krb5.so.2 => /usr/lib64/libgssapi_krb5.so.2 (0x2b8ae0668000)
        libcom_err.so.2 => /lib64/libcom_err.so.2 (0x2b8ae0896000)
        libk5crypto.so.3 => /usr/lib64/libk5crypto.so.3 (0x2b8ae0a99000)
        libdl.so.2 => /lib64/libdl.so.2 (0x2b8ae0cbe000)
        libz.so.1 => /usr/lib64/libz.so.1 (0x2b8ae0ec2000)
        libkrb5support.so.0 => /usr/lib64/libkrb5support.so.0 (0x2b8ae10d7000)
        libkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x2b8ae12df000)
        libselinux.so.1 => /lib64/libselinux.so.1 (0x2b8ae14e2000)
        libsepol.so.1 => /lib64/libsepol.so.1 (0x2b8ae16fa000)


- test program
#! /usr/local/bin/perl -w

use strict;
use DBI;

my $data_source = "dbi:Pg:dbname='dev1';host='db1'";
my $dbh = DBI->connect($data_source, 'user', 'passwd')
    or die "Failed to connect";

sleep 600;
exit;
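One caveat worth adding (my note, not from the thread): top's VIRT column counts every mapped page, including shared libraries that are mapped but never touched, so a large VIRT alone does not mean the process really uses that much RAM; RSS is the resident figure. A minimal sketch, using an illustrative (hypothetical) /proc-style excerpt rather than measured numbers:

```python
# Hypothetical /proc/<pid>/status excerpt (numbers are illustrative,
# not measured from the thread).  VmSize is what top reports as VIRT;
# VmRSS is the memory actually resident.
status = """\
VmSize:\t  112640 kB
VmRSS:\t     8192 kB
"""

def kb(field: str, text: str) -> int:
    """Return the kB value of one field in a status-style text."""
    for line in text.splitlines():
        if line.startswith(field + ":"):
            return int(line.split()[1])
    raise KeyError(field)

virt, rss = kb("VmSize", status), kb("VmRSS", status)
assert virt // 1024 == 110   # ~110 MB virtual, like top's VIRT column
assert rss < virt            # but far less is actually resident
```

On CentOS the larger VIRT may simply reflect more (or larger) shared libraries being mapped in, as the long ldd list above suggests; comparing VmRSS between the two distributions would be the more meaningful test.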



Re: [GENERAL] psql problem

2010-07-28 Thread Gary Fu

On 07/27/2010 10:20 PM, Tom Lane wrote:

Gary Fu <gary...@sigmaspace.com> writes:

Below is an example that I created.  It works okay, but when I add any
character in the comment or in the table definition, it fails (hangs).
I checked the server process (with ps command), and I can see that
connection is 'idle'.  By the way, the size 1484 may not mean anything,
since I tried another case (with different comment and table) and the
problem still happens but the size 1484 is not the break point.



I think this may be CentOS(64 bits)/ssh related, since I don't have the
problem with CentOS(32 bits) and we have the same application to install
the tables with the same command on mandriva.


FWIW, I cannot reproduce this problem using 8.4.latest on Fedora 13 64bit.
So it might indeed be something specific to the openssl version you're
using.  I assume you tested that the problem goes away if you use a
non-SSL connection?

The openssl installation I'm testing with is
openssl-1.0.0a-1.fc13.x86_64
I don't know offhand what RHEL/CentOS 5.x are using but it's probably
quite a lot older.

regards, tom lane



Thanks for your response.  Our SA said that a network configuration was 
set up incorrectly.  After 'Jumbo Frames' were enabled on the network 
between the 10G and 1G hosts, the problem was gone.  Sorry, I don't 
know the details of the network configuration.


Thanks,
Gary



Re: [GENERAL] psql problem

2010-07-23 Thread Gary Fu

On 07/22/2010 09:02 PM, Andy Colson wrote:

On 07/22/2010 04:03 PM, Gary Fu wrote:

Hi,

System information:
- psql 8.4.4 on a client with CentOS 5.5 (64 bits)
- postgres 8.4.4 on the server with CentOS 5.5 (64 bits)
- the client is connected with vpn

I have a script to create a table with some comments in front. When I
use the command 'psql -f script.sql' to load it, it hangs. However, if
I remove the comments, OR remove some of the columns from the table,
it works okay. It looks to me like psql hangs with a large script
file. I tried 'psql < script.sql' and 'cat script.sql |
psql' with the same result.

However, I tried it on another client host (CentOS 5.5 32 bits), I
don't see this problem.

Any idea and suggestion ?

Thanks,
Gary


Line endings?

How about a sample?

What comment style: -- /* (* # ; ' //


-Andy


Below is an example that I created.  It works okay, but when I add any 
character to the comment or to the table definition, it fails (hangs). 
I checked the server process (with the ps command), and I can see that 
the connection is 'idle'.  By the way, the size 1484 may not mean 
anything, since I tried another case (with a different comment and 
table) and the problem still happened, but 1484 was not the break point.


I think this may be CentOS (64-bit)/ssh related, since I don't have the 
problem with CentOS (32-bit), and we have the same application that 
installs the tables with the same command on Mandriva.


Thanks,
Gary

Sample file:

/*

=head1 NAME

ProblemFiles

=head1 DESCRIPTION

The ProblemFiles table is used to store the file names that have
problem to be handled by PollAndArchive and VerifyFiles programs.

=head1 FIELDS

  ProblemId - The Id for the problem file
  FileName - The full file name with problem
  Reason - The reason for the file to be inserted
  IsDN -  This FileName is a DN (DDR or DS) file
  DNFile - The DN file for the FileName in problem
  DNType - The DN type (1 for DDR, 2 for DS, 0 for Unknown)
  FtpPath - The ftp incoming path for the problem file, so we know
            where to get the file again if necessary
  Adhoc - None for sd3e normal subscription,
  SD3E for sd3e adhoc (handled as subscription)
  Land/Ocean/Atmosphere/Ozone/Sounder/NICSE for peates' ad-hoc
  CkSum - The checksum of the file (only for files with a DDR file, on Reason:
  Missing, Duplicate, so that they can be verified again if
  necessary)

  test test test tt

=cut

*/

--
-- Name: ProblemFiles  Type: Table
--

create table ProblemFiles
(
    ProblemId    serial       primary key,
    FileName     varchar(256) not null,
    Reason       varchar(16)  not null,
    IsDN         int          not null default 0,
    DNFile       varchar(256) null,
    DNType       int          not null default 1,
    InsertTime   timestamp    not null default now()
);





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] psql problem

2010-07-22 Thread Gary Fu

Hi,

System information:
- psql 8.4.4 on a client with CentOS 5.5 (64 bits)
- postgres 8.4.4 on the server with CentOS 5.5 (64 bits)
- the client is connected with vpn

I have a script that creates a table, with some comments in front.  When I 
use the command 'psql -f script.sql' to load it, it hangs.  However, if 
I remove the comments, OR remove some of the columns from the table, it 
works okay.  It looks to me like psql hangs when the script file is 
large.  I tried 'psql < script.sql' and 'cat script.sql | 
psql' with the same result.


However, when I tried it on another client host (CentOS 5.5, 32 bits), I 
didn't see this problem.


Any ideas or suggestions?

Thanks,
Gary



Re: [GENERAL] No libdl.so in libpq.so (postgresql 8.4.4)

2010-05-28 Thread Gary Fu

Nilesh Govindarajan wrote:

On Tue, May 25, 2010 at 7:48 PM, Gary Fu gary...@sigmaspace.com wrote:
  

Hi,

On my Mandriva Linux, I had no problem building pgpool 2.3.3 with
postgresql 8.4.2.  But after I upgraded to 8.4.4, I cannot build pgpool again,
because libdl.so is no longer listed as a dependency of libpq.so (per the ldd command).  Do
you know how to build 8.4.4 so that libpq.so is linked against libdl.so?

10:16am 32 g...@nppdist:~/postgres/postgresql-8.4.4 locate libdl
/lib/libdl-2.6.1.so
/lib/libdl.so.2

10:16am 33 g...@nppdist:~/postgres/postgresql-8.4.4 ldd
./src/interfaces/libpq/libpq.so
   linux-gate.so.1 =>  (0xe000)
   libcrypt.so.1 => /lib/libcrypt.so.1 (0x40026000)
   libc.so.6 => /lib/i686/libc.so.6 (0x40054000)
   /lib/ld-linux.so.2 (0x8000)


Thanks,
Gary






Could you please explain a little more about what you are actually trying to do?

  
I get the following error from 'configure' when building pgpool 
2.3.3 against libpq from a normal build of PostgreSQL 8.4.4.


configure:24789: result: no
configure:24799: error: libpq is not installed or libpq is old


However, after I rebuilt 8.4.4 with LDFLAGS='-ldl' during configure, 
I no longer have a problem building pgpool 2.3.3.


Thanks,
Gary






[GENERAL] No libdl.so in libpq.so (postgresql 8.4.4)

2010-05-27 Thread Gary Fu

Hi,

On my Mandriva Linux, I had no problem building pgpool 2.3.3 with 
postgresql 8.4.2.  But after I upgraded to 8.4.4, I cannot build pgpool 
again, because libdl.so is no longer listed as a dependency of libpq.so (per 
the ldd command).  Do you know how to build 8.4.4 so that libpq.so is 
linked against libdl.so?


10:16am 32 g...@nppdist:~/postgres/postgresql-8.4.4 locate libdl
/lib/libdl-2.6.1.so
/lib/libdl.so.2

10:16am 33 g...@nppdist:~/postgres/postgresql-8.4.4 ldd 
./src/interfaces/libpq/libpq.so

linux-gate.so.1 =>  (0xe000)
libcrypt.so.1 => /lib/libcrypt.so.1 (0x40026000)
libc.so.6 => /lib/i686/libc.so.6 (0x40054000)
/lib/ld-linux.so.2 (0x8000)


Thanks,
Gary



[GENERAL] Re: how to clean up temporary schemas (how to sync the system table with pg_dump)

2008-06-06 Thread Gary Fu

Tom Lane wrote:

Gary Fu [EMAIL PROTECTED] writes:

My question now is why those temporary schemas aren't cleaned up
after I restart the db?


Just leave them alone and you'll be fine.  These tools actually have
had most of the bugs worked out of them ;-) ... if you think pg_dump is
omitting something, you are probably mistaken.

regards, tom lane


Thanks for the response.  Yes, normally it would be okay.  However,
it causes a problem when I use PgAdmin with pgpool.  PgAdmin
queries pg_namespace when making a connection to a db; if the
temporary schemas differ between the backend db servers,
pgpool returns a mismatch error and the PgAdmin connection fails.
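
To compare what the backends actually have, a simple catalog query can be
run against each one (offered as a diagnostic sketch; nothing here is
pgpool-specific):

```sql
-- List temporary schemas on this backend; run the same query on each
-- backend behind pgpool and compare the results.
select nspname, oid
from pg_namespace
where nspname like 'pg_temp%'
   or nspname like 'pg_toast_temp%'
order by nspname;
```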

Thanks,
Gary



[GENERAL] how to sync the system table with pg_dump

2008-06-05 Thread Gary Fu

Hi,

I tried to use pg_dump to restore (sync) a database, but I noticed that
the system table pg_namespace was not synced.

I tried the following pg_dump command to restore just that table, without 
success either.


Does pg_dump support the system tables, or did I miss something?
Is there another way to sync the system tables?

Thanks,
Gary

% pg_dump -t pg_namespace -h nppdist nppsd3 | psql -h nppsds1 -d nppsd3
SET
SET
SET
SET
SET
SET
SET
SET
ERROR:  relation "pg_namespace" already exists
ALTER TABLE
ERROR:  duplicate key violates unique constraint "pg_namespace_nspname_index"
CONTEXT:  COPY pg_namespace, line 1: "pg_toast  10  \N"
ERROR:  permission denied: "pg_namespace" is a system catalog
ERROR:  permission denied: "pg_namespace" is a system catalog
REVOKE
REVOKE
GRANT



[GENERAL] Re: how to clean up temporary schemas (how to sync the system table with pg_dump)

2008-06-05 Thread Gary Fu



Gary Fu wrote:

I tried to use pg_dump to restore (sync) a database, but I noticed that
the system table pg_namespace was not synced.


If you restore a database, entries in pg_namespace will be created if
the dump contains any CREATE SCHEMA statements, i.e. if there are
schemas in your original database.

Check if the dump was created and restored by a database user with
the appropriate permissions (a superuser ideally), and look out for
error messages.

Do not try to manually change pg_namespace. Just don't.

Yours,
Laurenz Albe



Thanks for the response.  I think the problem is that there are
temporary schemas (pg_temp_1, ...) in the source db, and pg_dump
does not allow them to be restored (see below).

My question now is why those temporary schemas aren't cleaned up
after I restart the db?

Thanks,
Gary

% pg_dump -n pg_temp_1 -h nppdist
--
-- PostgreSQL database dump
--
SET client_encoding = 'LATIN1';
SET standard_conforming_strings = off;
SET check_function_bodies = false;
SET client_min_messages = warning;
SET escape_string_warning = off;

--
-- Name: pg_temp_1; Type: SCHEMA; Schema: -; Owner: postgres
--

CREATE SCHEMA pg_temp_1;


ALTER SCHEMA pg_temp_1 OWNER TO postgres;

--
-- PostgreSQL database dump complete
--

-
% pg_dump -n pg_temp_1 -h nppdist | psql -h nppsds1
SET
SET
SET
SET
SET
ERROR:  unacceptable schema name "pg_temp_1"
DETAIL:  The prefix "pg_" is reserved for system schemas.
ERROR:  schema "pg_temp_1" does not exist



Re: [GENERAL] allocate chunk of sequence

2007-06-16 Thread Gary Fu

Scott Marlowe wrote:

Gary Fu wrote:


hello,

I'm trying to allocate a chunk of ids from a sequence with the following 
proc.  However, if I don't use the 'lock lock_table', the proc may not 
work correctly when it runs at the same time in different psql sessions.  Is 
there a better way that doesn't require the 'lock lock_table'?



aNewFileId = setval('aa_seq', aFileId + nNumberOfFiles - 1);


This is NOT the best way to use sequences.

Much better would be to use a loop to allocate them one after the 
other, and put them into a record type or something.


Do they HAVE to be contiguous?

If they're always the same size, then set the increment value of the 
sequence on creation to reflect that.


i.e.:

create sequence abc increment by 20

then just select nextval, and you have that value plus the 19 after it all to 
yourself.  Lots of ways to handle this, but setval is generally the 
worst way to handle anything in a highly parallel env.
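
A minimal sketch of that fixed-increment idea (the sequence name is invented
for illustration):

```sql
-- Each nextval() call advances the sequence by 20, so a session that
-- receives value v owns the contiguous block v .. v+19.
create sequence file_id_seq increment by 20;

-- nextval() is atomic, so concurrent sessions get non-overlapping blocks.
select nextval('file_id_seq') as block_start;
```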




Thanks for the reply.

The chunks to be allocated are not all the same size, so setting the 
increment value will not help.


I'm not sure how the nextval function handles this internally, since it 
has to read and update the sequence object.  Does it use some kind of 
lock?  Otherwise the problem mentioned here would happen to the nextval 
function also.


The lock command does not work on a sequence, so in my example I 
have to use a dummy table for the lock to work.  Another thought is to have 
the nextval function take an extra argument for the increment value 
(instead of the default increment value).
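
Failing that, a variable-size chunk can be drawn without a lock table by
calling nextval once per id; a sketch (using the sequence name from the proc
above):

```sql
-- Reserve 5 ids in one statement.  nextval() is atomic per call, so the
-- returned values are unique across sessions; under concurrency the
-- range min..max may include ids drawn by other sessions, so treat the
-- result as a set of ids rather than a gap-free block.
select min(id) as first_id, max(id) as last_id
from (select nextval('aa_seq') as id
      from generate_series(1, 5)) s;
```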


Gary

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[GENERAL] allocate chunk of sequence

2007-06-14 Thread Gary Fu

hello,

I'm trying to allocate a chunk of ids from a sequence with the following 
proc.  However, if I don't use the 'lock lock_table', the proc may not 
work correctly when it runs at the same time in different psql sessions.  Is there 
a better way that doesn't require the 'lock lock_table'?


Thanks,
Gary

create or replace function proc_allocate_seq(int)
returns int as $$

declare
    nNumberOfFiles  alias for $1;

    aFileId     int;
    aNewFileId  int;

begin

    -- serialize concurrent callers; lock_table is a dummy table that
    -- exists only to be locked (LOCK does not work on sequences)
    lock lock_table;

    aFileId = nextval('aa_seq');

    -- sleep(3);  -- if you have the proc, to widen the race window for testing

    aNewFileId = setval('aa_seq', aFileId + nNumberOfFiles - 1);

    return aFileId;

end;
$$ language plpgsql;
