[GENERAL] How to delete default privileges

2017-08-15 Thread Francisco Reyes

I have a DB where we changed ownership of all objects.

We had:

ALTER DEFAULT PRIVILEGES FOR ROLE old_dbowner GRANT SELECT ON tables
TO dbgroup_ro_group;
ALTER DEFAULT PRIVILEGES FOR ROLE old_dbowner GRANT SELECT ON sequences 
TO dbgroup_ro_group;


ALTER DEFAULT PRIVILEGES FOR ROLE old_dbowner GRANT SELECT, UPDATE,
DELETE, INSERT ON tables TO dbgroup_rw_group;
ALTER DEFAULT PRIVILEGES FOR ROLE old_dbowner GRANT SELECT,
USAGE ON sequences TO dbgroup_rw_group;
ALTER DEFAULT PRIVILEGES FOR ROLE old_dbowner GRANT
EXECUTE ON functions TO dbgroup_rw_group;



But now there is a new DB owner and I have run the above again, but with
new_dbowner. How can I delete the old default grants? I can't find
examples and don't see it in the documentation either.
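
Is the reverse form what I am after? Something like this (untested sketch,
same groups as above):

ALTER DEFAULT PRIVILEGES FOR ROLE old_dbowner REVOKE SELECT ON tables FROM dbgroup_ro_group;
ALTER DEFAULT PRIVILEGES FOR ROLE old_dbowner REVOKE SELECT ON sequences FROM dbgroup_ro_group;
ALTER DEFAULT PRIVILEGES FOR ROLE old_dbowner REVOKE ALL ON tables FROM dbgroup_rw_group;
ALTER DEFAULT PRIVILEGES FOR ROLE old_dbowner REVOKE ALL ON sequences FROM dbgroup_rw_group;
ALTER DEFAULT PRIVILEGES FOR ROLE old_dbowner REVOKE ALL ON functions FROM dbgroup_rw_group;

\ddp in psql seems to list the remaining default ACLs, so it should show
whether they are actually gone.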



Any help would be greatly appreciated.





Re: [GENERAL] Large pg_xlog

2016-09-29 Thread Francisco Reyes

On 09/29/2016 01:36 AM, Michael Paquier wrote:

something is broken with your installation. Don't you have in your log
files something like that?
LOG:  could not create archive status file
"pg_xlog/archive_status/00010001.ready": No such file



That was the problem.

FATAL:  could not open archive status directory 
"pg_xlog/archive_status": No such file or directory


Very puzzled about what could have deleted the folder. I am the only person
who works on these machines, and I would never do any work, much less
deletions, inside pg_xlog.



Thanks for the help.




Re: [GENERAL] Large pg_xlog

2016-09-28 Thread Francisco Reyes

On 09/26/2016 09:05 PM, Michael Paquier wrote:

What are the files in pg_xlog/archive_status? Do you see a lot of .ready
entries? Perhaps you could de-bloat things by using archive_command =
'/bin/true' (REM on Windows as far as I recall).



There is no archive_status

 ll 9.3/main/pg_xlog/ |grep arch
postgres@s0pdb03:~$

As for changing archive_command to /bin/true, I don't see how that would
help. My archive command points to a completely different folder.



Tried it anyway and changed archive_command to /bin/true. Files in pg_xlog
keep growing, currently at over 5K. Isn't wal_keep_segments the
upper limit of how many WAL files are retained in PostgreSQL 9.3?



wal_keep_segments=300

checkpoint_segments=6


Yet I have over 5K files, and more being created, in my pg_xlog folder.





Re: [GENERAL] Large pg_xlog

2016-09-26 Thread Francisco Reyes

On 09/26/2016 08:08 PM, Andreas Kretschmer wrote:
archive_command failed? If that happens the WALs will not be deleted;
you should see error messages in the log.



The only times the archive command failed were, I believe, because the volume
where pg_xlog lives ran out of space.

FATAL:  archive command was terminated by signal 3: Quit

There are 2 of those today. So that does not seem to be the cause.




[GENERAL] Large pg_xlog

2016-09-26 Thread Francisco Reyes

Postgresql 9.3

checkpoint_segments=6
wal_keep_segments=300


The machine is the master for 2 asynchronous slaves.


pg_xlog in /var/lib/postgresql/9.3/main/pg_xlog

NFS mount with WAL archiving in /opt/backups/walarchives/HostName


During a load of a file, using copy, pg_xlog grew to almost 120GB
and one of the slaves fell behind (at another data center). Since I am
not using replication slots I expected WAL to be capped around
wal_keep_segments x 16MB, roughly 4.8GB. Any idea why it grew so large?



Once I rsync the WAL archives to the slave and it catches up, pg_xlog goes back down.


Any ideas why pg_xlog is going so high?
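
What I have checked so far (a sketch; the lag columns are from 9.3's
pg_stat_replication and the path matches the layout above):

select pg_current_xlog_location();
select client_addr, state, sent_location, replay_location
from pg_stat_replication;

ls /var/lib/postgresql/9.3/main/pg_xlog | wc -l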





[GENERAL] Alter default privileges vs new schemas

2016-08-24 Thread Francisco Reyes

https://www.postgresql.org/docs/current/static/sql-alterdefaultprivileges.html

Have a client whose development teams make extensive use of
schemas. We use default privileges like this:


ALTER DEFAULT PRIVILEGES FOR ROLE jobs_owner_local GRANT SELECT ON
tables TO ro_group;
ALTER DEFAULT PRIVILEGES FOR ROLE jobs_owner_local GRANT SELECT ON
sequences TO ro_group;


ALTER DEFAULT PRIVILEGES FOR ROLE jobs_owner_local GRANT SELECT, UPDATE,
DELETE, INSERT ON tables TO rw_group;
ALTER DEFAULT PRIVILEGES FOR ROLE jobs_owner_local GRANT SELECT,
USAGE ON sequences TO rw_group;
ALTER DEFAULT PRIVILEGES FOR ROLE jobs_owner_local GRANT
EXECUTE ON functions TO rw_group;



But I can't find a way to grant usage on new schemas. At DB creation I do

grant usage on schema public to rw_group;
grant usage on schema public to ro_group;


Because we revoke everything from public we have to explicitly grant 
connect to DB and usage on schemas.



Currently having to manually grant usage when a new schema is created 
and wondering if there is a better way.
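
The closest I have right now is re-running a loop like this sketch after new
schemas show up (role names as above):

DO $$
DECLARE s text;
BEGIN
  FOR s IN SELECT nspname FROM pg_namespace
           WHERE nspname NOT LIKE 'pg\_%'
             AND nspname <> 'information_schema'
  LOOP
    EXECUTE format('GRANT USAGE ON SCHEMA %I TO ro_group, rw_group', s);
  END LOOP;
END$$;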



Postgresql 9.3





[GENERAL] Re: pg_basebackup vs archive_command - looking for WAL archive older than archive_command start

2016-07-17 Thread Francisco Reyes

On 07/17/2016 06:35 PM, Francisco Reyes wrote:


Why is the pg_basebackup restore looking for a WAL file that is even 
older than the ones I have, when I turned on WAL archiving before I 
started the pg_basebackup?


Figured it out: the error is from a secondary slave trying to sync from
the machine to which I just restored the pg_basebackup.




Also, why is that one WAL archive named differently with .backup?


It would still be nice to know the above.




[GENERAL] pg_basebackup vs archive_command - looking for WAL archive older than archive_command start

2016-07-17 Thread Francisco Reyes

I turned on archive_command and have wal archiving going.

I did a pg_basebackup and copied the resulting file from source machine 
to target, yet when I restore I am getting


requested WAL segment 000508AE009B has already been removed

The earliest WAL archives I have are

000508D2005C.0028.backup.gz
000508D2005C.gz
000508D2005D.gz

Why is the pg_basebackup restore looking for a WAL file that is even 
older than the ones I have, when I turned on WAL archiving before I 
started the pg_basebackup?


Also, why is that one WAL archive named differently with .backup?

Normally I restore with pg_basebackup and streaming replication, which
works well, but lately, due to constant network disconnects, I have to
make the pg_basebackup on the source machine and rsync it over. That is why I
turned on WAL archiving before starting the base backup, since it would
take some time to copy the base backup file over.


The command I am using to create the base backup is
pg_basebackup -U UserName -h 127.0.0.1 -D - -P -Ft | gzip > Basebackup.tar.gz
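
Would adding a restore_command pointing at the WAL archive to the target's
recovery.conf be the right fix? A sketch (host and paths are placeholders):

standby_mode = 'on'
primary_conninfo = 'host=SourceHost user=replicator'
restore_command = 'cp /path/to/walarchives/HostName/%f %p'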


Any suggestions?




[GENERAL] How to audit non LDAP connections?

2015-12-03 Thread Francisco Reyes
Due to security audits we have moved most users to LDAP. I am looking for a way
to tell whether a connection is or is not going through LDAP.


Other than errors, such as a bad password, I have not found a way to tell from
the logs whether a connection is using LDAP or PostgreSQL internal
authentication. I tried going through the different log variables and setting
verbosity to max and did not see any indication of LDAP connections.


Long term we plan to only allow remote connections through LDAP in
pg_hba.conf, but I am hoping to track down any connection that is not using
LDAP and have it converted in a controlled fashion, versus flipping the
switch and seeing what breaks.
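
The best I can think of so far is to at least log every connection and
correlate by user, e.g. in postgresql.conf (a sketch):

log_connections = on
log_line_prefix = '%m [%p] user=%u db=%d host=%h '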


Postgres 9.3





Re: [GENERAL] Best way to sync table DML between databases

2015-10-09 Thread Francisco Reyes

On 10/05/2015 09:46 AM, jimbosworth wrote:

Im not in a position to change the database setup on server A.


Can you have the owners/maintainers make the changes needed to set up
replication?

Or is that 100% out of the question?




Re: [GENERAL] Test failover for PosgreSql 9.2

2015-09-25 Thread Francisco Reyes

On 09/25/2015 11:20 AM, yuryu wrote:

According to manual I have to kill completely Master and "touch" a trigger
to make Slave new Master.


You don't have to do anything on the master. If you have configured the
slave to check for a trigger file, then it will become read-write when that file
is created.
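
For example, a recovery.conf along these lines (a sketch; the path is just an
example):

standby_mode = 'on'
primary_conninfo = 'host=master_host user=replicator'
trigger_file = '/tmp/postgresql.trigger'

Touching /tmp/postgresql.trigger on the slave then promotes it; nothing has
to be done to the master first.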


You can also do
pg_ctlcluster #.# main promote

Where #.# is version like
pg_ctlcluster 9.3 main promote

On the slave you can run this to check whether it is in read-only
(replicating) or read-write mode:

select pg_is_in_recovery();




Re: [GENERAL] Dropped connections with pg_basebackup

2015-09-25 Thread Francisco Reyes

On 09/24/2015 04:29 PM, Sherrylyn Branchaw wrote:

I'm assuming based on the "SSL error" that you have ssl set to 'on'.
What's your ssl_renegotiation_limit? The default is 512MB, but setting
it to 0 has solved problems for a number of people on this list,
including myself.


I have also seen instances where ssl_renegotiation_limit=0 helped, and I
already tried that. It did not help in this case.


Perhaps I will try some tests with a non-SSL connection. These are
machines on an internal network, so it may not be much of a security
issue to turn off SSL, at least during the initial sync.
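
If I do try it, the plan would be something like a hostnossl entry just for
the replication user (a sketch; addresses are placeholders):

# pg_hba.conf on the master
hostnossl  replication  replicator  10.0.0.0/8  md5

and then running pg_basebackup with PGSSLMODE=disable on the slave.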





Re: [GENERAL] Dropped connections with pg_basebackup

2015-09-25 Thread Francisco Reyes

On 09/24/2015 04:34 PM, Alvaro Herrera wrote:

Sherrylyn Branchaw wrote:
Moreover, the default has been set to 0, because the bugs both in our
usage and in OpenSSL code itself seem never to end.  Just disable it.



I set it to 0 and it did not help.
I will likely move all machines to have it set to 0, since I have seen some SSL
errors in the logs.





[GENERAL] Dropped connections with pg_basebackup

2015-09-24 Thread Francisco Reyes
Have an existing setup of 9.3 servers. Replication has been rock solid,
but recently the circuits between data centers were upgraded and
pg_basebackup now seems to fail often when setting up streaming
replication. What used to take 10+ hours now only took 68 minutes, but
I had to do many retries. Many attempts fail within minutes while others
go to 90% or higher and then drop. The reason we are doing a sync is
that we have to swap data centers every so often for compliance, so I
had to swap master and slave.


Calling pg_basebackup like this:
pg_basebackup -P -R -X s -h  -D  -U replicator

The error we keep having is:
Sep 23 13:36:32  postgres[16804]: [11-1] 2015-09-23 13:36:32 
EDT  [unknown] replicator LOG: SSL error: bad write retry
Sep 23 13:36:32  postgres[16804]: [12-1] 2015-09-23 13:36:32 
EDT  [unknown] replicator LOG: SSL error: bad write retry
Sep 23 13:36:32  postgres[16804]: [13-1] 2015-09-23 13:36:32 
EDT  [unknown] replicator FATAL: connection to client lost
Sep 23 13:36:32  postgres[16972]: [9-1] 2015-09-23 13:36:32 
EDT  [unknown] replicator LOG: could not receive data from client: 
Connection reset by peer


I have been working with the network team and we have even been actively
monitoring the line, and running ping, while the replication is being set up. At
the point the "connection reset by peer" error happens, we don't see any
issue with the network, and ping doesn't show a problem at that point in time.


The issue also happened on another set of machines and, likewise, I had to
retry many times before pg_basebackup would do the initial sync. Once
the initial sync is done, replication is fine.


I  tried both "-X s" (stream) and "-X f" (fetch) and both fail often.

Any ideas what may be going on?




Re: [GENERAL] How to test SSL cert from CA?

2015-07-16 Thread Francisco Reyes

On 07/11/2015 07:32 PM, James Cloos wrote:

FR == Francisco Reyes li...@natserv.net writes:

Did you include the intermediate cert(s) in the bundle which the server
presents to the client?


Yes.


And did you confirm that the client trusts the issuer's root?  Some
require explicit configurastion of that.


The client in this case is a program that a client of mine runs. I don't have
access to the program.





Re: [GENERAL] How to test SSL cert from CA?

2015-07-09 Thread Francisco Reyes

On 07/09/2015 03:07 PM, Vick Khera wrote:


On Wed, Jul 8, 2015 at 10:17 PM, Francisco Reyes li...@natserv.net wrote:

openssl s_client -connect HOST:PORT -CAfile /path/to/CA.pem


According to this post: 
http://serverfault.com/questions/79876/connecting-to-postgresql-with-ssl-using-openssl-s-client?rq=1


one cannot use openssl to test an SSL connection to PostgreSQL. True?





Re: [GENERAL] How to test SSL cert from CA?

2015-07-09 Thread Francisco Reyes

On 07/08/2015 10:52 PM, Tom Lane wrote:

What's the complaint exactly?


The error we are getting is:

The security of this transaction may be compromised. The following SSL 
errors have been reported:


* The issuer certificate of a locally looked up certificate could not be 
found.

* The root CA certificate is not trusted for this purpose





Re: [GENERAL] Backup Method

2015-07-08 Thread Francisco Reyes

On 07/03/2015 08:08 AM, howardn...@selestial.com wrote:

I am trying to move away from pg_dump as it is proving too slow.



Have you looked into barman?
http://www.pgbarman.org

Also, another potential approach is to set up replication and do the
backups from the slave.





[GENERAL] How to test SSL cert from CA?

2015-07-08 Thread Francisco Reyes
Have a client using a commercial application. For a year plus we had
been using a local self-signed certificate without issues. As of a few
weeks ago a change/update to the program is making it complain about the
self-signed cert.


I bought an SSL cert and installed it, but the program is still having
the issue.


Does anyone know of a way to test the SSL connection such that it validates
against the CA? Preferably an open-source application. Connecting
through psql over SSL works fine with what I have set up, but the
application, xtuple, still seems to be having the issue.
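
One check I can do from my side is to force psql/libpq itself to validate
against the CA, something like (a sketch; host and names are placeholders):

psql "host=db.example.com dbname=somedb user=someuser sslmode=verify-full sslrootcert=/path/to/root.crt"

but that only exercises libpq, not whatever SSL library the application uses.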


The client already wrote to the application's support department, but we are
still waiting for an answer from them.


If I had a way to at least reproduce the error I could more easily track 
down what I am missing.


Any suggestions?




Re: [GENERAL] MD5 password storage - should be the same everywhere?

2015-05-26 Thread Francisco Reyes

On 05/25/2015 07:58 PM, Adrian Klaver wrote:

On 05/25/2015 01:41 PM, Francisco Reyes wrote:
I understood that it is just an md5 hash of the password and the username
with the string 'md5' prepended, so it should be the same.


Mystery solved.
Because I usually script most of my work for audits I was able
to track down what happened. The original request was to get data from
'machine 2', but it was later changed. I found something in the output of
my script that led me to believe the roles did not come from
'machine 2'.


I was able to find the source machine by using:
select rolname, rolpassword,rolcanlogin from pg_catalog.pg_authid where 
rolname = 'SomeUser';


I ran it against the machines I thought could have been the source and found
a matching one.






[GENERAL] MD5 password storage - should be the same everywhere?

2015-05-25 Thread Francisco Reyes

Should the same password, stored in MD5, be the same across different DBs?

If I did either:
create user SomeUser encrypted password 'SomePassword';
alter user SomeUser encrypted password 'SomePassword';

On multiple machines, should the MD5 be the same?
using
select rolname, rolpassword,rolcanlogin from pg_catalog.pg_authid where 
rolname = 'SomeUser';


Should the MD5 be the same?

I see one machine where a user has a different md5 but the password is 
the same as several other machines. In other words the MD5 is different, 
but the password is the same. Is that expected?


In other words, say I have machine1 through machine5 and a user with
password 'SomePassword'. In 4 of the machines the MD5 is the same, but in
(say) machine 2 the MD5 is different. Yet I can log in as that user on
that machine with 'SomePassword' as the password.


Restoring from this machine 2 to another machine, none of the
passwords worked.


Any pointers/ideas of what is going on?
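
For reference, my understanding is that the stored value should simply be
'md5' || md5(password || rolename), which can be checked with something like
(a sketch; note the role name is the lower-cased name unless it was created
quoted):

select 'md5' || md5('SomePassword' || 'someuser');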




Re: [GENERAL] Kill -9 for a session process caused all the sessions to be killed

2010-09-22 Thread Francisco Reyes

atul.g...@globaldatapoint.com writes:


to be killed? What is the best way to kill a session in Postgres?



Just plain kill would have worked better. I believe the issue is with the 
'-9'.


Also see

http://www.postgresonline.com/journal/index.php?/archives/134-Terminating-Annoying-Back-Ends.html

and

http://www.postgresql.org/docs/8.4/interactive/functions-admin.html
pg_terminate_backend(pid int)
pg_cancel_backend(pid int)
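
For example (a sketch; 12345 is a placeholder pid taken from
pg_stat_activity):

select pg_cancel_backend(12345);     -- cancels the current query
select pg_terminate_backend(12345);  -- ends the whole session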



[GENERAL] Tables with autovacuum_enabled=false still getting vacuumed

2010-09-19 Thread Francisco Reyes

Is there a way to see the status of autovacuum_enabled for a table?

The reason I am asking is that I am seeing tables getting vacuumed for which
I had set autovacuum_enabled to false. Autovacuum was already running
when I changed the setting for a large number of tables.


If I had 5 tables (a, b, c, d, e) and I set autovacuum_enabled to false on all
of them while autovacuum was in the middle of a run, say on table c, should the
last two tables, d and e, still be autovacuumed?


Postgresql 8.4.4 on CentOS, installed from RPM.

I see a case, BUG #3898, and a comment by Tom Lane:
http://archives.postgresql.org/pgsql-bugs/2008-01/msg00222.php

In that post Tom Lane mentions anti-wraparound vacuum and a
pg_autovacuum table. In pg_stat_activity I see the comment "to prevent
wraparound". How can I prevent those too for selected tables? I have
lots of tables which are never going to be updated and are going through
that process. Said tables were freshly restored from another DB. I don't even
understand why they are getting vacuumed. Is pg_autovacuum still available
in 8.4.4? I don't see it in \dS and cannot select from it.
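
(For the first question, the only place I have found so far that seems to
show the per-table setting is reloptions in pg_class, e.g.:

select relname, reloptions from pg_class where relname in ('a','b','c','d','e');

Is that the right place to look?)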





Re: [GENERAL] Select max(primary_key) taking a long time

2010-05-22 Thread Francisco Reyes

Merlin Moncure writes:


do a big delete recently?  any other open transactions?


Some inserts were taking place, roughly 2 to 5 million rows inside
transactions. We were doing some ETL and each batch represented a file we
were loading. We need the entire file to load or be rolled back, so each file is
done within a transaction.



the remedy for the former is to simply eat it (one time penalty) or
rebuild the table.  for the latter you simply have to resolve the
other transaction.


I think it is related to the inserts... after they were done everything was 
back to normal.
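
If it happens again I will check for long-running/open transactions at the
time, something like this sketch (8.4-era column names):

select procpid, now() - xact_start as xact_age, current_query
from pg_stat_activity
where xact_start is not null
order by xact_age desc;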
 

how big is your table according to pg_relation_size()?


\dt+ is easier. :-)
116GB



[GENERAL] Select max(primary_key) taking a long time

2010-05-21 Thread Francisco Reyes

Postgres 8.4.1
CentOS 5.4

I am trying to do
select max(primary_key) from some_table;

The explain looks like:
explain select max(primary_key) from some_table;
                              QUERY PLAN
------------------------------------------------------------------------
 Result  (cost=0.15..0.16 rows=1 width=0)
   InitPlan 1 (returns $0)
     ->  Limit  (cost=0.00..0.15 rows=1 width=8)
           ->  Index Scan Backward using some_table_pkey on some_table
               (cost=0.00..161797059.16 rows=1086279613 width=8)
                 Filter: (trans_id IS NOT NULL)

I checked there wasn't a lock of any kind
select waiting from pg_stat_activity;
waiting
-
f
f
f
f
f
f
f
(7 rows)

IOwait on the machine is around 20%
sar 10 5
Linux 2.6.18-128.el5 (trans05.afs)  05/21/2010
10:56:49 AM  CPU   %user   %nice %system %iowait  %steal  %idle
10:56:59 AM  all5.900.002.04   20.670.00  71.39
10:57:09 AM  all5.900.001.99   23.360.00  68.75
10:57:19 AM  all5.870.002.10   22.560.00  69.47
10:57:29 AM  all5.840.002.09   23.560.00  68.51
10:57:39 AM  all6.300.002.23   21.530.00  69.94
Average: all5.960.002.09   22.340.00  69.61

Any ideas why the select would be taking so long? It has gone on for minutes
with no answer. I can just look at the value of the sequence for the primary
key, but I am curious why something that is usually sub-second is taking so
long.




Re: [GENERAL] Select max(primary_key) taking a long time

2010-05-21 Thread Francisco Reyes

Tom Lane writes:


Francisco Reyes li...@stringsutils.com writes:

I am trying to do
select max(primary_key) from some_table;


Are there a whole lot of nulls in that column?



Zero nulls. It is a primary key.



[GENERAL] Alter index .. set tablespace uses ACCESS EXCLUSIVE lock?

2010-02-11 Thread Francisco Reyes
The alter index page does not show the lock mode, but it seems it takes an
ACCESS EXCLUSIVE lock.


Wouldn't an EXCLUSIVE lock be more appropriate and remove the index from 
planner consideration?




Re: [GENERAL] how much left for restore?

2010-01-11 Thread Francisco Reyes

Ivan Sergio Borgonovo writes:


Is there a way to know/estimate how much is left to complete a
restore?


Not sure about plain ASCII files, but if your pg_dump used -Fc then at restore
time you can pass the -v flag.





[GENERAL] Using a lock to avoid: could not open relation with OID

2010-01-11 Thread Francisco Reyes

I need to replace a table with a new one.
Example: I create a script that continuously does selects like
select count(*) from tmp_deleteme_francisco;
-- ...enough selects to last the duration of the second script
select count(*) from tmp_deleteme_francisco;


Another script then does
begin;
select * into tmp_deleteme_francisco_2 from xxx;
alter table tmp_deleteme_francisco rename to tmp_deleteme_francisco_old;
alter table tmp_deleteme_francisco_2 rename to tmp_deleteme_francisco;
drop table tmp_deleteme_francisco_old;
commit;

That results in the script doing the selects getting
"could not open relation with OID ...".


I thought using an explicit access exclusive lock would do the trick and 
tried a few variations including


begin;
select * into tmp_deleteme_francisco_2 from xxx;
lock tmp_deleteme_francisco  in access exclusive mode;
alter table tmp_deleteme_francisco rename to tmp_deleteme_francisco_old;
alter table tmp_deleteme_francisco_2 rename to tmp_deleteme_francisco;
drop table tmp_deleteme_francisco_old;
commit;

I thought that once the access exclusive lock is obtained, nothing else could
be accessing the table.

Any ideas?

Postgresql 8.4.1 on CentOS 5.3




Re: [GENERAL] how much left for restore?

2010-01-11 Thread Francisco Reyes

Ivan Sergio Borgonovo writes:


It get a bit better but even knowing what are the largest tables it
is hard to get an estimate of how much is missing before complete
restore.


Agreed. Building indexes can also take quite some time.


I'm really looking at rough figures... even a: I've read 40% of the
file will give a more usable information than: I've already loaded
table A.


I don't believe there is anything like that, and I have not seen anything in
the TODO list (that I recall) either.


It likely is not easy to implement. For starters, to get such an estimate you
would need to know how much data a table has. That would require a
pass to get the sizes, or somehow storing table sizes in the dump for the
estimate. Then you have indexes: they are just one line in the backup, but
can take quite a while to build depending on size.


For the foreseeable future -v is likely all that is and will be available.



Re: [GENERAL] Database size

2010-01-11 Thread Francisco Reyes

Leonardo M. Ramé writes:


A customer of mine asked me to periodically delete old, unneeded records
containing ByteA fields, because he think it will reduce the database
size on disk. Is this true?.


No.


For example, in Firebird, the database size
is not reduced until you do a Backup-Restore of the database.


Not sure about bytea, but in general, in Postgres you need to do that or a
vacuum full, or copy all the data of the table into a new table and rename it.


Having said that, if you delete old rows and do a vacuum and/or have
autovacuum enabled, the space of those old rows can be re-used, slowing down
the growth of the table. So yes, deleting old rows and doing a vacuum is a
good thing, but no, I don't believe it will reduce the database size.


I say "don't believe" because I don't recall whether bytea is stored in the
table itself or outside using TOAST, so I am not sure about
how/when space is released for it.




Re: [GENERAL] how much left for restore?

2010-01-11 Thread Francisco Reyes

Sam Mason writes:


maybe something like pv would help?

  http://www.ivarch.com/programs/pv.shtml


I think it may help the OP, but indexes are still going to be a rough spot:
if a large table has several indexes the output from pv will be misleading.




Re: [GENERAL] Using a lock to avoid: could not open relation with OID

2010-01-11 Thread Francisco Reyes

Craig Ringer writes:

Possible workaround: Instead of your table creation, renaming and 
dropping, use TRUNCATE.


Have to support both 8.1 and 8.4.
If I recall correctly 8.1 did not support truncate inside of a transaction.

We are in the process of upgrading everything to 8.4, but until then we have
to deal with all the 8.1 and 8.2 machines.
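
For the 8.4 machines the truncate variant would look roughly like this
(a sketch; xxx is the same source as before):

begin;
lock table tmp_deleteme_francisco in access exclusive mode;
truncate tmp_deleteme_francisco;
insert into tmp_deleteme_francisco select * from xxx;
commit;

That keeps the table's OID stable, so the selecting sessions should just
block on the lock instead of failing (at least in read committed).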




Re: [GENERAL] Not finding RPMs for 8.4.2!

2009-12-22 Thread Francisco Reyes

A. Kretschmer writes:


http://yum.pgsqlrpms.org/reporpms/repoview/letter_p.group.html


Only up to 8.4.1 there. 




Re: [GENERAL] Not finding RPMs for 8.4.2!

2009-12-22 Thread Francisco Reyes

They also used to be at ftp.postgresql.org:/pub/binary.
There are only RPMs up to 8.4.1.



Re: [GENERAL] Not finding RPMs for 8.4.2!

2009-12-22 Thread Francisco Reyes

Devrim GÜNDÜZ writes:


On Tue, 2009-12-22 at 15:10 -0500, Francisco Reyes wrote:
Only up to 8.4.1 there. 


Which os/distro are you looking for? I hope it is not Fedora 11 -
x86_64.


CentOS x86_64



[GENERAL] Default listing on -Fc files

2009-09-16 Thread Francisco Reyes

Is there a way to tell which table a DEFAULT rule belongs to?

Example:
Have multiple tables with a serial column. The sequence is set up as a default.
Because all tables have the same structure, the DEFAULT lines in the listing of
a file pg_dumped with -Fc look the same.


The lines look like
DEFAULT public id user
DEFAULT public id user
DEFAULT public id user

They have different IDs, but I don't know how to tell which table they
belong to, so that when I do restores of selected tables I can restore
the proper defaults.




Re: [GENERAL] is a 'pairwise' possible / feasible in SQL?

2008-08-04 Thread Francisco Reyes
On 2:08 pm 08/04/08 Rajarshi Guha [EMAIL PROTECTED] wrote:
 pair       count
 ---------  -----
 123  456   1
 667  879   2

create temp table aic_cid
(
id smallint,
cid smallint

);

insert into aic_cid values (1,123);
insert into aic_cid values (2,456);
insert into aic_cid values (3,667);
insert into aic_cid values (3,879);
insert into aic_cid values (3,123);
insert into aic_cid values (4,878);
insert into aic_cid values (4,456);
insert into aic_cid values (4,123);
insert into aic_cid values (5,999);
insert into aic_cid values (5,667);
insert into aic_cid values (5,879);


 select a.cid as ac, b.cid as bc, count(*) from aic_cid a left outer join
aic_cid b on a.cid <> b.cid and a.id = b.id where b.cid is not null group by
a.cid, b.cid order by a.cid;
 ac  | bc  | count
-+-+---
 123 | 456 | 1
 123 | 667 | 1
 123 | 878 | 1
 123 | 879 | 1
 456 | 123 | 1
 456 | 878 | 1
 667 | 123 | 1
 667 | 879 | 2
 667 | 999 | 1
 878 | 123 | 1
 878 | 456 | 1
 879 | 123 | 1
 879 | 667 | 2
 879 | 999 | 1
 999 | 667 | 1
 999 | 879 | 1

Is that what you are looking for?




Re: [GENERAL] non-WAL btree?

2008-08-01 Thread Francisco Reyes
On 4:36 pm 08/01/08 Alex Vinogradovs [EMAIL PROTECTED] wrote:
 It's not that I expect a lot of improvement by having non-WAL
 indexing

Have you tried using a fill factor of less than 90%?
That is on my list of things to test, but I have not done it yet. In particular
you need to find a balance where the speed gained justifies the increase in
the size of the index, and the potential slowdowns because of larger indexes.
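
Something along these lines is what I plan to try (a sketch; the table,
column and the 70 are just examples):

create index myindex on mytable (mycolumn) with (fillfactor = 70);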




Re: [GENERAL] Partitioned tables and views

2008-07-31 Thread Francisco Reyes
On 12:54 pm 07/31/08 Mike Gould [EMAIL PROTECTED] wrote:

 1. Is the planner/optimizer intelligent enough to know when we are
 not doing a query based on location? 

In short yes.
If the DB doesn't see the condition by which your tables are partitioned it
will search all the partitions.
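
For example, something like this sketch with made-up names (assumes
constraint_exclusion is enabled and each partition has a CHECK constraint on
location):

SET constraint_exclusion = on;
SELECT * FROM orders WHERE location = 'NY';  -- only the NY partition is scanned
SELECT * FROM orders;                        -- no condition, all partitions are scanned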

 2.  How are views handled with partitioned tables?  

Same as with regular queries. A view is just a conveniently stored query.
In other words, the plan for the view will be the same plan as the plan for
the query that you made the view from.




Re: [GENERAL] Problem running script

2008-07-30 Thread Francisco Reyes
On 9:50 am 07/30/08 Ian Lea [EMAIL PROTECTED] wrote:
 $ psql -whatever < test.sql > test.log

Also
psql -f filename > test.log
Almost the same as above.


   =\i test.sql \o test

That would be wrong.
Say you had test.sql
select * from mytable;

You can change test.sql to:
\o test.log
select * from mytable;

Then you could do \i test.sql and accomplish what you are trying to do.

If your DB has passwords enabled you may need to put the user/password combo
in .pgpass.
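
For reference, the .pgpass format is one entry per line, and the file must be
chmod 600:

hostname:port:database:username:password

e.g. localhost:5432:mydb:myuser:mypassword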




Re: [GENERAL] Connecting to an existing transaction state.

2008-07-30 Thread Francisco Reyes
On 10:37 am 07/28/08 Alex Gen [EMAIL PROTECTED] wrote:
 3. Using m01-s2.sql I would like to execute a certain SQL statement
 – BUT within the scope of the transaction begun by m01-s1.sql.

I believe that is not possible.
Especially if you are within a transaction in each of the scripts.

As far as I know all the work getting done inside a transaction is ONLY
visible to that transaction. It would actually be pretty bad if the
database allowed a process to see an uncommitted state from a transaction
that may end up rolling back.

What are you trying to test?
You mentioned how you are testing, but what is the business need? Or
business concern?

i.e., it could be something like: we are concerned that if we run these two
scripts, one may lock the other.




Re: [GENERAL] Error 10061

2008-07-30 Thread Francisco Reyes
On 1:51 pm 07/30/08 Ivan Garro [EMAIL PROTECTED] wrote:

 Hi everyone, here is what is happening to me,

This list is English only.

Please subscribe to the Spanish-language list at:
http://archives.postgresql.org/pgsql-es-ayuda




[GENERAL] Index creation and maintenance_work_mem

2008-07-29 Thread Francisco Reyes
Besides maintenance_work_mem, what else can be changed to improve index
creation?

I just did two tests: one with maintenance_work_mem=128MB and another with
maintenance_work_mem=1GB. Out of 3 single-column indexes, 2 took slightly
longer with the higher value and the third took almost the same.
12GB of RAM in the machine. RedHat 4 revision 6. PostgreSQL 8.3.3.
temp_buffers = 64MB
work_mem = 96MB
checkpoint_segments = 256
checkpoint_timeout = 10min
Indexing 60 million rows.

Tests run from a script and nothing else was running on the machine during
the tests.

maintenance_work_mem = 128MB
CREATE INDEX
Time: 449626.651 ms 7.4 minutes

CREATE INDEX
Time: 313004.025 ms 5.2 minutes

CREATE INDEX
Time: 3077382.941 ms 51.2 minutes

maintenance_work_mem = 1GB
CREATE INDEX
Time: 497358.902 ms 8.2 minutes

CREATE INDEX
Time: 312316.953 ms 5.2 minutes

CREATE INDEX
Time: 3236472.630 ms 53.9




Re: [GENERAL] Index creation and maintenance_work_mem

2008-07-29 Thread Francisco Reyes
On 2:53 pm 07/29/08 Alan Hodgson [EMAIL PROTECTED] wrote:
 --sar 2 30
Linux 2.6.9-42.ELsmp (trans03)  07/29/2008

12:58:09 PM   CPU %user %nice   %system   %iowait %idle
12:58:11 PM   all 12.44  0.00  0.06  0.00 87.50
12:58:13 PM   all 12.44  0.00  0.00  0.06 87.50
12:58:15 PM   all 12.49  0.00  0.06  0.00 87.45
12:58:17 PM   all 12.43  0.00  0.06  1.62 85.88
12:58:19 PM   all 12.38  0.00  0.06  0.00 87.55
12:58:21 PM   all 12.43  0.00  0.12  0.00 87.45
12:58:23 PM   all 12.50  0.00  0.00  0.00 87.50
12:58:25 PM   all 12.42  0.00  0.12  0.00 87.45

Quad-core machine. Not using even 100% of one core.
The disk subsystem doesn't seem to be the issue (no iowait).

Based on a few links I read I thought that, holding everything else
constant, increasing maintenance_work_mem would have helped.




Re: [GENERAL] Index creation and maintenance_work_mem

2008-07-29 Thread Francisco Reyes
On 3:19 pm 07/29/08 Alan Hodgson [EMAIL PROTECTED] wrote:
 sure you only have 4 cores? Hyper-threading?

It seems Hyperthreading is on in that machine. Going to see if I can have
it turned off.




Re: [GENERAL] Clone a database to other machine

2008-07-28 Thread Francisco Reyes
On 12:44 pm 07/28/08 Garg, Manjit [EMAIL PROTECTED] wrote:
 I'm stuck to an  issue while cloning the pgsql database, can you
 please help, or give any docs to  help out.

What is the issue?

 Query - Trying to  have same database on two seprate linux servers.

Have you been able to pg_dump from the source DB and then load it into the
target?

Are you trying to do replication, or do you just want point-in-time copies?
How large is the database?
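
If a periodic full copy is enough, the simplest thing is along these lines
(a sketch; names are placeholders):

pg_dump -Fc -h source_host source_db > source_db.dump
pg_restore -h target_host -d target_db --clean source_db.dump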




Re: [GENERAL] Date index not used when selecting a date range

2008-07-28 Thread Francisco Reyes
On 9:09 am 07/28/08 Poul Møller Hansen  [EMAIL PROTECTED] wrote:
 But when selecting a date range I get this
 explain SELECT * FROM public.stat WHERE node = '1010101010' AND
  (date <= '2008-06-30'::date AND date >= '2008-01-01'::date)

 Bitmap Heap Scan on stat  (cost=710.14..179319.44 rows=39174 width=146)
   Recheck Cond: ((node)::text = '1010101010'::text)
   Filter: ((date <= '2008-06-30'::date) AND (date >= '2008-01-01'::date))
   ->  Bitmap Index Scan on idx_stat_node_id  (cost=0.00..710.14 rows=55182 width=0)
         Index Cond: ((node)::text = '1010101010'::text)

You may want to do an explain analyze on the query. That would help others
help you.

Have you run analyze on the table?
How selective is the condition node = '1010101010' and the date range. In
particular, do you have an idea what percentange of the table fits into
that date range?

What about the type column? You have it in the index, but not in your
query. Have you tried adding type to the query? Will that make it more
selective?




Re: [GENERAL] Clone a database to other machine

2008-07-28 Thread Francisco Reyes
On 1:32 pm 07/28/08 Garg, Manjit [EMAIL PROTECTED] wrote:

 But, actually I want to keep both the Databse in Sync. I want clone db
 to get the data from Master in  certain intervals.

Sounds like you are looking for replication. Check
http://wiki.postgresql.org/wiki/Replication%2C_Clustering%2C_and_Connection_Pooling




Re: [GENERAL] I often have to do update if exist, else insert, is my database design wrong?

2008-07-25 Thread Francisco Reyes
 Is this normal or are there something else I could do so I don't have
 to check if it exists?

I would say that it is normal.


 For the developers: a combined  insert/update command would be nice
 :-)

MySQL has such a beast, along with some other non-SQL-compliant extensions.

One possible approach to what you are trying to do would be:
update existing table with new values
delete from new table all records that already exist
insert remaining new records

Which can be done all in pure sql. No need to write a program.
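
A sketch with made-up table/column names (target has primary key id, staging
holds the incoming rows):

begin;
update target t set val = s.val from staging s where t.id = s.id;
delete from staging s
 where exists (select 1 from target t where t.id = s.id);
insert into target (id, val) select id, val from staging;
commit;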




Re: [GENERAL] PostgreSQL vs FreeBSD 7.0 as regular user

2008-07-25 Thread Francisco Reyes
On 4:53 pm 07/25/08 Zoltan Boszormenyi [EMAIL PROTECTED] wrote:
 I don't know. How to determine? Running this as my own user:

Is this your own machine or at an ISP?
If it is your own machine, then most likely you are not in a jail. You
would know if you were since you would have had to do it.

If at an ISP, one way to know if you are in a jail, I think, is to try to
ping and traceroute.

I think by default you can't do one of those within a jail.
Also try ifconfig. A jail will show you a single IP. A real machine will
show you usually at least two. 127.0.0.1 and some other address.




[GENERAL] Any way to favor index scans, but not bitmap index scans?

2008-07-23 Thread Francisco Reyes
The data set I am working with has a very uneven distribution.

I had to set random_page_cost = 0.75 to get good plans.
However, that first tries bitmap scans, which perform very poorly.
Is there a way to have the planner favor index scans and disfavor bitmap
scans? Is my best choice to just disable bitmap scans?

So far in this data set almost every time bitmap scans are used the queries
do worse, much worse. I had one extreme case where a sequential scan would
finish in 20 minutes and the same query using bitmap scans would take over
a day to finish.

My most recent test (before reducing random_page_cost):
Sequential scan (default selected by planner): 12 minutes
Seq scan disabled (bitmap index scans): 29.98 minutes
Seq scan disabled and bitmap index disabled: 3 seconds!

After I reduced random_page_cost and turned bitmap index scans off, the planner
is picking the right plan using an index scan. Now I am just testing more cases
to make sure the improvement is consistent for most of our queries.
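
What I am testing with right now, at the session level (a sketch):

set random_page_cost = 0.75;
set enable_bitmapscan = off;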




Re: [GENERAL] Any way to favor index scans, but not bitmap index scans?

2008-07-23 Thread Francisco Reyes
On 12:40 pm 07/23/08 Tom Lane [EMAIL PROTECTED] wrote:
 That's fairly hard to believe.  Care to offer some details?


I will dig up that actual project and run explain analyze. I will likely not
have it till the middle of next week though, because of a monthly process
starting Friday.

However, I do have a current example where a bitmap index scan was 3 times
worse. In the extremely bad case the data set was 5 times larger than the
sample below (1+ billion vs 215 million).

Sequential scan: 12 minutes
Seq scan disabled - bitmap index scan: 29.98 minutes
Seq scan disabled and bitmap index disabled: 3 seconds!

I have to mask the names, so any discrepancy in names is likely just my mistake.

2 tables involved.
historical has 215 million rows.
join_ids has 2.5 million rows.

A join from join_ids to historical will only touch about 40% of the records
in historical.

The queries below only returned 0.2% (less than 1%) of records from the
historical table.



default_statistics_target = 1000
random_page_cost = 4.0

Default query before changing settings.
Aggregate  (cost=7656776.19..7656776.20 rows=1 width=12) (actual
time=719661.082..719661.085 rows=1 loops=1)
   -  Hash Join  (cost=9260.90..7653867.79 rows=387785 width=12) 
 (actual time=2249.423..719109.201 rows=194734 loops=1)
 Hash Cond: (historical.join_id = join_ids.join_id)
 -  Seq Scan on historical  
(cost=0.00..5825538.00 rows=207450405 width=16) (actual
time=7.966..410078.540 rows=207589225 loops=1)
   Filter: ((f5  0::numeric) AND (date  '2007-04-01'::date)
AND (date  '2008-05-01'::date))
 -  Hash  (cost=9198.15..9198.15 rows=5020 width=4) (actual
time=2210.953..2210.953 rows=4437 loops=1)
   -  Bitmap Heap Scan on join_ids join_ids 
(cost=163.00..9198.15 rows=5020 width=4)
   (actual time=247.903..2201.789 rows=4437 loops=1)
 Recheck Cond: (customer_id = ANY 
('{1014,2485,4636,4635,1255,547,374,580}'::integer[]))
 -  Bitmap Index Scan on join_ids_customer_id_join_id 
(cost=0.00..161.74 rows=5020 width=0)
 (actual time=241.111..241.111 rows=4437 loops=1)
   Index Cond: (customer_id = ANY
('{1014,2485,4636,4635,1255,547,374,580}'::integer[]))
Total runtime: 719816.542 ms -- 12 minutes


SET ENABLE_SEQSCAN TO OFF;
Aggregate  (cost=11867354.72..11867354.73 rows=1 width=12) (actual
time=1798829.579..1798829.581 rows=1 loops=1)
   -  Hash Join  (cost=4645436.35..11864446.33 rows=387785 width=12) 
 (actual time=1086218.285..1798250.004 rows=194734 loops=1)
 Hash Cond: (historical.join_id = join_ids.join_id)
 -  Bitmap Heap Scan on historical  
 (cost=4636175.45..10036116.53 rows=207450405 width=16) 
   (actual time=1086158.692..1487577.412 rows=207589225 loops=1)
   Recheck Cond: ((date  '2007-04-01'::date) AND (date 
'2008-05-01'::date))
   Filter: (f5  0::numeric)
   -  Bitmap Index Scan on historical_join_id_date  
  (cost=0.00..4584312.85 rows=210080576 width=0) 
   (actual time=1085395.070..1085395.070 rows=210233223
loops=1)
 Index Cond: ((date  '2007-04-01'::date) AND (date 
'2008-05-01'::date))
 -  Hash  (cost=9198.15..9198.15 rows=5020 width=4) (actual
time=18.712..18.712 rows=4437 loops=1)
   -  Bitmap Heap Scan on join_ids (cost=163.00..9198.15
rows=5020 width=4)
(actual time=1.541..11.654 rows=4437 loops=1)
 Recheck Cond: (customer_id = ANY 
('{1014,2485,4636,4635,1255,547,374,580}'::integer[]))
 -  Bitmap Index Scan on join_ids_customer_id_join_id  
 (cost=0.00..161.74 rows=5020 width=0) (actual
time=0.984..0.984 rows=4437 loops=1)
   Index Cond: (customer_id = ANY
('{1014,2485,4636,4635,1255,547,374,580}'::integer[]))
 Total runtime: 1798847.930 ms -- 29.98 minutes


SET ENABLE_SEQSCAN TO OFF;
SET ENABLE_BITMAPSCAN TO OFF;
Aggregate  (cost=25665216.10..25665216.11 rows=1 width=12) (actual
time=3088.894..3088.896 rows=1 loops=1)
   -  Nested Loop  (cost=0.00..25662307.70 rows=387785 width=12) 
 (actual time=0.264..2624.680 rows=194734 loops=1)
 -  Index Scan using join_ids_join_id on join_ids
 (cost=0.00..2867051.21 rows=5020 width=4) (actual
time=0.237..1236.019 rows=4437 loops=1)
   Filter: (customer_id = ANY 
('{1014,2485,4636,4635,1255,547,374,580}'::integer[]))
 -  Index Scan using historical_join_id_date on historical
 (cost=0.00..4522.43 rows=1477 width=16) (actual
time=0.010..0.153 rows=44 loops=4437)
   Index Cond: ((historical.join_id = join_ids.join_id) AND
(historical.date  '2007-04-01'::date)
   AND (historical.date  '2008-05-01'::date))
   Filter: (trans.f5  0::numeric)
Total runtime: 3091.227 ms -- 3 seconds

Re: [GENERAL] A couple of newbie questions ...

2008-07-23 Thread Francisco Reyes
On 12:00 pm 07/23/08 Shane Ambler [EMAIL PROTECTED] wrote:
   INSERT INTO table (fld_y,fld_z) VALUES ('y','z')

I believe that is the most common way of doing it.

   which is really doing:
   INSERT INTO table (fld_x, fld_y,fld_z) VALUES (DEFAULT, 'y','z')

Correct.
So either one should be fine.




Re: [GENERAL] Any way to favor index scans, but not bitmap index scans?

2008-07-23 Thread Francisco Reyes
On 2:23 pm 07/23/08 Scott Marlowe [EMAIL PROTECTED] wrote:
   However, I do have a current example where bitmap index scan was 3
   times worse. 

 What is your work_mem set to?

For the examples that I posted it is
work_mem = 64MB




Re: [GENERAL] Any way to favor index scans, but not bitmap index scans?

2008-07-23 Thread Francisco Reyes
On 3:37 pm 07/23/08 Tom Lane [EMAIL PROTECTED] wrote:
 Francisco Reyes [EMAIL PROTECTED] writes:
   SET ENABLE_SEQSCAN TO OFF;
   SET ENABLE_BITMAPSCAN TO OFF;
   Aggregate  (cost=25665216.10..25665216.11 rows=1 width=12) (actual
   time=3088.894..3088.896 rows=1 loops=1)
  -  Nested Loop  (cost=0.00..25662307.70 rows=387785 width=12)
(actual time=0.264..2624.680 rows=194734 loops=1)
-  Index Scan using join_ids_join_id on join_ids
(cost=0.00..2867051.21 rows=5020 width=4) (actual
   time=0.237..1236.019 rows=4437 loops=1)
  Filter: (customer_id = ANY ('{1014,2485,4636,4635,12
 55,547,374,580}'::integer[]))
-  Index Scan using historical_join_id_date on historical
(cost=0.00..4522.43 rows=1477 width=16) (actual
   time=0.010..0.153 rows=44 loops=4437)
  Index Cond: ((historical.join_id =
   join_ids.join_id) AND (historical.date  '2007-04-01'::date)
  AND (historical.date  '2008-05-01'::date))
  Filter: (trans.f5  0::numeric)
Total runtime: 3091.227 ms -- 3 seconds

 You might be more likely to get a sane plan if you had an index on
 join_ids.customer_id.

There is an index in join_ids:
joinids_customerids_joinid btree (customer_id, joinid) WITH (fillfactor=98)

Also, that plan is only 3 seconds. That is as good as that is going to get.
Or were you referring to whether the other plans would have been better?

 The first indexscan above is really a
 completely silly choice, and would never have been used if you
 weren't holding a gun to the planner's head.

I have much to learn about how to properly read an explain analyze, but as
silly as that plan may look it outperforms the other plans by orders of
magnitude. 3 seconds vs 12 minutes is a very big difference. It was so fast
that I even compared the results (which happens to be a single row) to make
sure I was getting the correct value.

The index isn't contributing any selectivity at all.

Which index scan? Explain analyze output is read bottom up, right?
If it is this one you are referring to:

-  Index Scan using historical_join_id_date on historical
(cost=0.00..4522.43 rows=1477 width=16) (actual
   time=0.010..0.153 rows=44 loops=4437)
  Index Cond: ((historical.join_id =
   join_ids.join_id) AND (historical.date  '2007-04-01'::date)
  AND (historical.date  '2008-05-01'::date))
  Filter: (trans.f5  0::numeric)

I believe that is the reason performance is good with that plan.
The number of rows that need to be returned from historical is less than 1%.

 The other part of the problem is the factor-of-thirty overestimate of
 the number of rows that the inner indexscan will produce (which means
 also a factor-of-thirty overestimate of its cost).  Perhaps higher
 statistics targets for these two relations would give you a better
 estimate there.

Is it possible to go over default_statistics_target = 1000?
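
(Related: rather than raising it globally I could also try bumping just the
join columns, e.g. a sketch:

alter table join_ids alter column customer_id set statistics 1000;
analyze join_ids;
)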


 since there's no additional join condition.  What PG version are you
 running exactly?

8.3.3

I have only been at this job for 3 months and I can say that neither the
data nor the previous design I am trying to replace plays nice with
PostgreSQL. I can't get into specifics, but I can say that our historical
tables have about 60% data that is not used in most queries. I think that
is partly what throws off the planner so much. My first clue was when I saw
the planner trying to do sequential scans to retrieve less than 1% of rows.
It didn't make sense.

I tried several schemes with partitioning and that was even worse.

I am going to convert the table structure names to the mapping names I
used in this thread. Perhaps that may be informative.




Re: [GENERAL] Any way to favor index scans, but not bitmap index scans?

2008-07-23 Thread Francisco Reyes
Table layouts:
historical
  Column   | Type |Modifiers
---+--+--
 record_id | integer  | not null default 
nextval('historical_record_id_seq'::regclass)
 f3| integer  | not null
 date  | date | not null
 f4| smallint |
 f5| numeric(9,2) | not null
 join_id   | integer  | not null
Indexes:
historical_f3 btree (f3) WITH (fillfactor=98), tablespace st2
historical_join_id_date btree (join_id, date) WITH (fillfactor=98),
tablespace st2
historical_record_id btree (record_id) WITH (fillfactor=98),
tablespace st2


join_ids
 Column |  Type   | Modifiers
+-+---
 join_id| integer | not null
customer_id | integer | not null
Indexes:
join_ids_pkey PRIMARY KEY, btree (join_id)
join_ids_customerid_joinid btree (customer_id, join_id) WITH
(fillfactor=98)

Historical has 215 million rows.
join_ids has 2.5 million rows.

The number of rows in historical that have a matching record in join_ids
(by join_id) is roughly 86 million.

The select from the previous 3 explain analyzes needed to return about 0.22%
(that is, 1/5 of 1%, not 22%) of the rows from the historical table.




Re: [GENERAL] Any way to favor index scans, but not bitmap index scans?

2008-07-23 Thread Francisco Reyes
On 4:12 pm 07/23/08 Francisco Reyes [EMAIL PROTECTED] wrote:
 Also, that plan is only 3 seconds.

Minor update.
A co-worker was using another DB, and re-running my query after he did his
work, the query using the index scans now takes 2 minutes instead of 3
seconds. The 3 seconds was likely due to cached data.

To re-list the times:
Sequential scan: 12 minutes
Bitmap scans: 30 minutes
Index scan with no bitmap: 2 minutes

It is worth pointing out that the bitmap test was run AFTER the sequential
scan test, right after it, so it should have benefited from OS caching.
The join_ids table fits completely in memory.

select pg_size_pretty(pg_total_relation_size('join_ids'));
 pg_size_pretty

 291 MB
(1 row)

par4mo=# select pg_size_pretty(pg_relation_size('join_ids'));
 pg_size_pretty

 94 MB
(1 row)




Re: [GENERAL] Substitute a variable in PL/PGSQL.

2008-07-22 Thread Francisco Reyes
On 12:33 am 07/22/08 Steve Martin [EMAIL PROTECTED] wrote:
 Hi,

 I am trying to create a PL/PGSQL function to return the values of the
 fields in a record, e.g. 1 value per row in the output of the
 function.

Are you trying to do a generic function that would work for any table or
for just a single table?

Is it goint to run against a large data set?




Re: [GENERAL] COPY between 7.4.x and 8.3.x

2008-07-22 Thread Francisco Reyes
On 6:01 pm 07/21/08 Jack Orenstein [EMAIL PROTECTED] wrote:
 to this:
 psql -h $SOURCE_HOST ... -c copy binary $SOURCE_SCHEMA.$SOURCE_TABLE to stdout |\
  psql ... -c copy binary $TARGET_SCHEMA.$TARGET_TABLE from stdin

http://www.postgresql.org/docs/8.3/interactive/sql-copy.html

The BINARY key word causes all data to be stored/read as binary format
rather than as text. It is somewhat faster than the normal text mode, but a
binary-format file is less portable across machine architectures and
PostgreSQL versions.

I would suggest not going that route.
However, you could just test it and see if it works.
If you are doing multiple tables I still think you should consider pg_dump
-Fc. You can restore just the data without the DDL.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to remove duplicate lines but save one of the lines?

2008-07-21 Thread Francisco Reyes
On 11:33 am 07/21/08 A B [EMAIL PROTECTED] wrote:
 and I want it to be
 A 1
 B 3
 C 44

The slow way
select distinct field1, field2 from sometable.

The faster way
select field1,fields2 from sometable group by field1, field2.

Distinct should in theory be the same speed, but in several tests I have
done, group by was faster. I posted a message to the list and there were
some explanations of why group by was faster. Hopefully someday they will
perform equally efficiently.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] COPY between 7.4.x and 8.3.x

2008-07-21 Thread Francisco Reyes
On 4:05 pm 07/21/08 Jack Orenstein [EMAIL PROTECTED] wrote:
 We will now be adding 8.3.x databases to the mix, and will need to
 copy between 7.4.x and 8.3.x in both directions. The datatypes we use

I believe it should work.
Also, one feature I believe started in the 8.x line (8.2?) is the ability
to have a subquery in the COPY command to refine which rows you are getting.
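
A minimal sketch of that form (the table and column names here are made up;
COPY with a query works for output only):

copy (select record_id, amount from some_table where date >= '2008-01-01') to stdout;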

 What if we do a binary copy instead? 
What do you mean by a binary copy?
pg_dump -Fc?

I think a plain pg_dump and copy will likely be more portable. Specially
going from 8.3 to 7.4.

Why will you keep copying data back and forth?
Is it not possible to set up a new 8.3, migrate to it, and then upgrade the
other 7.4 machine to 8.3?


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Reducing memory usage of insert into select operations? [Solved]

2008-07-19 Thread Francisco Reyes

Martijn van Oosterhout wrote:

Can you make them not deferred?

How?


I found the issue.
I had the foreign key in the master table instead of the children.
Deleted RI from the master table and put it into the inherited partitions.
My whole 230 million rows merged in about an hour!
And I even had two of those running at the same time. (one setup with 14 
partitions per month and another with 5 partitions per month to test 
difference in performance).


It was so fast I even had to do a count(*) to make sure both actually 
merged.

That is 117K rows per second for rows that were about 33 bytes long.
That only comes down to about 3 MB/sec+overhead, but still 117K rows/sec 
is not too shabby.


In case it is of interest to anyone..
2 AMD dual core, 2GHz CPUs
12GB of RAM
shared_buffers 3GB
work_mem 64MB
checkpoint_segments 256
checkpoint_timeout 10 min
LSI controller with 128MB cache with BBU. Write cache enabled.


Many thanks to all that offered suggestions in the troubleshooting.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Reducing memory usage of insert into select operations? [Solved]

2008-07-19 Thread Francisco Reyes

Alvaro Herrera writes:


Heh -- but are the FKs now checked?  Try inserting something that
violates the constraints and see if they are rejected.


I knew it sounded too good to be true.
1- The trigger was not set in the master (ie nothing went to the children).
2- The master had no index and no RI.. so it was a straight insert.

I corrected (ie set the trigger in the master and RI in the children). Has 
been running for 10 hours and has not finished.


The good news is that memory doesn't seem to be going up.
I will give it till tomorrow AM, and if it hasn't finished I will turn off the 
foreign keys in the children. I have already modified the scripts so I can easily 
build/drop the foreign keys as needed.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Reducing memory usage of insert into select operations?

2008-07-18 Thread Francisco Reyes
On 9:53 am 07/18/08 Douglas McNaught [EMAIL PROTECTED] wrote:
 dedicated database server you really don't ever want to have the OOM
 killer triggered.

Found that yesterday (vm.overcommit_memory=2).
Agree that this is better than OOM. I still ran out of memory last night
and postgres just failed on the malloc(), which as you mentioned is better.

Reduced work_mem to 8MB and trying again.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Reducing memory usage of insert into select operations?

2008-07-18 Thread Francisco Reyes
On 8:13 am 07/18/08 Richard Huxton [EMAIL PROTECTED] wrote:
 Is the partition split done with triggers or rules?

I have a single trigger+function combo that dynamically computes which
partition the data has to go to.
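
For the archives, a minimal sketch of that kind of routing trigger (not the
actual function; the table and column names are invented, and it assumes
child tables such as measurements_200807 already exist):

CREATE OR REPLACE FUNCTION route_measurement_insert() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- build the child table name from the row's date and insert there
    EXECUTE 'INSERT INTO measurements_' || to_char(NEW.logdate, 'YYYYMM')
         || ' (logdate, reading) VALUES ('
         || quote_literal(NEW.logdate::text) || ', '
         || quote_literal(NEW.reading::text) || ')';
    RETURN NULL;   -- returning NULL keeps the row out of the parent table
END
$$;

CREATE TRIGGER measurements_insert
    BEFORE INSERT ON measurements
    FOR EACH ROW EXECUTE PROCEDURE route_measurement_insert();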


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Reducing memory usage of insert into select operations?

2008-07-18 Thread Francisco Reyes
On 11:25 am 07/18/08 Richard Huxton [EMAIL PROTECTED] wrote:
 I'm wondering whether it's memory usage either for the trigger
 itself, or for the function (pl/pgsql?).

Good point.

 If you're doing something
 like:INSERT INTO partitioned_table SELECT * FROM big_table
 then that's not only taking place within a single transaction, but
 within a single statement.

Correct.
I have kept decreasing work_mem and that does not seem to help.

 Without being a hacker, I'd say it's entirely plausible that PG might
 clean up triggers at the end of a statement meaning you would need
 memory for 200million+ triggers.

Sure hope that is not the case.

 Alternatively, it could be a memory-leak somewhere in the pl/pgsql or
 trigger code. Wouldn't have to be much to affect this particular case.

Will post an strace.

 What happens if you do the insert/select in stages but all in one
 transaction?

Will test.
The data is about a year worth of data. I will try  to do one month at a
time, within a single transaction.

A single month finishes fine.

 Do you see PG's memory requirement stay constant or grow
 in steps. That will show whether the memory is growing over the
 duration of a statement or a transaction.

Right now for the single statement/transaction (the one big process) it is
growing slowly over time. It may be a leak. It seems to start growing
somewhere between the 1st and 2nd hour. It always seems to fail at around
4 hours.

I wrote a little process that shows the amount of free memory every 15
minutes..

I will post strace for the big process and then will try breaking the
process down by month, but within a single transaction and report that
later when I get some results.
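
For reference, a sketch of the month-at-a-time plan, reusing the names from
the query posted elsewhere in this thread:

BEGIN;
INSERT INTO customer_transactions (record_id, date, type, amount, ids, groupid)
SELECT ca.record_id, coh.date, coh.type, coh.amount, coh.ids, ids.groupid
FROM customer_original_historical coh
JOIN cards ca         ON ca.natural_key = coh.natural_key
JOIN customer_ids ids ON ids.ids = coh.ids
WHERE coh.yearmo = '200701';
-- repeat the INSERT for '200702', '200703', ... then
COMMIT;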


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Reducing memory usage of insert into select operations?

2008-07-18 Thread Francisco Reyes
On 11:25 am 07/18/08 Richard Huxton [EMAIL PROTECTED] wrote:

Strace of the single/large process.
Again, all the query is doing is
insert into file select subquery

The strace is pretty much a repetition of the lines below.

semop(557057, 0x7fbfffdfb0, 1)  = 0
lseek(100, 0, SEEK_END) = 671719424
write(100, 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0..., 8192) = 
8192
lseek(508, 0, SEEK_END) = 55697408
write(508, 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0..., 8192) = 
8192
read(381, 
\0\0\0\0\224\21\0\225o\10\0\30\331c\0c\225%w(\0\0\0\0\0\0\0\0\0\5\0..., 
8192) = 8192
semop(557057, 0x7fbfffd1a0, 1)  = 0
lseek(100, 0, SEEK_END) = 671727616
write(100, 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0..., 8192) = 
8192
semop(557057, 0x7fbfffd1c0, 1)  = 0
semop(557057, 0x7fbfffd1a0, 1)  = 0
semop(557057, 0x7fbfffd1c0, 1)  = 0
read(381, 
w\317\21\0]9\0\177\246eA(\0\0\0\0\0\0\0\0\0\5\0\2\0\30\0.\v\0\0..., 8192) = 
8192
semop(557057, 0x7fbfffd1a0, 1)  = 0
lseek(512, 0, SEEK_END) = 48144384
write(512, 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0..., 8192) = 
8192
semop(557057, 0x7fbfffd1c0, 1)  = 0
lseek(100, 0, SEEK_END) = 671735808
write(100, 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0..., 8192) = 
8192
lseek(517, 0, SEEK_END) = 89309184
write(517, 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0..., 8192) = 
8192
semop(557057, 0x7fbfffd1c0, 1)  = 0
semop(557057, 0x7fbfffddd0, 1)  = 0
lseek(100, 0, SEEK_END) = 671744000
write(100, 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0..., 8192) = 
8192
read(381, 
\212\225\202(\0\0\0\0\0\0\0\0\0\5\0\2\0\30\\v\0\0\1\0\23\2\0\0\0\t..., 
8192) = 8192
lseek(510, 0, SEEK_END) = 29351936
write(510, 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0..., 8192) = 
8192
lseek(100, 0, SEEK_END) = 671752192
write(100, 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0..., 8192) = 
8192
semop(557057, 0x7fbfffddf0, 1)  = 0
read(381, 
\0\0\0\0\0\0\5\0\2\0\30\0001\v\0\0\0\0\23\2\0\0\0\30\0\4\20\0\302\326\0\0..., 
8192) = 8192
lseek(513, 0, SEEK_END) = 19316736
write(513, 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0..., 8192) = 
8192
lseek(100, 0, SEEK_END) = 671760384
write(100, 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0..., 8192) = 
8192
read(381, [EMAIL PROTECTED]..., 8192) = 8192
lseek(100, 0, SEEK_END) = 671768576
write(100, 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0..., 8192) = 
8192
lseek(518, 0, SEEK_END) = 55025664
write(518, 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0..., 8192) = 
8192
semop(557057, 0x7fbfffd1c0, 1)  = 0
semop(557057, 0x7fbfffd1c0, 1)  = 0
semop(557057, 0x7fbfffd1c0, 1)  = 0
lseek(100, 0, SEEK_END) = 671776768


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Reducing memory usage of insert into select operations?

2008-07-18 Thread Francisco Reyes
On 12:03 pm 07/18/08 Martijn van Oosterhout [EMAIL PROTECTED] wrote:

 Perhaps you can try reducing the shared_buffers, to see if that helps
 more?

Will try.

 8MB is quite small for workmem. More shared_buffers is not
 necessarily better.

Ok, but from everything I had read, shared_buffers of 1/4 of RAM seemed like a
starting point. Will try reducing it to 2GB.

 Also, how much swap are you running?

Started out with 12GB (same as memory) and last night I added 24GB more.
I had 2 instances of inserts going so each exausted about 18GB of ram!


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Reducing memory usage of insert into select operations?

2008-07-18 Thread Francisco Reyes
On 12:23 pm 07/18/08 Alvaro Herrera [EMAIL PROTECTED] wrote:
 Do you have long-running transactions?  (For example transactions that
 have been idle for a long time).

No.
The two inserts I was running were the only processes. I even did a restart
to make sure there was absolutely nothing else running and to make sure all
my postgresql.conf settings were in.

Given that memory grows over time I am beginning to wonder if it is some
type of memory leak.

Just installed the postgresql debug RPM, but not sure if it did anything;
strace doesn't look any different.

 read(81, 2\1\0\0\260~!\16\1\0\0\0\370\1\0\2\0 \4 
\0\0\0\0\300\237r\0\200\237r\0..., 8192) = 8192
write(191, 
Q=J\313\253]1\0\0\0\1\0007\33\4\0\2\0\2\t\30\0\3\302\204\0;a1OjG..., 8192) = 
8192
write(160, XQxbqQEx+yo=H\333o\2371\0\0\0\1\0.\33C\0\2\0\2\t\30\0...,
8192) = 8192
read(81, 2\1\0\0\320(\301\17\1\0\0\0\370\1\0\2\0 \4 
\0\0\0\0\300\237r\0\200\237r\0..., 8192) = 8192


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Reducing memory usage of insert into select operations?

2008-07-18 Thread Francisco Reyes
On 1:00 pm 07/18/08 Tom Lane [EMAIL PROTECTED] wrote:
 Are there any AFTER triggers (including foreign key constraints)

I have two foreign key constraints.

 the table being inserted into?  If so the list of pending trigger
 events might be your problem.

I guess I can try disabling the foreign key, but that would be less than
ideal for production. This is an analytics environment so all operations
are in bulk.

 If you can get Postgres to report an actual out-of-memory error (as
 opposed to crashing from OOM kill)

Disabled oom with vm.overcommit_memory=2.

then it should dump a memory usage
map into the postmaster log.  Looking at that would be informative.

Got it.
--
AfterTriggerEvents: 10553909248 total in 1268 blocks; 20432 free (6
chunks); 1055316 used
ExecutorState: 122880 total in 4 blocks; 68040 free (8 chunks); 54840
used
  Operator lookup cache: 24576 total in 2 blocks; 11888 free (5 chunks);
12688 used
  Operator class cache: 8192 total in 1 blocks; 1680 free (0 chunks); 6512
used
  MessageContext: 131072 total in 5 blocks; 50712 free (291 chunks); 80360
used
  smgr relation table: 24576 total in 2 blocks; 3584 free (4 chunks); 20992
used
  TransactionAbortContext: 32768 total in 1 blocks; 32736 free (0 chunks);
32 used
  Portal hash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
  PortalMemory: 8192 total in 1 blocks; 7888 free (0 chunks); 304 used
PortalHeapMemory: 1024 total in 1 blocks; 768 free (0 chunks); 256 used
  ExecutorState: 98784 total in 8 blocks; 24064 free (22 chunks); 74720
used
ExprContext: 8192 total in 1 blocks; 8016 free (0 chunks); 176 used
HashTableContext: 8192 total in 1 blocks; 8064 free (1 chunks); 128
used
  HashBatchContext: 532676656 total in 74 blocks; 1863936 free (5
chunks); 530812720 used
HashTableContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
  HashBatchContext: 415227952 total in 59 blocks; 6589744 free (5
chunks); 408638208 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 8192 total in 1 blocks; 8136 free (0 chunks); 56 used
Relcache by OID: 24576 total in 2 blocks; 8672 free (3 chunks); 15904 used
  CacheMemoryContext: 2390256 total in 22 blocks; 751904 free (2 chunks);
1638352 used
CachedPlan: 1024 total in 1 blocks; 336 free (0 chunks); 688 used
CachedPlanSource: 1024 total in 1 blocks; 80 free (0 chunks); 944 used
SPI Plan: 1024 total in 1 blocks; 808 free (0 chunks); 216 used
CachedPlan: 7168 total in 3 blocks; 3120 free (0 chunks); 4048 used
CachedPlanSource: 7168 total in 3 blocks; 1816 free (0 chunks); 5352 used
SPI Plan: 1024 total in 1 blocks; 784 free (0 chunks); 240 used
CachedPlan: 3072 total in 2 blocks; 792 free (0 chunks); 2280 used
CachedPlanSource: 7168 total in 3 blocks; 3600 free (0 chunks); 3568 used
SPI Plan: 1024 total in 1 blocks; 800 free (0 chunks); 224 used
pg_cast_source_target_index: 2048 total in 1 blocks; 608 free (0
chunks); 1440 used
pg_language_oid_index: 2048 total in 1 blocks; 704 free (0 chunks);
1344 used
pg_toast_2619_index: 2048 total in 1 blocks; 608 free (0 chunks); 1440
used
pg_amop_opr_fam_index: 2048 total in 1 blocks; 608 free (0 chunks);
1440 used
tcf_mnfoids_partid: 2048 total in 1 blocks; 752 free (0 chunks); 1296
used
tcf_mnfoids_pkey: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
cards_cardnum_key: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
cards_pkey: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used
tcf_original_trans_partid_cardnum: 2048 total in 1 blocks; 656 free (0
chunks); 1392 used
tcf_original_trans_yearmo: 2048 total in 1 blocks; 752 free (0 chunks);
1296 used
pg_constraint_contypid_index: 2048 total in 1 blocks; 704 free (0
chunks); 1344 used
pg_constraint_conname_nsp_index: 2048 total in 1 blocks; 608 free (0
chunks); 1440 used
pg_operator_oprname_l_r_n_index: 2048 total in 1 blocks; 392 free (0
chunks); 1656 used
pg_proc_proname_args_nsp_index: 2048 total in 1 blocks; 584 free (0
chunks); 1464 used
pg_proc_oid_index: 2048 total in 1 blocks; 704 free (0 chunks); 1344 used
pg_shdepend_reference_index: 2048 total in 1 blocks; 608 free (0
chunks); 1440 used
pg_namespace_oid_index: 2048 total in 1 blocks; 704 free (0 chunks);
1344 used
pg_statistic_relid_att_index: 2048 total in 1 blocks; 608 free (0
chunks); 1440 used
pg_inherits_relid_seqno_index: 2048 total in 1 blocks; 608 free (0
chunks); 1440 used
pg_constraint_oid_index: 2048 total in 1 blocks; 704 free (0 chunks);
1344 used

Re: [GENERAL] Reducing memory usage of insert into select operations?

2008-07-18 Thread Francisco Reyes
On 3:55 pm 07/18/08 Tom Lane [EMAIL PROTECTED] wrote:
   AfterTriggerEvents: 10553909248 total in 1268 blocks; 20432 free (6
   chunks); 1055316 used

 Well, that's definitely your problem ...

So I need to remove the foreign constraints?

 HashBatchContext: 415227952 total in 59 blocks; 6589744
   free (5 chunks); 408638208 used

 although these numbers seem way outta line too.  What did you say you
 had work_mem set to?

Initially, on the first crash, it was 256MB. I believe that at the time of the
crash for which I got the dump it was down to 64MB or 8MB; I kept trying lower
values. I also tried reducing shared_buffers, as someone suggested.

I will bump my shared_buffers back to 3GB and work_mem back to 64MB.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Reducing memory usage of insert into select operations?

2008-07-18 Thread Francisco Reyes
On 3:55 pm 07/18/08 Tom Lane [EMAIL PROTECTED] wrote:
   AfterTriggerEvents: 10553909248 total in 1268 blocks; 20432 free (6
   chunks); 1055316 used

 Well, that's definitely your problem ...

What is the overhead for each AfterTriggerEvent?

I guess I can write a program to process so many rows at a time, if I know
how much overhead each AfterTriggerEvent uses. I know 15 million at a time
worked fine, so I could do 5 or 10 million at a time.

When does the memory for those AfterTriggerEvents get released? At
commit?


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Reducing memory usage of insert into select operations?

2008-07-18 Thread Francisco Reyes
On 4:55 pm 07/18/08 Tom Lane [EMAIL PROTECTED] wrote:
 The only thing I can think of is that you had a huge number of rows
 with all the same hash value, so that there wasn't any way to split
 the batch into smaller sections.  What are the join keys exactly in
 this query, and what can you tell us about their data distributions?

I can't share the actual table or column names, so I am posting the select
and explain with all names changed.

insert into customer_transactions
 (record_id, date, type, amount, ids, groupid)
select
  ca.record_id, coh.date, coh.type, coh.amount, coh.ids, ids.groupid
from
customer_original_historical coh,
cards ca,
customer_ids ids
where
ca.natural_key = coh.natural_key
and ids.ids = coh.ids
and coh.yearmo  '200703';

Hash Join  (cost=712213.57..27293913.33 rows=234402352 width=24)
   Hash Cond: (coh.id = ids.id)
   ->  Hash Join  (cost=551387.26..18799378.16 rows=234402352 width=22)
         Hash Cond: (coh.user_id = ca.user_id)
         ->  Seq Scan on customer_original_historical coh  (cost=0.00..6702501.40 rows=234402352 width=47)
               Filter: (yearmo  '200703'::bpchar)
         ->  Hash  (cost=268355.67..268355.67 rows=14637567 width=32)
               ->  Seq Scan on cards ca  (cost=0.00..268355.67 rows=14637567 width=32)
   ->  Hash  (cost=77883.25..77883.25 rows=5055525 width=6)
         ->  Seq Scan on customer_ids ids  (cost=0.00..77883.25 rows=5055525 width=6)

There was a single table, customer_original_historical, which was using a
natural key with a text field.

Most queries used the customer_original_historical by itself or joined
against a single other table which we should call area.

The new schema I am testing is to split the one single file into 12 files
per month.

In addition I replaced the natural keys with a synthetic integer key.
I also replaced the area table with a customer_ids table which only has
two columns: synthetic key for historical and a region.

In order to have 12 tables per month I grouped all the regions into 12
groups. Queries are usually within a single region so what I am trying to
benchmark is if dividing 24 months of data into 24 sets of 12 regions will
perform better than a single large file.

The distribution of the joins is: there are about 1,000,000 unique natural
keys. Each natural key has on average 15 rows per month.
ids are regions where the natural_keys are; figure tens of thousands of
natural_keys to an id.

Is that along the lines of what you were looking for?


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Backup/Restore of single table in multi TB database

2008-07-18 Thread Francisco Reyes

Simon Riggs wrote:

Have a look at pg_snapclone. It's specifically designed to significantly
improve dump times for very large objects.

http://pgfoundry.org/projects/snapclone/
  
Also, in case the original poster is not aware, pg_dump can back up 
single tables.

Just add -t tablename.



Does pg_snapclone work mostly on large rows, or will it also be faster 
than pg_dump for narrow tables?


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Reducing memory usage of insert into select operations?

2008-07-17 Thread Francisco Reyes

Redhat 4
postgresql 8.3.3
Memory: 12GB

While doing a couple of operations of the type
insert into table select from some other table

The OS triggered the out of memory killer (oom-killer).

After some research and trial/error I found it was the inserts.
I see one of the inserts is using up 12GB!

How can I reduce the usage?
Postgresql.conf settings.
shared_buffers = 3GB
temp_buffers = 64MB # min 800kB

work_mem = 256MB# min 64kB
maintenance_work_mem = 1GB
  


Would reducing work_mem help?

The table I am selecting from has a few hundred million rows.
The table I am inserting into has partitions. I am benchmarking breaking up 
a large table into smaller partitions.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Reducing memory usage of insert into select operations?

2008-07-17 Thread Francisco Reyes

Douglas McNaught writes:



It does seem that reducing work_mem might help you, but others on this


I reduced it from 256MB to 64MB. It seems it is helping.
At 256MB the usage per DB connection instance was upwards of 12GB. At 64MB 
it is around 7GB so far. I just reduced it further to 32MB to see how that 
works.




It would probably be helpful for you to post the EXPLAIN output from

Hash Join  (cost=712213.57..27293913.33 rows=234402352 width=24)
  Hash Cond: (coh.id = ids.id)
  ->  Hash Join  (cost=551387.26..18799378.16 rows=234402352 width=22)
        Hash Cond: (coh.user_id = ca.user_id)
        ->  Seq Scan on customer_original_historical coh  (cost=0.00..6702501.40 rows=234402352 width=47)
              Filter: (yearmo  '200703'::bpchar)
        ->  Hash  (cost=268355.67..268355.67 rows=14637567 width=32)
              ->  Seq Scan on cards ca  (cost=0.00..268355.67 rows=14637567 width=32)
  ->  Hash  (cost=77883.25..77883.25 rows=5055525 width=6)
        ->  Seq Scan on customer_ids ids  (cost=0.00..77883.25 rows=5055525 width=6)


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Reducing memory usage of insert into select operations?

2008-07-17 Thread Francisco Reyes

Douglas McNaught writes:


Is this a 32-bit installation or 64-bit?  3GB of shared_buffers is way
too big for a 32-bit setup.



64-bit.
The machine has 12GB of RAM, so shared_buffers is about 1/4 of RAM.
Dedicated DB server.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Default fill factor for tables?

2008-07-11 Thread Francisco Reyes
While reading http://www.postgresql.org/docs/8.3/interactive/sql-cluster.html I 
discovered that tables can have a fill factor. However, the create table page 
doesn't mention what the default is.

In the create index page it mentions that the default for indexes is 90%. Are
tables similar/the same?
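
For what it's worth, the default for tables is 100 (fully packed). A small
sketch of setting it, with a made-up table name; it is a per-table storage
parameter:

CREATE TABLE fill_demo (id integer, val numeric) WITH (fillfactor = 90);
ALTER TABLE fill_demo SET (fillfactor = 95);   -- only affects pages written afterwards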


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Benchmarking best practices?

2008-06-05 Thread Francisco Reyes
At work I am creating a standard postgresql benchmark suite based on the
queries and operations that we commonly do.

A couple of questions
+ Should I shutdown/restart the DB between runs?

+ How much bigger than memory should my tables be to have a good benchmark?
One issue to keep in mind is that the benchmark DB will be only a subset of
the real DBs to make it easier to copy to multiple machines. Once we show
improvements in the benchmark subset after hardware/configuration/DB
redesign then we would validate against the full sized DBs in the different
machines.

The goals are to benchmark different settings and machines to work on
improving performance by changing the DB structures (ie index changes, DB
re-design) and by buying/upgrading hardware.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Bizgres dead?

2008-06-05 Thread Francisco Reyes
Don't see any activity in the project since 2006. Is that project dead?


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Psql crashes with Segmentation fault on copy from

2008-05-28 Thread Francisco Reyes
On 6:28 pm 05/27/08 Tom Lane [EMAIL PROTECTED] wrote:
 Can you get us a stack trace from the crash?

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 182894175648 (LWP 4487)]
0x003cc31723e6 in memcpy () from /lib64/tls/libc.so.6
(gdb) bt
#0  0x003cc31723e6 in memcpy () from /lib64/tls/libc.so.6
#1  0x00364bf0e0ae in PQunescapeBytea () from /usr/lib64/libpq.so.5
#2  0x00364bf0e230 in PQunescapeBytea () from /usr/lib64/libpq.so.5
#3  0x00364bf0c09e in PQsendQuery () from /usr/lib64/libpq.so.5
#4  0x00364bf0c788 in PQexec () from /usr/lib64/libpq.so.5
#5  0x00406e95 in ?? ()
#6  0x00409dfa in ?? ()
#7  0x0040408d in ?? ()
#8  0x004057cd in ?? ()
#9  0x00406286 in ?? ()
#10 0x00409f66 in ?? ()
#11 0x0040408d in ?? ()
#12 0x004057cd in ?? ()
#13 0x00406286 in ?? ()
#14 0x00409f66 in ?? ()
#15 0x0040c687 in ?? ()
#16 0x003cc311c3fb in __libc_start_main () from /lib64/tls/libc.so.6
#17 0x00403d2a in ?? ()
#18 0x007fb558 in ?? ()
#19 0x001c in ?? ()
#20 0x0001 in ?? ()
#21 0x007fb795 in ?? ()
#22 0x in ?? ()

Is that what you need?
I installed
compat-postgresql-libs-debuginfo-3-2PGDG.rhel4.x86_64.rpm
postgresql-debuginfo-8.2.7-1PGDG.rhel4.x86_64.rpm

What is compat-postgresql-libs-debuginfo?
Neither of those two RPMs is described in the RPM install PDF by Devrim
and Lamar.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Psql crashes with Segmentation fault on copy from

2008-05-28 Thread Francisco Reyes
On 11:09 am 05/28/08 Tom Lane [EMAIL PROTECTED] wrote:
   I installed
   compat-postgresql-libs-debuginfo-3-2PGDG.rhel4.x86_64.rpm
   postgresql-debuginfo-8.2.7-1PGDG.rhel4.x86_64.rpm

 Do those *exactly* match the versions of the Postgres RPMs you're
 using?

I got them from the same directory as the rest of the 8.2.7 RPMs I
downloaded.

I am going to try uninstalling the RPMs and using the Source RPMS and
report back. There is no production data yet since I am working on testing
the machine and creating some benchmarks so I can redo the entire setup as
needed.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Psql crashes with Segmentation fault on copy from

2008-05-28 Thread Francisco Reyes
On 11:09 am 05/28/08 Tom Lane [EMAIL PROTECTED] wrote:

Re-send. Didn't do reply-all before.

 Well, it would be if it were right; but PQsendQuery doesn't call
 PQunescapeBytea, so there's something wrong with the debug info.
..
   compat-postgresql-libs-debuginfo-3-2PGDG.rhel4.x86_64.rpm

That was the wrong lib.
Installed 
postgresql-debuginfo-8.2.7-1PGDG.rhel4.x86_64.rpm and got
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 182895355296 (LWP 5008)]
0x003cc31723e6 in memcpy () from /lib64/tls/libc.so.6
(gdb) bt
#0  0x003cc31723e6 in memcpy () from /lib64/tls/libc.so.6
#1  0x002a955820ae in pqPutMsgBytes (buf=0x2a9860a010, len=2147483647,
conn=0x560a80) at fe-misc.c:475
#2  0x002a95582230 in pqPuts (
s=0x2a9860a010 
yjbjK8WKLRHKavptOqlPD4QeI24=\t200803\t2008-03-11\t0\t10.47\t08620667742MDF1\t\tMCDONALD'S
 
F7742\t5814\t1003\t1187954\nyjbjK8WKLRHKavptOqlPD4QeI24=\t200803\t2008-03-11\t0\t16.99\t00614111869AWG1\t\tPIGGLY
 WIGGLY#52\t5411...,
conn=0x560a80) at fe-misc.c:151
#3  0x002a9558009e in PQsendQuery (conn=0x560a80,
query=0x2a9860a010 
yjbjK8WKLRHKavptOqlPD4QeI24=\t200803\t2008-03-11\t0\t10.47\t08620667742MDF1\t\tMCDONALD'S
 
F7742\t5814\t1003\t1187954\nyjbjK8WKLRHKavptOqlPD4QeI24=\t200803\t2008-03-11\t0\t16.99\t00614111869AWG1\t\tPIGGLY
 WIGGLY#52\t5411...)
at fe-exec.c:687
#4  0x002a95580788 in PQexec (conn=0x560a80,
query=0x2a9860a010 
yjbjK8WKLRHKavptOqlPD4QeI24=\t200803\t2008-03-11\t0\t10.47\t08620667742MDF1\t\tMCDONALD'S
 
F7742\t5814\t1003\t1187954\nyjbjK8WKLRHKavptOqlPD4QeI24=\t200803\t2008-03-11\t0\t16.99\t00614111869AWG1\t\tPIGGLY
 WIGGLY#52\t5411...)
at fe-exec.c:1288
#5  0x00406e95 in SendQuery (
query=0x2a9860a010 
yjbjK8WKLRHKavptOqlPD4QeI24=\t200803\t2008-03-11\t0\t10.47\t08620667742MDF1\t\tMCDONALD'S
 
F7742\t5814\t1003\t1187954\nyjbjK8WKLRHKavptOqlPD4QeI24=\t200803\t2008-03-11\t0\t16.99\t00614111869AWG1\t\tPIGGLY
 WIGGLY#52\t5411...)
at common.c:878
#6  0x00409dfa in MainLoop (source=0x5414f0) at mainloop.c:340
#7  0x0040408d in process_file (filename=0x541d70
data/usb_t_60M.sql, single_txn=0 '\0') at command.c:1394
#8  0x004057cd in exec_command (cmd=0x541bc0 i,
scan_state=0x549b30, query_buf=0x549890) at command.c:546
#9  0x00406286 in HandleSlashCmds (scan_state=0x549b30,
query_buf=0x549890) at command.c:92
#10 0x00409f66 in MainLoop (source=0x541070) at mainloop.c:259
#11 0x0040408d in process_file (filename=0x560750
bench-index-after.sql, single_txn=0 '\0') at command.c:1394
#12 0x004057cd in exec_command (
cmd=0x569360 
Hxnh9aOwMviVVP9I=\t200803\t2008-03-10\t0\t15.54\t434240509886\t\tNU-YALE 
CLEANERS 
(BARD\t4225\t0\t11636644\n3MM9gwBGMHiyPGmUrqIEOOha1Ao=\t200803\t2008-03-11\t0\t739.95\t55417210017\t\tUSAA
 PC PREMIUM\t6300\t0\t183718...,
scan_state=0x549c20, query_buf=0x54a480) at command.c:546
#13 0x00406286 in HandleSlashCmds (scan_state=0x549c20,
query_buf=0x54a480) at command.c:92
#14 0x00409f66 in MainLoop (source=0x3cc332fb00) at mainloop.c:259
#15 0x0040c687 in main (argc=80207, argv=0x0) at startup.c:367
#16 0x003cc311c3fb in __libc_start_main () from /lib64/tls/libc.so.6
#17 0x00403d2a in _start ()
#18 0x007fb558 in ?? ()
#19 0x001c in ?? ()
#20 0x0001 in ?? ()
#21 0x007fb795 in ?? ()
#22 0x in ?? ()


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Psql crashes with Segmentation fault on copy from

2008-05-28 Thread Francisco Reyes
On 3:09 pm 05/28/08 Gregory Stark [EMAIL PROTECTED] wrote:
Does it really have a COPY command at the beginning? Are you really doing \i 
data/usb_t_60M.sql or were you trying to do a copy from this file?

Argh..That's it.
When I re-organized the scripts I must have taken the copy command from the top 
of the data file and did not put a 'copy from' in the calling script.

Thanks!!!

Is there a slowdown by keeping the debug RPM?
Or should I only have it when trying to troubleshoot a problem?


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Psql crashes with Segmentation fault on copy from

2008-05-27 Thread Francisco Reyes
I am setting up a new machine and preparing some standard benchmark tests.
While trying to load some data using copy from psql is crashing.

OS Red Hat Enterprise Linux ES release 4 (Nahant Update 6)
Postgresql  8.2.7 installed from RPMs

I ran strace on psql and got:

read(5, DAIRY QUEENQ68\t5814\t1215..., 4096) = 4096
rt_sigprocmask(SIG_BLOCK, NULL, [], 8)  = 0
.. 36 other lines like the ones above/below.
rt_sigprocmask(SIG_BLOCK, NULL, [], 8)  = 0
read(5, TORE 2653\t5311\t93\t5080284\n2qLMfi..., 4096) = 3796
rt_sigprocmask(SIG_BLOCK, NULL, [], 8)  = 0
.. 36 other lines like the ones above/below.
rt_sigprocmask(SIG_BLOCK, NULL, [], 8)  = 0
read(5, , 4096)   = 0
gettimeofday({1211915058, 706827}, NULL) = 0
--- SIGSEGV (Segmentation fault) @ 0 (0) ---
+++ killed by SIGSEGV +++


On the server I increased the verbosity to debug levels:
DEBUG:  0: checkpoint starting
LOCATION:  CreateCheckPoint, xlog.c:5584
DEBUG:  0: checkpoint complete; 0 transaction log file(s) added, 0
removed, 0 recycled
LOCATION:  CreateCheckPoint, xlog.c:5712
LOG:  08P01: unexpected EOF on client connection
LOCATION:  SocketBackend, postgres.c:307
DEBUG:  0: proc_exit(0)
LOCATION:  proc_exit, ipc.c:94
DEBUG:  0: shmem_exit(0)
LOCATION:  shmem_exit, ipc.c:125
LOG:  0: disconnection: session time: 0:28:57.253 user=benchmark
database=benchmark host=[local]
LOCATION:  log_disconnections, postgres.c:3840
DEBUG:  0: exit(0)
LOCATION:  proc_exit, ipc.c:112
DEBUG:  0: server process (PID 3112) exited with exit code 0
LOCATION:  LogChildExit, postmaster.c:2425

The same data file loads into another machine with PostgreSQL 8.2.4 and the
same version of RedHat.

Checked the disk with the badblocks command. No errors found.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Seeking rows with \

2008-05-18 Thread Francisco Reyes

Is there a way to search for rows with the character \?

I tried variants of like and  ~.

create table mytable (f1 text);
insert into mytable values ('Test row 1');
insert into mytable values (E'Test row 2 \\');

select * from mytable where f1 like E'%\\%'; -- returned nothing
select * from mytable where f1 ~ '\'; -- waiting for single quote
select * from mytable where f1 ~ E'\\'; -- Error

And a few more variants, with no valid results.

Any suggestions?

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Seeking rows with \

2008-05-18 Thread Francisco Reyes

Gurjeet Singh writes:

Thus, writing a pattern that actually matches a literal backslash means 
writing four backslashes in the statement.


Thanks.

select * from mytable where f1 like '%\\\\%'; 
Worked.


For the archives..

After reading section 4.1.2.2, I also found that these other ones work:

select * from mytable where f1 ~ $$\\$$;
select * from mytable where f1 like $$%\\%$$;







--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Making sure \timing is on

2008-05-11 Thread Francisco Reyes

Is there a way to make certain \timing is on?

I am looking for something simmilar to what one can do with the pager
\pset pager always

Most scripts I am working with (inherited) don't have \timing in them, so I 
can put timing on/off as needed. However, some scripts already have it, so if 
I set \timing before the script and the script calls it again, it gets 
turned off.


As I go over all the scripts it will become less of an issue, but I would 
still like a way to make sure timing is on before I run a script, without 
having to first check whether I put \timing in the script or not.
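
Purely as a sketch: later psql releases accept an explicit argument to the
toggle, so a wrapper can force the state instead of flipping it (the script
name below is made up):

\timing on
\i inherited_report.sql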


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Making sure \timing is on

2008-05-11 Thread Francisco Reyes

Tom Lane writes:


Not that I know of :-(.  There's been discussion of fixing all of psql's
toggle commands to offer \foo on and \foo off


What would be a good starting point for someone interested in looking into 
working on that?


I recently started a job as a PostgreSQL DBA, so sooner or later I will want 
to start looking at the code. This may be one way to get my feet wet with 
it.



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Using Epoch to save timestamps in 4 bytes?

2008-05-10 Thread Francisco Reyes

Bruce Momjian writes:

I am looking at tens of millions of rows, which is why my predecessor may 
have used integer to store epoch to save space.


Our timestamp has a much larger range than a 4-byte time_t, docs say:

4713 BC
294276 AD


Given that all of our dates will fall within what we can store in 4 bytes, 
what would be the easiest way to use epoch as a timestamp?


Create a couple of functions so our developers can use a date representation 
like '20080508 12:40:00' and have the functions translate strings to epoch 
and back?
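
A sketch of that idea (the function names are made up; to_timestamp() and
extract(epoch ...) do the actual conversions):

CREATE OR REPLACE FUNCTION epoch_to_ts(integer) RETURNS timestamptz
    LANGUAGE sql IMMUTABLE AS $$ SELECT to_timestamp($1) $$;

CREATE OR REPLACE FUNCTION ts_to_epoch(timestamptz) RETURNS integer
    LANGUAGE sql IMMUTABLE AS $$ SELECT extract(epoch FROM $1)::integer $$;

SELECT epoch_to_ts(1210264800), ts_to_epoch('2008-05-08 12:40:00');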


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Using Epoch to save timestamps in 4 bytes?

2008-05-08 Thread Francisco Reyes
While looking at a database I inherited, I noticed the database has tables 
with integers used to store epoch.


I noticed that timestamp is 8 bytes and was wondering why timestamp is 
8 bytes and not 4. Is it to be able to support precision beyond a second?


I am looking at tens of millions of rows, which is why my predecessor may 
have used integer to store epoch to save space.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] tcp_keepalive values not taking

2007-08-20 Thread Francisco Reyes

Postgresql 8.2.4
FreeBSD 6.2

According to 
http://www.postgresql.org/docs/current/static/runtime-config-connection.html
the tcp_keepalive variables take the OS setting if 0, or use whatever value 
one puts in postgresql.conf.


I tried the following values in postgresql.conf:

tcp_keepalives_idle = 600   # TCP_KEEPIDLE, in seconds;
   # 0 selects the system default
tcp_keepalives_interval = 60# TCP_KEEPINTVL, in seconds;
   # 0 selects the system default
tcp_keepalives_count = 10   # TCP_KEEPCNT;
   # 0 selects the system default

Yet when I do show tcp_keepalives_idle, show tcp_keepalives_interval or 
tcp_keepalives_count they all show zero. Tried both reload and 
restart.


Also tried changing sysctl net.inet.tcp.keepidle=600 in FreeBSD. It still 
doesn't show in postgresql.


Any suggestions?

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Dealing with tsvector in fuctions for data split

2007-08-08 Thread Francisco Reyes

Francisco Reyes writes:


ERROR:  operator does not exist: text || tsvector

Also, it'd be worth to show simplified version of your function, which 
demonstrates your problem.



I did include that.


The function looks like:
AS $$
DECLARE
  v_sql TEXT;
BEGIN
  v_sql := 'INSERT INTO messagecatalog_'|| 
to_char(NEW.timestampfield,'')||

  '(field1, field2) values ('
||New.field1||','||New.field2||')
')';
  EXECUTE v_sql;
  RETURN NULL;
END
$$;


In the code above field1 is text and field2 is tsvector.



Any suggestions?
Has anyone else dealt with tsvectors in a partitioned environment?
If so, how did you get the split function/rule to insert into the child 
table?
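
For the archives, two sketches of the relevant lines inside the trigger
function (the 'YYYY' mask is a guess, since the real format string is not
shown, and the ::text cast assumes the tsvector type has an explicit cast to
text, which the built-in type in 8.3 and later does):

  v_sql := 'INSERT INTO messagecatalog_' || to_char(NEW.timestampfield, 'YYYY')
        || ' (field1, field2) VALUES ('
        || quote_literal(NEW.field1) || ', '
        || quote_literal(NEW.field2::text) || ')';
  EXECUTE v_sql;

On 8.4 and later, EXECUTE ... USING passes the values as parameters and
avoids quoting altogether:

  EXECUTE 'INSERT INTO messagecatalog_' || to_char(NEW.timestampfield, 'YYYY')
       || ' (field1, field2) VALUES ($1, $2)'
  USING NEW.field1, NEW.field2;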


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Dealing with tsvector in fuctions for data split

2007-08-06 Thread Francisco Reyes

Tom Lane writes:


This is not a particularly good way of accomplishing partitioning,
as you'll be needing *every* part of your application to be explicitly
aware of the exact partitioning scheme.


I am trying to follow the presentation at PGcon2007 by Robert Treat.
I created a master table, then the children which inherit from the master.
The children have checks to make sure the checks are mutually exclusive.

The function is for the insert trigger.
Applications will insert against the master table table and the function is 
to be called by an insert trigger in the master table. The function is to 
redirect each insert to the proper child table. Users will access the data 
through the master table. I will have constraint_exclusion = on so only the 
appropriate tables get accessed on selects.
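
For reference, the shape of what is being described, as a sketch (the column
types and the yearly split are assumptions; the real partitioning key format
is not shown):

CREATE TABLE messagecatalog (
    timestampfield  timestamp NOT NULL,
    field1          text,
    field2          tsvector
);

CREATE TABLE messagecatalog_2007 (
    CHECK (timestampfield >= '2007-01-01' AND timestampfield < '2008-01-01')
) INHERITS (messagecatalog);
-- one such child per partition, each with a mutually exclusive CHECK;
-- with constraint_exclusion = on, selects against messagecatalog skip
-- the children whose CHECK rules them out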




However, if you insist on doing it like that, don't you need
quote_literal() for the field values?


ERROR:  function quote_literal(tsvector) does not exist



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Dealing with tsvector in fuctions for data split

2007-08-05 Thread Francisco Reyes
I am trying to create a function to handle inserts into a partitioned table.


The function looks like:
AS $$
DECLARE
   v_sql TEXT;
BEGIN
   v_sql := 'INSERT INTO messagecatalog_'|| 
to_char(NEW.timestampfield,'')||

   '(field1, field2) values ('
 ||New.field1||','||New.field2||')
')';
   EXECUTE v_sql;
   RETURN NULL;
END
$$;

The problem I am running into is that one of the fields is a tsvector, and 
when I try to concatenate it with the rest of the string it gives an error.
The only thing I can think of so far is to start a transaction, insert 
without the tsvector, and then do an update to get the tsvector in.


Anyone else had to deal with tsvectors in a fuction?

I am copying some existing data so all the rows already have the tsvector.

If nothing else works I may just not deal with the tsvector in the function 
and see if I can just re-create the tsvector for each record as I insert the 
data.


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL] Using result from current_database() into another command?

2007-07-24 Thread Francisco Reyes

select current_database();
current_database
--
test2

Is there a way to use that result from current_database() into something 
like: 
alter database CURRENT_DATABASE() set search_path=myschema,public;


I have some generic scripts that I am preparing for several databases. Most 
commands/loads don't care about the database, since they just use the one I 
am connected to. Alter database however needs the database specified. 
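
DO blocks came along in later releases (9.0+), but where they are available a
sketch like this avoids hard-coding the database name:

DO $$
BEGIN
    EXECUTE 'ALTER DATABASE ' || quote_ident(current_database())
         || ' SET search_path = myschema, public';
END
$$;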


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL] Feature request: Per database search_path

2007-07-18 Thread Francisco Reyes
As far as I know, currently one can set the search path globally, or on a 
per-role basis.


I was wondering if it would be possible to have a per-database search_path.
I believe this would be not only convenient but would also add flexibility.

What got me thinking of this was that I manually set a search_path, 
connected to another database, and when I came back to the original database 
the search_path was gone.


At first I thought perhaps search_path could be set by database and that 
changing database was blanking my change. After looking at the help for 
alter database I noticed that there is no way to set the search_path by 
database.



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Feature request: Per database search_path

2007-07-18 Thread Francisco Reyes

Richard Huxton writes:


ALTER DATABASE leia SET search_path = public,lookups;
Seems to work for me on 8.2 - you'll need to disconnect and reconnect to 
see it take place though.


Hmm.. I must have typed something wrong when I tried..

For the archives..
If a user has a search_path set, that takes precedence over the database 
search_path.



Is there a way to unset a role's search_path?
I had set one role with a particular search path. Now I want to take that off 
so the user gets the database's search_path setting.
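
For the archives, resetting the per-role setting should do it (the role name
below is made up):

ALTER ROLE report_role RESET search_path;

After reconnecting, the role then picks up the database-level or global
search_path.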


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] pg_dump vs schemas

2007-07-16 Thread Francisco Reyes

Tom Lane writes:


Shouldn't the public. be left out?


 I'm hesitant to  remove the schema spec for fear we'd end up with 
underspecified output
in some corner case or other.


Totally understand. How about making it an option?
Just like the --no-owner option. There are options that one may rarely use, 
but that can be very useful for certain types of specialized restores.



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly

