Re: [ADMIN] Why sequence grant is separated from table?

2013-06-20 Thread Achilleas Mantzios

On 19/06/2013 12:47, Szymon Guz wrote:
On 19 June 2013 11:35, Rural Hunter ruralhun...@gmail.com wrote:


I really hate the error "permission denied for sequence x"
when I grant on a table but forget to additionally grant on the
related sequence to users. Can the permissions of a table and its related
sequences be merged?


I can imagine a situation where you want a user to be granted rights 
on a table and to use the sequence with the currval and nextval functions, but not 
setval, as that can cause database problems.


What's more, in general, you cannot say which sequence is used for 
which table, if the sequence is not owned by the table.


Right, there might be cases where one sequence is used by more than one 
table. For instance, when the user wishes to create a view on a 
union of two tables, with their IDs included, and still have those IDs 
guaranteed to be unique across the union.
However, after recently dumping a table whose sequence is shared by 
another table as well, I noticed this:


ALTER SEQUENCE vessels_id_seq OWNED BY vessels.id;

Which means that the initial relationship between the creating table 
and the sequence is preserved for the whole life of the table. This 
could tempt new users to think that it could be meaningful to bind those 
two in the GRANT.
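For readers hitting the same error, the two grants that usually travel together can at least be issued side by side (a sketch with made-up table/role names; note that USAGE covers currval/nextval but deliberately not setval):

```sql
-- Hypothetical names: grant DML on the table...
GRANT SELECT, INSERT, UPDATE ON vessels TO app_user;
-- ...and the minimum sequence rights that INSERTs using nextval() need.
-- USAGE allows currval/nextval; setval additionally requires UPDATE.
GRANT USAGE, SELECT ON SEQUENCE vessels_id_seq TO app_user;
```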




Which permissions of a table do you want to merge with which 
permissions of sequences?


regards
Szymon



--
Achilleas Mantzios



Re: [ADMIN] Installing multiple instances of Postgred on one FreeBSD server

2013-05-08 Thread Achilleas Mantzios
Pretty informative Jim, thanx.

Do you remember in which version, when, and by whom these shortcomings (different 
user, different port) were resolved?
At the time it seemed like a very hard thing to implement given the then-current 
status of jails.
So, to rephrase: did it take a major jail rewrite to achieve the above, or was 
it more on the hack side?

On Thu 02 May 2013 08:58:49 Jim Mercer wrote:
 On Thu, May 02, 2013 at 10:10:51AM +0300, Achilleas Mantzios wrote:
  Hi, i was just lurking around,
  being previously interested in running pgsql in jail environment (must have 
  been some years back)
  i noticed that jails had issues with running many instances of postmaster 
  listening on the same port
  and with the same user (say e.g. postgres). So basically having cloned 
  jails running pgsql was not
  out of the box possible without tweaking either user or port.
  Being many years out of freebsd admin, has this been remedied by now?
 
 yeah, those issues are no longer present.
 
 the key thing is to have things running on the base machine set to listen
 on a specific IP, rather than all interfaces, as many applications do by
 default.
 
 so, for instance, for ssh on the base system, you would do:
 
 sshd_enable=YES
 sshd_flags=-o ListenAddress=120.121.122.123
 
 this will prevent it from listening for ssh on the ip you intend to use for
 the jail.
 
 and in the jail, you can leave it listening on all interfaces, since, in the
 jail, it only has its own interface.
 
 in the case of pgsql, the best way to do it is to not have pgsql installed 
 on the base server, and only run it in a jail (or a couple jails, if you are
 looking to smooth the upgrade from one to the next).
 
 if you want to run pgsql on the base server, then set:
 listen_addresses = '120.121.122.123'
 in postgresql.conf, and there will be no conflict of listeners.
 
 the issue of users/uids is moot, since the jail is independent of the main 
 system.
 
 assuming you configure the base system to listen only on its own IP, installing
 in the jail is no different than installing on a normal system.
 
 --jim
 
  
  On Wed 01 May 2013 16:24:01 Jim Mercer wrote:
   On Wed, May 01, 2013 at 02:21:44PM -0600, Benjamin Krajmalnik wrote:
So essentially we would have to run jails on this - does that affect
performance?
   
   i would say minimal impact.
   


-Original Message-
From: Jim Mercer [mailto:j...@reptiles.org] 
Sent: Wednesday, May 01, 2013 2:20 PM
To: Benjamin Krajmalnik
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Installing multiple instances of Postgred on one
FreeBSD server

On Wed, May 01, 2013 at 01:58:21PM -0600, Benjamin Krajmalnik wrote:
 I would like to install PG 9.2 on the same server and use Slony to 
 replicate the databases, and once everything is up take the 9.0 
 cluster down.

allocate a new ip to the server.
reconfigure/restart 9.0 to listen on the main IP (not all as default)

use ezjail to create a FreeBSD jail on the new IP

install 9.2 in the jail

treat them as two servers.

point clients at new ip.

once migrated, delete all 9.0 from the main server.

if you need to upgrade again, add another jail.

stop and delete the old jail when the next upgrade is complete.

repeat as often as you want to upgrade.

i'm pretty sure this can be adapted to linux, but i forget what their
jail equivalent is.

the overhead is minimal



   
   
 
 
-
Achilleas Mantzios
IT DEV
IT DEPT
Dynacom Tankers Mgmt


-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Installing multiple instances of Postgred on one FreeBSD server

2013-05-08 Thread Achilleas Mantzios
On Wed 08 May 2013 08:28:31 Jim Mercer wrote:
 On Wed, May 08, 2013 at 10:03:30AM +0300, Achilleas Mantzios wrote:
  Pretty informative Jim, thanx.
  
  Do you remember which version, when, by who, were these shortcomings
  (different user, different port) resolved?
  At the time it seemed like a very hard thing to implement in the current
  status of jails.
  So, to rephrase, did it take a major jail re-write to achieve the above?
  or was it more on the hack-side?
 
 i didn't keep up with the release notes, but i think the first few rollouts
 of jails in FreeBSD were on the experimental side.
 
 it certainly took a few tweaks and hacks to make it robust.
 
 however, that was years ago.
 
 jails have evolved into a well thought out virtual kernel environment.
 
 ah, here you go, some stuff about setting up pgsql in a FreeBSD jail:
 http://forums.freebsd.org/showthread.php?t=23114

Thanx,
first, the above links do not reveal anything new.
Second, judging by the wiki (https://wiki.freebsd.org/Jails), 
jailified SysV IPC (or SysV IPC-ified jails) (the specific territory where 
pgsql comes in contact with jails) 
is in a stalled state.

The latest work I recall was for this project: 
docs.freevps.com/doku.php?id=freebsd:changelog
and I remember that getting it through was not easy.

That's why I asked about any milestones in the FreeBSD jail implementation.
It just does not seem normal for pgsql to suddenly work that nicely in 
FreeBSD jails;
there have also been some rather heated discussions here in these very lists.

 
 
-
Achilleas Mantzios
IT DEV
IT DEPT
Dynacom Tankers Mgmt




Re: [ADMIN] pg_attribute file in PostgreSQL 9.0

2012-03-11 Thread Achilleas Mantzios
On Wed 07 Mar 2012 15:14:33 Lukasz Brodziak wrote:
 Hello,
 
 I have a question regarding pg_attribute. In which file it is stored
 because the relfilenode for it shows 0 and file 1249 isn't present in
 the folder.

in my system:

dynacom=# SELECT pg_relation_filepath('public.vessels'::regclass);
 pg_relation_filepath 
----------------------
 base/539545/63242177
(1 row)

for certain (mapped) system catalog tables pg_class shows relfilenode = 0;
pg_relation_filepath() reports the file actually used on disk.

-- 
Achilleas Mantzios
IT DEPT



Re: [ADMIN] SET search path

2011-10-31 Thread Achilleas Mantzios
On Sunday 30 October 2011 12:34:38 Akash Kodibail wrote:
 Hi,
 
 I am using postgresql 8.4.0
 
 Select version() gives:
 
 PostgreSQL 8.4.0 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 3.4.6 
 20060404 (Red Hat 3.4.6-10), 64-bit
 
 I have trying to set search path, so that I do not have to append the 
 schema_name while doing a select on the tables, however, when I do so, the 
 search_path is set only for that session, it is not a permanent change that 
 is done. Can you please guide me as to how to make it permanent.
 
 After I do a set search_path, pg_user view does not show any entry in the 
 column useconfig for the same being set.
 
As superuser:
ALTER USER username SET search_path TO "$user", someschema, public;
works in 8.3 and 9.*.
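As a sketch (role and schema names are made up): the per-role setting is what shows up in pg_user.useconfig; a plain SET never gets recorded there because it only lasts for the session.

```sql
-- Persist the path for one role (hypothetical names)
ALTER ROLE akash SET search_path TO "$user", billing, public;

-- Verify: the setting is stored per-role and applied at login
SELECT usename, useconfig FROM pg_user WHERE usename = 'akash';
```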

 Regards,
 Akash.
 
 
 



-- 
Achilleas Mantzios



Re: [ADMIN] How to find and/or REINDEX only GiST indexes in the database?

2011-10-12 Thread Achilleas Mantzios
On Wednesday 12 October 2011 15:46:45 Gnanakumar wrote:
 Hi,
 
 We're planning to upgrade our Production server from PostgreSQL v8.2.3 to
 the latest version in 8.2 series, that is v8.2.22.  
 
 In 8.2.11 Release Notes here
 http://www.postgresql.org/docs/current/interactive/release-8-2-11.html, it's
 been mentioned that:
 Also, if you were running a previous 8.2.X release, it is recommended to
 REINDEX all GiST indexes after the upgrade.
 
 My question is, how do I find out or REINDEX *only* GiST indexes in the
 database?  Is there a single syntax/command that does this?

I guess not; one way is to take an ASCII schema-only dump and parse it,
looking for GiST indexes.

Another way is to look in these tables:
SELECT oid,* from pg_opclass ; -- to find the GIST class
SELECT c.relname,indclass from pg_catalog.pg_index i,pg_catalog.pg_class c 
where indexrelid=c.oid;
^^ to find those indexes which have this gist class oid in the indclass field.

Note, though, that indclass is an oidvector, and I don't know how you could 
iterate over it (in SQL) 
without writing a C (or other) program.
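That said, a join through pg_am seems to sidestep the oidvector problem entirely, since the index relation itself records its access method; a sketch (I have not verified it on 8.2):

```sql
-- Find every GiST index via its access method and
-- generate the REINDEX commands directly
SELECT 'REINDEX INDEX ' || n.nspname || '.' || c.relname || ';'
FROM pg_index i
JOIN pg_class c     ON c.oid = i.indexrelid
JOIN pg_am am       ON am.oid = c.relam
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE am.amname = 'gist';
```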

 
 Regards,
 Gnanam
 
 
 



-- 
Achilleas Mantzios



Re: [ADMIN] create schedule job

2011-08-30 Thread Achilleas Mantzios
First, run the script by hand (maybe with -v) to see what is going wrong.
Second, use cron.

On Tuesday 30 August 2011 16:29:32 Karuna Karpe wrote:
 Hello,
 
 I created one job using pgAgent in postgres studio.  In that job I want to
 back up the whole database every day.  But when I check the backup location,
 there is no backup file created. I created a script and gave this
 script's location in the definition tab of the step window.  My script is the
 following:
 
 #!/bin/bash
 #backup directory can be file server share that the PgAgent daemon account
 has access to
 BACKUPDIR=/opt/dailybackup
 PGHOST=localhost
 PGUSER=enterprisedb
 PGBIN=/opt/PostgresPlus/9.0AS/bin
 thedate=$(date +%Y-%m-%d-%H)
 themonth=$(date +%Y-%m)
 
 #Create a full backup of the server database
 $PGBIN/pg_dumpall -h $PGHOST -U $PGUSER | gzip > $BACKUPDIR/fullbackup-$themonth.sql.gz
 
 #put the names of the database you want to create an individual backup below
 dbs=(edb bench_replication bench_parallel pgpool)
 
 #iterate thru dbs in dbs array and backup each one
 for db in "${dbs[@]}"
 do
 $PGBIN/pg_dump -i -h $PGHOST -U $PGUSER -F c -b -v -f
 $BACKUPDIR/$db-$thedate.compressed $db
 done
 
 But this does not create a backup file.
 
 So please give a solution for that.
 
 Regards,
 karuna karpe.
 



-- 
Achilleas Mantzios



Re: [ADMIN] Problem retrieving large records (bytea) data from a table

2011-07-20 Thread Achilleas Mantzios
He could use something like this to get an idea of the sizes:

SELECT count(*),
       CASE WHEN length(doc_data) <= 50000000  THEN '<=50 MB'
            WHEN length(doc_data) <= 100000000 THEN '<=100 MB'
            ELSE '>100 MB' END
FROM doc_table
GROUP BY 2;

and then, based on the above, do finer queries to find the large data.

However, I don't know whether cursor-based queries (like the ones used by JDBC) 
would be affected by the size.
Normally they would not. However, querying this way via psql will definitely be 
slow.
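Once a bucket stands out, a narrower query (a sketch, using the column names quoted below) can pin down the offending rows without ever fetching the bytea itself:

```sql
-- Largest documents first; selecting only id and the length
-- avoids pulling hundreds of MB over the wire
SELECT id, length(doc_data) AS bytes
FROM doc_table
WHERE length(doc_data) > 100000000
ORDER BY bytes DESC;
```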


On Wednesday 20 July 2011 17:31:45 pasman pasmański wrote:
 You may do a backup of this table. Then with ultraedit search your
 documents and remove them.
 
 2011/7/5, jtke...@verizon.net jtke...@verizon.net:
  I am having a hang condition every time I try to retrieve a large
  records (bytea) data from  a table
  The OS is a 5.11 snv_134 i86pc i386 i86pc Solaris with 4GB memory
  running Postgresql 8.4.3 with a standard postgresql.conf file (nothing
  has been changed)
  I have the following table called doc_table
      Column    |              Type              |                              Modifiers                               | Storage
   -------------+--------------------------------+----------------------------------------------------------------------+----------
    id          | numeric                        | not null                                                             | main
    file_n      | character varying(4000)        |                                                                      | extended
    create_date | timestamp(6) without time zone | not null default (clock_timestamp())::timestamp(0) without time zone | plain
    desc        | character varying(4000)        |                                                                      | extended
    doc_cc      | character varying(120)         | not null                                                             | extended
    by          | numeric                        | not null                                                             | main
    doc_data    | bytea                          |                                                                      | extended
    mime_type_id| character varying(16)          | not null                                                             | extended
    doc_src     | text                           |                                                                      | extended
    doc_stat    | character varying(512)         | not null default 'ACTIVE'::character varying                         | extended
   Indexes:
       "documents_pk" PRIMARY KEY, btree (document_id)
 
 
  A while ago some developers inserted several records with a
  document (stored in doc_data) that was around 400 - 450 MB each. Now
  when you do a select * (all) from this table you get a hang and the
  system becomes unresponsive.  Prior to these inserts, a select * (all,
  no where clause) worked.  I'm also told a select * from doc_table
  where id = xxx still works.  I haven't seen any error message in the
  postgresql log files.
  So I'm not sure how to find these bad records and why I am getting a
  hang.  Since this postgresql is running with the default config files
  could I be running out of a resource?  If so I'm not sure how to or
  how much to add to these resources to fix this problem since I have
  very little memory on this system.  Does anyone have any ideas why I
  am getting a hang.  Thanks
 
  --
  Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
  To make changes to your subscription:
  http://www.postgresql.org/mailpref/pgsql-admin
 
 
 
 -- 
 
 pasman
 



-- 
Achilleas Mantzios



Re: [ADMIN] Importing the dump file in postgresql-7.4.23

2011-07-12 Thread Achilleas Mantzios
Is your target DB >= 8.x?
Did you use to have tsearch2 installed in 7.4?

Anyway, this is how we used to dump from 7.4 and load into 8.3:

take the dump from 7.4,
install 8.3
initdb, etc...

cd /usr/local/src/postgresql-8.3.3
cd contrib/tsearch2/
make uninstall
cd ../intarray/
make uninstall

create domain public.tsvector as pg_catalog.tsvector;
create domain public.gtsvector as pg_catalog.gtsvector;
create domain public.tsquery as pg_catalog.tsquery;

psql yourdb -f yourdb_DUMP_OUR_DUMP_FROM_7_4.sql > see 2>&1

DROP TABLE public.pg_ts_cfg;
DROP TABLE public.pg_ts_cfgmap;
DROP TABLE public.pg_ts_dict ;
DROP TABLE public.pg_ts_parser ;
DROP TYPE public.query_int ;
DROP TYPE public.statinfo ;
DROP TYPE public.tokenout ;
DROP TYPE public.tokentype ;
DROP FUNCTION public.ts_debug(text) ;
DROP TYPE public.tsdebug ;

for every table with tsvector fields, do 

ALTER TABLE yourtable ALTER idxfti TYPE pg_catalog.tsvector;

DROP DOMAIN public.gtsvector ;
DROP DOMAIN public.tsquery ;
DROP DOMAIN public.tsvector ;

cd /usr/local/src/postgresql-8.3.3/contrib/tsearch2 (compatibility package)
make install

load 'tsearch2';
\i /usr/local/src/postgresql-8.3.3/contrib/tsearch2/tsearch2.sql

setup the new tsearch triggers, and you are ready to go!

On Tuesday 12 July 2011 15:54:38 saravanan wrote:
 Hi all,
 
 I'm using PostgreSQL-7.4.23 version. I have one dump file that was taken
 using pg_dumpall utility. Now i tried to import the dump file.  But its
 throwing me lot of *Invalid Command \N error* , *ERROR:  type tsvector does
 not exist* and *\t: extra argument string ignored error*
 
 I'm using 7.4.23 psql tool to import
 I tried the following ways to import pg_dumpall dump file
 psql -f all.dump template1
 psql -e template1  all.dump
 
 Here some of DDL commands were imported correctly. Commands like copy were
 not imported properly, its throwing lot of Invalid Command \N error.
 
 
 Can anyone help me out of this problem?
 
 Here are the some images while importing dump file 
 http://postgresql.1045698.n5.nabble.com/file/n4578769/1.jpg 
 
 
 
 
 
 http://postgresql.1045698.n5.nabble.com/file/n4578769/2.jpg 
 
 
 
 Any help would be appreciated
 Advance 
 Thanks
 
 
 
 --
 View this message in context: 
 http://postgresql.1045698.n5.nabble.com/Importing-the-dump-file-in-postgresql-7-4-23-tp4578769p4578769.html
 Sent from the PostgreSQL - admin mailing list archive at Nabble.com.
 



-- 
Achilleas Mantzios



[ADMIN] LDAP Authentication : How to set secondary server?

2011-07-11 Thread Achilleas Mantzios
Hello,
is there a way to set a secondary ldapserver for LDAP authentication in 
pg_hba.conf, such that 
when the primary ldapserver is down (just the LDAP service or the whole host), 
postgresql queries
the secondary ldap server for authentication?
Is that possible?
Or is there any notion of pg_hba.conf entries being interpreted in a way 
similar to "sufficient" in JSSE or PAM?
e.g. have two entries:

host    all    all    127.0.0.1/32    ldap ldapserver=primaryldap.internal.net ldapbasedn=
host    all    all    127.0.0.1/32    ldap ldapserver=secondaryldap.internal.net ldapbasedn=

so that when the first fails, the user has a second chance to be queried 
against the second ldap server.

Thanx for any help.

-- 
Achilleas Mantzios



Re: [ADMIN] How to start a database in ReadOnly mode?

2011-07-07 Thread Achilleas Mantzios
Hello
On Thursday 07 July 2011 10:26:19 Simon Riggs wrote:
 On Thu, Jul 7, 2011 at 8:08 AM, Paul Schluck pschl...@gmail.com wrote:
 
  Is it possible to start a postgres cluster or 1 postgres database in
  readonly mode? And - if yes - how can this be done?
 
 There is no formal, explicit command to do that. Just avoid doing
 writes, either in the application or by removing privileges.
 

This is how we did it when there was a special requirement to do so for a 
special occasion:
make a new user (let's call it readonly) and have the app authenticate with 
this user.
Manage its privileges with something like:
CREATE USER readonly;
REVOKE ALL ON DATABASE yourdatabase FROM readonly;
(for each table) GRANT SELECT ON yourtable TO readonly;

For this last one you might need to write a script producing the SQL commands.
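That script can itself be a query; a sketch (pipe the emitted lines back into psql to execute them):

```sql
-- Emit one GRANT per user table for the hypothetical 'readonly' role
SELECT 'GRANT SELECT ON ' || schemaname || '.' || tablename || ' TO readonly;'
FROM pg_tables
WHERE schemaname NOT IN ('pg_catalog', 'information_schema');
```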
 -- 
 Simon Riggs http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
 



-- 
Achilleas Mantzios



Re: [ADMIN] Help compiling --with-ldap on Solaris 11 Express?

2011-07-01 Thread Achilleas Mantzios
Looking in openldap-2.4.10/libraries/liblber/lber-int.h:94 I see:
#define LBER_VALID(ber) ((ber)->ber_valid==LBER_VALID_BERELEMENT)
The assertion fails for some reason.

Could you make a proper reinstallation of the openldap related packages 
(openldap-client and possibly -devel),
remove the manual symbolic link, and try to ./configure again?

You might change the --with-libs or --with-include part to match wherever your 
ldap client installation is located.

Also, for the moment, drop the rest of --with- arguments, until you have a 
working postgresql with ldap,
and you'll go gradually from there.

Unfortunately i do not have a Solaris system to test.

Στις Friday 01 July 2011 10:23:53 ο/η Dave Pooser έγραψε:
 TL; DR version: I'm trying to compile 9.0.4 on Solaris 11 Express and
 having problems with LDAP support; I finally got it to compile and install
 but when I try to authenticate via LDAP I get Assertion failed:
 LBER_VALID( ber ), file sasl.c, line 103, function ldap_sasl_bind and it
 kills the process.
 
 More detailed version:
 
 I started with a clean install of Solaris Express 11 on a VMWare ESXi VM,
 single processor/2GB RAM.
 I installed from packages gcc-3, library/openldap and
 system/library/math/header-math.
 I downloaded the source tarball, uncompressed it and tried to configure:
  ./configure \
 --enable-nls \
 --with-python \
 --with-openssl \
 --with-libxml \
 --enable-thread-safety \
 --enable-dtrace \
 --with-ldap \
 --with-libs=/usr/lib \
 --with-includes=/usr/include/openldap
 
 
 Configure failed with an error
 checking for ldap_simple_bind in -lldap_r... no
 configure: error: library 'ldap_r' is required for LDAP
 
 It looked to me that it wasn't finding /usr/lib/libldap_r-2.4.so I added a
 symlink pointing libldap_r.so to libldap_r-2.4.so -- after that configure
 worked but make failed:
 Undefined first referenced
  symbol   in file
 ldap_start_tls_s  libpq/auth.o
 
 So I tried setting LDFLAGS in the configure and make commands:
 
 LDFLAGS=-lldap_r ./configure
 LDFLAGS=-lldap_r gmake
 
 That worked okay, as did gmake install -- I could initdb successfully
 and connect locally with psql. But when I try to connect using LDAP auth
 it pukes:
 Assertion failed: LBER_VALID( ber ), file sasl.c, line 103, function
 ldap_sasl_bind
 LOG:  server process (PID 26751) was terminated by signal 6
 LOG:  terminating any other active server processes
 WARNING:  terminating connection because of crash of another server process
 DETAIL:  The postmaster has commanded this server process to roll back the
 current transaction and exit, because another server process exited
 abnormally and possibly corrupted shared memory.
 HINT:  In a moment you should be able to reconnect to the database and
 repeat your command.
 LOG:  all server processes terminated; reinitializing
 LOG:  database system was interrupted; last known up at 2011-07-01
 01:54:39 CDT
 LOG:  database system was not properly shut down; automatic recovery in
 progress
 LOG:  consistent recovery state reached at 0/2501DBB4
 LOG:  record with zero length at 0/2501DBB4
 LOG:  redo is not required
 LOG:  autovacuum launcher started
 LOG:  database system is ready to accept connections
 
 Any pointers would be appreciated; I'm far from an expert on either
 Solaris or PostgreSQL, so it's very possible that I'm making one or more
 elementary mistakes.
 
 -- 
 Dave Pooser
 Cat-Herder-in-Chief, Pooserville.com
 ...Life is not a journey to the grave with the intention of arriving
 safely in one pretty and well-preserved piece, but to slide across the
 finish line broadside, thoroughly used up, worn out, leaking oil, and
 shouting GERONIMO!!! -- Bill McKenna
 
 
 



-- 
Achilleas Mantzios



Re: [ADMIN] dump from 9.0 to 8.3

2011-06-22 Thread Achilleas Mantzios
On Monday 20 June 2011 18:46:33 you wrote:
 Achilleas Mantzios ach...@matrix.gatewaynet.com wrote:
  
  is there any quick and dirty (or otherwise easy way) to have a
  smooth procedure to load into a 8.3 server data dumped by an 9.0
  server ?
  
  I tried with pg_dump -i from the 8.3 server without good results.
  
 Your best bet might be to dump using the 9.0 pg_dump and then see
 where the errors are on the load attempt, and edit the file as
 needed.

For the moment, from the server running 9.0,

ALTER DATABASE dynacom SET bytea_output = 'escape';
pg_dump dynacom | sed 's/CREATE OR REPLACE/CREATE/g' | bzip2 | ssh 10.9.200.254 'bzcat | psql dynacom -f -'

seems to do the trick, so far at least; the restore is not yet complete.

  
 -Kevin
 



-- 
Achilleas Mantzios



Re: [ADMIN] dump from 9.0 to 8.3

2011-06-21 Thread Achilleas Mantzios
On Monday 20 June 2011 18:46:33 Kevin Grittner wrote:
 Achilleas Mantzios ach...@matrix.gatewaynet.com wrote:
  
  is there any quick and dirty (or otherwise easy way) to have a
  smooth procedure to load into a 8.3 server data dumped by an 9.0
  server ?
  
  I tried with pg_dump -i from the 8.3 server without good results.
  
 Your best bet might be to dump using the 9.0 pg_dump and then see
 where the errors are on the load attempt, and edit the file as
 needed.
  

Thanx, unfortunately the dump is 310GB, and that rather forces me in the
migration direction.

Currently we run a central master system (which I am trying to upgrade) and ~65 
slaves in remote locations
(very few of them have TCP/IP connectivity).

I thought of migrating only the central system and leaving the slaves at 8.3, 
but this option 
seems rather weak.

 -Kevin
 



-- 
Achilleas Mantzios



Re: [ADMIN] dump from 9.0 to 8.3

2011-06-21 Thread Achilleas Mantzios
On Tuesday 21 June 2011 09:47:22 CaT wrote:
 On Tue, Jun 21, 2011 at 09:10:50AM +0300, Achilleas Mantzios wrote:
   On Monday 20 June 2011 18:46:33 Kevin Grittner wrote:
   Achilleas Mantzios ach...@matrix.gatewaynet.com wrote:

is there any quick and dirty (or otherwise easy way) to have a
smooth procedure to load into a 8.3 server data dumped by an 9.0
server ?

I tried with pg_dump -i from the 8.3 server without good results.

   Your best bet might be to dump using the 9.0 pg_dump and then see
   where the errors are on the load attempt, and edit the file as
   needed.

  
  Thanx, unfortunately the dump is 310GB, and that rather forces me to go to 
  the
  migration direction.
 
 I may get beaten up for this but... slony? :)
 
 Afaik should be doable. Your tables just need primary keys.
 
I think Slony would be overkill.

The issue is that we make new slaves on demand. Mgmt is buying a new vessel and 
we 
are supposed to build a new server for it, which contains a subset of the 
master DB.

Once deployed, the whole system is based on a heavily modified version of 
DBMirror
which does a lot of nice things: asynchronous, lazy, FK-oriented, row-level, 
fine-grained
replication, which makes sure that at any point the vessel slave DB has exactly 
the set 
of data needed for the DB to be consistent, nothing less than that and 
nothing more.
This way we guarantee correct operation and minimal communication cost (via 
SAT).
Once a new row enters a table, and this row is indeed needed for a vessel (or a 
set of vessels),
it will cause a recursive replication of all parent rows in all parent 
tables involved in FK
relationships to the said table.

So our problem is the initial build of the vessel DB. We are wondering what 
to do next.
If we move the central master to 9.0, then chances are that we must upgrade
the procedure and the reference mother vessel system we keep here at the 
office
to 9.0 as well.
But this increases the management burden on our side, since we will have 3 
kinds of slaves
on the vessels: 7.4, 8.3 and 9.0.
I would love to have someone traveling to all vessels and performing the 
upgrades,
but this is not realistic. So we might adopt this 3-version scheme, even if it 
means
that we will have to take care that every SQL statement we write is at least 
runnable in 8.3 and 9.0
(as an example, CREATE OR REPLACE PROCEDURAL LANGUAGE plpgsql; has already 
bitten me in 8.3).

Thank you all for the thoughts shared, and please point out anything that 
might help.

 -- 
   A search of his car uncovered pornography, a homemade sex aid, women's 
   stockings and a Jack Russell terrier.
 - 
 http://www.dailytelegraph.com.au/news/wacky/indeed/story-e6frev20-118083480
 



-- 
Achilleas Mantzios



[ADMIN] dump from 9.0 to 8.3

2011-06-20 Thread Achilleas Mantzios
Hello,
is there any quick and dirty (or otherwise easy) way to have a smooth procedure 
to load into an 8.3 server
data dumped by a 9.0 server?

I tried with pg_dump -i from the 8.3 server without good results.

-- 
Achilleas Mantzios



Re: [ADMIN] PL/Java

2011-04-19 Thread Achilleas Mantzios
On Tuesday 19 April 2011 13:11:35 Alessio Gennari wrote:
 Hello to everyone,
 where Can I find PL/Java installation manual and/or tutorials?
 
 Thanks in advance.
 

Here : http://pgfoundry.org/projects/pljava/ and here : 
http://wiki.tada.se/index.php?title=Main_Page

 Alessio
 



-- 
Achilleas Mantzios



Re: [ADMIN] PostgreSQL v9.0.3 Database Import Error: language plpgsql already exists

2011-03-30 Thread Achilleas Mantzios
On Wednesday 30 March 2011 16:44:33 Gnanakumar wrote:
 Hi,
 
 We're planning to upgrade our PostgreSQL database from v8.2.3 to v9.0.3.
 Before upgrading our Production server, we're trying out this complete
 upgrade process in a separate database server that is just exactly the
 replica of the production server.
 
 I already took a dump of our database using pg_dump command in v8.2.3:
 /usr/local/pgsql/bin/pg_dump -D -f dbdump.sql -S mydbuser mydb

Start with a schema dump (DDL): pg_dump -s.
Edit the schema to exclude the plpgsql-, plpgsql_call_handler-, etc. related 
statements,
and then create a data dump using pg_dump -a;
then load first the schema dump (DDL) and then the data dump.
Anyway, those little error messages should not scare you; no major harm is done.
You should be able to tell *real* problems from minor decorative issues like 
this one.

 
 Now, after upgrading PostgreSQL to v9.0.3, I'm trying to import the database
 dump using: psql -f dbdump.sql -o dbdump.log -d mydb -U mydbuser.  I'm
 getting the following errors while importing:
 
 psql:dbdump.sql:38: ERROR:  language plpgsql already exists
 psql:dbdump.sql:102646: ERROR:  attempt to redefine parameter
 plpgsql.variable_conflict
 psql:dbdump.sql:102649: ERROR:  function public.plpgsql_call_handler() does
 not exist
 
 I also read from 9.0 release notes that PL/pgSQL is now installed by
 default.
 http://www.postgresql.org/docs/current/interactive/release-9-0.html 
 
 My question is, how do I prevent/fix this error during import.  Are all
 these 3 errors one and the same or are they different errors that each has
 to be fixed separately?
 
 NOTE: We've multiple onsite customer servers that has to be upgraded to
 v9.0.3, hence I need to address this issue in general.
 
 Regards,
 Gnanam
 
 



-- 
Achilleas Mantzios



Re: [ADMIN] cp1250 problem

2011-02-16 Thread Achilleas Mantzios
On Wednesday 16 February 2011 00:00:10, Jan-Peter Seifert wrote:
 Hello,
 
 On 15.02.2011 12:26, Achilleas Mantzios wrote:
  On Tuesday 15 February 2011 12:44:31, Lukasz Brodziak wrote:
  Hello,
 
  How can I set PostgreSQL locale and encoding to be pl_PL.cp1250 all I
  can do is pl_PL.UTF-8.
  I have PG 8.2.4 on Ubuntu 10.04 (polish version).
 
 There are no code pages in Ubuntu. The nearest what you can get seems to
 be encoding LATIN2 ( and a compatible locale ). These charsets are NOT
 identical though.
 
  The locale for your whole cluster is defined in postgresql.conf.
 
 Not really - LC_COLLATE and LC_CTYPE are set during initialization of
 the cluster by initdb. You can verify the settings with pg_controldata:
 

That's true. Thanx.

 http://www.postgresql.org/docs/8.2/interactive/app-pgcontroldata.html
 
 As of PostgreSQL v8.4 you can specify these two locale settings
 different from the server's settings (for each database).
 If you plan to upgrade your PostgreSQL major server version beware of
 the removal of some implicit data type casts among other changes as of 8.3.
 
 Peter
 
 



-- 
Achilleas Mantzios



Re: [ADMIN] cp1250 problem

2011-02-15 Thread Achilleas Mantzios
On Tuesday 15 February 2011 12:44:31, Lukasz Brodziak wrote:
 Hello,
 
 How can I set PostgreSQL locale and encoding to be pl_PL.cp1250 all I
 can do is pl_PL.UTF-8.
 I have PG 8.2.4 on Ubuntu 10.04 (polish version).
 

I had similar issues with the Greek ISO locale/encoding and the UTF one.
If i had known earlier i would have used UTF since day one.
UTF-8 support is pretty mature, so i don't see any reason for people to
hold on to ISO encodings (in year 2011!)

The locale for your whole cluster is defined in postgresql.conf.
(and i believe the value is inherited by the ENV variables at initdb time)
Now each database has its *own* encoding.
psql -l 
lists who has what.


 -- 
 Łukasz Brodziak
 What if everyting around You isn't quite as it seems,
 What if all the world You think You know is an inelaborate dream
 When You look at Your reflection is that all you want it to be
 What if You could look right through the cracks
 Would You find Yourself.. Find Yourself afraid to see
 



-- 
Achilleas Mantzios



Re: [ADMIN] Postgresql 9.0.2 explain analyze very slow (10x), compared to actual query time

2011-01-17 Thread Achilleas Mantzios
On Saturday 15 January 2011 00:35:44, Kevin Grittner wrote:
 Achilleas Mantzios ach...@matrix.gatewaynet.com wrote:
  
  I agree about the subject line. But whouldn't -bugs be a more
  apropriate place than -performance?
  
 Is it returning incorrect results, or is it not running as fast as
 you would like?
alright i opened a thread in -performance.
  
 -Kevin
 



-- 
Achilleas Mantzios



Re: [ADMIN] Postgresql 9.0.2 explain analyze very slow (10x), compared to actual query time

2011-01-14 Thread Achilleas Mantzios
On Thursday 13 January 2011 16:13:23, Achilleas Mantzios wrote:
 
 Next step, if i am lucky tonight, i'll set up a new linux box with 9.0.2 and 
 load 
 the same small (coming from FBSD_DEV) database.
 
 So my problem was twofold : a) EXPLAIN ANALYZE performance, b) the particular 
 slow query.
 
 It seems there is only b) to solve. (see the previous email)
 
 A lot of thanx (but stay tuned ;)

Linux behaves the same way. I set up postgresql-9.0.2 on a fairly new debian 
(Debian GNU/Linux 5.0) box 
(2x AMD Athlon @ 2.2GHz, 4GB mem) (lets call this LINUX_TEST)
with the exact same small db as currently in the FBSD_TEST test (derived from 
FBSD_DEV),
and the times i get for this particular query (known from the previous posts) 
are bad 
(and of course much worse than on the faster FBSD_TEST, which is normal since 
LINUX_TEST
is a slower/older machine).
The plans are identical for the two 9.0.2 boxes (FBSD_TEST, LINUX_TEST).
On LINUX_TEST i have left only postgres running, to decrease the 
uncontrolled consumption of resources
and reduce the probability of swapping (i.e. gave the whole machine to postgresql).

The query is (i rewrite it here)
SELECT distinct m.id,coalesce(m.givenname,''),coalesce(m.midname,''),m.surname 
from marinerstates ms,vessels vsl,mariner m where 
m.id=ms.marinerid and ms.vslid=vsl.id and ms.state='Active' and 
coalesce(ms.endtime,now())::date >= '2006-07-15' and 
ms.starttime::date <= '2007-01-11'  and m.marinertype='Mariner'  and m.id not 
in 
   (SELECT distinct mold.id from marinerstates msold,vessels vslold,mariner 
mold  where mold.id=msold.marinerid and msold.vslid=vslold.id and 
msold.state='Active' and coalesce(msold.endtime,now())::date >= 
'2006-07-15' and msold.starttime::date <= '2007-01-11' and 
exists (select 1 from marinerstates msold2 where 
msold2.marinerid=msold.marinerid and msold2.state='Active' and msold2.id <> 
msold.id and 
   msold2.starttime < msold.starttime AND 
(msold.starttime-msold2.endtime) <= '18 months')
and mold.marinertype='Mariner' ) 
order by m.surname,coalesce(m.givenname,''),coalesce(m.midname,'');

LINUX_TEST : query : Time: 6685.910 ms , EXPLAIN ANALYZE : Total runtime: 
69.959 ms
FBSD_TEST : query : Time: 2284,918 ms , EXPLAIN ANALYZE : Total runtime: 
10432.405 ms

And for the NOT EXISTS version:
SELECT distinct m.id,coalesce(m.givenname,''),coalesce(m.midname,''),m.surname 
from marinerstates ms,vessels vsl,mariner m where 
m.id=ms.marinerid and ms.vslid=vsl.id and ms.state='Active' and 
coalesce(ms.endtime,now())::date >= '2006-07-15' and 
ms.starttime::date <= '2007-01-11'  and m.marinertype='Mariner'  and NOT EXISTS 
   (SELECT distinct mold.id from marinerstates msold,vessels vslold,mariner 
mold  where mold.id=msold.marinerid and msold.vslid=vslold.id and 
   msold.state='Active' and coalesce(msold.endtime,now())::date >= '2006-07-15' 
and msold.starttime::date <= '2007-01-11' and 
   exists (select 1 from marinerstates msold2 where 
msold2.marinerid=msold.marinerid and msold2.state='Active' and msold2.id <> 
msold.id and 
  msold2.starttime < msold.starttime AND 
(msold.starttime-msold2.endtime) <= '18 months')  
   and mold.marinertype='Mariner' AND mold.id=m.id) 
order by m.surname,coalesce(m.givenname,''),coalesce(m.midname,'');

LINUX_TEST : query : Time: 771.730 ms , EXPLAIN ANALYZE : Total runtime: 
1248.320 ms
FBSD_TEST : query : Time: 139,568 ms , EXPLAIN ANALYZE : Total runtime: 173.524 
ms

So, it seems that the original NOT IN version of the query suffers in 
postgresql 9.0.2.

-- 
Achilleas Mantzios



Re: [ADMIN] Postgresql 9.0.2 explain analyze very slow (10x), compared to actual query time

2011-01-14 Thread Achilleas Mantzios
On Friday 14 January 2011 16:41:18, you wrote:
 Achilleas Mantzios ach...@matrix.gatewaynet.com wrote:
  
  and NOT EXISTS 
 (SELECT distinct mold.id from marinerstates
  
 Does that do better if you leave out the DISTINCT keyword?  Maybe
 use * instead of the column name?

It's already really fast, it doesn't matter. Anyway, i just tried without the 
distinct 
...NOT EXISTS (SELECT 1 from ...
and it's the same speed. The other form (NOT IN) is the problem.

  
 -Kevin
 



-- 
Achilleas Mantzios



Re: [ADMIN] Postgresql 9.0.2 explain analyze very slow (10x), compared to actual query time

2011-01-14 Thread Achilleas Mantzios
On Friday 14 January 2011 16:41:18, you wrote:
 Achilleas Mantzios ach...@matrix.gatewaynet.com wrote:
  
  and NOT EXISTS 
 (SELECT distinct mold.id from marinerstates
  
 Does that do better if you leave out the DISTINCT keyword?  Maybe
 use * instead of the column name?

It's already really fast, it doesn't matter. Anyway, i just tried without the 
distinct 
...NOT EXISTS (SELECT 1 from ...
and it's the same (high) speed. The other form (NOT IN) is the problem (in the 
case of postgresql-9.0.2).
(in 8.3 both queries are fast)

  
 -Kevin
 



-- 
Achilleas Mantzios



Re: [ADMIN] Postgresql 9.0.2 explain analyze very slow (10x), compared to actual query time

2011-01-14 Thread Achilleas Mantzios
On Friday 14 January 2011 22:06:56, Kevin Grittner wrote:
 Achilleas Mantzios ach...@matrix.gatewaynet.com wrote:
  
  The other form (NOT IN) is the problem (in the case of
  postgresql-9.0.2).  (in 8.3 both queries are fast)
  
 So running EXPLAIN ANALYZE for this plan on the 9.0 version checks
 the time in more places than the 8.3 version.  For a plan that's an
 order of magnitude slower than the plan used by the recommended
 version of the query.  Have you looked at what options you have for
 getting a faster clock reading on this slower/older machine? 

Kevin, for the moment let's forget about the slow EXPLAIN ANALYZE part.
Also let's forget about the fact that the LINUX_TEST box running 9.0.2 is a 
slower/older machine.
(it's definitely faster/newer than FBSD_DEV)
The reason i went to the trouble of setting up a new linux box (the 4th box in 
the test) just for this problem was to show that
the thing is not FreeBSD related.

We have clearly a query that runs fine in two boxes (FreeBSD - FBSD_DEV, Linux 
LINUX_PROD) running PostgreSQL 8.3.13
but runs a lot slower in two other boxes (FreeBSD - FBSD_TEST, Linux 
LINUX_TEST) running PostgreSQL 9.0.2

It is true that the thread might have to move over to pgsql-performance if 
there were no 8.3.13 involved,
in other words if we had only the old EXISTS vs IN situation.

Here we have something which runs fast in 8.3.13 (over a variety of OS/HW) but 
slow in 9.0.2 (over a variety of OS/HW).

 Is 
 there something you're asking for as a change to the PostgreSQL
 product?
  
What i am asking for is ways to understand what is going on, and what to 
expect when i actually perform the production
upgrade to 9.0.2.
 -Kevin
 



-- 
Achilleas Mantzios



Re: [ADMIN] Postgresql 9.0.2 explain analyze very slow (10x), compared to actual query time

2011-01-14 Thread Achilleas Mantzios
On Friday 14 January 2011 23:20:25, Kevin Grittner wrote:
 Achilleas Mantzios ach...@matrix.gatewaynet.com wrote:
  
  What i am asking is ways to understand what is going on, and what
  to expect when i will have to actually perform the production
  upgrade to 9.0.2
  
 Sorry to have focused on the wrong issues from your latest post. 
 Here's what I suggest to try to get you the best information.
  
 (1)  Abandon this thread, start a new one on the performance list
 with a different subject line.

I agree about the subject line. But wouldn't -bugs be a more appropriate place 
than -performance?
Pre 8.3 (e.g. 7.4), EXISTS used to run faster than IN; this was fixed in 8.3 
(which was IMO a milestone in postgresql performance).
Shouldn't that be the case with 9.0 as well?

  
  
 -Kevin
 



-- 
Achilleas Mantzios



Re: [ADMIN] Postgresql 9.0.2 explain analyze very slow (10x), compared to actual query time

2011-01-13 Thread Achilleas Mantzios
On Wednesday 12 January 2011 21:17:00, Alex Hunsaker wrote:
 2011/1/12 Achilleas Mantzios ach...@matrix.gatewaynet.com:
 
 
  Changed that to smth like: micros_total = micros_total + (double) micros;
  instead of the printf to beat any compiler optimization, and still linux 
  runs at light speed:
  FBSD_TEST : user 0.089s, sys 1.4s
  FBSD_DEV : user 0.183s, sys 3.8s
  LINUX_PROD : user 0.168s, sys 0s
  (regarding that gettimeofday is a syscall in FreeBSD, and that sys time is 
  0 for linux, makes me think some optimization is being done)
 
 At least on x86_64, linux will use a vsyscall (v for virtual) for
 gettimeofday which does not actually need to enter/exit the kernel.
 That probably explains a large part of the speed difference here.
 

Thanx, that's the ARCH (x86_64) of our linux system.


-- 
Achilleas Mantzios



Re: [ADMIN] Postgresql 9.0.2 explain analyze very slow (10x), compared to actual query time

2011-01-13 Thread Achilleas Mantzios
On Wednesday 12 January 2011 19:17:28, Mark Felder wrote:
 On Wed, 12 Jan 2011 10:45:20 -0600, Tom Lane t...@sss.pgh.pa.us wrote:
 
  The short of it is that cheap motherboards tend to provide cheap clock 
  hardware that takes about a microsecond to read.
 
 I'm not sure how gettimeofday() works, but is there a slight chance this  
 could be alleviated by choosing a different kern.timecounter.hardware  
 option?
 
 kern.timecounter.choice: TSC(800) HPET(900) ACPI-fast(1000) i8254(0)  
 dummy(-100)
 kern.timecounter.hardware: ACPI-fast
 
 Or does the kernel timer stuff have nothing to do with gettimeofday()?
Thanx,
Actually setting sysctl kern.timecounter.hardware=TSC
and making the program look like the following (i increased the loop from 
1,000,000 to 100,000,000):

#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
int main(int argc,char** argv) {
    struct timeval *tp=calloc(1,sizeof(struct timeval));
    int runna;
    double micros_total=0;
    for (runna=0;runna<100000000;runna++) {
        int rc=gettimeofday(tp,NULL);
        long micros = tp->tv_sec * 1000000 + tp->tv_usec;
        micros_total = micros_total + (double) micros;
    }
    printf("micros_total=%f\n",micros_total);
}

I get in LINUX_PROD
time ./gettimeofday_test
micros_total=129491418242412709937152.00

real0m16.665s
user0m16.621s
sys 0m0.000s

and in FBSD_TEST
time ./gettimeofday_test
micros_total=129491437864563859521536.00

real0m15.506s
user0m0.517s
sys 0m1.161s

So the FBSD_TEST case, although it still involves some kernel overhead, is 
faster than LINUX_PROD
(but i am puzzled as to where those 15.5 secs of real time come from, since 
user+sys only account for about 1.7 of them)
 
 
 Regards,
 
 
 Mark
 



-- 
Achilleas Mantzios



Re: [ADMIN] Postgresql 9.0.2 explain analyze very slow (10x), compared to actual query time

2011-01-13 Thread Achilleas Mantzios
On Wednesday 12 January 2011 18:45:20, Tom Lane wrote:
 Achilleas Mantzios ach...@matrix.gatewaynet.com writes:
  On Wednesday 12 January 2011 17:07:53, Tom Lane wrote:
  Right offhand I'd wonder whether that was more bound by gettimeofday or
  by printf.  Please try it without printf in the loop.
 
  Changed that to smth like: micros_total = micros_total + (double) micros;
  instead of the printf to beat any compiler optimization, and still linux 
  runs at light speed:
  FBSD_TEST : user 0.089s, sys 1.4s
  FBSD_DEV : user 0.183s, sys 3.8s
  LINUX_PROD : user 0.168s, sys 0s
 
 Well, there's your problem all right: the FBSD_DEV system takes 22X
 longer to execute gettimeofday() than the LINUX_PROD system.  The
 particular plan that 9.0 is choosing is especially vulnerable to this
 because it involves a whole lot of calls of the Materialize plan node:
 
 ->  Nested Loop  (cost=3390.49..9185.73 
  rows=1 width=4) (actual time=279.916..93280.499 rows=1824 loops=1)
   Join Filter: (msold.marinerid = 
  mold.id)
   ->  Index Scan using mariner_pkey 
  on mariner mold  (cost=0.00..1957.89 rows=15914 width=4) (actual 
  time=0.009..38.449 rows=15914 loops=1)
 Filter: ((marinertype)::text 
  = 'Mariner'::text)
   ->  Materialize  
  (cost=3390.49..6989.13 rows=1 width=8) (actual time=0.013..2.881 rows=1888 
  loops=15914)
 
 You've got 1888 * 15914 * 2 gettimeofday() calls needed to count up the
 time spent in Materialize, which at 3.8usec apiece adds up to ... hmm
 ... I get 228 seconds, which is more than the reported runtime.  So I
 think there's something wrong with your measurement of the gettimeofday
 cost.  But I wouldn't be a bit surprised to hear that the true cost of
 gettimeofday on that machine is a microsecond or so, in which case all
 of the EXPLAIN ANALYZE time bloat is indeed due to that.
 
 This problem has been discussed repeatedly on the pgsql-performance
 list, which is where your question really belongs, not here.  The short
 of it is that cheap motherboards tend to provide cheap clock hardware
 that takes about a microsecond to read.  It also helps to be running
 an OS that is able to read the clock in userspace without invoking a
 kernel call.  It looks like your Linux box is winning on both counts
 compared to your BSD boxes.

Just comparing the two FBSD boxes on equal terms gives crazy results, so
i'll try to leave LINUX out of the equation for a while, and also i'll try to 
leave the 
difference in DB sizes out of the equation. So what i did was make a dump of the 
small DB 
on the small (slow) FBSD box (FBSD_DEV) and load it on the bigger (fast) FBSD 
box (FBSD_TEST).
I set sysctl kern.timecounter.hardware=TSC on both systems,
and now the below program

#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
int main(int argc,char** argv) {
    struct timeval *tp=calloc(1,sizeof(struct timeval));
    int runna;
    long double micros_total=0;
    for (runna=0;runna<1000000;runna++) {
        int rc=gettimeofday(tp,NULL);
        long micros = tp->tv_sec * 1000000 + tp->tv_usec;
        micros_total = micros_total + (long double) micros;
    }
    printf("micros_total=%LE\n",micros_total);
}

% time ./gettimeofday_test gives 

FBSD_DEV (slow) : 0.127u 0.231s 0:00.35 100.0%5+192k 0+0io 0pf+0w
FBSD_TEST (fast) : 0.002u 0.014s 0:00.15 6.6%  84+24192k 0+0io 0pf+0w

Database on the fast system, as i said, is exactly the same as on the slow one.

The query SELECT avg(md.perioddue) from status st,items it,machdefs md WHERE 
st.apptblidval=it.id AND it.defid=md.defid;
gives with \timing

FBSD_DEV (slow) : Time: 28747,374 ms, EXPLAIN ANALYZE: Total runtime: 31816.648 
ms 
FBSD_TEST (fast) : Time: 3153,372 ms, EXPLAIN ANALYZE: Total runtime: 5701.942 
ms

(the fast system is indeed a lot faster on this one (by 10x))

However the query (known from the initial mail as well)
SELECT distinct m.id,coalesce(m.givenname,''),coalesce(m.midname,''),m.surname 
from marinerstates ms,vessels vsl,mariner m 
where m.id=ms.marinerid and ms.vslid=vsl.id and ms.state='Active' and 
coalesce(ms.endtime,now())::date >= '2006-07-15' and 
ms.starttime::date <= '2007-01-11'  and m.marinertype='Mariner'  and m.id not 
in 
   (SELECT distinct mold.id from marinerstates msold,vessels vslold,mariner 
mold  where mold.id=msold.marinerid and msold.vslid=vslold.id 
  and msold.state='Active' and coalesce(msold.endtime,now())::date >= 
'2006-07-15' and msold.starttime::date <= '2007-01-11' and exists 
 (select 1 from marinerstates msold2 where 
msold2.marinerid=msold.marinerid and msold2.state='Active' and msold2.id <> 
msold.id and 
   msold2.starttime < msold.starttime AND 
(msold.starttime-msold2.endtime) <= '18 months')  and mold.marinertype='Mariner

Re: [ADMIN] Postgresql 9.0.2 explain analyze very slow (10x), compared to actual query time

2011-01-13 Thread Achilleas Mantzios
On Thursday 13 January 2011 14:13:21, Achilleas Mantzios wrote:

 SELECT distinct 
 m.id,coalesce(m.givenname,''),coalesce(m.midname,''),m.surname from 
 marinerstates ms,vessels vsl,mariner m 
 where m.id=ms.marinerid and ms.vslid=vsl.id and ms.state='Active' and 
 coalesce(ms.endtime,now())::date >= '2006-07-15' and 
 ms.starttime::date <= '2007-01-11'  and m.marinertype='Mariner'  and m.id not 
 in 
(SELECT distinct mold.id from marinerstates msold,vessels vslold,mariner 
 mold  where mold.id=msold.marinerid and msold.vslid=vslold.id 
   and msold.state='Active' and coalesce(msold.endtime,now())::date >= 
 '2006-07-15' and msold.starttime::date <= '2007-01-11' and exists 
  (select 1 from marinerstates msold2 where 
 msold2.marinerid=msold.marinerid and msold2.state='Active' and msold2.id <> 
 msold.id and 
msold2.starttime < msold.starttime AND 
 (msold.starttime-msold2.endtime) <= '18 months')  and 
 mold.marinertype='Mariner' ) 
order by m.surname,coalesce(m.givenname,''),coalesce(m.midname,'')

Rewriting the query with NOT EXISTS (a practice quite common pre 8.3, IIRC) as:

SELECT distinct m.id,coalesce(m.givenname,''),coalesce(m.midname,''),m.surname 
from marinerstates ms,vessels vsl,mariner m 
where m.id=ms.marinerid and ms.vslid=vsl.id and ms.state='Active' and 
coalesce(ms.endtime,now())::date >= '2006-07-15' and 
ms.starttime::date <= '2007-01-11'  and m.marinertype='Mariner'  and  NOT 
EXISTS 
  (SELECT distinct mold.id from marinerstates msold,vessels 
vslold,mariner mold  where mold.id=msold.marinerid and 
  msold.vslid=vslold.id and msold.state='Active' and 
coalesce(msold.endtime,now())::date >= '2006-07-15' and 
  msold.starttime::date <= '2007-01-11' and exists 
  (select 1 from marinerstates msold2 where 
msold2.marinerid=msold.marinerid and 
  msold2.state='Active' and msold2.id <> msold.id and 
msold2.starttime < msold.starttime AND 
  (msold.starttime-msold2.endtime) <= '18 months')  and 
mold.marinertype='Mariner' AND mold.id=m.id) 
order by m.surname,coalesce(m.givenname,''),coalesce(m.midname,'');

Is fast.

-- 
Achilleas Mantzios



Re: [ADMIN] Postgresql 9.0.2 explain analyze very slow (10x), compared to actual query time

2011-01-13 Thread Achilleas Mantzios
On Wednesday 12 January 2011 18:45:20, Tom Lane wrote:
 Achilleas Mantzios ach...@matrix.gatewaynet.com writes:
  On Wednesday 12 January 2011 17:07:53, Tom Lane wrote:
  Right offhand I'd wonder whether that was more bound by gettimeofday or
  by printf.  Please try it without printf in the loop.
 
  Changed that to smth like: micros_total = micros_total + (double) micros;
  instead of the printf to beat any compiler optimization, and still linux 
  runs at light speed:
  FBSD_TEST : user 0.089s, sys 1.4s
  FBSD_DEV : user 0.183s, sys 3.8s
  LINUX_PROD : user 0.168s, sys 0s
 
 Well, there's your problem all right: the FBSD_DEV system takes 22X
 longer to execute gettimeofday() than the LINUX_PROD system.  The
 particular plan that 9.0 is choosing is especially vulnerable to this
 because it involves a whole lot of calls of the Materialize plan node:
 

You are absolutely right!
Changing kern.timecounter.hardware=TSC (from ACPI-fast) *solved* the slow 
explain analyze effect.
Now i get reasonable EXPLAIN ANALYZE times!

 It also helps to be running
 an OS that is able to read the clock in userspace without invoking a
 kernel call.  It looks like your Linux box is winning on both counts
 compared to your BSD boxes.

Next step, if i am lucky tonight, i'll set up a new linux box with 9.0.2 and 
load 
the same small (coming from FBSD_DEV) database.

So my problem was twofold : a) EXPLAIN ANALYZE performance, b) the particular 
slow query.

It seems there is only b) to solve. (see the previous email)

A lot of thanx (but stay tuned ;)

 
   regards, tom lane
 



-- 
Achilleas Mantzios



Re: [ADMIN] Postgresql 9.0.2 explain analyze very slow (10x), compared to actual query time

2011-01-12 Thread Achilleas Mantzios
On Tuesday 11 January 2011 18:19:11, Tom Lane wrote:
 Achilleas Mantzios ach...@matrix.gatewaynet.com writes:
  I have noticed that in 9.0.2 EXPLAIN ANALYZE takes considerably longer time 
  to return than the actual query. I never noticed this in 8.3.
 
 Same hardware?  What it sounds like is you're running 9.0 on a machine
 with slow gettimeofday().
 

It is on different hardware. I have three setups here; i have written a small 
program to assess gettimeofday.
My setups are as follows
1) Prod machine (thereafter called LINUX_PROD) :
System: Linux Suse 2.6.16.46-0.12-smp, 16 x Intel Xeon(R) X7350 @ 2.93GHz, 64GB 
memory
DB: PostgreSQL 8.3.13, shared_buffers=16GB, work_mem=512MB, db size=94GB
2) Dev machine (therafter called FBSD_DEV) :
System : FreeBSD 6.3, Intel(R) Core(TM)2 Duo CPU @ 2.80GHz, 2GB memory
DB: PostgreSQL 8.3.13, shared_buffers=512MB, work_mem=1MB, db size=76GB
3) Test machine (thereafter called FBSD_TEST) :
System: FreeBSD 8.1, 4 x AMD Phenom(tm) 965 @ 3.4 GHz, 8GB memory
DB: PostgreSQL 9.0.2, shared_buffers=5GB, work_mem=512MB, db size=7GB

The FBSD_TEST system, in the majority of cases, seems to outperform LINUX_PROD
(a rough impression from watching execution times of my app, random benchmarks, 
nothing too detailed, but it is still a very fast system,
and i have been testing with this new system for over 2 months, testing 9.0.2 
performance, etc..)
However i found two issues with FBSD_TEST: The EXPLAIN ANALYZE performance, and 
one specific query
which runs considerably slower in FBSD_TEST than both LINUX_PROD and FBSD_DEV.
Regarding gettimeofday (2), i wrote this program :
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
int main(int argc,char** argv) {
    struct timeval *tp=calloc(1,sizeof(struct timeval));
    int runna;
    for (runna=0;runna<1000000;runna++) {
        int rc=gettimeofday(tp,NULL);
        long micros = tp->tv_sec * 1000000 + tp->tv_usec;
        printf("cur_time=%ld micro secs\n",micros);
    }
}

For some reason, LINUX_PROD runs this program much, much faster than the two 
FBSDs (FBSD_DEV, FBSD_TEST)
I get these values for 
% time ./gettimeofday_test  /dev/null
LINUX_PROD real=0m0.373s, user=0m0.364s, sys=0m0.004s
FBSD_DEV real=4.29s, user=0.526s, sys=3.764s
FBSD_TEST real=1.66s, user=0.260s, sys=1.407s

So at least regarding gettimeofday, and setting linux aside, the slower fbsd 
(FBSD_DEV) seems considerably slower
than the faster fbsd (FBSD_TEST).

However for the query:

SELECT distinct m.id,coalesce(m.givenname,''),coalesce(m.midname,''),m.surname 
from marinerstates ms,vessels vsl,mariner m 
where m.id=ms.marinerid and ms.vslid=vsl.id and ms.state='Active' and 
coalesce(ms.endtime,now())::date = '2006-07-15' and 
ms.starttime::date = '2007-01-11'  and m.marinertype='Mariner'  and m.id not 
in 
(SELECT distinct mold.id from marinerstates msold,vessels vslold,mariner mold  
where mold.id=msold.marinerid and msold.vslid=vslold.id and 
msold.state='Active' and coalesce(msold.endtime,now())::date = '2006-07-15' 
and msold.starttime::date = '2007-01-11' and exists 
(select 1 from marinerstates msold2 where msold2.marinerid=msold.marinerid and 
msold2.state='Active' and msold2.id  msold.id and 
msold2.starttimemsold.starttime AND (msold.starttime-msold2.endtime)='18 
months')  and mold.marinertype='Mariner' ) 
order by m.surname,coalesce(m.givenname,''),coalesce(m.midname,'');   
with the EXPLAIN ANALYZE version

i get the following execution times: (with \timing) (note however that FBSD_DEV 
has a considerably smaller database; the other two are comparable)
FBSD_DEV : query : 240,419 ms, EXPLAIN ANALYZE query : Total runtime: 538.468 ms
LINUX_PROD : query : 219.568 ms, EXPLAIN ANALYZE query : Total runtime: 216.926 
ms
FBSD_TEST : query : 2587,465 ms, EXPLAIN ANALYZE query : Total runtime: 
93711.648 ms
The last numbers seem huge.

Up to now this query was the first operation in which i witnessed FBSD_TEST to 
be actually slower than LINUX_PROD,
but that could be my fault as a DBA, haven't looked much into EXPLAIN yet.
However the performance of EXPLAIN ANALYZE is very bad any way you see it.

In other queries (the majority of them), FBSD_TEST is slightly faster than 
LINUX_PROD, e.g
SELECT avg(md.perioddue) from status st,items it,machdefs md WHERE 
st.id=3626961 AND st.apptblidval=it.id AND it.defid=md.defid;
i get:
FBSD_DEV : query : 20166,047 ms, EXPLAIN ANALYZE : Total runtime: 74245.576 ms
LINUX_PROD : query : 5908.960 ms, EXPLAIN ANALYZE : Total runtime: 8847.124 ms
FBSD_TEST : query : 4190,873 ms, EXPLAIN ANALYZE : Total runtime: 30334.086 ms

Here we see that although FBSD_TEST is the fastest of the 3, its EXPLAIN 
ANALYZE performance
is nowhere near the plain query.

What could i do to have a better understanding of why this is happening?
Thanx

   regards, tom lane
 



-- 
Achilleas Mantzios


Re: [ADMIN] Postgresql 9.0.2 explain analyze very slow (10x), compared to actual query time

2011-01-12 Thread Achilleas Mantzios
On Wednesday 12 January 2011 17:07:53, Tom Lane wrote:
 Achilleas Mantzios ach...@matrix.gatewaynet.com writes:
  Regarding gettimeofday (2), i wrote this program :
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/time.h>
  int main(int argc,char** argv) {
  struct timeval *tp=calloc(1,sizeof(struct timeval));
  int runna;
  for (runna=0;runna<1000000;runna++) {
  int rc=gettimeofday(tp,NULL);
  long micros = tp->tv_sec * 1000000 + tp->tv_usec;
  printf("cur_time=%ld micro secs\n",micros);
  }
  }
 
 Right offhand I'd wonder whether that was more bound by gettimeofday or
 by printf.  Please try it without printf in the loop.
 

Changed that to something like: micros_total = micros_total + (double) micros;
instead of the printf, to beat any compiler optimization, and still linux runs 
at light speed:
FBSD_TEST : user 0.089s, sys 1.4s
FBSD_DEV : user 0.183s, sys 3.8s
LINUX_PROD : user 0.168s, sys 0s
(given that gettimeofday is a syscall in FreeBSD, and that sys time is 0 
for linux, this makes me think some optimization is being done)

  3) Test machine (thereafter called FBSD_TEST) :
  System: FreeBSD 8.1, 4 x AMD Phenom(tm) 965 @ 3.4 GHz, 8GB memory
  DB: PostgreSQL 9.0.2, shared_buffers=5GB, work_mem=512MB, db size=7GB
 
 The shared_buffers setting seems way out of line compared to actual
 memory on this machine.  Maybe it's swapping.
 

I tried with what pgtune suggests (1920MB) and i get the same results.
For the simple query: SELECT avg(md.perioddue) from status st,items it,machdefs 
md WHERE st.id=3626961 AND st.apptblidval=it.id AND it.defid=md.defid;
FBSD_TEST still runs faster than LINUX_PROD, something like (3.5 secs vs 5.8 secs)

  I get the following execution times (with \timing) (note however that 
  FBSD_DEV has a considerably smaller database; the other two are comparable):
  FBSD_DEV : query : 240,419 ms, EXPLAIN ANALYZE query : Total runtime: 
  538.468 ms
  LINUX_PROD : query : 219.568 ms, EXPLAIN ANALYZE query : Total runtime: 
  216.926 ms
  FBSD_TEST : query : 2587,465 ms, EXPLAIN ANALYZE query : Total runtime: 
  93711.648 ms
  The last numbers seem huge.
 
 Are they in fact the same query plans in each case?

The query plans seem to differ. A lot of seq scans in the FBSD case.
I attach the query plans for LINUX_PROD, FBSD_TEST (the times did not change 
noticeably after the new shared_buffers setting)

 
   regards, tom lane
 



-- 
Achilleas Mantzios

  QUERY PLAN
   
---
 Unique  (cost=860732.31..860761.51 rows=2336 width=22) (actual 
time=221.113..221.472 rows=577 loops=1)
   ->  Sort  (cost=860732.31..860738.15 rows=2336 width=22) (actual 
time=221.110..221.220 rows=577 loops=1)
 Sort Key: m.surname, (COALESCE(m.givenname, ''::character varying)), 
(COALESCE(m.midname, ''::character varying)), m.id
 Sort Method:  quicksort  Memory: 69kB
 ->  Hash Join  (cost=856660.40..860601.61 rows=2336 width=22) (actual 
time=169.164..220.490 rows=577 loops=1)
   Hash Cond: (ms.vslid = vsl.id)
   ->  Hash Join  (cost=856620.20..860526.95 rows=2958 width=26) 
(actual time=168.871..219.793 rows=577 loops=1)
 Hash Cond: (ms.marinerid = m.id)
 ->  Seq Scan on marinerstates ms  (cost=0.00..3793.75 
rows=6067 width=8) (actual time=0.012..48.267 rows=2630 loops=1)
   Filter: (((state)::text = 'Active'::text) AND 
((starttime)::date = '2007-01-11'::date) AND ((COALESCE(endtime, now()))::date 
= '2006-07-15'::date))
 ->  Hash  (cost=856509.66..856509.66 rows=8843 width=22) 
(actual time=168.810..168.810 rows=14518 loops=1)
   ->  Index Scan using mariner_pkey on mariner m  
(cost=854253.06..856509.66 rows=8843 width=22) (actual time=144.906..162.369 
rows=14518 loops=1)
 Filter: ((NOT (hashed subplan)) AND 
((marinertype)::text = 'Mariner'::text))
 SubPlan
   ->  Unique  (cost=0.00..854247.22 rows=2336 
width=4) (actual time=0.079..143.656 rows=1448 loops=1)
 ->  Merge Join  (cost=0.00..854241.38 
rows=2336 width=4) (actual time=0.078..142.634 rows=1824 loops=1)
   Merge Cond: (msold.marinerid = 
mold.id)
   ->  Nested Loop  
(cost=0.00..851962.29 rows=2396 width=4) (actual time=0.065..122.789 rows=1888 
loops=1

[ADMIN] Postgresql 9.0.2 explain analyze very slow (10x), compared to actual query time

2011-01-11 Thread Achilleas Mantzios
Hello,
I have noticed that in 9.0.2, EXPLAIN ANALYZE takes considerably longer to 
return than the actual query. I never noticed this in 8.3.
Example:
In 9.0.2 i get
dynacom=# EXPLAIN ANALYZE SELECT count(ms.id) from marinerstates ms,mariner m 
where ms.endtime IS NOT NULL AND ms.marinerid=m.id;
QUERY PLAN
--
 Aggregate  (cost=4716.48..4716.49 rows=1 width=4) (actual 
time=413.378..413.379 rows=1 loops=1)
   ->  Hash Join  (cost=788.77..4582.04 rows=53775 width=4) (actual 
time=54.029..330.286 rows=53845 loops=1)
 Hash Cond: (ms.marinerid = m.id)
 ->  Seq Scan on marinerstates ms  (cost=0.00..2583.33 rows=53775 
width=8) (actual time=0.005..99.092 rows=53845 loops=1)
   Filter: (endtime IS NOT NULL)
 ->  Hash  (cost=583.90..583.90 rows=16390 width=4) (actual 
time=54.011..54.011 rows=16390 loops=1)
   Buckets: 2048  Batches: 1  Memory Usage: 577kB
   ->  Seq Scan on mariner m  (cost=0.00..583.90 rows=16390 
width=4) (actual time=0.004..27.587 rows=16390 loops=1)
 Total runtime: 413.495 ms

dynacom=# \timing
Timing is on.
dynacom=# SELECT count(ms.id) from marinerstates ms,mariner m where ms.endtime 
IS NOT NULL AND ms.marinerid=m.id;
 count
---
 53845
(1 row)

Time: 39,038 ms

So the actual time (39 ms) differs considerably from the EXPLAIN ANALYZE Total 
runtime (413 ms), about 10 times larger.

In 8.3 i get
postg...@dynacom=# EXPLAIN ANALYZE SELECT count(ms.id) from marinerstates 
ms,mariner m where ms.endtime IS NOT NULL AND ms.marinerid=m.id;
QUERY PLAN
--
 Aggregate  (cost=5094.13..5094.14 rows=1 width=4) (actual time=84.180..84.180 
rows=1 loops=1)
   ->  Hash Join  (cost=869.15..4954.27 rows=55942 width=4) (actual 
time=11.284..72.630 rows=54168 loops=1)
 Hash Cond: (ms.marinerid = m.id)
 ->  Seq Scan on marinerstates ms  (cost=0.00..2756.50 rows=55942 
width=8) (actual time=0.013..25.696 rows=54168 loops=1)
   Filter: (endtime IS NOT NULL)
 ->  Hash  (cost=642.40..642.40 rows=18140 width=4) (actual 
time=11.256..11.256 rows=16442 loops=1)
   ->  Seq Scan on mariner m  (cost=0.00..642.40 rows=18140 
width=4) (actual time=0.007..5.762 rows=16442 loops=1)
 Total runtime: 84.346 ms
(8 rows)

postg...@dynacom=# SELECT count(ms.id) from marinerstates ms,mariner m where 
ms.endtime IS NOT NULL AND ms.marinerid=m.id;
 count
---
 54168
(1 row)

Time: 42.224 ms

In 8.3 the Total runtime is only double the value of the actual time.

Is there anything that can shed some light on this?
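A likely explanation: EXPLAIN ANALYZE brackets every plan-node invocation with a pair of gettimeofday() calls, so its overhead grows with the number of rows processed and with the speed of the platform's clock source, which fits the FreeBSD-vs-Linux gettimeofday timings measured elsewhere in this thread. On PostgreSQL 9.2 and later (not the 9.0.2 discussed here), the per-row timing can be switched off to check this:

```sql
-- Counts rows per plan node without per-row clock reads (9.2+ only);
-- if this runs close to the bare query, the slowdown is timer overhead.
EXPLAIN (ANALYZE, TIMING OFF)
SELECT count(ms.id)
FROM marinerstates ms, mariner m
WHERE ms.endtime IS NOT NULL
  AND ms.marinerid = m.id;
```

On 9.0 itself, comparing the plain \timing result against the EXPLAIN ANALYZE Total runtime, as done above, is the practical way to estimate the instrumentation overhead.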

-- 
Achilleas Mantzios



Re: [ADMIN] SET SESSION AUTHORIZATION and log file entries

2011-01-04 Thread Achilleas Mantzios
On Wednesday 29 December 2010 16:06:41, Achilleas Mantzios wrote:
 Hello,
 i noticed that the user name displayed in the postgresql log (specified by 
 %u in log_line_prefix) does not show the name of the effective user 
 (current_user) as set by SET SESSION AUTHORIZATION, but instead shows the 
 initial user who first authenticated against the said connection.
 I think it would be better to have this user reported in the log files (or 
 to have another escape, e.g. %U, to denote that).

Also, I noticed (in 9.0.2) that both SET SESSION AUTHORIZATION and SET ROLE do 
not reflect this change in the pg_stat_get_activity function (and subsequently 
in the pg_stat_activity view).

I can imagine using either SET SESSION AUTHORIZATION or SET ROLE to achieve 
personalized connections in a connection pool, without sacrificing any of the 
beneficial principles of having a pool of equivalent connections.

I think having each user in an application connect with his/her own 
credentials and authorizations is a very good thing. Currently I am thinking 
of configuring something like this in the JBoss connection pool.
Unfortunately the current software only allows for subpools per user, making 
the reuse and sharing of connections impossible; but even if JBoss and the 
PostgreSQL JDBC driver allowed us to do that (via SET SESSION AUTHORIZATION 
or SET ROLE), the situation would still not be complete, since the entries in 
the log and pg_stat_activity would still show the initial user who opened the 
connection (unless there are other ways to change the current_user which I am 
missing).
I think the combination of the two worlds (having a generic single superpool 
of equivalent connections which everybody could use, on one hand, and being 
able to completely personalize connections acquired from the pool according 
to the application user, with that user's name appearing in the stats and in 
the log files as usesysid, on the other hand) is something both desirable and 
missing from the current solutions available.

In the case of a simpler web environment that might not be something very 
important, but in a business environment, having each user reported 
separately on the PostgreSQL side would, I think, be very good to have.
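As a sketch of the semantics involved (the role names pool_admin and alice are illustrative; SET SESSION AUTHORIZATION is only allowed when the originally authenticated user is a superuser):

```sql
-- Connection authenticated as a superuser, e.g. pool_admin.
SET SESSION AUTHORIZATION alice;    -- session_user and current_user both become alice
SELECT session_user, current_user;  -- alice | alice
RESET SESSION AUTHORIZATION;

SET ROLE alice;                     -- only current_user changes
SELECT session_user, current_user;  -- pool_admin | alice
RESET ROLE;
```

Even so, as observed above, the log_line_prefix %u escape and pg_stat_activity keep reporting the role that originally authenticated the connection.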

 -- 
 Achilleas Mantzios
 



-- 
Achilleas Mantzios



[ADMIN] SET SESSION AUTHORIZATION and log file entries

2010-12-30 Thread Achilleas Mantzios
Hello,
I noticed that the user name displayed in the postgresql log (specified by %u 
in log_line_prefix) does not show the name of the effective user 
(current_user) as set by SET SESSION AUTHORIZATION, but instead shows the 
initial user who first authenticated against the said connection.
I think it would be better to have this user reported in the log files (or to 
have another escape, e.g. %U, to denote that).
-- 
Achilleas Mantzios



Re: [ADMIN] PostgreSQL in Shared Disk Failover mode on FreeBSD+CARP+RAIDZ

2010-12-20 Thread Achilleas Mantzios
On Tuesday 21 December 2010 03:23:25, sn...@email.it wrote:
 Hi everybody,
 I'm trying to figure out a way to setup a PostgreSQL HA cluster solution.
 
 I've done some research and I've an idea of different possible solutions,
 but I'd honestly like to implement it using CARP in a Shared Disk Failover
 fashion.

This reminds me of the serialization stack on shared disks (DASD) in IBM's 
MVS operating system.
It takes a lot of work to do that at the OS level; it is something beyond 
multiprocessing and high availability.
Think about it: in FreeBSD (or any off-the-shelf Unix) there is no inherent 
way to implicitly lock files.
Even on the same machine, if two users/processes modify the same file, the 
one who saves (closes) the file last generally wins. To avoid that, you have 
to use explicit locking done by the application.
The OS by itself does not do it by default.
Things get tougher in a networked environment, since file serialization must 
then be applied at the network level.
This combination of data sharing (the idea of decoupling the concept of 
redundant hardware from the concept of redundant disks) with the 
characteristics of parallel computing was realized in IBM's Parallel Sysplex 
technology: 
http://en.wikipedia.org/wiki/IBM_Parallel_Sysplex
On top of it someone could run CICS, DB2, etc., almost any available 
application.
It would be interesting to know if there is some concept close to it in the 
open-source/Unix world.

-- 
Achilleas Mantzios



Re: [ADMIN] FK disappeared in 8.3.3

2010-11-26 Thread Achilleas Mantzios
On Tuesday 23 November 2010 17:22:29, Tom Lane wrote:
 I don't have enough information to show the exact chain of events, but I
 think that something like this is a lot more probable than a random
 hardware failure that just happened to produce these particular results.
 

I just got replies from 59 of our postgresql sites, of which more than 30 had 
gone through the same migration procedure, and *all* (but the problematic 
one) had the correct definitions for this table.
Since the FK creation was inside the dump and not some human-given extra 
command, I cannot think of anything other than that the error (the FK 
constraint being gone) was there before the initial migration to 8.3 (some 
time in 2009). So the error must have occurred while on 7.4.
And I can only think of a hardware error that caused the initial ON DELETE 
CASCADE to stop working in the live 7.4 database, leaving child rows with no 
corresponding parent rows, and thus causing the restore during the initial 
upgrade to 8.3 to fail.
I can think of a possible scenario such as:
1) the ON DELETE CASCADE mechanism stops working in 7.4
2) during the initial migration to 8.3, the statements in the 7.4 dump are like:

CREATE TABLE mailcrew_entity ...
COPY mailcrew_entity 

ALTER TABLE ONLY mailcrew_entity
ADD CONSTRAINT mailcrew_entity_message FOREIGN KEY (msgno) REFERENCES 
mailcrew_message(msgno) ON UPDATE CASCADE ON DELETE CASCADE;
^ 
this stmt fails since the FK constraint cannot be met.

3) the database keeps working in this manner...
4) the newer hardware fails as well
5) migration to new hardware (this July 2010); (I was onboard but didn't 
witness any error, because there was nothing to complain about in the log.)

I think that could explain how the FK constraint was lost.
So, I guess there is only 7.4/old hardware, or a combination of 7.4 and the 
old hardware, to blame.
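If a dump from that period is still around, this hypothesis can be checked directly: the ADD CONSTRAINT in the dump fails exactly when orphaned child rows exist, and those can be listed with an anti-join (table and column names taken from the DDL quoted above):

```sql
-- Child rows in mailcrew_entity whose msgno has no matching parent;
-- any hits here would have made the FK creation fail during restore.
SELECT e.msgno, count(*)
FROM mailcrew_entity e
LEFT JOIN mailcrew_message m ON m.msgno = e.msgno
WHERE m.msgno IS NULL
GROUP BY e.msgno;
```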

   regards, tom lane
  
 



-- 
Achilleas Mantzios



Re: [ADMIN] FK disappeared in 8.3.3

2010-11-23 Thread Achilleas Mantzios
I must add (which I should have included in the original post)
that apart from the FK, two triggers on the same table were also lost,
and in a second table another two triggers were lost
(all of them triggers for use with tsearch2).

On Monday 22 November 2010 18:11:12, you wrote:
 The differences do look suspiciously related to past cross-version
 changes.  I do not think they can be explained entirely by pg_dump
 differences, though.  In particular, there's no way that pg_dump
 could have caused the reported change in a view definition from
 ANY(ARRAY[...]) to a list of ORs.  What seems most likely to me is
 that that view was originally entered as
   WHERE machclasses.id IN (1, 2, 16, ...)
 Backends since 8.2 will translate that into ANY(ARRAY), which is how
 it'll get reverse-listed by \d or pg_dump.  Older versions produced
 the list-of-ORs translation.  I think the bad dump somehow traces
 its lineage to an 8.1 or older installation.

That's entirely correct. The view was entered using the IN notation.
This view was written and sent to the vessels on 2008-12-04.
Our test-vessel DB ashore was already running 8.3.3 on that date.
At that time our problematic vessel was running 7.4.2. This vessel upgraded 
to 8.3.3 some time in January 2009.

The procedure we used to migrate from 7.4.* to 8.3.3 (which was far from 
straightforward) was a heavily tested procedure which produced many good 
working DBs on many vessels.
(This FK and these triggers are in place in some random DBs (with similar 
history) that we double-checked.)
This vessel has a bad history regarding its server's hardware.
On at least two occasions (the latest being around this summer), its hardware 
died.
I guess a motherboard, CPU or memory fault might have caused this lost FK 
constraint and the 4 lost triggers.
I can only attribute this to bad hardware, or hardware/OS mistreatment by the 
crew.

I have already created the FK constraint, the crew confirmed the app works as 
specified, and I will recreate the 4 triggers.

My intention was to find any missing schema DDLs in the bad DB using a simple 
pg_dump. This did not help, since there is a lot of noise in the diff 
(correct differences which do not help to identify the bad ones). Anyway, 
after some heuristic searches I think I have narrowed down the missing 
(disappeared) DDLs.

Thanx.
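One way to narrow things down without diffing full pg_dump output is to query the catalogs directly on both the suspect DB and a known-good one, and diff just that. A sketch (the pg_trigger filter differs across versions, as noted in the comments):

```sql
-- FK constraints per table; run on both DBs and diff the output.
SELECT conrelid::regclass AS tbl, conname, pg_get_constraintdef(oid)
FROM pg_constraint
WHERE contype = 'f'
ORDER BY 1, 2;

-- User-level triggers per table.
-- On 8.3/8.4 filter with: WHERE NOT tgisconstraint
-- On 9.0+ filter with:    WHERE NOT tgisinternal
SELECT tgrelid::regclass AS tbl, tgname
FROM pg_trigger
WHERE NOT tgisconstraint
ORDER BY 1, 2;
```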

 The other issues look 
 like they could possibly be explained by either forcing pg_dump to
 dump from a newer server major version (which until recently you
 could force with -i, but the resulting dump would likely have issues)
 or trying to load a dump script into an older major server version
 than it was designed for (and ignoring the ensuing error messages).
 
   regards, tom lane
 



-- 
Achilleas Mantzios



Re: [ADMIN] FK disappeared in 8.3.3

2010-11-23 Thread Achilleas Mantzios
On Monday 22 November 2010 18:45:28, you wrote:
 Achilleas Mantzios ach...@matrix.gatewaynet.com wrote:
  
  (imagine having to upgrade 61 installations for which you dont
  have ssh/scp ;) sounds fun right? )
  
 I don't have to imagine -- been there, done that.  (Not with ships
 but large geographical distances and needing to trudge through
 secure facilities with big steel doors every so many feet.)  It can
 be time-consuming and tedious.  But then, so can recovering a
 corrupted database.

What you describe is an entirely different situation.

  
 If I were in your shoes, I'd consider arranging to have a CD-ROM
 mailed to someone on each ship at its next port, with a script which
 would automatically perform the update.  (Heavily tested in advance,
 of course.)

This is unrealistic. Many things can go wrong, and in this case the vessel
stays offline, which is a very, very bad thing.
(Imagine it being hard even to make a telephone call, and after hours of 
effort hearing a nice Indian guy with whom you can't communicate due to 
different accents of English, since we are both non-native English speakers.)
Btw, we use tsearch2 and intarray, which require an upgrade plan of their 
own, if you have ever done this.
In addition, I must say that we have about 290 tables in our schema.
It's not the kind of DB you would simply upgrade with a CDROM.
The huge improvements between 7.4.* and 8.3.* were about performance,
not stability. Lack of stability is not something PostgreSQL is famous for.
So we perform the upgrade to 8.3 only when we send one of our people
onboard.

  
 Of course, none of that helps with your current issue; I would just
 hate to see you pop up here in a few months with bigger problems
 than this if it can be avoided.
  

We had bigger problems than this and still survived, don't worry ;)
PostgreSQL (be it 7.4.2, 7.4.19 or 8.3.3) on our 60+ vessels is the last 
thing we have worried about over the last 10 years. Generally PostgreSQL 
lasts even after the hardware/OS have given up.

Don't get me wrong, I would love to see all our systems running the latest 
9.0.1, and all the rest of the software (jboss, java, linux) on their latest 
releases.
However, this is such a huge project to accomplish, and our human resources 
are very limited.
(I am the DBA, linux/freebsd sysadmin, java j2ee programmer, architect, etc.)

Also, if there is no explicit upgrade plan (first in the office, second on 
the vessels), then I prefer sticking to 8.3.3 on the vessels until we commit 
to a serious upgrade.
I cannot start an initiative by myself, only to find later that I just 
increased my own time spent on managing who has what in the whole fleet.

But the real pain is the lack of TCP/IP and a remote shell whenever we want it.

Anyway, thanx for helping by sharing your thoughts.

 -Kevin
 



-- 
Achilleas Mantzios



Re: [ADMIN] FK disappeared in 8.3.3

2010-11-23 Thread Achilleas Mantzios
On Tuesday 23 November 2010 17:22:29, Tom Lane wrote:
 Achilleas Mantzios ach...@matrix.gatewaynet.com writes:
  At this time our problematic vessel was running 7.4.2. This vessel upgraded 
  to 8.3.3
  some time in January 2009. 
 
 7.4!?
 
 I can't shake the suspicion that some of your issues arose from trying
 to load a later-version pg_dump output into an older-version server.

The (same) migration procedure ran fine for 30+ postgresql installations,
and it didn't involve anything like loading a later-version dump into an 
older server. The migration procedure was like:

on the 7.4 machine:
Take a dump, copy it to the 8.3 machine as dynacom_DUMP_OUR_DUMP_FROM_7_4.sql
on the 8.3 machine:
as root:
dropdb dynacom
createdb dynacom
cd /usr/local/src/postgresql-8.3.3
cd contrib/tsearch2/
make uninstall
cd ../intarray/
make uninstall
as postgres:
create domain public.tsvector as pg_catalog.tsvector;
create domain public.gtsvector as pg_catalog.gtsvector;
create domain public.tsquery as pg_catalog.tsquery;
bash
echo "SET default_with_oids TO on;" > foo
mv dynacom_DUMP_OUR_DUMP_FROM_7_4.sql 
dynacom_DUMP_OUR_DUMP_FROM_7_4.sql_WITHOUT_OIDS
cat foo dynacom_DUMP_OUR_DUMP_FROM_7_4.sql_WITHOUT_OIDS > 
dynacom_DUMP_OUR_DUMP_FROM_7_4.sql
psql dynacom -f dynacom_DUMP_OUR_DUMP_FROM_7_4.sql > see 2>&1
psql
DROP TABLE public.pg_ts_cfg;
DROP TABLE public.pg_ts_cfgmap;
DROP TABLE public.pg_ts_dict ;
DROP TABLE public.pg_ts_parser ;
DROP TYPE public.query_int ;
DROP TYPE public.statinfo ;
DROP TYPE public.tokenout ;
DROP TYPE public.tokentype ;
DROP FUNCTION public.ts_debug(text) ;
DROP TYPE public.tsdebug ;
ALTER TABLE vesselhelp ALTER idxfti TYPE pg_catalog.tsvector;
ALTER TABLE vmanews ALTER idxfti TYPE pg_catalog.tsvector;
ALTER TABLE mail_entity ALTER subjectidxfti TYPE pg_catalog.tsvector;
ALTER TABLE mail_entity ALTER textidxfti TYPE pg_catalog.tsvector;
ALTER TABLE mail_fromfield ALTER idxfti TYPE pg_catalog.tsvector;
ALTER TABLE mail_tofield ALTER idxfti TYPE pg_catalog.tsvector;
ALTER TABLE mailcrew_entity ALTER subjectidxfti TYPE pg_catalog.tsvector;
ALTER TABLE mailcrew_entity ALTER textidxfti TYPE pg_catalog.tsvector;
ALTER TABLE mailcrew_fromfield ALTER idxfti TYPE pg_catalog.tsvector;
ALTER TABLE mailcrew_tofield ALTER idxfti TYPE pg_catalog.tsvector;
DROP DOMAIN public.gtsvector ;
DROP DOMAIN public.tsquery ;
DROP DOMAIN public.tsvector ;
as root:
cd /usr/local/src/postgresql-8.3.3/contrib/intarray
make install
cd /usr/local/src/postgresql-8.3.3/contrib/tsearch2
make install
as postgres
psql
load 'tsearch2';
\i /usr/local/src/postgresql-8.3.3/contrib/tsearch2/tsearch2.sql
load '_int';
\i /usr/local/src/postgresql-8.3.3/contrib/intarray/_int.sql

CREATE TRIGGER mail_entity_subjtsvectorupdate
BEFORE INSERT OR UPDATE ON mail_entity
FOR EACH ROW
EXECUTE PROCEDURE tsearch2('subjectidxfti', 'entsubject');

CREATE TRIGGER mail_entity_texttsvectorupdate
BEFORE INSERT OR UPDATE ON mail_entity
FOR EACH ROW
EXECUTE PROCEDURE tsearch2('textidxfti', 'enttext');

CREATE TRIGGER mail_fromfield_tsvectorupdate
BEFORE INSERT OR UPDATE ON mail_fromfield
FOR EACH ROW
EXECUTE PROCEDURE tsearch2('idxfti', 'fromname', 'fromaddr');

CREATE TRIGGER mail_tofield_tsvectorupdate
BEFORE INSERT OR UPDATE ON mail_tofield
FOR EACH ROW
EXECUTE PROCEDURE tsearch2('idxfti', 'toname', 'toaddr');

CREATE TRIGGER mailcrew_entity_subjtsvectorupdate
BEFORE INSERT OR UPDATE ON mailcrew_entity
FOR EACH ROW
EXECUTE PROCEDURE tsearch2('subjectidxfti', 'entsubject');

CREATE TRIGGER mailcrew_entity_texttsvectorupdate
BEFORE INSERT OR UPDATE ON mailcrew_entity
FOR EACH ROW
EXECUTE PROCEDURE tsearch2('textidxfti', 'enttext');

CREATE TRIGGER mailcrew_fromfield_tsvectorupdate
BEFORE INSERT OR UPDATE ON mailcrew_fromfield
FOR EACH ROW
EXECUTE PROCEDURE tsearch2('idxfti', 'fromname', 'fromaddr');

CREATE TRIGGER mailcrew_tofield_tsvectorupdate
BEFORE INSERT OR UPDATE ON mailcrew_tofield
FOR EACH ROW
EXECUTE PROCEDURE tsearch2('idxfti', 'toname', 'toaddr');

CREATE TRIGGER vesselhelp_tsvectorupdate
BEFORE INSERT OR UPDATE ON vesselhelp
FOR EACH ROW
EXECUTE PROCEDURE tsvector_update_trigger('idxfti', 'pg_catalog.english', 
'content');

CREATE TRIGGER vmanews_tsvectorupdate
BEFORE INSERT OR UPDATE ON vmanews
FOR EACH ROW
EXECUTE PROCEDURE tsvector_update_trigger('idxfti', 'pg_catalog.english', 
'header', 'content');


CREATE INDEX paintgentypes_parents ON paintgentypes USING gin (parents 
gin__int_ops);
CREATE INDEX machdefs_parents_gistsmall ON machdefs USING gin (parents 
gin__int_ops);


as root:
grep COPY dynacom_DUMP_OUR_DUMP_FROM_7_4.sql | grep "FROM stdin" | awk '{print 
"select count(*) from " $2 ";"}' > checkcounts.sql
psql -a -f checkcounts.sql dynacom > checkcounts.out.8.3
psql -h old-7.4-machine -a -f checkcounts.sql dynacom > checkcounts.out.7.4
diff -u checkcounts.out.7.4 checkcounts.out.8.3


 The discrepancies you're seeing are closely related

[ADMIN] FK disappeared in 8.3.3

2010-11-22 Thread Achilleas Mantzios
Hello,
in one (out of our 60+) PostgreSQL systems, which are deployed on tanker 
vessels at open sea, we had the following problem:
One (ON DELETE CASCADE) FK constraint which was supposed to be there was 
found to be (mysteriously) absent.
That caused an erroneous behaviour which manifested the problem, so we 
thought of doing a proper investigation of the whole schema in this DB to see 
if everything is in place.
The schema itself is of the order of 25,000 lines long.
So we pg_dumped the schema on a known-good 8.3.3 identical test database we 
have at the office (on shore), and compared it against the suspicious schema 
on the problematic vessel.
The diff (without options) alone produced ~7500 lines of output (quite an 
unpleasant result).
So I am now in the process of examining one by one the differing portions of 
the two schema dumps.
Some of the strange things I notice are (diff -u):

1) (many cases like:)
CREATE TABLE machclasses (
-id integer DEFAULT nextval(('public.machclasses_id_seq'::text)::regclass) 
NOT NULL,
+id integer NOT NULL,
 name character varying(100) NOT NULL,
 vslsecid integer NOT NULL
 );
@@ -545,7 +537,7 @@
 --
(later in the bad dump I get the ALTER TABLE machclasses ALTER COLUMN id SET 
DEFAULT nextval('machclasses_id_seq'::regclass); stmt)

2)
 CREATE VIEW big_machclasses AS
-SELECT machclasses.id, machclasses.name, machclasses.vslsecid FROM 
machclasses WHERE (machclasses.id = ANY (ARR
AY[1, 2, 16, 74, 647, 3, 15, 76, 137, 267, 31, 32, 412, 33, 95, 10, 312, 11, 
118, 70, 106, 22, 8, 21, 571, 80, 81, 1
20]));
+SELECT machclasses.id, machclasses.name, machclasses.vslsecid FROM 
machclasses WHERE ((
((machclasses.id = 1) OR (machclasses.id = 2)) OR (machclasses.id = 16)) OR 
(machclasses.id = 74)) OR (machclasses.i
d = 647)) OR (machclasses.id = 3)) OR (machclasses.id = 15)) OR (machclasses.id 
= 76)) OR (machclasses.id = 137)) OR
 (machclasses.id = 267)) OR (machclasses.id = 31)) OR (machclasses.id = 32)) OR 
(machclasses.id = 412)) OR (machclas
ses.id = 33)) OR (machclasses.id = 95)) OR (machclasses.id = 10)) OR 
(machclasses.id = 312)) OR (machclasses.id = 11
)) OR (machclasses.id = 118)) OR (machclasses.id = 70)) OR (machclasses.id = 
106)) OR (machclasses.id = 22)) OR (mac
hclasses.id = 8)) OR (machclasses.id = 21)) OR (machclasses.id = 571)) OR 
(machclasses.id = 80)) OR (machclasses.id
= 81)) OR (machclasses.id = 120));

In the good dump the view definition uses the minimalistic ANY expression, 
while in the bad dump it is expressed verbosely.

Both databases were created with the same procedure, using dumps from 7.4.2.

I must mention that the HW of the problematic vessel died some time around 
summer, and I was personally onboard to pg_dump the old DB and restore it to 
the new box.

I am puzzled about the differences in the schema; if anyone has any ideas why 
this might be happening, that would be great.
That would help throw the recurring cases out of the equation and let us 
concentrate on any real and fatal differences in the schema.

One additional issue that is worsening our situation is that we don't have 
TCP/IP access to the vessel.
We only have a term emulator (minicom) which dials up a remote mgetty
(which works only if the weather conditions are OK and nothing else is 
broken; read: if we are lucky).
-- 
Achilleas Mantzios



Re: [ADMIN] FK disappeared in 8.3.3

2010-11-22 Thread Achilleas Mantzios
Kevin, thanx.
I explicitly wrote that:
One additional issue that is worsening our situation is that we don't have 
TCP/IP access to the vessel. We only have a term emulator (minicom) which 
dials up a remote mgetty (which works only if the weather conditions are OK 
and nothing else is broken; read: if we are lucky),
which means that any upgrade ... aspirations are ... optimistic in the best 
case scenario.
Now, if what you asked is why didn't I upgrade the DB to whatever newest 
8.3.* version existed in mid-July, the answer is that we have not made an 
upgrade plan yet, so yes, we prefer to keep the same version on all vessels 
until we are capable (find resources/time) of designing a proper upgrade plan 
(I repeat: we do not have TCP/IP access to the DBs).
(Imagine having to upgrade 61 installations for which you don't have ssh/scp ;) 
sounds fun, right?)

To your question about versions: yes, both are 8.3.3, and the pg_dump used to 
diff the schemas was from 8.3.3.

On Monday 22 November 2010 16:58:17, Kevin Grittner wrote:
 Achilleas Mantzios ach...@matrix.gatewaynet.com wrote:
  
  One (ON DELETE CASCADE) FK constraint which was supposed to be
  there was found to be (mysteriously) absent.
  
 Do you have old backups to see how long it has been gone?
  
  So we pg_dumped the schema on a known good 8.3.3 identical test
  database
  
 That's an oxymoron.  8.3.3 has known bugs and security vulnerabilities
 which have been fixed in maintenance releases that can be applied
 without a dump and reload.
  
 http://www.postgresql.org/support/versioning
  
 The current 8.3 release is 8.3.12.  For details of what's been
 fixed, see this:
  
 http://www.postgresql.org/docs/8.3/static/release.html
  
  and compared it against the suspicious schema on the problematic
  vessel.  The diff (without options) alone produced ~ 7500 lines of
  output.
  
 Were both databases at the same version number?  Was the same
 version of pg_dump used for both dumps?  (Note: you can always dump
 an older database with a newer version of pg_dump, but not vice
 versa.)
  
  Both databases were created with the same procedure using dumps
  from 7.4.2.
  
 The current version of 7.4 is 7.4.30!:
  
 http://www.postgresql.org/docs/7.4/static/release.html
  
 I'm not clear what you mean, though.  Both databases are on 8.3.3?
  
  I must mention that the HW of the problematic vessel died some
  time around summer, and i had myself personally onboard, pg_dump
  the old DB, and restore it to the new box.
  
 Did you get any errors when the dump was loaded?  A damaged database
 might have left orphaned rows which would have prevented creation of
 the foreign key.  Do you still have a dump file from that point?
  
  I am puzzled about the differences in the schema, if any one has
  any ideas  of why this might be happening, would be great.
  
 My first guess is that they were dumped by pg_dump executables from
 different versions.
  
 -Kevin
 



-- 
Achilleas Mantzios



Re: [ADMIN] Postgres error: could not open relation base/xxxxx/yyyyy

2010-11-16 Thread Achilleas Mantzios
: 0x1000
  Error Information Handle: Not Provided
  Total Width: 72 bits
  Data Width: 64 bits
  Size: 4096 MB
  Form Factor: FB-DIMM
  Set: 3
  Locator: DIMM6
  Bank Locator: Not Specified
  Type: DDR2 FB-DIMM
  Type Detail: Synchronous
  Speed: 667 MHz
  Manufacturer: 830B7FB3830B
  Serial Number: 9F472B5F
  Asset Tag: 0D0934
  Part Number: NT4GT72U4ND2BD-3C
  Rank: 2
 
  Handle 0x1106, DMI type 17, 28 bytes
  Memory Device
  Array Handle: 0x1000
  Error Information Handle: Not Provided
  Total Width: 72 bits
  Data Width: 64 bits
  Size: 4096 MB
  Form Factor: FB-DIMM
  Set: 4
  Locator: DIMM7
  Bank Locator: Not Specified
  Type: DDR2 FB-DIMM
  Type Detail: Synchronous
  Speed: 667 MHz
  Manufacturer: 830B7FB3830B
  Serial Number: C51E2A5F
  Asset Tag: 0D0938
  Part Number: NT4GT72U4ND2BD-3C
  Rank: 2
 
  Handle 0x1107, DMI type 17, 28 bytes
  Memory Device
  Array Handle: 0x1000
  Error Information Handle: Not Provided
  Total Width: 72 bits
  Data Width: 64 bits
  Size: 4096 MB
  Form Factor: FB-DIMM
  Set: 4
  Locator: DIMM8
  Bank Locator: Not Specified
  Type: DDR2 FB-DIMM
  Type Detail: Synchronous
  Speed: 667 MHz
  Manufacturer: 830B7FB3830B
  Serial Number: 1E1E2A5B
  Asset Tag: 0D0938
  Part Number: NT4GT72U4ND2BD-3C
  Rank: 2
 
 
  On Mon, Nov 15, 2010 at 2:05 PM, Achilleas Mantzios 
  ach...@matrix.gatewaynet.com wrote:
 
  Whoever did the memory installation should also have taken care of
  checking that it works OK.
  A good idea is to use ECC memory.
  Does fsck on the remote system report problems as well? (it
  might report problems such as incorrect block counts, missing inodes, 
  etc...)
 
  Also check for disk and controller problems.
 
  On Monday 15 November 2010 14:42:32, Pablo Delgado Díaz-Pache wrote:
   Hi Achilleas,
  
   Thanks for your quick response.
   We do have backups and a couple of spare servers running in parallel so
   we're safe in that sense. Thanks for your advice anyway.
  
   To run *memtest86 *we have to go to the datacenter and that will take us
  a
   few days. Is there anything we can do remotely?
  
   Could it be anything different than memory HW problem?
   Btw, we are not using that server at the moment :)
  
   Thanks
  
   On Mon, Nov 15, 2010 at 12:03 PM, Achilleas Mantzios 
   ach...@matrix.gatewaynet.com wrote:
  
Stop using the system immediately, since many things inserted to the
  DB
might simply be garbage.
Inspect your memory with memtest86.
   
I would even suggest moving to a new HW if available, and start
  working
into two parallel directions:
   
a) try to bring your DB into a sane state
b) try to fix your HW server, which apparently has problems.
   
 Files do not disappear like that in normal situations.
 You cannot trust your current HW to run postgresql. It's a guaranteed
 way to lose your data if you
 go on with this start/stop/restore/pray cycle.
   
IMPORTANT: Handle your old (pre-memory upgrade) backups like expensive
chinese vases.
Start from those good and trustworthy backups to do any database
  replay
(or similar) work which
would lead to a viable situation.
   
 On Monday 15 November 2010 11:55:38, Pablo Delgado Díaz-Pache wrote:
 Hi all,

 We've been using postgres for 9 years without a problem until now!
  Two
 problems in a very short time!
 The first one is described in

   
  http://postgresql.1045698.n5.nabble.com/Autovacuum-seems-to-block-database-WARNING-worker-took-too-long-to-start-td3264261.html
 This is another one (not related I think) ...

 Postgres server is usually working fine. All of a sudden we start
  getting
 these errors ...

 * **2010-11-09 11:49:15.320

  CET|2|database1|10.19.0.51(18895)|20929|SELECT|4cd926fd.51c1|2010-11-09
 11:48:29 CET|10/417796|1390150|postgres| LOG:  duration: 1518.422 ms
  execute unnamed: SELECT id_token_fk,x ORDER BY avadate*
 * **2010-11-09 11:52:25.364

  CET|1|database1|10.19.0.51(23286)|21566|PARSE|4cd927cf.543e|2010-11-09
 11:51:59 CET|7/430041|0|postgres| ERROR:  could not open relation
 base/273198960/273198979: No such file or directory*
 * **2010-11-09 11:52:25.364

  CET|2|database1|10.19.0.51(23286)|21566|PARSE|4cd927cf.543e|2010-11-09
 11:51:59 CET|7/430041|0|postgres| STATEMENT:  SELECT
id_token_fkxxx
  ORDER BY avadate*
 * **2010-11-09 11:52:29.981

  CET|3|database1|10.19.0.51(23286)|21566|PARSE|4cd927cf.543e|2010-11-09

Re: [ADMIN] Postgres error: could not open relation base/xxxxx/yyyyy

2010-11-15 Thread Achilleas Mantzios
 (installation out-of-the-box using yum)
 Server memory: 28GB
 
 
 Any help would be appreciated
 
 Pablo
 



-- 
Achilleas Mantzios



Re: [ADMIN] Postgres error: could not open relation base/xxxxx/yyyyy

2010-11-15 Thread Achilleas Mantzios
Whoever did the memory installation should also have checked that it's working ok.
A good idea is to use ECC memory.
Does fsck on the remote system report problems as well? (it might report
problems such as incorrect block counts, missing inodes, etc.)

Also check for disk, controller problems.

On Monday 15 November 2010 14:42:32, Pablo Delgado Díaz-Pache wrote:
 Hi Achilleas,
 
 Thanks for your quick response.
 We do have backups and a couple of spare servers running in parallel so
 we're safe in that sense. Thanks for your advice anyway.
 
 To run *memtest86 *we have to go to the datacenter and that will take us a
 few days. Is there anything we can do remotely?
 
 Could it be anything different than memory HW problem?
 Btw, we are not using that server at the moment :)
 
 Thanks
 
 On Mon, Nov 15, 2010 at 12:03 PM, Achilleas Mantzios 
 ach...@matrix.gatewaynet.com wrote:
 
  Stop using the system immediately, since many things inserted to the DB
  might simply be garbage.
  Inspect your memory with memtest86.
 
  I would even suggest moving to a new HW if available, and start working
  into two parallel directions:
 
  a) try to bring your DB into a sane state
  b) try to fix your HW server, which apparently has problems.
 
  Files do not disappear like that in normal situations.
  You cannot trust your current HW to run postgresql. It's a guaranteed way
  to lose your data if you go on with this start/stop/restore/pray cycle.
 
  IMPORTANT: Handle your old (pre-memory upgrade) backups like expensive
  Chinese vases.
  Start from those good and trustworthy backups to do any database replay
  (or similar) work which would lead to a viable situation.
 
  On Monday 15 November 2010 11:55:38, Pablo Delgado Díaz-Pache wrote:
   Hi all,
  
   We've been using postgres for 9 years without a problem until now! Two
   problems in a very short time!
   The first one is described in
  
  http://postgresql.1045698.n5.nabble.com/Autovacuum-seems-to-block-database-WARNING-worker-took-too-long-to-start-td3264261.html
   This is another one (not related I think) ...
  
   Postgres server is usually working fine. All of a sudden we start getting
   these errors ...
  
   * **2010-11-09 11:49:15.320
   CET|2|database1|10.19.0.51(18895)|20929|SELECT|4cd926fd.51c1|2010-11-09
   11:48:29 CET|10/417796|1390150|postgres| LOG:  duration: 1518.422 ms
execute unnamed: SELECT id_token_fk,x ORDER BY avadate*
   * **2010-11-09 11:52:25.364
   CET|1|database1|10.19.0.51(23286)|21566|PARSE|4cd927cf.543e|2010-11-09
   11:51:59 CET|7/430041|0|postgres| ERROR:  could not open relation
   base/273198960/273198979: No such file or directory*
   * **2010-11-09 11:52:25.364
   CET|2|database1|10.19.0.51(23286)|21566|PARSE|4cd927cf.543e|2010-11-09
   11:51:59 CET|7/430041|0|postgres| STATEMENT:  SELECT
  id_token_fkxxx
ORDER BY avadate*
   * **2010-11-09 11:52:29.981
   CET|3|database1|10.19.0.51(23286)|21566|PARSE|4cd927cf.543e|2010-11-09
   11:51:59 CET|7/430049|0|postgres| ERROR:  could not open relation
   base/273198960/273199235: No such file or directory*
   * **2010-11-09 11:52:30.988
   CET|6|database1|10.19.0.51(23286)|21566|PARSE|4cd927cf.543e|2010-11-09
   11:51:59 CET|7/430050|0|postgres| STATEMENT:  SELECT max(avadate)
  
   32036)*
   * **2010-11-09 11:53:36.346
   CET|16|database2|10.19.0.42(44916)|22107|SELECT|4cd9280e.565b|2010-11-09
   11:53:02 CET|94/516004|0|postgres| STATEMENT:  SELECT * FROM photos
   xx LIMIT 1*
   * **2010-11-09 11:53:37.956
   CET|17|database2|10.19.0.42(44916)|22107|SELECT|4cd9280e.565b|2010-11-09
   11:53:02 CET|94/516025|0|postgres| ERROR:  could not open relation
   base/271253899/271254075: No such file or directory*
   * ***
   * ***
   * **2010-11-09 11:53:55.560 CET|111|||26090||4cc6e970.65ea|2010-10-26
   16:45:04 CEST||0|| ERROR:  could not open relation
  base/273198960/273199235:
   No such file or directory*
   * **2010-11-09 11:53:55.560 CET|112|||26090||4cc6e970.65ea|2010-10-26
   16:45:04 CEST||0|| CONTEXT:  writing block 8866 of relation
   base/273198960/273199235*
   * **2010-11-09 11:53:55.560 CET|113|||26090||4cc6e970.65ea|2010-10-26
   16:45:04 CEST||0|| WARNING:  could not write block 8866 of
   base/273198960/273199235*
   * **2010-11-09 11:53:55.560 CET|114|||26090||4cc6e970.65ea|2010-10-26
   16:45:04 CEST||0|| DETAIL:  Multiple failures --- write error might be
   permanent.*
   * **2010-11-09 11:53:56.590 CET|115|||26090||4cc6e970.65ea|2010-10-26
   16:45:04 CEST||0|| ERROR:  could not open relation
  base/273198960/273199235:
   No such file or directory*
   * **2010-11-09 11:53:56.590 CET|116|||26090||4cc6e970.65ea|2010-10-26
   16:45:04 CEST||0|| CONTEXT:  writing block 8866 of relation
   base/273198960/273199235*
  
   See that there are 2 different databases involved. (database1 and
   database2).
  
   Looking for distinct errors (among

Re: [ADMIN] Oracle's Long data type equivalent data type in Postgresql ?

2010-05-28 Thread Achilleas Mantzios
On Friday 28 May 2010 11:45:17, Ravi Katkar wrote:
 Hi ,
 
 What is the  Oracle's  Long data type equivalent data type  in Postgresql ?
 

int8, I guess

 Regards,
 Ravi katkar
 
 
 
 



-- 
Achilleas Mantzios



Re: [ADMIN] Table space grow big - PostgreSQL

2010-05-05 Thread Achilleas Mantzios
On Wednesday 05 May 2010 14:45:26, Kevin Grittner wrote:
 Khangelani Gama  wrote:
  
  /usr/local/pgsql/data/base directory shows the following where
  95186722/ takes a lot of space :
 
  51G ./95186722
 
  Now the dump file of the very same database created by using
  command : pg_dump -U user -O dbname  /tmp/filename is 2.8G big.
  
  The main question I have is: What makes the
  /usr/local/pgsql/data/base/95186722/ grow so big while the actual
  data with its schema is only 2.8G and that is there a way to reduce
  the table space sizes or anything related to that?
  
 The size of a dump is often somewhat smaller than the raw data, but
 this extreme difference suggests that your tables are bloated due to
 inadequate routine maintenance.  What version of PostgreSQL is this? 
 What is your vacuum policy?  Please show the contents of your
 postgresql.conf file with all comments stripped.
  

It depends. In the case of indexes, a single SQL CREATE INDEX statement in the
dump can correspond to many megabytes on disk, while, on the other hand, binary
content (e.g. a bytea holding image bitmaps) will be larger in the dump than in
the raw data on disk, since it is compressed there.
In our case the dump is twice the size of the db on disk, but generally I think
there can be no universal rule about which is larger than what.

 -Kevin
 
 



-- 
Achilleas Mantzios



Re: [ADMIN] Querying the hostname of the server

2010-04-30 Thread Achilleas Mantzios
Yes, nice and simple.

I just did this in C and it works ok!

hostname.c
===

#include <unistd.h>

#include "postgres.h"
#include "fmgr.h"
#include "utils/elog.h"
#include "utils/palloc.h"
#include "storage/bufpage.h"

#define MAX_HOST_SIZE   200

PG_MODULE_MAGIC;
PG_FUNCTION_INFO_V1(hostname);
Datum hostname(PG_FUNCTION_ARGS);

Datum
hostname(PG_FUNCTION_ARGS)
{
    int len;
    char buf[MAX_HOST_SIZE + 1];
    text *result;

    gethostname(buf, MAX_HOST_SIZE);
    buf[MAX_HOST_SIZE] = '\0';  /* gethostname() may not NUL-terminate on truncation */
    len = strlen(buf);
    /* elog(NOTICE, "hostname=%s", buf); */
    result = (text *) palloc(len + VARHDRSZ);
    SET_VARSIZE(result, len + VARHDRSZ);
    memcpy(VARDATA(result), buf, len);
    PG_RETURN_POINTER(result);
}


Makefile
===
MODULE_big = hostname
OBJS = hostname.o

ifdef USE_PGXS
PG_CONFIG = pg_config
PGXS := $(shell $(PG_CONFIG) --pgxs)
include $(PGXS)
else
subdir = /var/lib/pgsql/src/Ccode/hostname
top_builddir = /usr/local/src/postgresql-8.3.3
include $(top_builddir)/src/Makefile.global
include $(top_srcdir)/contrib/contrib-global.mk
endif


hostname.sql
==
CREATE OR REPLACE FUNCTION hostname() RETURNS text
AS '$libdir/hostname', 'hostname'
LANGUAGE c IMMUTABLE STRICT;


On Friday 30 April 2010 17:43:49, Glyn Astill wrote:
 --- On Fri, 30/4/10, Ross J. Reedstrom reeds...@rice.edu wrote:
 
   Hi,
   
   I have a number of PostgreSQL servers which I often
  access through ssh
   tunnel with Pgadmin3. I would like to double check
  which one I have landed
   on (if the tunnel is really configured the way I
  want). Is there a way to
   query the hostname from the catalogs?
  
  Hmm, that's a bit tricky, since I assume you're using a
  local db
  connection inside the tunnel, so inet_server_addr()
  probably returns
  null. If you're talking unix/linux machines, then
  /etc/hostname _should_
  have the current hostname in it, so:
  
  create temp table foo (t text);
  copy foo from '/etc/hostname';
  select * from foo;
  drop table foo;
  
  Should work.
  
 
 Or you could do something like:
 
 CREATE OR REPLACE FUNCTION hostname()
 RETURNS text AS
 $BODY$
   $host = `hostname`;
 return $host;
 $BODY$
 LANGUAGE 'plperlu';
 
 
 
 
 
 



-- 
Achilleas Mantzios



Re: [ADMIN] pg_dump: Error message from server: ERROR: compressed data is corrupt

2010-04-27 Thread Achilleas Mantzios
On Friday 23 April 2010 17:05:46, Tom Lane wrote:
 Achilleas Mantzios ach...@matrix.gatewaynet.com writes:
  Then i did
 
  # CREATE TABLE mail_entity2 AS SELECT * FROM mail_entity;
 
  which went fine
 
  but, for some crazy reason, pg_dump on mail_entity2 also results to an 
  error:
  srv:~ pg_dump -t mail_entity2  /dev/null
  pg_dump: SQL command failed
  pg_dump: Error message from server: ERROR:  compressed data is corrupt
 
 The create/select would not have bothered to decompress any
 inline-compressed datums, so the corrupted data went right over to the
 new table.  I think you'll have to engage in another round of
 divide-and-conquer to locate the specific row or rows with a corrupted
 value.

We just removed one memory module, and the problem went away.
It must have been caused by an in-flight memory hardware error rather than
actual corrupted data on the disk.
I have a decent dump now,
thank you a lot.

 
   regards, tom lane
 



-- 
Achilleas Mantzios



Re: [ADMIN] pgsql 8.3.3 Invalid page header leads to FATAL: segment too big

2010-04-23 Thread Achilleas Mantzios
On Thursday 22 April 2010 19:02:00, Cédric Villemain wrote:
 2010/4/22 Achilleas Mantzios ach...@matrix.gatewaynet.com:
  On Thursday 22 April 2010 16:53:05, Cédric Villemain wrote:
  2010/4/22 Achilleas Mantzios ach...@matrix.gatewaynet.com:
   Hello,
    I have this serious problem in one of our remote vessels. (comm is done
    by minicom to the remote satellite modem)
    I think that this server was subject to some sort of constant resets or
    hardware failures.
    Initially, I had this problem:
   ERROR:  invalid page header in block 672720 of relation 
   pg_toast_125716009
  
   This toast table corresponds to a table named mail_message,
                                  Table public.mail_message
    Column   |       Type        |                         Modifiers
   ---+---+---
    msgno     | mail_msgno_domain | not null default 
   nextval('mail_msgno_sequence'::regclass)
    msgsource | bytea             |
   Indexes:
      mail_message_key PRIMARY KEY, btree (msgno)
  
   (obviously the TOAST table serves the msgsource varlena) the contents of 
   which is not of vital importance.
   I tried, REINDEXING, with no success, and after that, i tried resetting 
   the said block on disk as per this
   suggestion by Tom here: 
   http://old.nabble.com/invalid-page-header-td11981154.html
  
   i found the oid of the table:
   SELECT tableoid,1 from pg_toast.pg_toast_125716009 order by chunk_id 
   LIMIT 1
    tableoid  | ?column?
   ---+--
    125716013 |        1
  
   (and just to verify)
   SELECT relname from pg_class where oid=125716013;
        relname
   
    pg_toast_125716009
  
   Then i did: (as i said i do not need the contents of msgsource - yet the 
   contents of msgno are vital)
   dd if=/dev/zero 
   of=/usr/local/var/lib/pgsql/data/base/125714957/125716013 seek=672720 
   bs=8192 count=1
 
  segments have a 1 GB maximum size. You have to work out which segment the
  faulty block is in, and readjust the block value from the error report to
  the real one within that segment.
 
 
  Thanx,
  Taking a look at 
  /usr/local/src/postgresql-8.3.3/src/backend/storage/smgr/md.c
  I see that the error comes from the function mdnblocks:
 
  if (nblocks > ((BlockNumber) RELSEG_SIZE))
          elog(FATAL, "segment too big");
 
  That means that some segment file is bigger than RELSEG_SIZE blocks.
  At least on my system:
  #define BLCKSZ   8192
  #define RELSEG_SIZE (0x40000000 / BLCKSZ)
  So, any segment file cannot be bigger than RELSEG_SIZE blocks (or 2^30 
  bytes = 1GB)
 
  Currently i dont have any access to the machine but tomorrow i will check 
  the file sizes.
 
  Can anyone shed some light as to some method of identifying all the segment 
  files of a table?
  The first one has the same name as the tableoid.
  How about the subsequent segments?
 
 Your execution of dd made your first segment bigger than expected.
 Other segments have the same name with a .1, .2, etc. suffix.
 
 You have to shrink your first segment to the correct size.
 check what happens, you should have now the original error.
 
 And, I have never used it, but I think the purpose of the
 zero_damaged_pages parameter is to allow postgresql itself to zero the
 bad block (reading
 src/backend/storage/buffer/bufmgr.c confirms that).
 
 *BUT* take care that it will zero *every* bad page, perhaps not only
 the one trapping an error.
 
 In those situations, it is good to make a snapshot of the pgdata
 directory, in case your fingers surf too fast on the keyboard.
 
 If you don't want to activate zero_damaged_pages, then go and calculate
 which block in which segment you have to zero.
 
 Side note: it may be useful to have the relevant information in the
 error message...
 


Many Thanks, Cédric Villemain and Tom
What I did first was to correct the first segment file with something like
dd if=216293737 of=216293737.good seek=0 bs=8192 count=131072
which effectively keeps only the first 131072 blocks (2^30 bytes = 1GB).

After that was done and the postgresql backend restarted, I fell back to the
situation with the invalid page header, as noted before.

I stopped the backend and calculated the exact segment file and offset where 
the problem was:
The block with the invalid header was block number 672720.
Now each segment contains at most 131072 blocks, with all but the last
containing exactly 131072 blocks.
So my problematic segment was the one with number:
672720 /131072  = 5
and the block offset inside this segment was:
672720 - (5*131072) = 17360
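
The arithmetic above can be sketched generically; a minimal example, assuming
the default 8 kB BLCKSZ and 1 GB segments, as in this build:

```python
BLCKSZ = 8192                       # page size in this build
RELSEG_SIZE = (1 << 30) // BLCKSZ   # 131072 blocks per 1 GB segment file

def segment_and_offset(block):
    """Map a relation-wide block number to (segment-file suffix, block offset)."""
    return block // RELSEG_SIZE, block % RELSEG_SIZE

# block 672720 from the error message lives in segment file ".5" at block
# offset 17360, i.e. byte offset 17360 * 8192 for dd's seek
seg, off = segment_and_offset(672720)
print(seg, off)
```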

So I tried to zero that particular block with

dd conv=notrunc if=216293737.5 of=216293737.5.GOOD seek=17360 bs=8192 count=1

I started postgresql and it threw out a warning about initializing this zeroed
page.
After that, it reported an invalid header for block 672740;
I repeated the same procedure, and I was able to go further and even reindex

Re: [ADMIN] pgsql 8.3.3 Invalid page header leads to FATAL: segment too big

2010-04-22 Thread Achilleas Mantzios
On Thursday 22 April 2010 17:42:33, you wrote:
 Achilleas Mantzios ach...@matrix.gatewaynet.com writes:
  Then i did: (as i said i do not need the contents of msgsource - yet the 
  contents of msgno are vital)
  dd if=/dev/zero of=/usr/local/var/lib/pgsql/data/base/125714957/125716013 
  seek=672720 bs=8192 count=1 
 
  However, after that, unfortunately i get constant postgresql server 
  restarts with:
  FATAL:  segment too big
 
 You got the dd command wrong and made the file size change to something
 it shouldn't be.  I think you can use dd to truncate the file back to
 what it should be (ie, exactly 1GB) but haven't had enough caffeine to
 remember exactly how.
 
 I think the underlying error is that block 672720 isn't going to be in
 the first segment of the table --- you need to be zeroing something in
 one of the other segments...

Great, thanx a lot,
any idea about the naming of the files of those other segments?

 
   regards, tom lane
 



-- 
Achilleas Mantzios



[ADMIN] severely weird problem PgSQL log analyzer

2010-03-01 Thread Achilleas Mantzios
Hi,
I am facing a very weird situation: a piece of java code seems correct, however
*some* times a connection gets committed instead of rolled back
(which has driven me nuts). So what I did is replicate the problem with full
logging enabled from java, in order to know at any point in the code which
postgres backend pid I am running against.
At the same time I had full logging enabled in postgresql 8.3 and I tried to
trace the exact backend which resulted in a commit.
(I try to replicate the problem by hitting the reload button in firefox.)
However, that gives me over 160 instances of the program, making it difficult
to put together all those sessions from the logs, and it seems I cannot see
what's happening unless I have a handy tool that can extract info from the
logs and present it in a human-readable way.

Any suggestions for a good pgsql log analyzer? I am currently struggling with
http://pgfouine.projects.postgresql.org/tutorial.html
(having failed with http://bucardo.org/wiki/Pgsi which wants only syslog)
-- 
Achilleas Mantzios



Re: [ADMIN] migration problem 8.1.17 -- 8.4.2

2010-02-09 Thread Achilleas Mantzios
On Tuesday 09 February 2010 10:48:31, Lutz Steinborn wrote:
 Hi,
 
 When importing a pg_dump result from 8.1.17 into 8.4.2, some varchar fields
 are not imported and are left blank. I've checked in the dump file whether
 the fields are dumped correctly, and they are.
 So it looks like the import is not working as expected.

Are you getting any messages during dump/restore?

 
 Any idea what the problem is ?
 
 Kindly regards
 
 Lutz
 



-- 
Achilleas Mantzios



Re: [ADMIN] migration problem 8.1.17 -- 8.4.2

2010-02-09 Thread Achilleas Mantzios
On Tuesday 09 February 2010 11:54:38, Lutz Steinborn wrote:
 On Tue, 9 Feb 2010 11:35:45 +0200
 Achilleas Mantzios ach...@matrix.gatewaynet.com wrote:
 
  On Tuesday 09 February 2010 10:48:31, Lutz Steinborn wrote:
   Hi,
   
   then importing an pg_dump result from a 8.1.17 to a 8.4.2 some varchar 
   fields
   are not imported and left blank. I've checked the dump file if the fields 
   are
   correct dumped and they are.
   So it looks like the import is not working as expected.
  
  Are you getting any messages during dump/restore?
 No, and dumping/importing only one table gives me the same problem.

Could you post the table schema and some sample data which can reproduce the 
problem?

 
 -- 
 Lutz
 



-- 
Achilleas Mantzios



Re: [ADMIN] Issues with LC_COLLATE, across 8.3/8.4 on various platforms

2010-02-04 Thread Achilleas Mantzios
On Wednesday 03 February 2010 21:16:36, Peter Eisentraut wrote:
 On ons, 2010-02-03 at 17:05 +0200, Achilleas Mantzios wrote:
  On Wednesday 03 February 2010 16:53:20, Peter Eisentraut wrote:
   On tis, 2010-02-02 at 17:28 +0200, Achilleas Mantzios wrote:
It seems that libc's behaviour regarding LC_COLLATE in Linux/FreeBSD
are two different stories, hence the seen results.
   
   The UTF-8 locales on FreeBSD and Mac OS X are pretty much broken.
   
   This is becoming a FAQ ...

I guess you are right.  In my new FreeBSD system I get:
postg...@smadevnew:~ uname -sr
FreeBSD 8.0-RELEASE-p2
postg...@smadevnew:~ ls -l /usr/share/locale/en_US.UTF-8/
total 0
lrwxr-xr-x  1 root  wheel  28 Nov 21 16:30 LC_COLLATE - 
../la_LN.US-ASCII/LC_COLLATE
lrwxr-xr-x  1 root  wheel  17 Nov 21 16:30 LC_CTYPE - ../UTF-8/LC_CTYPE
lrwxr-xr-x  1 root  wheel  30 Nov 21 16:30 LC_MESSAGES - 
../en_US.ISO8859-1/LC_MESSAGES
lrwxr-xr-x  1 root  wheel  30 Nov 21 16:30 LC_MONETARY - 
../en_US.ISO8859-1/LC_MONETARY
lrwxr-xr-x  1 root  wheel  29 Nov 21 16:30 LC_NUMERIC - 
../en_US.ISO8859-1/LC_NUMERIC
lrwxr-xr-x  1 root  wheel  26 Nov 21 16:30 LC_TIME - ../en_US.ISO8859-1/LC_TIME

 
 



-- 
Achilleas Mantzios



Re: [ADMIN] Issues with LC_COLLATE, across 8.3/8.4 on various platforms

2010-02-03 Thread Achilleas Mantzios
On Tuesday 02 February 2010 17:37:05, Tom Lane wrote:
 Achilleas Mantzios ach...@matrix.gatewaynet.com writes:
  What is the shortest way to change the default locale on a production 
  installation running PostgreSQL-8.3.9?
  Is there anything less painful than dump, initdb,restore?
 
 No :-(
 

Thanx Tom,
taking into account that the large and most important DB in our installation
(over 100GB as an SQL dump) lives in a tablespace of its own,
is there a way to cheat postgresql after initdb into seeing this tablespace
and the database in it, avoiding the whole cycle?
I think this is not a good idea; I just thought I could ask, to make sure.

   regards, tom lane
 



-- 
Achilleas Mantzios



Re: [ADMIN] Issues with LC_COLLATE, across 8.3/8.4 on various platforms

2010-02-03 Thread Achilleas Mantzios
On Wednesday 03 February 2010 16:53:20, Peter Eisentraut wrote:
 On tis, 2010-02-02 at 17:28 +0200, Achilleas Mantzios wrote:
  It seems that libc's behaviour regarding LC_COLLATE in Linux/FreeBSD
  are two different stories, hence the seen results.
 
 The UTF-8 locales on FreeBSD and Mac OS X are pretty much broken.
 
 This is becoming a FAQ ...

This is a libc/locale combination issue. Care to elaborate?
AFAICT for my case on FreeBSD, the Greek UTF-8 locale el_GR.UTF-8 behaves as
expected, without problems.

 
 



-- 
Achilleas Mantzios



[ADMIN] Issues with LC_COLLATE, across 8.3/8.4 on various platforms

2010-02-02 Thread Achilleas Mantzios
Hi, I have the following problem: text ordering seems to behave inconsistently
across various lc_collate values, OSes, and PostgreSQL versions.
Some behaviour might be expected, some not; that's why I am asking, to see
where I stand with this.
Test Data
postg...@dynacom=# SELECT * from test_sort_order;
 fooname
-
 Cylinder head cover No1
 Cylinder Liner No1
 Cylinder head No1
(3 rows)

Now the query 
# SELECT * from test_sort_order order by fooname; 
in PostgreSQL 8.3.7 on x86_64-unknown-linux-gnu  (lc_collate=en_US.UTF-8) gives
fooname
-
 Cylinder head cover No1
 Cylinder head No1
 Cylinder Liner No1

while in all of
PostgreSQL 8.3.3 on i686-pc-linux-gnu (lc_collate=C)
PostgreSQL 8.3.7 (lc_collate=el_GR.UTF-8) on i386-unknown-freebsd6.3 
PostgreSQL 8.4.1 on x86_64-unknown-freebsd8.0 (lc_collate=en_US.UTF-8), gives
fooname
-
 Cylinder Liner No1
 Cylinder head No1
 Cylinder head cover No1

Database encoding is SQL_ASCII in all four cases.
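
For what it's worth, the two orderings can be approximated outside the
database. A rough sketch, using plain code-point order as a stand-in for the C
locale and a case-insensitive key as a crude approximation of en_US.UTF-8
dictionary collation (not the real strcoll() path):

```python
names = ["Cylinder head cover No1", "Cylinder Liner No1", "Cylinder head No1"]

# C/POSIX-style collation: raw code-point order, so 'L' (0x4C) sorts
# before 'h' (0x68)
c_order = sorted(names)

# en_US-style dictionary collation compares letters case-insensitively,
# so 'head ...' sorts before 'Liner ...'
en_order = sorted(names, key=str.casefold)

print(c_order)
print(en_order)
```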

-- 
Achilleas Mantzios



Re: [ADMIN] Issues with LC_COLLATE, across 8.3/8.4 on various platforms

2010-02-02 Thread Achilleas Mantzios
It seems that libc's behaviour regarding LC_COLLATE on Linux and FreeBSD is two
different stories, hence the observed results.
(Switching LC_COLLATE between en_US.UTF-8 and POSIX has no impact on ordering
on FreeBSD, while on Linux it does.)
I think I have resolved the issue, and the solution boils down to setting
LC_COLLATE=C on our central Linux production postgresql server.
What is the shortest way to change the default locale on a production
installation running PostgreSQL-8.3.9?
Is there anything less painful than dump, initdb, restore?
I know 8.4 allows for per-db locale settings; I am just asking if there is a
way to solve the issue now without waiting until the migration to 8.4.

Thanx!

On Tuesday 02 February 2010 14:26:44, Achilleas Mantzios wrote:
 Hi, I have the following problem: text ordering seems to behave inconsistently
 across various lc_collate values, OSes, and PostgreSQL versions.
 Some behaviour might be expected, some not; that's why I am asking, to see
 where I stand with this.
 Test Data
 postg...@dynacom=# SELECT * from test_sort_order;
  fooname
 -
  Cylinder head cover No1
  Cylinder Liner No1
  Cylinder head No1
 (3 rows)
 
 Now the query 
 # SELECT * from test_sort_order order by fooname; 
 in PostgreSQL 8.3.7 on x86_64-unknown-linux-gnu  (lc_collate=en_US.UTF-8) 
 gives
   fooname
 -
  Cylinder head cover No1
  Cylinder head No1
  Cylinder Liner No1
 
 while in all of
 PostgreSQL 8.3.3 on i686-pc-linux-gnu (lc_collate=C)
 PostgreSQL 8.3.7 (lc_collate=el_GR.UTF-8) on i386-unknown-freebsd6.3 
 PostgreSQL 8.4.1 on x86_64-unknown-freebsd8.0 (lc_collate=en_US.UTF-8), gives
   fooname
 -
  Cylinder Liner No1
  Cylinder head No1
  Cylinder head cover No1
 
 Database encoding is SQL_ASCII in all four cases.
 
 -- 
 Achilleas Mantzios
 



-- 
Achilleas Mantzios



Re: [ADMIN] Very simple password for DB administrator

2010-01-12 Thread Achilleas Mantzios
On Tuesday 12 January 2010 10:33:56, N K wrote:
 By default, all connections to the DB are restricted to local ones only. I
 use a single superuser with a non-standard name and a very simple password -
 one or two, for example. Connections to the DB are established over SSH. Is
 such a technique safe?
 

Why do you want a simple password? How about protection against your local 
(Unix) users?
They could:
SELECT * from pg_user ;
and then try a dictionary attack on the password.
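
To illustrate why, here is a sketch of such an offline dictionary attack,
assuming PostgreSQL's legacy md5 password format ('md5' followed by
md5(password || username)); the role name and wordlist below are made up:

```python
import hashlib

def pg_md5_hash(password: str, username: str) -> str:
    """Legacy PostgreSQL md5 password format: 'md5' + md5(password || username)."""
    return "md5" + hashlib.md5((password + username).encode()).hexdigest()

# hypothetical stored value for a role 'dbadmin' whose password is "two"
stored = pg_md5_hash("two", "dbadmin")

# an attacker who obtains the hash tries a wordlist offline, with no
# server round-trips and no rate limiting
wordlist = ["123456", "password", "qwerty", "one", "two"]
cracked = [w for w in wordlist if pg_md5_hash(w, "dbadmin") == stored]
print(cracked)
```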

-- 
Achilleas Mantzios



Re: [ADMIN] Anyone tried storing software users as database users?

2009-04-28 Thread Achilleas Mantzios
On Tuesday 28 April 2009 15:27:58, Paul Rogers wrote:
 If so then how did you handle permissions, errors, connection pooling?
 

In jboss for instance you could do that using 
org.jboss.security.auth.spi.DatabaseServerLoginModule
in conf/login-config.xml

One issue might be the querying of pg_shadow; another issue would be to
ensure that your user's md5-hashed password really matches the one stored in
the db (getting rid of the leading md5 prefix first, of course).

Having said that, I haven't done it with the postgresql sys tables, though.

Anyway, why do you want to combine database users and app users?
Normally the app accesses the database as one db user.
Most app descriptors are static xml (at least in J2EE), meaning you can't set
up your connection pools dynamically for whichever user tries to log in.

If you manage to do it, however, there is great advantage in what you are 
trying to do.
Both application *and* database logs will log according to the same user!
So if for example one db process eats up 100% of the CPU you will know exactly
who did what.

 
 
 



-- 
Achilleas Mantzios



Re: [ADMIN] Question on rule

2009-02-23 Thread Achilleas Mantzios
On Saturday 21 February 2009 10:43:35, Michael Monnerie wrote:
 $$ LANGUAGE plpgSQL;


do something like:

CREATE or REPLACE function foofunc() RETURNS TRIGGER as $$ 
BEGIN 
insert into test2(descr) values(new.descr); 
INSERT into test2(descr) values(new.descr||'FOOBAR'); 
RETURN NEW; 
END 
$$ LANGUAGE plpgSQL;

CREATE TRIGGER foofunc_trig AFTER INSERT ON test1 FOR EACH ROW EXECUTE 
PROCEDURE foofunc();

-- 
Achilleas Mantzios



Re: [ADMIN] 8.3.5 broken after power fail

2009-02-17 Thread Achilleas Mantzios
On Tuesday 17 February 2009 10:54:52, Michael Monnerie wrote:

 * your daughter with 3.5 years switching off the power supply of the 
 server
The 1st line of defense is to set your system to shut down cleanly when the power 
button is pressed.
The 2nd line of defense is to get yourself a decent UPS unit.

My daughter does this all the time on our family FreeBSD box. No problems.

Also, at work, on more than 20 tanker vessels running 7.4.2, the captains do this
constantly, and PostgreSQL has always survived (better than the rest of the system, anyway).

 What can I do?
 
 mfg zmi



-- 
Achilleas Mantzios



Re: [ADMIN] 8.3.5 broken after power fail

2009-02-17 Thread Achilleas Mantzios
On Tuesday 17 February 2009 15:53:33, Michael Monnerie wrote:
 Michael Monnerie michael.monne...@is.it-management.at wrote:
  pg_dump: Fehlermeldung vom Server: ERROR:  missing chunk number 0 for 
  toast value 1460201 in pg_toast_1281127
  pg_dump: Die Anweisung war: COPY public.dbmail_messageblks 
  (messageblk_idnr, physmessage_id, messageblk, blocksize, is_header) TO 
  stdout;
 
 The file is there:
 # ls -l base/16386/1281127
 -rw--- 1 postgres postgres 417447936 17. Feb 12:14 base/16386/1281127
 

did you reindex pg_toast_1281127?
take a look at 
http://archives.free.net.ph/message/20080924.191644.d692f468.el.html

 What can I do about the missing chunk? I need the data in there...
 
 mfg zmi
 
 
 



-- 
Achilleas Mantzios



Re: [ADMIN] 8.3.5 broken after power fail

2009-02-17 Thread Achilleas Mantzios
On Tuesday 17 February 2009 17:11:41, Michael Monnerie wrote:
 Achilleas Mantzios ach...@matrix.gatewaynet.com wrote:
  did you reindex pg_toast_1281127?
  take a look at 
  http://archives.free.net.ph/message/20080924.191644.d692f468.el.html
 
 REINDEX TABLE pg_toast.pg_toast_1281127;
 
 That runs now since 33 CPU minutes, using 100% with no visible I/O
 happening. top says:
 29687 postgres  20   0  605m 187m 3024 R  100 15.4  32:58.65 postmaster
 
 And strace -p 29687 displays nothing. Could it be the process hangs? It
 shouldn't take that long I guess, but I'm not sure how to find out if it's
 borked or really working... ideas?

Do iostat/systat/vmstat report anything?

 
 mfg zmi
 
 
 



-- 
Achilleas Mantzios



Re: [ADMIN] max_files_per_process limit

2008-11-10 Thread Achilleas Mantzios
On Monday 10 November 2008 16:18:37, Dilek Küçük wrote:
 Hi,
 
 We have a database of about 62000 tables (about 2000 tablespaces) with an
 index on each table. Postgresql version is 8.1.
 

So you have about 62000 distinct tables in your db?
Imagine that the average enterprise has about 200 tables max,
and an average-sized country has about 300 such companies,
including the public sector; with 62000 tables you could blindly model
the whole activity of an entire country.

Is this some kind of replicated data?
What's the story?
Just curious.

 Although after the initial inserts to about 32000 tables the subsequent
 inserts are considerable fast, subsequent inserts to more than 32000 tables
 are very slow.
 
 This seems to be due to the datatype (integer) of max_files_per_process
 option in the postgres.conf file which is used to set the maximum number of
 open file descriptors.
 Is there anything we could do about this max_files_per_process limit or any
 other way to speed up inserts to all these tables?
 
 Any suggestions are wellcome.
 
 Kind regards,
 Dilek Küçük
 



-- 
Achilleas Mantzios



Re: [ADMIN] Ideal way to upgrade to postgres 8.3 with less downtime

2008-11-07 Thread Achilleas Mantzios
On Friday 07 November 2008 08:04:05, Swathi S wrote:
 Hi,
 
 I have a database of size approx 400G. It's a 24/7 database. Only few tables
 have regular reads and writes. The other tables just store information.
 
 I would like to upgrade to 8.3 and would like to know the best way to
 upgrade with less downtime.
 
 Thanks
 Swathi
 

Which version are you running now?
The ideal way is to read the HISTORY file, do several upgrade exercises on a 
test system with the
same database (e.g. with a recent backup of the original), and then 
plan/organize your actions.
But I must say: the older the version of PostgreSQL you run, the fancier the things 
your apps do, and the more lines of code they have,
the harder it will be to upgrade.
Usually SPI and JDBC require large amounts of work as well.
Well, I don't mean to be pessimistic, but our 7.4 - 8.3 upgrade was pretty 
painful.

-- 
Achilleas Mantzios



Re: [ADMIN] update to 8.3.3

2008-09-08 Thread Achilleas Mantzios
On Monday 08 September 2008 16:14:29, Devrim GÜNDÜZ wrote:
 On Mon, 2008-09-08 at 11:35 +0800, Jumping wrote:
  I plan to update my database from 8.1.9 to 8.3.3.
  Is there any tricks or tips ?
 
 I would first update to 8.1.13, then go on.

Hehe, I upgraded from 7.4 to 8.3.3 and it was a big pain,
but since I survived, I'd say 8.1.9 to 8.3.3 is at least safe!
It all depends on the quality (standards-wise) of the original SQL code, however.

 
 Anyway, read the release notes, and check for any possible
 incompatibility issues. 
 
 http://www.postgresql.org/docs/current/static/release-8-3.html
 http://www.postgresql.org/docs/current/static/release-8-2.html
 
 I believe you should first check the casting changes in 8.3 .
 
 Regards,



-- 
Achilleas Mantzios



Re: [ADMIN] Database Conversion

2008-09-04 Thread Achilleas Mantzios
On Thursday 04 September 2008 16:24:34, Carol Walter wrote:
 Hello, All,
 
 I have a new faculty member who has a large database that is in  
 MySQL.  We don't support MySQL so the database needs to be ported to  
 PostgreSQL.  Her GA, who know MySQL, says that he has a query that he  
 will run that will put the data into postgres.  I thought that the  
 data would have to be output to a text file and then copied into  
 postgres.  I don't know MySQL.  I've done a conversion from Oracle  
 and this is how I did it.  Is he correct that he can put the data  
 into a postgres database by running a MySQL query?  It doesn't sound  
 possible to me.
 

We recently did a conversion from MS Access (I don't know the details) to pgsql 
8.3.3.
The MS Access-aware guy just declared the correct PostgreSQL ODBC settings,
I adjusted the pgsql backend to accept connections from the MS workstation,
then he performed an EXPORT from MS Access to the pgsql data source,
and that's all.
Of course, all I got was the exact MS Access tables, which were then useful
for populating my newly designed pgsql tables.

One caveat here, most commonly, is the design of the DB:
the lower-end you go (MySQL, SQL Server, Access, COBOL, etc.),
the greater the chance you will need to re-engineer the schema.

 Carol
 



-- 
Achilleas Mantzios



Re: [ADMIN] Problems with connection

2008-08-08 Thread Achilleas Mantzios
On Friday 08 August 2008 09:21:22, Курашкин Дмитрий wrote:

Maybe check listen_addresses in postgresql.conf on the server installation?


-- 
Achilleas Mantzios



Re: [ADMIN] conditional logging based on client

2008-07-22 Thread Achilleas Mantzios
On Tuesday 22 July 2008 15:17:42, Rajesh Kumar Mallah wrote:
 Hi,
 
 We want to log all SQLs that has been executed by using psql client.
 we do not want to use .psql_history as it is distributed and may be
 deleted by users .
 
 The original objective is that we should be able to know what all hand made
 SQLs have  been executed in past  (which can date back as long as 1 year even)

You have to adjust log_statement in your postgresql.conf;
however, you will have to bear in mind the performance implications of this.
Normal production sites don't do this.
You can enable/disable this kind of logging at any time by editing postgresql.conf
and sending SIGHUP (kill -HUP) to the postmaster.
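For reference, a minimal sketch of the relevant postgresql.conf settings (values here are illustrative; 'all' logs every statement and is heavy on busy systems):

```conf
# postgresql.conf -- statement logging (illustrative values)
log_statement = 'all'          # none | ddl | mod | all
log_destination = 'stderr'     # or 'syslog'
# After editing, reload without a restart:
#   kill -HUP <postmaster pid>    (or: pg_ctl reload)
```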
 
 regds
 Mallah.
 



-- 
Achilleas Mantzios



Re: [ADMIN] How do i compare 2 postgresql instanses ?

2008-07-08 Thread Achilleas Mantzios
On Tuesday 08 July 2008 15:42:28, Potluri Srikanth wrote:
 
  How do i compare 2 postgresql instanses ?
  1) compare all objects including data in tables a) production box b)
 DR site ?
  can you help me with the query, through which i can get all the
 object of each schema/database ?
  Regards, 
  Srikanth k Potluri 
 

You could write a little program (e.g. in awk) to produce an SQL file of the form:
select count(*) from airlines;
select count(*) from airports;
select count(*) from airticket;
select count(*) from airticketflights;

select count(*) from zzzobjects;

and then run it against the two databases and diff the outputs.
If the outputs are not identical, then you have two different databases;
otherwise you have to dig deeper to know the answer.
So this technique can prove that your DBs are *not* identical, but it says nothing 
about the opposite.
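A minimal sketch of that idea in shell/awk (table names, hostnames, and file names are placeholders; the psql and diff steps are shown commented out since they need live servers):

```shell
# Build a file of count(*) queries from a plain list of table names.
cat > tables.txt <<'EOF'
airlines
airports
airticket
EOF

awk '{ printf "select count(*) from %s;\n", $1 }' tables.txt > counts.sql
cat counts.sql

# Run the same file against both databases and compare:
#   psql -h production -d mydb -f counts.sql > prod.out
#   psql -h dr-site    -d mydb -f counts.sql > dr.out
#   diff prod.out dr.out   # any difference proves the DBs are NOT identical
```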

-- 
Achilleas Mantzios



Re: [ADMIN] how to get fy info from the parent table

2008-07-04 Thread Achilleas Mantzios
On Friday 04 July 2008 16:57:20, Jessica Richard wrote:
 Could some one tell me how to check this pk/fy thing please?


something like 
SELECT kid.relname as KID,c.confrelid,c.conkey,c.confkey,f.relname as DAD 
FROM pg_catalog.pg_constraint c,pg_catalog.pg_class f,pg_catalog.pg_class kid 
WHERE 
c.contype = 'f' AND c.confrelid = f.oid AND c.conrelid=kid.oid AND 
f.relname='vessels';

could be your friend. Substitute vessels with your DAD table to find the 
kids.
 
 From the parent table, I can see only the primary key there, now I wan to 
 know how many and what other foreign keys are referencing this primary 
 key... did this check long time ago and totally forgot how I did it then
 
 Thanks a log,
 Jessica
 
 
 
   



-- 
Achilleas Mantzios



Re: [ADMIN] changing sequence in serial field

2008-07-03 Thread Achilleas Mantzios
On Thursday 03 July 2008 17:19:48, Julius Tuskenis wrote:
 Hello
 
 I noticed, that some serial field is using wrong sequence, so I altered 
 the default value pointing to the sequence I want, but now I cant delete 
 currently no longer used sequence. Did I do someting wrong changing it?
 
 PgSQL 8.1 on Gentoo Linux.
 
 
 What I try to do
 DROP SEQUENCE b_prekes_br_kodas_pbk_id_seq1;

try
ALTER SEQUENCE b_prekes_br_kodas_pbk_id_seq1 OWNED BY NONE;
first, and then drop it.

 
 Result
 ERROR:  cannot drop sequence b_prekes_br_kodas_pbk_id_seq1 because table 
 b_prekes_br_kodas column pbk_id requires it
 HINT:  You may drop table b_prekes_br_kodas column pbk_id instead.
 
 ** Error **
 
 ERROR: cannot drop sequence b_prekes_br_kodas_pbk_id_seq1 because table 
 b_prekes_br_kodas column pbk_id requires it
 SQL state: 2BP01
 Hint: You may drop table b_prekes_br_kodas column pbk_id instead.
 
 -- 
 Julius Tuskenis
 Programavimo skyriaus vadovas
 UAB nSoft
 mob. +37068233050
 
 



-- 
Achilleas Mantzios



Re: [ADMIN] ask...!!!

2008-07-01 Thread Achilleas Mantzios
On Tuesday 01 July 2008 11:12:39, ria ulfa wrote:
 i use linux fedora 7 and database PostgreSQL 8.
 i want to copy table in postgre to a file.
 i used code : simps=# COPY log TO '/root/documents/log.sql';
   note= simps: name of database
 but there is error:  could not open file /root/documents/log.sql for
 writing: Permission denied
 what can i do for repair this error??
 

The best thing you can do is get yourself familiar with a UNIX environment.
Apparently the user the server runs as cannot write to /root/documents/
(a server-side COPY writes the file as the server's own user, typically postgres).

-- 
Achilleas Mantzios



Re: [ADMIN] Major upgrade advice

2008-06-20 Thread Achilleas Mantzios
Why not simply,
SELECT * FROM xxx WHERE timestamp_column::date = '2008-05-20'::date;

On Thursday 19 June 2008 21:56:09, Roberto Garcia wrote:
 We changed it because 8.3 doesn't allow the operator LIKE on timestamp 
 columns. Your syntax works fine but we weren't used to use as u do. 
 There weren't any specific reason, only another way to do that.
 
 I think when we read that operator LIKE and timestamp values were 
 incompatible we assumed that timestamp values couldn't be compared to 
 any char value, opposed as your syntax is.
 
 We've tried to do select * from X where timestamp column = 
 '2008-05-20 10:', expecting that the result would be any minute from 10 
 o'clock (10:15, 10:30, 10:45, etc) of the specific date, but this syntax 
 retrieves only 10:00 from that date.
 
 Tks for the new syntax.
 
 Regards
 Roberto Garcia
 
 Gregory S. Youngblood wrote:
  That's a pretty substantial change.  Why did you have to make this change?
  Was it causing syntax errors or to get better performance on those types of
  queries? Actually, now that I think about it, didn't:
  select * from X where timestamp column between '2008-05-20 00:00:00' and
  '2008-05-20 23:59:59' 
  work? I could have sworn I have used that syntax in 8.2 without having to
  arbitrarily cast the arguments... now I'm going to have to go look. :)
 
  Yup, confirmed, 8.2.7 and no casting on the date arguments when I was using
  between.
 
  I'm just curious if there was a specific reason (i.e. better performance,
  better use of indexes, etc.) for your syntax.
 
  Thanks,
  Greg
 
  -Original Message-
  From: [EMAIL PROTECTED]
  [mailto:[EMAIL PROTECTED] On Behalf Of Roberto Garcia
  Sent: Wednesday, June 18, 2008 12:01 PM
  Cc: pgsql-admin@postgresql.org
  Subject: Re: [ADMIN] Major upgrade advice
 
  Just to mention one issue we had here:
 
  In 8.1 we did this to retrieve all data from a specific date:
  SELECT * FROM xxx
  WHERE timestamp_column LIKE '2008-05-20%'
 
  In 8.3 we had to change to:
  SELECT * FROM xxx
  WHERE timestamp_column >= CAST('2008-05-20' as timestamp) AND
    timestamp_column < CAST('2008-05-21' as timestamp)
 
  Regards
  Roberto Garcia
 
 

 
 
 Roberto Garcia
 Banco de Dados, MSc
 Fone: (12) 3186-8405 
 -- 
 A luta contra o aquecimento global depende de cada um de nós, faça sua parte, 
 economize recursos naturais.
 -- 
 http://www.cptec.inpe.br
 http://www.inpe.br 
 
 



-- 
Achilleas Mantzios



Re: [ADMIN] Major upgrade advice

2008-06-20 Thread Achilleas Mantzios
On Friday 20 June 2008 16:26:19, Roberto Garcia wrote:
 We have an index on the time_stamp column, if the format of argument is 
 different from the format the index was created it is not used, then 
 performance is decreased because a sequential scan is done instead of an 
 index scan.
Then create an additional functional index, like:
CREATE INDEX tablename_tscol_date ON tablename (date(tscol));
where tscol is the timestamp column; a query such as
SELECT * FROM tablename WHERE date(tscol) = '2008-05-20'::date;
can then use that index.

 
 Achilleas Mantzios wrote:
  Why not simply,
  SELECT * FROM xxx WHERE timestamp_column::date = '2008-05-20'::date;
 
  On Thursday 19 June 2008 21:56:09, Roberto Garcia wrote:

  We changed it because 8.3 doesn't allow the operator LIKE on timestamp 
  columns. Your syntax works fine but we weren't used to use as u do. 
  There weren't any specific reason, only another way to do that.
 
  I think when we read that operator LIKE and timestamp values were 
  incompatible we assumed that timestamp values couldn't be compared to 
  any char value, opposed as your syntax is.
 
  We've tried to do select * from X where timestamp column = 
  '2008-05-20 10:', expecting that the result would be any minute from 10 
  o'clock (10:15, 10:30, 10:45, etc) of the specific date, but this syntax 
  retrieves only 10:00 from that date.
 
  Tks for the new syntax.
 
  Regards
  Roberto Garcia
 
  Gregory S. Youngblood wrote:
  
  That's a pretty substantial change.  Why did you have to make this change?
  Was it causing syntax errors or to get better performance on those types 
  of
  queries? Actually, now that I think about it, didn't:
  select * from X where timestamp column between '2008-05-20 00:00:00' and
  '2008-05-20 23:59:59' 
  work? I could have sworn I have used that syntax in 8.2 without having to
  arbitrarily cast the arguments... now I'm going to have to go look. :)
 
  Yup, confirmed, 8.2.7 and no casting on the date arguments when I was 
  using
  between.
 
  I'm just curious if there was a specific reason (i.e. better performance,
  better use of indexes, etc.) for your syntax.
 
  Thanks,
  Greg
 
  -Original Message-
  From: [EMAIL PROTECTED]
  [mailto:[EMAIL PROTECTED] On Behalf Of Roberto Garcia
  Sent: Wednesday, June 18, 2008 12:01 PM
  Cc: pgsql-admin@postgresql.org
  Subject: Re: [ADMIN] Major upgrade advice
 
  Just to mention one issue we had here:
 
  In 8.1 we did this to retrieve all data from a specific date:
  SELECT * FROM xxx
  WHERE timestamp_column LIKE '2008-05-20%'
 
  In 8.3 we had to change to:
  SELECT * FROM xxx
   WHERE timestamp_column >= CAST('2008-05-20' as timestamp) AND
     timestamp_column < CAST('2008-05-21' as timestamp)
 
  Regards
  Roberto Garcia
 
 


  Roberto Garcia
  Banco de Dados, MSc
  Fone: (12) 3186-8405 
  -- 
  A luta contra o aquecimento global depende de cada um de nós, faça sua 
  parte, economize recursos naturais.
  -- 
  http://www.cptec.inpe.br
  http://www.inpe.br 
 
 
  
 
 
 

 
 
 Roberto Garcia
 Banco de Dados, MSc
 Fone: (12) 3186-8405 
 -- 
 A luta contra o aquecimento global depende de cada um de nós, faça sua parte, 
 economize recursos naturais.



-- 
Achilleas Mantzios



Re: [ADMIN] How add db users from shell script with their passwords?

2008-06-03 Thread Achilleas Mantzios
On Tuesday 03 June 2008 11:28:44, Medi Montaseri wrote:
 You can write a program in say Perl and use create user bob with
 .options encrypted password x valid until xxx  which is pretty
 much what createuser(1) does.
 

why not just:

#!/bin/sh
psql -c "CREATE USER foo PASSWORD 'bar'"
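To load several users at once, a hedged sketch (user names, passwords, and file names are made up; the actual psql call is commented out so nothing touches a live server):

```shell
#!/bin/sh
# Generate CREATE USER statements from "name password" pairs.
cat > users.txt <<'EOF'
alice s3cret
bob hunter2
EOF

: > create_users.sql
while read -r user pass; do
  printf "CREATE USER \"%s\" ENCRYPTED PASSWORD '%s';\n" "$user" "$pass" >> create_users.sql
done < users.txt

cat create_users.sql
# psql -U postgres -f create_users.sql   # run the generated statements
```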


 Cheers
 Medi
 
 On Tue, Jun 3, 2008 at 12:54 AM, Илья Скорик [EMAIL PROTECTED] wrote:
 
  Dear community, help please.
 
  I need to add some standard users to database together with their
  standard passwords from a shell script. That the script would not ask
  to enter passwords manually for each user. How it can be made?
 
  How I have understood, createuser comand does not allow to make it?
 
  In advance thanks.
 
  --
  --
 
  Yours faithfully, Ilya Skorik
  the expert
  Inprint - automation of your publishing house
 
  e-mail: [EMAIL PROTECTED]
  web: http://www.inprint.ru/
 
  --
  Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
  To make changes to your subscription:
  http://www.postgresql.org/mailpref/pgsql-admin
 
 



-- 
Achilleas Mantzios



Re: [ADMIN] postgres, syslog and freeBSD

2008-05-30 Thread Achilleas Mantzios
On Friday 30 May 2008 07:35:19, kevin kempter wrote:
 Hi list;
 
 I'm trying to get postgres  syslog to get along with no luck. I'm  
 running freeBSD7 and postgres 8.3.1
 
 Here's my postgres setup
 
 log_destination = 'syslog'
 #logging_collector = off
 syslog_facility = 'LOCAL0'
 syslog_ident = 'postgres'
 silent_mode = on
 log_min_duration_statement = 0
 
 
 Here's my /etc/syslog.conf file:
 
 
 security.*/var/log/security
 auth.info;authpriv.info   /var/log/auth.log
 mail.info /var/log/maillog
 lpr.info  /var/log/lpd-errs
 ftp.info  /var/log/xferlog
 cron.*/var/log/cron
 *.=debug  /var/log/debug.log
 *.emerg   *
 # uncomment this to log all writes to /dev/console to /var/log/ 
 console.log
 #console.info /var/log/console.log
 # uncomment this to enable logging of all log messages to /var/log/ 
 all.log
 # touch /var/log/all.log and chmod it to mode 600 before it will work
 #*.*  /var/log/all.log
 # uncomment this to enable logging to a remote loghost named loghost
 #*.*  @loghost
 # uncomment these if you're running inn
 # news.crit   /var/log/news/news.crit
 # news.err/var/log/news/news.err
 # news.notice /var/log/news/news.notice
 !startslip
 *.*   /var/log/slip.log
 !ppp
 *.*   /var/log/ppp.log
 LOCAL0.*  /var/log/pgsql

change that to

local0.*/var/log/pgsql.log

(lowercase; also, a .log suffix is more standard).
Make sure the /var/log/pgsql.log file exists (create it with touch or vi),
and then change its permissions to something like:
-rw-------  1 root  wheel  109274 30 Μαϊ 09:00 /var/log/pgsql.log

This works on my FreeBSD system, although with 7.4.*; I don't have any 8.3.1 
handy at the moment.
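The file-preparation part can be scripted; a sketch (using a scratch path instead of /var/log so it is safe to run unprivileged; the syslogd reload is commented out):

```shell
#!/bin/sh
# Create the log file with restrictive permissions; syslogd expects it to exist.
LOG=./pgsql.log              # stand-in for /var/log/pgsql.log
touch "$LOG"
chmod 600 "$LOG"
ls -l "$LOG"
# Then make syslogd re-read syslog.conf:
#   kill -HUP $(cat /var/run/syslog.pid)
```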
 
 
 
 I dont see any files show up in /var/log
 
 what am I doing wrong?
 
 Thanks in advance..
 
 /Kevin
 
 



-- 
Achilleas Mantzios



Re: [ADMIN] how to get a number of seconds in some interval ?

2008-05-13 Thread Achilleas Mantzios
On Monday 12 May 2008 18:09:11, Tom Lane wrote:
 Achilleas Mantzios [EMAIL PROTECTED] writes:
  On Monday 12 May 2008 17:32:39, Julius Tuskenis wrote:
  do you know why division of intervals was not implemented? As I see it - 
  there should be an easy way to do so. '1 hour 30 minutes' / '1 hour' = 
  1.5  Seems straight forward...
  
  No idea why. However as you pose it,it makes sense.
 
 No, it doesn't really, because intervals contain multiple components.
 What would you define as the result of
   '1 month 1 hour' / '1 day'
 bearing in mind that the number of days per month is not fixed, and
 neither is the number of hours per day?
 

SELECT extract(epoch from '1 month'::interval)/(60*60*24) as "Days in 1 month 
interval";
 Days in 1 month interval
--
   30
(1 row)
So it seems that the arbitrary assumption (1 month = 30 days, and 1 day = 24 
hours) has already been made 
in the extract function.
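The arithmetic behind that result can be checked without the database (the 2592000 figure is what extract(epoch from '1 month') returns under the 30-day assumption):

```shell
#!/bin/sh
# 1 month assumed = 30 days of 24 hours; extract(epoch) uses exactly this.
SECS_PER_DAY=$((60 * 60 * 24))       # 86400
MONTH_EPOCH=$((30 * SECS_PER_DAY))   # 2592000 seconds
echo "$MONTH_EPOCH"
echo $((MONTH_EPOCH / SECS_PER_DAY)) # back to 30 days, as in the query above
```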

 You can derive a number if you're willing to make arbitrary assumptions
 about the relationships of the different units (as indeed extract(epoch)
 does), but hard-wiring such assumptions into a basic arithmetic operator
 doesn't seem like a good idea to me.
 
   regards, tom lane
 



-- 
Achilleas Mantzios



Re: [ADMIN] how to get a number of seconds in some interval ?

2008-05-13 Thread Achilleas Mantzios
On Tuesday 13 May 2008 11:22:28, Scott Marlowe wrote:
 On Tue, May 13, 2008 at 12:56 AM, Achilleas Mantzios
 [EMAIL PROTECTED] wrote:
   On Monday 12 May 2008 18:09:11, Tom Lane wrote:
  Achilleas Mantzios [EMAIL PROTECTED] writes:
    On Monday 12 May 2008 17:32:39, Julius Tuskenis wrote:
   do you know why division of intervals was not implemented? As I see it -
   there should be an easy way to do so. '1 hour 30 minutes' / '1 hour' =
   1.5  Seems straight forward...
  
   No idea why. However as you pose it,it makes sense.
 
  No, it doesn't really, because intervals contain multiple components.
  What would you define as the result of
'1 month 1 hour' / '1 day'
  bearing in mind that the number of days per month is not fixed, and
  neither is the number of hours per day?
 
 
  SELECT extract(epoch from '1 month'::interval)/(60*60*24) as Days in 1 
  month interval;
   Days in 1 month interval
  --
30
  (1 row)
  So it seems that the arbitary assumption of (1 month=30 days, and 1 day = 
  24 hours) has already been made
  in the extract function.
 
 True.  But that's only because it doesn't have a date to work against.
  If you run:
 
 select '2007-02-01 12:00:00'::timestamp + '1 month'::interval;
 you get: 2007-03-01 12:00:00
 
 If you run:
 select '2007-03-01 12:00:00'::timestamp + '1 month'::interval;
 you get:  2007-04-01 12:00:00
 
 Then, if we run:
  select ('2007-03-01 12:00:00'::timestamp + '1 month'::interval) -
 '2007-03-01 12:00:00'::timestamp;
 we get: 31 days
 
 But if we run:
  select ('2007-02-01 12:00:00'::timestamp + '1 month'::interval) -
 '2007-02-01 12:00:00'::timestamp;
 we get: 28 days
 
 So, I'm not sure how many days a month has.
 
 But your point is valid that given no date to compare to, an interval
 of 1 month gets translated to 30 days.  I can also see a lot of people
 showing up doing fancy interval math THEN wondering why it changes
 when you put a timestamp in the middle of it.
 

Thanks for this well-written explanation!

-- 
Achilleas Mantzios



Re: [ADMIN] how to get a number of seconds in some interval ?

2008-05-12 Thread Achilleas Mantzios
On Monday 12 May 2008 16:44:16, Julius Tuskenis wrote:
 Hello,
 
 Is there a good method of knowing how mutch seconds does some interval 
 have?? I tried my_interval / (interval '1 second'), but there is no 
 operator interval / interval defines (I wonder why).
 Is there a method to do so?
 
something like
dynacom=# SELECT extract(epoch from '49 hours 0 mins 12 seconds'::interval);

 -- 
 Julius Tuskenis
 Programavimo skyriaus vadovas
 UAB nSoft
 mob. +37068233050
 
 



-- 
Achilleas Mantzios



Re: [ADMIN] how to get a number of seconds in some interval ?

2008-05-12 Thread Achilleas Mantzios
On Monday 12 May 2008 17:32:39, Julius Tuskenis wrote:
 Thank you, Achilleas.
 
 do you know why division of intervals was not implemented? As I see it - 
 there should be an easy way to do so. '1 hour 30 minutes' / '1 hour' = 
 1.5  Seems straight forward...
 
No idea why. However, as you pose it, it makes sense.
 
 Achilleas Mantzios wrote:
  On Monday 12 May 2008 16:44:16, Julius Tuskenis wrote:

  Hello,
 
  Is there a good method of knowing how mutch seconds does some interval 
  have?? I tried my_interval / (interval '1 second'), but there is no 
  operator interval / interval defines (I wonder why).
  Is there a method to do so?
 
  
  smth like
  dynacom=# SELECT extract(epoch from '49 hours 0 mins 12 seconds'::interval);
 

  -- 
  Julius Tuskenis
  Programavimo skyriaus vadovas
  UAB nSoft
  mob. +37068233050
 
 
  
 
 
 

 
 



-- 
Achilleas Mantzios



Re: [ADMIN] Examining very large dumps

2008-04-17 Thread Achilleas Mantzios
On Thursday 17 April 2008 08:46:24, Achilleas Mantzios wrote:
 On Thursday 17 April 2008 08:25:22, Tom Lane wrote:
  Achilleas Mantzios [EMAIL PROTECTED] writes:
   Did you make the dump using 8.3's pg_dump?
  
   Yes, with 8.3.1's pg_dump (data only dump)
  
  That would be your problem.  *Don't* use a data-only dump, it
  lobotomizes all intelligence in the system and leaves it up to you
  to deal with foreign-key ordering issues.  There are lots of
  performance arguments against that as well.  See the advice at
  
 
 This procedure is more or less the official upgrade path noted on 
 http://www.sai.msu.su/~megera/wiki/Tsearch2_83_changes
 and described on 
 http://sql-info.de/postgresql/notes/converting-tsearch2-to-8.3.html
 
  http://www.postgresql.org/docs/8.3/static/populate.html
 I am reading this link right away.
So it seems that whoever decides to take this path (schema-only dump, then 
data-only dump) will initially be forced to disable triggers, 
which means that he/she will have to recreate all FK constraints in order to 
enforce/verify the integrity of the DB.
Also, all indexes will have to be recreated.
However, one thing that puzzles me is why this fact about data-only dumps wasn't 
mentioned (at least from what I've read) 
in the numerous conversations regarding the upgrade of tsearch2 to 8.3; moreover, it is 
even recommended (indirectly) via the official tsearch2
site (http://www.sai.msu.su/~megera/wiki/Tsearch2_83_changes) which points to 
http://sql-info.de/postgresql/notes/converting-tsearch2-to-8.3.html

Anyway, that's what I did in my case, where I had to clean up from a previous 7.4 
installation with tsearch2 and intarray installed.
1) take a full dump using pg_dump of 8.3.1
pg_dump -U postgres -h old7.4.19machine mydbname > DUMPFULL.sql
2) createdb mydbname --encoding=original_db_encoding (this is essential in 
order to avoid encoding conversion problems)
3) In the 8.3.1 installation, we make sure tsearch2 and intarray are not installed. 
This step is needed in order to force any 
tsearch2.so- or _int.so-related statements to break.
4) 
create domain public.tsvector as pg_catalog.tsvector;
create domain public.gtsvector as pg_catalog.gtsvector;
create domain public.tsquery as pg_catalog.tsquery;
(Thanks to Tom.) This step is needed in order to force the corresponding CREATE 
TYPE statements to fail,
in order for the CREATE TABLE statements with tsvector-type columns to succeed!
THIS IS IMPORTANT. In my case, moving a large DB dump (35 GB) from 7.4.19 to 
8.3.1,
while I had this problem of broken CREATE TABLE statements, I for some reason ended up with an
incomplete DB. And I mean not only those missing tables, but also several other 
tables having a 0 row count.
I didn't chase this one through; however, I got the idea that it is very important 
to have all tables in place.
5) psql dynacom -f DUMPFULL.sql > see 2>&1
here most things having to do with either intarray or tsearch2 will fail; 
hopefully, though, we will have all our data in place!
6) We drop whatever tsearch2/intarray-related objects are left from the dump
DROP TABLE public.pg_ts_cfg;
DROP TABLE public.pg_ts_cfgmap;
DROP TABLE public.pg_ts_dict ;
DROP TABLE public.pg_ts_parser ;
DROP TYPE public.query_int ;
DROP TYPE public.statinfo ;
DROP TYPE public.tokenout ;
DROP TYPE public.tokentype ;
DROP FUNCTION public.ts_debug(text) ;
DROP TYPE public.tsdebug ;
at this point hopefully we are clean of garbage in our new 8.3.1 DB
7) 
ALTER TABLE vmanews ALTER idxfti TYPE pg_catalog.tsvector;
ALTER TABLE smanews ALTER idxfti TYPE pg_catalog.tsvector;
ALTER TABLE vesselhelp ALTER idxfti TYPE pg_catalog.tsvector;
now we drop the domains used in Tom's initial trick
DROP DOMAIN public.gtsvector ;
DROP DOMAIN public.tsquery ;
DROP DOMAIN public.tsvector ;
8) Now we create the triggers for the update of tsvector columns:
CREATE TRIGGER mytable_tsvectorupdate
BEFORE INSERT OR UPDATE ON mytable
FOR EACH ROW
EXECUTE PROCEDURE tsvector_update_trigger('idxfti', 
'pg_catalog.english', 'header', 'content');
NOTE here that the previous functionality of having an extra function (e.g. 
dropatsymbol) is removed from tsearch2.

9) installation of intarray and creation of all relevant indexes.

And that was it!
Any comments are welcome.
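Given the zero-row-count surprise in step 4, one more safeguard is worth adding after step 5: diff per-table row counts taken from the old and new clusters. How the two count files get produced is left open (e.g. a psql loop over the table list); the file names and contents below are made-up demo data, and the comparison itself is just:

```shell
# old.counts / new.counts: one "tablename rowcount" pair per line,
# collected on each cluster. Contents here are made-up demo data.
workdir=$(mktemp -d)
printf 'vessels 120\nvmanews 4300\n' > "$workdir/old.counts"
printf 'vessels 120\nvmanews 0\n'    > "$workdir/new.counts"
diff "$workdir/old.counts" "$workdir/new.counts" \
  || echo "row counts differ - investigate before trusting the new DB"
rm -rf "$workdir"
```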
 
 Any thoughts very welcome.
  
  regards, tom lane
  
 
 
 
 -- 
 Achilleas Mantzios
 



-- 
Achilleas Mantzios

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


[ADMIN] Examining very large dumps

2008-04-16 Thread Achilleas Mantzios
Hi, I just started the process of migrating from 7.4 to 8.3.1.
On restoring, apart from the easily explainable ERRORs (both in DDL and COPY), 
I got some ugly ERRORs denoting foreign key constraint violations.
Since the dump is 35 GB, I'd like your advice on examining, editing and 
extracting portions of large dumps, in order to find the cause of the problem. 
vim (Linux), vi (FreeBSD) and every other Unix editor I tried could not handle 
the volume.

Thank you very much for any help.
It's been the first day of the upgrade effort, and if I judge from the 
beginning, the rest of the process will not be any easier.
-- 
Achilleas Mantzios



Re: [ADMIN] Examining very large dumps

2008-04-16 Thread Achilleas Mantzios
On Wednesday 16 April 2008 12:52:45, Tino Schwarze wrote:
 On Wed, Apr 16, 2008 at 12:35:26PM +0300, Achilleas Mantzios wrote:
  Hi, i just started the process of migrating from 7.4 to 8.3.1.
  On restoring, apart from the easily explainable ERRORs (both in DDL, COPY), 
  i got some ugly ERRORs
  denoting foreign key constraint violations.
  Since the dump is 35 Gbytes, i'd like your advice for examining, editing, 
  extracting portions of large dumps,
  in order to find the cause of the problem. vim (linux), vi (FreeBSD) or any 
  other unix editor i tried, could not handle the
  volume.
 
 Do you have a plain text dump or a compressed archive? Try using less
 for a first view, then extract things with grep or alter with sed.
It's a text dump. However, less, tail and head seem dead slow; I'll try 
some form of grep.
I'd like to know if there is a more efficient editor for viewing and 
searching very large text files, by line number or content.
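For pulling a known line range out of a file this size, sed with an early quit is far cheaper than any editor, since it stops reading at the end of the range; grep -n supplies the line numbers first. A self-contained sketch (the generated file merely stands in for the 35 GB dump):

```shell
# Print lines 3-5 and then quit, so sed never reads the rest of the
# file; the same command scales to a multi-gigabyte dump.
dump=$(mktemp)
seq 1 1000000 > "$dump"          # stand-in for the real dump file
sed -n '3,5p; 5q' "$dump"        # prints: 3 4 5
rm -f "$dump"
```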
 
 HTH,
 
 Tino.
 
 -- 
 „What we resist, persists.“ (Zen saying)
 
 www.craniosacralzentrum.de
 www.forteego.de
 



-- 
Achilleas Mantzios



Re: [ADMIN] Examining very large dumps

2008-04-16 Thread Achilleas Mantzios
On Thursday 17 April 2008 08:25:22, Tom Lane wrote:
 Achilleas Mantzios [EMAIL PROTECTED] writes:
  Did you make the dump using 8.3's pg_dump?
 
  Yes, with 8.3.1's pg_dump (data only dump)
 
 That would be your problem.  *Don't* use a data-only dump, it
 lobotomizes all intelligence in the system and leaves it up to you
 to deal with foreign-key ordering issues.  There are lots of
 performance arguments against that as well.  See the advice at
 

Ooops, now it seems I have an issue.
The whole point of going this way was that I wanted to have a schema-only 
dump first, in order to clean it of everything that had to do with 
contrib/tsearch2, contrib/intarray and dbsize, as well as to edit the 
triggers (substitute tsearch2 with tsvector_update_trigger) and update the 
tsearch2 indexes to use GIN.

So the plan was:
1) i take the schema-only dump
2) i edit the schema dump
3) i create the db
4) import _int.sql
5) import the schema
6) restore data
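Step 2 of this plan (editing the schema dump) can be partly scripted with grep; a rough sketch, where the sample schema and the pattern list are purely illustrative and multi-line function bodies would still need a manual pass:

```shell
# Filter single-line statements that mention old contrib objects out
# of a schema-only dump. Sample schema and patterns are illustrative.
schema=$(mktemp)
cat > "$schema" <<'EOF'
CREATE TABLE vmanews (id integer, idxfti tsvector);
CREATE TABLE pg_ts_cfg (ts_name text, prs_name text);
SELECT set_curcfg('default');
EOF
# Keeps only the vmanews line; extend the patterns as errors show up.
grep -v -e 'pg_ts_' -e 'set_curcfg' -e 'query_int' "$schema"
rm -f "$schema"
```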
This procedure is kind of the official upgrade noted on 
http://www.sai.msu.su/~megera/wiki/Tsearch2_83_changes
and described on 
http://sql-info.de/postgresql/notes/converting-tsearch2-to-8.3.html

 http://www.postgresql.org/docs/8.3/static/populate.html
I am reading this link right away.

Any thoughts very welcome.
 
   regards, tom lane
 



-- 
Achilleas Mantzios



Re: [ADMIN] off-topic - list files on linux

2008-03-31 Thread Achilleas Mantzios
On Friday 28 March 2008 19:38:02, salman wrote:
 
 Achilleas Mantzios wrote:
  On Friday 28 March 2008 14:39:53, A.Burbello wrote:
  ls -t1p  | grep -v / | grep -v backup | head -1
  
  maybe:
  ls -t1p | grep -v / | grep -v backup | grep -v `ls -t1p | grep -v / | grep 
  -v backup | head -1`
  
 
 1) the -e flag for grep can be used to group strings so you don't have 
 to pipe multiple times (eg: grep -i -e bleh -e bleh2 -e bleh3 -ve backup)
 
 2) wouldn't find be easier to use? find . -maxdepth 1 -type -f -print
 
I tried to give an example, using *only* primitives that already existed in the 
original question.
 -salman
 



-- 
Achilleas Mantzios



Re: [ADMIN] off-topic - list files on linux

2008-03-28 Thread Achilleas Mantzios
On Friday 28 March 2008 14:39:53, A.Burbello wrote:
 ls -t1p  | grep -v / | grep -v backup | head -1

maybe:
ls -t1p | grep -v / | grep -v backup | grep -v `ls -t1p | grep -v / | grep -v 
backup | head -1`
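The nested-backquote version can be flattened: sed -n '2p' just prints the second line of the already-filtered listing. A self-contained demo (directory and file names made up):

```shell
# Second-newest plain file, skipping directories (-p marks them with /)
# and anything matching "backup". The demo files below are made up.
demo=$(mktemp -d)
touch -t 202401010000 "$demo/old.txt"
touch -t 202401020000 "$demo/mid.txt"
touch -t 202401030000 "$demo/new.txt"
mkdir "$demo/somedir"
touch -t 202401040000 "$demo/backup.sql"
( cd "$demo" && ls -t1p | grep -v / | grep -v backup | sed -n '2p' )
# prints: mid.txt
rm -rf "$demo"
```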

-- 
Achilleas Mantzios



Re: [ADMIN] DB Structure Recovery

2008-03-19 Thread Achilleas Mantzios
On Wednesday 19 March 2008 16:56:19, veejar wrote:
 Hello!
 
 My filesystem (UFS-FreeBSD) was crashed and I have lost files from
 PGSQL DATA DIR FOLDER after fsck-utility.
 
 I have lost files from directory pg_xlog
 I reset my pg_xlog.
 
 And now have such error by starting pgsql-server:
 
 LOG:  database system was interrupted at 2008-03-18 22:29:48 EET
 LOG:  checkpoint record is at 3E/6C20
 LOG:  redo record is at 3E/6C20; undo record is at 3E/6C20;
 shutdown TRUE
 LOG:  next transaction ID: 0/1624775486; next OID: 9528514
 LOG:  next MultiXactId: 6643; next MultiXactOffset: 13690
 LOG:  database system was not properly shut down; automatic recovery in 
 progress
 LOG:  record with zero length at 3E/6C68
 LOG:  redo is not required
 FATAL:  could not access status of transaction 6643
 DETAIL:  Could not read from file pg_multixact/offsets/ at
 offset 24576: Success.
 LOG:  startup process (PID 6267) exited with exit code 1
 LOG:  aborting startup due to startup process failure
 LOG:  logger shutting down
 
 Is it possible to solve this problem?
 HELP!!!
 

maybe take a look at your lost+found dir under the fs in question.

-- 
Achilleas Mantzios

KOSOVO IS SERBIA FOR EVER



Re: [ADMIN] Query results erratic on Linux

2008-02-08 Thread Achilleas Mantzios
On Friday 08 February 2008 00:35:52, PostgreSql wrote:
 Hi
 My application runs fine on Windows.
 However when I copy the files to the Linux server some queries return no 
 results or errors even though the records and tables exist!
 It is always the same records or tables that are not found!
 In case of a table the error is: function.pg-query: Query failed: ERROR: 
 relation "sites" does not exist in 
 
 Any idea what might cause the problem?

That you were using windows before, so the spirit of UNIX/open source
is punishing you!

Maybe your table/column names are sensitive to lower/upper case?

 
 The server configuration:
 PHP Version 5.0.4
 PostgreSQL 8.1.4 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.2.2 
 20030222 (Red Hat Linux 3.2.2-5)
 
 Thanks 
 
 
 
 



-- 
Achilleas Mantzios

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [ADMIN] postgresql in FreeBSD jails: proposal

2008-01-17 Thread Achilleas Mantzios
On Thursday 17 January 2008 07:52:50, Marc G. Fournier wrote:
 [EMAIL PROTECTED] (Mischa Sandberg) writes:
 
 Unfortunately, with multiple jails running PG servers and (due to app
 limitations) all servers having same PGPORT, you get the situation that
 when jail#2 (,jail#3,...) server comes up, it:
 - detects that there is a shm seg with ipc key 5432001
 - checks whether the associated postmaster process exists (with kill -0)
 - overwrites the segment created and being used by jail #1
 
 Easiest fix: change the UID of the user running the postmaster (ie. pgsql) so 
 that each runs as a distinct UID (instead of distinct PGPORT) ... been doing 
 this since moving to FreeBSD 6.x ... no patches required ...

Hi Marc,
what happened with the patches at 
http://docs.freevps.com/doku.php?id=freebsd:changelog?
Those patches are supposed to make IPC run without collisions in many jails, 
thus solving the
classic problem with postgresql, allowing the admin to use the same uid, PGPORT 
in all pgsql
server instances.
Do you know if any of this stuff made it into official FreeBSD?

-- 
Achilleas Mantzios



Re: [ADMIN] How to kill process idle in transaction

2007-11-01 Thread Achilleas Mantzios
On Wednesday 31 October 2007 23:47, Scott Marlowe wrote:
 On 10/30/07, Achilleas Mantzios [EMAIL PROTECTED] wrote:
  On Tuesday 30 October 2007 10:45, Patrick De Zlio wrote:
   Hi,
  
   We are running PostgreSQL 8.2.4 with quite highly updated millions rows
   tables.
  
   It comes sometimes to get some idle in transaction processes, which
   doesn't have first really viewable effect out of giving very bad
   estimated numbers for pg_class.reltuples and, if we let them run as it
   is, block the autovacuum from releasing rows, and after few days some
   worse problems.
  
   Now we are able to detect these idle in transaction processes, but
   what to do with them ?
  
   What to do to avoid such situation? How to find the bad request which
   put the process in this state? From the time it is idle in transaction,
   we can't see the query in the pg_lock table anymore.
  
   How to deal with such process when it is already idle in transaction?
   can we simply kill it? It is a backend from pgpool. Can we simply
   restart it (how) without restarting all the database?
 
  Some connection pools issue a BEGIN on the connection just
  after the previous user of the connection issues COMMIT and
  then closes it (JBoss does it). So, IDLE in transaction is not
  a priori bad.

 You can turn off that behaviour, IIRC.  It is broken behaviour as far
 as pgsql is concerned, I believe.

Not nice surely, but why broken?

-- 
Achilleas Mantzios

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [ADMIN] How to kill process idle in transaction

2007-10-30 Thread Achilleas Mantzios
On Tuesday 30 October 2007 10:45, Patrick De Zlio wrote:
 Hi,

 We are running PostgreSQL 8.2.4 with quite highly updated millions rows
 tables.

 It comes sometimes to get some idle in transaction processes, which
 doesn't have first really viewable effect out of giving very bad estimated
 numbers for pg_class.reltuples and, if we let them run as it is, block the
 autovacuum from releasing rows, and after few days some worse problems.

 Now we are able to detect these idle in transaction processes, but what
 to do with them ?

 What to do to avoid such situation? How to find the bad request which put
 the process in this state? From the time it is idle in transaction, we
 can't see the query in the pg_lock table anymore.

 How to deal with such process when it is already idle in transaction? can
 we simply kill it? It is a backend from pgpool. Can we simply restart it
 (how) without restarting all the database?

Some connection pools issue a BEGIN on the connection just 
after the previous user of the connection issues COMMIT and 
then closes it (JBoss does it). So, IDLE in transaction is not 
a priori bad.

However, if in your system you have lock problems, 
and transactions that don't get rolled back or committed, 
then it is a problem with your application.

Have you done any monitoring on the size of the pool?
You should check that:
1) Connections get closed eventually
2) Transactions either have the autocommit flag on, or are committed or 
rolled back.
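Detection at least is easy to script, because the postmaster rewrites each backend's ps title; a sketch against canned sample lines (in real use, pipe ps axww in instead of the printf, and note the column layout varies by platform):

```shell
# Extract PIDs of backends whose ps title shows "idle in transaction".
# The two sample lines are made up; field 1 plays the role of the PID.
printf '%s\n' \
  '12345 postgres: app mydb 10.0.0.5 idle in transaction' \
  '12346 postgres: app mydb 10.0.0.6 SELECT' \
  | awk '/idle in transaction/ {print $1}'   # prints: 12345
```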


 Thanks for help

-- 
Achilleas Mantzios



Re: [ADMIN] How to check running database is 32 bit

2007-07-19 Thread Achilleas Mantzios
On Thursday 19 July 2007 11:27, nilesh khode wrote:
 Hello,

 I have installed Postgresql 8.2.4 using Tarball on Suse Linux 10.1 (64
 Bit).

 I already had running Postgresql 64 bit under /var/lib/pgsql. I installed
 32 bit 2nd Postgresql using Tarball under /usr/local/pgsql. I want to cross
 check whether it is 32 bit or 64 bit. So how shld I check the same?

In SELECT version(); it should say something about the architecture.
If this does not help, write a C function that returns sizeof(void *), 
compile it on both systems (64, 32), and install it on both PostgreSQL 
instances. Then call it and check the value: if it's 4 the server is a 
32-bit build; if it's 8, it is a 64-bit one.
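Two shell-level checks usually settle this without writing any C (the binary path below is illustrative; adjust it to the actual install):

```shell
# Word size of the OS/userland: prints 32 or 64.
getconf LONG_BIT
# What a given binary was compiled as: "ELF 32-bit" vs "ELF 64-bit"
# appears in the output. The path here is illustrative.
pgbin=/usr/local/pgsql/bin/postgres
[ -x "$pgbin" ] && file "$pgbin" || true
```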


 Thanks,
 Nilesh









-- 
Achilleas Mantzios

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [ADMIN] My mails to the forum get garbled, and a new line is being inserted. Why? How?

2007-07-19 Thread Achilleas Mantzios
On Thursday 19 July 2007 12:29, Abraham, Danny wrote:
 Thanks


As a first measure, stop posting HTML attachments, and then see how it goes :)

 Danny Abraham

 [EMAIL PROTECTED]

-- 
Achilleas Mantzios

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [ADMIN] Can I backup/restore a database in a sql script?

2007-06-22 Thread Achilleas Mantzios
On Friday 22 June 2007 10:59, Joost Kraaijeveld wrote:
 Hi,

 I want to write a sql script in which I backup a database and restore a new
 (altered) version of that database. Is that possible? If so , can anyone
 give me an example of how to do that?

 I can run it from any command prompt (psql -U postgres template1 < 
 my_db.backup) but I would like it to run from psql (which should give me 
 one script for all the platforms I must do this on: Windows, FreeBSD and Debian)


With \! you can execute shell commands.
So you could probably:
\c template1
\! pg_dump your_db > your_db.sql
\! some_processing your_db.sql > your_db_altered.sql
DROP DATABASE your_db;
CREATE DATABASE your_db;
\c your_db
\i your_db_altered.sql

I don't know if the \! escape works on Windows, and in any case the pg_dump 
and some_processing programs must be in the PATH on Unix/Windows.

Debian/FreeBSD behaviour is exactly the same; it is on Windows that you should 
try to make it work.

 TIA

 Joost


-- 
Achilleas Mantzios



  1   2   >