Re: [GENERAL] How to check streaming replication status

2017-08-31 Thread Glyn Astill
>From: Condor 
>To: Glyn Astill 
>Cc: "pgsql-general@postgresql.org" ; 
>"pgsql-general-ow...@postgresql.org" 
>Sent: Thursday, 31 August 2017, 09:42:17 GMT+1
>Subject: Re: [GENERAL] How to check streaming replication status

>>> My question is: How can I check the replication status when the
>>> slave does not accept connections?
>>
>> That's right; for a server in recovery you need to call
>> pg_last_xlog_receive_location() or pg_last_xlog_replay_location() to
>> get the current xlog position.
>
>
>Yes,
>but my question is how to call them when I'm unable to connect to the slave
>even when replication is over. How can I ask the slave server: are you in
>recovery mode?
>

Define "unable to connect", in your previous example you appeared to be 
connected to the slave and attempting to call pg_current_xlog_location() ...

If you want to know if postgres is in recovery call pg_is_in_recovery()

https://www.postgresql.org/docs/current/static/functions-admin.html
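For example, run on the standby itself (these admin functions are present in 
all 9.x releases):

select pg_is_in_recovery();
select pg_last_xlog_receive_location(), pg_last_xlog_replay_location();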

>
>What is the last WAL file sent from the master? Which file are you
>processing now? How far behind are you?
>
>As I ask:  My question is: How can I check the replication status when
>the slave does not accept connections?

Again I think you need to define "the slave does not accept connections".

If you've not configured the slave to be a hot standby, then try setting 
hot_standby=on in postgresql.conf on the slave.  If you don't want to do that, 
you can run the pg_controldata executable on the slave to see the cluster state.
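For example (the data directory path here is just a guess; point it at your own):

$ pg_controldata /var/lib/postgresql/9.6/main | grep 'cluster state'
Database cluster state:               in archive recovery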

You should also be able to see streaming replication slave lag on the master by 
looking at pg_stat_replication and using pg_xlog_location_diff()
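Something along these lines on the master should do it (pg_xlog_location_diff() 
requires 9.2+; the column names are from the 9.x pg_stat_replication view):

select application_name, client_addr, state,
       pg_xlog_location_diff(pg_current_xlog_location(), replay_location) as replay_lag_bytes
from pg_stat_replication;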

hth


Re: [GENERAL] How to check streaming replication status

2017-08-31 Thread Glyn Astill

> From: Condor 
> To: "pgsql-general@postgresql.org" 
> Sent: Thursday, 31 August 2017, 08:36:19 GMT+1
>
> after a hour I get error message on slave server:
>
> LOG:  restored log file "0001008B00DC" from archive
> LOG:  restored log file "0001008B00DD" from archive
> cp: can get attribute '/archive/0001008B00DE': No such file or 
> directory
> LOG:  started streaming WAL from primary at 8B/DD00 on timeline 1
So it read all the logs from the archive and then started streaming; if there 
are no further messages you're OK.

...

> and after I did it, got:
>
> STATEMENT:  SELECT pg_current_xlog_location()
> ERROR:  recovery is in progress
> HINT:  WAL control functions cannot be executed during recovery.
> 
> My question is: How can I check the replication status when the slave 
> does not accept connections?

That's right; for a server in recovery you need to call 
pg_last_xlog_receive_location() or pg_last_xlog_replay_location() to get the 
current xlog position.


Re: [GENERAL] Invalid field size

2017-07-04 Thread Glyn Astill

>On Tuesday, 4 July 2017, 12:16:57 GMT+1, Moreno Andreo 
> wrote:
>
>
> Any ideas? As with many errors I've had in the past, I assume we are trying to 
> COPY FROM corrupted data (we often get this error when using cheap pen drives). 
> Is that reasonable, or do I have to search elsewhere?

I'd start by looking at the data on line 619 of your file; perhaps you could 
post it?




Re: [GENERAL] Trigger based logging alternative to table_log

2017-03-28 Thread Glyn Astill
> From: Jeff Janes 
> To: "pgsql-general@postgresql.org" 
> Sent: Monday, 27 March 2017, 18:08
> Subject: [GENERAL] Trigger based logging alternative to table_log
>
> I have some code which uses table_log 
> (http://pgfoundry.org/projects/tablelog/) to keep a log of changes to 
> selected tables.  I don't use the restore part, just the logging part. 
>
> It creates a new table for each table being logged, with several additional 
> columns, and adds triggers to insert rows in the new table for changes in the 
> original.
> 
> The problem is that table_log hasn't been maintained in nearly 10 years, and 
> pgfoundry itself seems to have one foot in the grave and one on a banana peel.
>
>There are several other systems out there which store the data in hstore or 
>json, which I would probably use if doing this from scratch.  But I'd rather 
>preserve the existing log tables than either throw away that data, or port it 
>over to a new format.
>
>Is there any better-maintained code out there which would be compatible with 
>the existing schema used by table_log?

I was in exactly the same situation a few years ago.  As you say, ideally we'd 
move away from table_log - but when the users are used to doing things the 
table_log way and they like it...

I have a slightly more up to date fork (here: 
https://github.com/glynastill/pg_table_audit), which as I recall works fine 
with 9.6.  In general the whole thing would benefit from an overhaul, but I 
think the effort of moving to a better format would be less.


I also wrote a pl/pgsql version, as mentioned by Felix, but I was never 
particularly happy with it, so I stuck with the above fork with the intention 
of switching to a json format eventually.

Glyn


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Table not cleaning up from dead tuples

2017-03-14 Thread Glyn Astill
> We're, in general, pretty careful with our DB, as it contains important 
> data. 
> Most rollbacks are issued by the application (which processes all data inside 
> transactions).
> 
> p.s.  Time is in UTC (GMT+0)
> 
> =# select min(xact_start) from pg_stat_activity where state<>'idle';
>   min
> ---
>  2017-03-14 15:36:05.432139+00
> (1 row)
> 
> =# select * from pg_stat_activity where state<>'idle' order by 
> xact_start limit 1;
>   datid  | datname |  pid  | usesysid | usename | application_name | 
> client_addr | client_hostname | client_port | backend_start | 
>  xact_start   |  query_start  | 
> state_change  | waiting | state  | backend_xid | backend_xmin |   
>query
> -+-+---+--+-+--+-+-+-+---+---+---+---+-++-+--+--
>  4906146 |  | 37235 |   10 | pgsql   |  |
>  | | | 2017-03-14 05:55:43.287128+00 | 
> 2017-03-14 15:36:05.432139+00 | 2017-03-14 15:36:05.432139+00 | 2017-03-14 
> 15:36:05.432141+00 | f   | active | |   1621959045 | 
> autovacuum: VACUUM public.stats_y2017_m3_d13_hk2
> (1 row)
> 
> 


And no prepared transactions you say?

select * from pg_prepared_xacts;

Perhaps someone else will chime in ...


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Table not cleaning up from dead tuples

2017-03-14 Thread Glyn Astill


> These tables are the original ones; they don't have any activity now. We copied 
> the data to NEW tables and are trying to solve the root of the problem
> 
>  - target database where broken tables are located
> 
> 
> - VACUUM FULL VERBOSE
> =# VACUUM (FULL, VERBOSE) __orders_y2017_m2_to_drop;
> INFO:  vacuuming "public.__orders_y2017_m2_to_drop"
> INFO:  "__orders_y2017_m2_to_drop": found 0 removable, 3179076 nonremovable 
> row versions in 551423 pages
> DETAIL:  1778770 dead row versions cannot be removed yet.
> CPU 30.92s/102.66u sec elapsed 184.69 sec.
> 
> =# VACUUM (FULL, VERBOSE) __orders_y2017_m3_to_drop;
> INFO:  vacuuming "public.__orders_y2017_m3_to_drop"
> INFO:  "__orders_y2017_m3_to_drop": found 0 removable, 9103104 nonremovable 
> row versions in 1520371 pages
> 
> DETAIL:  8396820 dead row versions cannot be removed yet.
> CPU 65.00s/284.03u sec elapsed 399.66 sec.
> 
> 
> - DB INFO
> =# select * from pg_stat_user_tables where relname in 
> ('__orders_y2017_m3_to_drop', '__orders_y2017_m2_to_drop');
>relid   | schemaname |  relname  | seq_scan | seq_tup_read 
> | idx_scan  | idx_tup_fetch | n_tup_ins | n_tup_upd | n_tup_del | 
> n_tup_hot_upd | n_live_tup | n_dead_tup | n_mod_since_analyze |  
> last_vacuum  |last_autovacuum| last_analyze   
>|   last_autoanalyze| vacuum_count | autovacuum_count | 
> analyze_count | autoanalyze_count 
> ---++---+--+--+---+---+---+---+---+---+++-+---+---+---+---+--+--+---+---
>  179718008 | public | __orders_y2017_m2_to_drop | 5615 |   7934041177 
> | 328044580 |7979850698 | 0 |   3065776 | 0 | 
> 25685 |3082885 |1759481 |   0 | 2017-03-14 
> 11:57:40.388527+00 | 2017-03-14 07:37:50.907757+00 | 2017-03-14 
> 11:57:42.656628+00 | 2017-03-13 16:15:55.60846+00  |5 |   
> 96 | 4 |15
>  207347508 | public | __orders_y2017_m3_to_drop | 1128 |794959804 
> | 129799001 |1292952066 |706089 |   8377499 | 0 |
> 118035 |8937540 |8406385 |   0 | 2017-03-14 
> 11:57:58.026816+00 | 2017-03-14 10:09:08.597031+00 | 2017-03-14 
> 11:57:59.117331+00 | 2017-03-14 04:11:11.370923+00 |4 |   
> 11 | 4 | 7
> (2 rows)
> 
> =# select * from pg_stat_database;
>datid   |  datname   | numbackends | xact_commit | xact_rollback | 
> blks_read |   blks_hit   | tup_returned  | tup_fetched  | tup_inserted | 
> tup_updated | tup_deleted | conflicts | temp_files | temp_bytes | deadlocks | 
> blk_read_time | blk_write_time |  stats_reset 
> ---++-+-+---+---+--+---+--+--+-+-+---+++---+---++---
>4906146 | |  62 |24781721 |   5888121 
> | 492125811 | 348274702788 | 1127846911908 | 250049066062 |413981238 |   
> 188610068 |  397036 | 0 | 53 | 7507001344 | 1 |   
>   0 |  0 | 2017-03-06 02:33:26.466458+00
> 
>  113713583 | sentry |   0 |  350030 |   342 | 
> 11574 | 33444698 |  22519113 | 10577975 | 2438 |  
>  27672 |   2 | 0 |  0 |  0 | 0 |  
>0 |  0 | 2017-03-06 02:33:24.156858+00
> 
>  148539615 | test   |   0 |   0 | 0 | 
> 0 |0 | 0 |0 |0 |  
>  0 |   0 | 0 |  0 |  0 | 0 |  
>0 |  0 | 
>  161510793 | ... |   0 |   0 | 0 | 0 |
> 0 | 0 |0 |0 |   0 |   
> 0 | 0 |  0 |  0 | 0 | 0 | 
>  0 | 
> (8 rows)

Quite a large quantity of rollbacks there.  In your initial email the longest 
running transaction was an autovacuum task, wasn't it?  Are you sure there are 
no other long-running transactions?


What's the output of:

select min(xact_start) from pg_stat_activity where state<>'idle';
select * from pg_stat_activity where state<>'idle' order by xact_start limit 1;


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/p

Re: [GENERAL] Table not cleaning up from dead tuples

2017-03-14 Thread Glyn Astill
> 

> From: Антон Тарабрин 
> To: "pgsql-general@postgresql.org" 
> Sent: Tuesday, 14 March 2017, 14:05
> Subject: Re: [GENERAL] Table not cleaning up from dead tuples
> 
> 
> Yep. VACUUM FULL is not helping us on the OLD tables, which are not getting 
> updated and are not used in any requests. The bloat is still there. 
> This is a production system, so now we are investigating why it's happening.
> 
> 
> > Information about problematic tables:
> > https://gist.github.com/tarabanton/edf7f8dc26eb7ec37a9cfa3424493871
> At the link above are some results of VACUUM (ANALYZE, VERBOSE) from the source 
> (OLD) table.
> 

So what's the output of vacuum full? Or are you saying you can't sustain the 
exclusive lock vacuum full would require?

Plain vacuum can only reclaim free space at the end of the table; fragmented 
dead rows can only be marked as available for reuse.

Perhaps give us some idea of activity on your database/tables:


select * from pg_stat_user_tables where relname in 
('__orders_y2017_m3_to_drop', '__orders_y2017_m2_to_drop');
select * from pg_stat_database;


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Table not cleaning up from dead tuples

2017-03-14 Thread Glyn Astill


> From: Антон Тарабрин 
> To: pgsql-general@postgresql.org 
> Sent: Tuesday, 14 March 2017, 12:09
> Subject: [GENERAL] Table not cleaning up from dead tuples

> General info about  our database: 

> https://gist.github.com/aCLr/dec78ab031749e517550ac11f8233f70
> 
> Information about problematic tables:
> https://gist.github.com/tarabanton/edf7f8dc26eb7ec37a9cfa3424493871
> 
> As you can see, there are 2 to 10 times more dead tuples than the actual row count.
> We've tried VACUUM FULL and CLUSTER without any success.
> There are no long-term locks, idle-in-transaction sessions or prepared 
> transactions.
> 
> We are temporarily fixing this like that:
> BEGIN;
> CREATE TABLE _orders_temp (LIKE orders_y2017_m3 INCLUDING DEFAULTS INCLUDING 
> CONSTRAINTS INCLUDING INDEXES INCLUDING COMMENTS);
> INSERT INTO _orders_temp select * from orders_y2017_m3;
> ALTER TABLE orders_y2017_m3 RENAME TO __orders_y2017_m3_to_drop;
> ALTER TABLE __orders_y2017_m3_to_drop NO INHERIT orders;
> ALTER TABLE _orders_temp RENAME TO orders_y2017_m3;
> ALTER TABLE orders_y2017_m3 INHERIT orders;
> COMMIT;
> 
> but bloat returns again and again

Due to the postgres MVCC model, some bloat is to be expected unless your data 
is totally static.

Are you saying vacuum full and cluster aren't removing the bloat?  Sounds 
unlikely to me.


Issues only arise when you can't manage it; from what you've posted we can see 
autovacuum is running, but perhaps it's not keeping up with your workload, or 
your update patterns make it difficult to keep bloat down; we can also see some 
rollbacks, which I'm sure are part of your problem.

You could try updating to the latest minor version of postgres as there are a 
few fixes to autovacuum in versions after 9.5.3, but I think analyzing your 
update patterns and/or tuning autovacuum will be your starting point.
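As a hedged example of per-table autovacuum tuning (the values are purely 
illustrative, not recommendations):

alter table orders_y2017_m3 set (
    autovacuum_vacuum_scale_factor = 0.02,  -- vacuum once ~2% of rows are dead
    autovacuum_vacuum_cost_delay = 10       -- let autovacuum work harder
);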


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] postgres driver for mysql

2016-09-06 Thread Glyn Astill

> From: Mimiko 
>To: Posthresql-general  
>Sent: Monday, 5 September 2016, 19:38
>Subject: [GENERAL] postgres driver for mysql
> 
>
>Hello to all.
>
>I want to move applications to postgres. But there are applications 
>which can only use mysql, local mdb, or mssql. For now I run a mysql 
>server to which these applications are connected. A long time ago I 
>encountered a federated module for mysql that allowed storing data in 
>postgres. Now that site no longer exists.
>
>Can anyone suggest a federated db plugin for mysql/mariadb to store data 
>in pg? Changing the applications is impossible; they are proprietary and 
>work only with specific databases.
>


I've no answer to your question really; however, it looks like at some point a 
while back I bookmarked this page:

http://www.pinaraf.info/2013/03/my-post-engine-a-postgres-engine-for-mysql/

Is that what you were using? If so, the git repository is still accessible via 
gitweb:


https://www.gitorious.org/my-post-engine/my-post-engine

If you're desperate to use it you could pull a snapshot from there.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Transaction abort difference between psql, isql, ODBC and JDBC pgsql 8.4

2015-06-19 Thread Glyn Astill

> From: Gary Cowell 
>To: pgsql-general@postgresql.org 
>Sent: Friday, 19 June 2015, 12:15
>Subject: [GENERAL] Transaction abort difference between psql, isql, ODBC and 
>JDBC pgsql 8.4
> 
>
>
>Hello
>
>I'm aware of the automatic transaction abort that occurs in PostgreSQL if you 
>have DML throw an error during a transaction, this prevents future selects 
>within transaction, until rollback or commit (and hence, new transaction). I'm 
>okay with this.
>
>
>Doing all this on Red Hat 6.5 with Postgresql 8.4 (shipped repository version 
>in Red Hat 6.5).
>Example in psql:
>
>$ psql
>psql (8.4.20)
>Type "help" for help.
>
>e5=# begin transaction;
>BEGIN
>e5=# select 1;
> ?column?
>--
>1
>(1 row)
>
>e5=# insert into conc values(1,'mouse');
>ERROR:  duplicate key value violates unique constraint "conc_key"
>e5=# select 1;
>ERROR:  current transaction is aborted, commands ignored until end of 
>transaction block
>e5=# \q
>
>
>So I start a transaction, then get a DML error, and I can't select any more.
>
>Same thing happens with JDBC :
>
>$ java -cp .:/usr/share/java/postgresql-jdbc3.jar t
>PostgreSQL 8.4.20 on x86_64-redhat-linux-gnu, compiled by GCC gcc (GCC) 4.4.7 
>20120313 (Red Hat 4.4.7-11), 64-bit
>Jun 19, 2015 11:39:55 AM t main
>SEVERE: ERROR: duplicate key value violates unique constraint "conc_key"
>org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique 
>constraint "conc_key"
>at 
> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2094)
>at 
> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1827)
>at 
> org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
>at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:508)
>at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:370)
>at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:362)
>at t.main(t.java:48)
>
>Jun 19, 2015 11:39:55 AM t main
>SEVERE: ERROR: current transaction is aborted, commands ignored until end of 
>transaction block
>org.postgresql.util.PSQLException: ERROR: current transaction is aborted, 
>commands ignored until end of transaction block
>at 
> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2094)
>at 
> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1827)
>at 
> org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
>at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:508)
>at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:370)
>at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.executeQuery(AbstractJdbc2Statement.java:250)
>at t.main(t.java:56)
>
>I'm just selecting version() before and after a duplicate insert. Again the 
>transaction is aborted.
>
>But with ODBC in isql, and with other ODBC apps, we get this:
>
>+---+
>| Connected!|
>|   |
>| sql-statement |
>| help [tablename]  |
>| quit  |
>|   |
>+---+
>SQL> begin transaction
>SQLRowCount returns -1
>SQL> select 1
>++
>| ?column?   |
>++
>| 1  |
>++
>SQLRowCount returns 1
>1 rows fetched
>SQL> insert into conc values(1,'mouse');
>[23505][unixODBC]ERROR: duplicate key value violates unique constraint 
>"conc_key";
>Error while executing the query
>[ISQL]ERROR: Could not SQLExecute
>SQL> select 1
>++
>| ?column?   |
>++
>| 1  |
>++
>SQLRowCount returns 1
>1 rows fetched
>
>The transaction is not aborted with ODBC, but is with JDBC
>
>My odbcinst.ini says:
>
>
># Driver from the postgresql-odbc package
># Setup from the unixODBC package
>[PostgreSQL]
>Description = ODBC for PostgreSQL
>Driver  = /usr/lib/psqlodbc.so
>Setup   = /usr/lib/libodbcpsqlS.so
>Driver64= /usr/lib64/psqlodbc.so
>Setup64 = /usr/lib64/libodbcpsqlS.so
>FileUsage   = 1
>
>
>and the driver odbc.ini:
>[e5]
>Description = Test to Postgres
>Driver  = PostgreSQL
>Trace   = Yes
>TraceFile   = sql.log
>Database= e5
>Servername  = localhost
>UserName=
>Password=
>Port= 5432
>Protocol= 6.4
>ReadOnly= No
>RowVersioning   = No
>ShowSystemTables= No
>ShowOidColumn   = No
>FakeOidIndex= No
>ConnSettings=
>
>
>
>I don't mind which way it works, either aborting transactions after failed 
>dml, or no

Re: [GENERAL] replacing jsonb field value

2015-05-30 Thread Glyn Astill




- Original Message -
> From: Andreas Kretschmer 
> To: pgsql-general@postgresql.org
> Cc: 
> Sent: Saturday, 30 May 2015, 13:10
> Subject: Re: [GENERAL] replacing jsonb field value
> 
> Michael Paquier  wrote:
> 
>> 
>>  Append the new value to the existing field; jsonb has the property of
>>  enforcing key uniqueness, and uses the last value scanned for a given
>>  key.
> 
> Can you show a simple example of how to append a jsonb to a jsonb field?
> Maybe I'm blind, but I can't find how it works.
> 
> Thx.
> 

> 
> Andreas

Prior to 9.5 you can't; I think you have to use something like jsonb_each to 
unwrap it and then wrap it back up again.

The jsonbx extension, which I believe is what ended up in 9.5, has a simple 
concatenation function (here: https://github.com/erthalion/jsonbx); I also had 
a go (here: https://github.com/glynastill/pg_jsonb_opx).
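As a rough pre-9.5 sketch of the unwrap/rewrap approach (json_object_agg needs 
9.4, and this only handles top-level keys):

select json_object_agg(key, value)::jsonb as result
from (
    select key, value
    from jsonb_each('{"a": 1, "b": 2}'::jsonb)
    where key <> 'b'
    union all
    select 'b', '99'::jsonb  -- the replacement value for key "b"
) s;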


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] unexpected (to me) sorting order

2015-04-09 Thread Glyn Astill
> From: Scott Marlowe 
 > To: Glyn Astill 
 > Cc: Björn Lundin ; "pgsql-general@postgresql.org" 
 > 
 > Sent: Thursday, 9 April 2015, 13:23
 > Subject: Re: [GENERAL] unexpected (to me) sorting order
 > 
> On Wed, Apr 8, 2015 at 3:33 AM, Glyn Astill  
> wrote:
> 
>> 
>>> From: Björn Lundin 
>>> To: pgsql-general@postgresql.org
>>> Sent: Wednesday, 8 April 2015, 10:09
>>> Subject: [GENERAL] unexpected (to me) sorting order
>>> 
>>> select * from T_SORT order by NAME ;
>>> 
>>> rollback;
>>> id |        name
>>> +
>>>   1 | FINISH_110_150_1
>>>   2 | FINISH_110_200_1
>>>   3 | FINISH_1.10_20.0_3
>>>   4 | FINISH_1.10_20.0_4
>>>   5 | FINISH_1.10_30.0_3
>>>   6 | FINISH_1.10_30.0_4
>>>   7 | FINISH_120_150_1
>>>   8 | FINISH_120_200_1
>>> (8 rows)
>>> 
>>> why is FINISH_1.10_20.0_3 between
>>> FINISH_110_200_1 and
>>> FINISH_120_150_1
>>> ?
>>> 
>>> That is, why is '.' between 1 and 2, as in 110/120?
>>> 
>>> 
>>> pg_admin III reports the database is created like
>>> CREATE DATABASE bnl
>>>   WITH OWNER = bnl
>>>       ENCODING = 'UTF8'
>>>       TABLESPACE = pg_default
>>>       LC_COLLATE = 'en_US.UTF-8'
>>>       LC_CTYPE = 'en_US.UTF-8'
>>>       CONNECTION LIMIT = -1;
>>> 
>>> 
>> 
>> 
>> 
>> The collation of your "bnl" database is utf8, so the "." punctuation
>> character is seen as a "variable element" and given a lower weighting
>> in the sort than the rest of the characters.  That's just how the
>> collate algorithm works in UTF8.
> 
> utf8 is an encoding method, not a collation. The collation is en_US,
> encoded in utf8. You can use C collation with utf8 encoding just fine.
> So just replace UTF8 with en_US in your sentence and you've got it
> right.
> 

Yes, thanks for the correction there, and we're talking about the wider unicode 
collate algorithm.
 

Re: [GENERAL] unexpected (to me) sorting order

2015-04-08 Thread Glyn Astill
> From: Chris Mair 

> To: Björn Lundin ; pgsql-general@postgresql.org
> Cc: 
> Sent: Wednesday, 8 April 2015, 10:36
> Subject: Re: [GENERAL] unexpected (to me) sorting order
> 
> 
>>  select * from T_SORT order by NAME ;
>> 
>>  rollback;
>>   id |name
>>  +
>>1 | FINISH_110_150_1
>>2 | FINISH_110_200_1
>>3 | FINISH_1.10_20.0_3
>>4 | FINISH_1.10_20.0_4
>>5 | FINISH_1.10_30.0_3
>>6 | FINISH_1.10_30.0_4
>>7 | FINISH_120_150_1
>>8 | FINISH_120_200_1
>>  (8 rows)
> 
> Hi,
> 
> PostgreSQL relies on the OS's C lib, so this kind
> of ordering problem depends on the OS's idea about
> collations.
> 
> I get the exact same order on 9.4.1 running on Centos 7.1:
> 
> chris=# select * from T_SORT order by NAME ;
> id |name
> +
>   1 | FINISH_110_150_1
>   2 | FINISH_110_200_1
>   3 | FINISH_1.10_20.0_3
>   4 | FINISH_1.10_20.0_4
>   5 | FINISH_1.10_30.0_3
>   6 | FINISH_1.10_30.0_4
>   7 | FINISH_120_150_1
>   8 | FINISH_120_200_1
> (8 rows)
> 
> But I get this on 9.3.5 running on OS X 10.8
> 
> chris=# select * from T_SORT order by NAME ;
> id |name
> +
>   3 | FINISH_1.10_20.0_3
>   4 | FINISH_1.10_20.0_4
>   5 | FINISH_1.10_30.0_3
>   6 | FINISH_1.10_30.0_4
>   1 | FINISH_110_150_1
>   2 | FINISH_110_200_1
>   7 | FINISH_120_150_1
>   8 | FINISH_120_200_1
> 
> with both databases having Collate = en_US.UTF-8.
> 
> If I put your data in a file and use the command sort
> from the shell I get the same effect (this is on
> the Centos 7.1 box):
> 
> [chris@mercury ~]$ cat x
> FINISH_1.10_20.0_3
> FINISH_1.10_20.0_4
> FINISH_1.10_30.0_3
> FINISH_1.10_30.0_4
> FINISH_110_150_1
> FINISH_110_200_1
> FINISH_120_150_1
> FINISH_120_200_1
> 
> [chris@mercury ~]$ sort x
> 
> FINISH_110_150_1
> FINISH_110_200_1
> FINISH_1.10_20.0_3
> FINISH_1.10_20.0_4
> FINISH_1.10_30.0_3
> FINISH_1.10_30.0_4
> FINISH_120_150_1
> FINISH_120_200_1
> [chris@mercury ~]$
> 
> I don't know what the rationale behind this is,
> but it looks like Linux ignores the . when doing the sort.
> 
> 


I think this is down to behaviour changes in glibc; there was a thread a while 
ago where somebody replicating via streaming rep between servers with different 
versions of glibc ended up in a bit of a pickle.

http://www.postgresql.org/message-id/ba6132ed-1f6b-4a0b-ac22-81278f5ab...@tripadvisor.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] unexpected (to me) sorting order

2015-04-08 Thread Glyn Astill

> From: Björn Lundin 
>To: pgsql-general@postgresql.org 
>Sent: Wednesday, 8 April 2015, 10:09
>Subject: [GENERAL] unexpected (to me) sorting order
> 
>
>
>Hi!
>below are some commands to 
>replicate a strange sorting order.
>
>I do not see why id:s 3-6 are in the middle of the result set.
>
>What am I missing?
>
>
>begin;
>
>create table T_SORT (
>  ID bigint default 1 not null , -- Primary Key
>  NAME varchar(100) default ' ' not null 
>);
>alter table T_SORT add constraint T_SORTP1 primary key (
>  ID
>);
>
>
>insert into T_SORT values ( 1,'FINISH_110_150_1');
>insert into T_SORT values ( 2,'FINISH_110_200_1');
>insert into T_SORT values ( 3,'FINISH_1.10_20.0_3');
>insert into T_SORT values ( 4,'FINISH_1.10_20.0_4');
>insert into T_SORT values ( 5,'FINISH_1.10_30.0_3');
>insert into T_SORT values ( 6,'FINISH_1.10_30.0_4');
>insert into T_SORT values ( 7,'FINISH_120_150_1');
>insert into T_SORT values ( 8,'FINISH_120_200_1');
>
>select * from T_SORT order by NAME ;
>
>rollback;
> id |name
>+
>  1 | FINISH_110_150_1
>  2 | FINISH_110_200_1
>  3 | FINISH_1.10_20.0_3
>  4 | FINISH_1.10_20.0_4
>  5 | FINISH_1.10_30.0_3
>  6 | FINISH_1.10_30.0_4
>  7 | FINISH_120_150_1
>  8 | FINISH_120_200_1
>(8 rows)
>
>why is FINISH_1.10_20.0_3 between 
> FINISH_110_200_1 and  
> FINISH_120_150_1
>?
>
>That is, why is '.' between 1 and 2, as in 110/120?
>  
>  
>pg_admin III reports the database is created like 
>CREATE DATABASE bnl
>  WITH OWNER = bnl
>   ENCODING = 'UTF8'
>   TABLESPACE = pg_default
>   LC_COLLATE = 'en_US.UTF-8'
>   LC_CTYPE = 'en_US.UTF-8'
>   CONNECTION LIMIT = -1;
>
>



The collation of your "bnl" database is utf8, so the "." punctuation character 
is seen as a "variable element" and given a lower weighting in the sort than 
the rest of the characters.  That's just how the collate algorithm works in UTF8.


Try with LC_COLLATE = 'C' and it should sort how you expect.
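On 9.1 and later you can also test this per query, without recreating the 
database:

select * from T_SORT order by NAME collate "C";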


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Reg: PL/pgSQL commit and rollback

2015-03-18 Thread Glyn Astill

> From: Medhavi Mahansaria 
>To: Adrian Klaver  
>Cc: "pgsql-general@postgresql.org"  
>Sent: Tuesday, 17 March 2015, 14:30
>Subject: Re: [GENERAL] Reg: PL/pgSQL commit and rollback
> 
>
>
>Yes. I have read this document. 
>
>But my issue is that even when it throws an exception I need to roll back the 
>changes made by that query and move on to the next block. 
>
>Is there any way to accomplish that? 
>


Yes, as per the docs in the link:

"When an error is caught by an EXCEPTION clause, the local variables of the 
PL/pgSQL function remain as they were when the error occurred, but all changes 
to persistent database state within the block are rolled back."

So you do something like:

BEGIN
  UPDATE ... ;
EXCEPTION WHEN others THEN
  ...
END;
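Fleshed out into a minimal sketch (the table and values are made up for 
illustration):

CREATE OR REPLACE FUNCTION demo_catch() RETURNS void AS $$
BEGIN
    BEGIN
        UPDATE accounts SET balance = balance - 100 WHERE id = 1;
    EXCEPTION WHEN others THEN
        -- changes made inside this inner block are rolled back,
        -- but execution carries on after its END
        RAISE NOTICE 'update failed: %', SQLERRM;
    END;
    -- move on to the next block of work here
END;
$$ LANGUAGE plpgsql;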


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Synchronous Replication Timeout

2014-12-01 Thread Glyn Astill
> From: Teresa Bradbury 
>To: "pgsql-general@postgresql.org"  
>Sent: Friday, 28 November 2014, 2:24
>Subject: [GENERAL] Synchronous Replication Timeout
> 
> 
>Hi,
> 
>I have a replication setup with a master and a single synchronous slave. If 
>the slave dies (or the network goes down) I would like any transaction on the 
>master that requires writing to fail so I can roll it back. At the moment, 
>when I commit it just hangs forever or (if I cancel it using ^C in psql or 
>using kill) it commits locally and not on the synchronous slave. Neither of 
>these options are ok in my use case. I have tried setting statement_timeout 
>but it does not work. So my questions are:
> 
>1) Is it possible to rollback transactions that fail to commit after a certain 
>amount of time waiting for the slave?
> 
>2) If not, is there any intension of implementing such a feature in the near 
>future? 
> 
>3) Do any of the answers above change if we are dealing with two-phase commits 
>instead? At the moment it hangs forever on ‘prepare transaction’, ‘commit 
>prepared’ and ‘rollback prepared’ commands.
> 
>Thanks,
> 
>Tessa
> 

>

I don't think this is possible; my understanding (which may or may not be 
correct) is that PostgreSQL's synchronous replication works by 
shipping/streaming the WAL records to the standby, which then applies the 
changes to its own WAL.  AFAIK the commit has to happen on the master first, 
and the master is just blocking and waiting for the standby to confirm that it 
has reached the position in the XLOG and applied that commit.

I think the recommended method might be to have another standby, and specify it 
in synchronous_standby_names so it can take over as the synchronous standby 
when the first standby disconnects/fails.
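For example, in postgresql.conf on the master (the standby names are 
placeholders; they need to match the application_name each standby connects 
with):

synchronous_standby_names = 'standby1, standby2'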


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pgsql and asciidoc output

2014-02-12 Thread Glyn Astill
> From: Bruce Momjian 

> To: PostgreSQL-general 
> Sent: Tuesday, 11 February 2014, 22:56
> Subject: [GENERAL] pgsql and asciidoc output
> 
> Someone suggested that 'asciidoc'
> (http://en.wikipedia.org/wiki/AsciiDoc) would be a good output format
> for psql, similar to the existing output formats of html, latex, and
> troff.
> 
> Would this be useful?
> 

Not sure about the arguments for and against either, but I'm rather fond of 
markdown (http://en.wikipedia.org/wiki/Markdown)


> -- 
>   Bruce Momjian          http://momjian.us
>   EnterpriseDB                            http://enterprisedb.com
> 
>   + Everyone has their own god. +


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to turn off DEBUG statements from psql commends

2014-02-10 Thread Glyn Astill




- Original Message -
> From: peterlen 
> To: pgsql-general@postgresql.org
> Cc: 
> Sent: Monday, 10 February 2014, 15:43
> Subject: [GENERAL] How to turn off DEBUG statements from psql commends
> 
> We are using PostgreSQL 9.3.  Something seems to have changed with our psql
> command-line output since we first installed it.  When I run commands at my
> psql prompt, I am getting a lot of debug statements which I was not getting
> before.  I am just trying to find out how to tell psql not to display this
> output.  As an example, if I were to create a new 'test' schema, the 
> output
> looks like:
> 
> gis_demo=# create schema test;
> DEBUG:  StartTransactionCommand
> DEBUG:  StartTransaction
> DEBUG:  name: unnamed; blockState:       DEFAULT; state: INPROGR,

I'm guessing you've got client_min_messages set to one of the debug levels; try 
setting it to "log" or lower.  What does "SHOW client_min_messages;" say?



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Better Connection Statistics

2014-02-10 Thread Glyn Astill


> From: Shaun Thomas 
>To: 'bricklen'  
>Cc: "pgsql-general@postgresql.org"  
>Sent: Friday, 7 February 2014, 22:36
>Subject: Re: [GENERAL] Better Connection Statistics
> 
>
>> I don't know any tools off-hand, but you might be able to generate
>> partial statistics from the log files with a descriptive log_line_prefix
>> like "%m [%p] (user=%u) (db=%d) (rhost=%h) [vxid:%v txid:%x] [%i] "
>
>We get 60k queries per second all day long. No way am I turning on query 
>logging to capture the stats I want. :)
>
>Last month, I needed to track something down and set 
>log_min_duration_statement to 0, logging everything each connection does. It 
>was only like that for 10 seconds, and I ended up with about 400MB of log 
>output. I shudder to think of what would happen if I left it that way.
>


We have a similar issue here; I tend to set log_min_duration_statement = 0 just 
for the user I want to know about and then run the logs through pgBadger.  
Agreed that a more granular pg_stat_database would be awesome.
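Per user it looks something like this (the role name is hypothetical):

ALTER ROLE reporting_user SET log_min_duration_statement = 0;
-- and to put it back afterwards:
ALTER ROLE reporting_user RESET log_min_duration_statement;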



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] unnest on multi-dimensional arrays

2013-11-29 Thread Glyn Astill
> From: Pavel Stehule 

>To: bricklen  
>Cc: "pgsql-general@postgresql.org"  
>Sent: Thursday, 28 November 2013, 16:03
>Subject: Re: [GENERAL] unnest on multi-dimensional arrays
> 
>2013/11/28 bricklen 
>
>On Wed, Nov 27, 2013 at 11:28 PM, Pavel Stehule  
>wrote:
>>
>>Hello
>>>
>>>
>>>postgres=# CREATE OR REPLACE FUNCTION public.reduce_dim(anyarray)
>>> RETURNS SETOF anyarray
>>> LANGUAGE plpgsql
>>>AS $function$
>>>DECLARE s $1%type;
>>>BEGIN
>>>  FOREACH s SLICE 1  IN ARRAY $1 LOOP
>>>      RETURN NEXT s;
>>>  END LOOP;
>>>RETURN; 
>>>END;
>>>$function$;
>>>CREATE FUNCTION
>>>
>>>postgres=# select reduce_dim(array[array[1, 2], array[2, 3]]);
>>> reduce_dim 
>>>
>>> {1,2}
>>> {2,3}
>>>(2 rows)
>>>
>>
>>Hi Pavel,
>>
>>
>>I hope you don't mind, I took the liberty of adding your nifty function to 
>>the Postgresql Wiki at 
>>
>>https://wiki.postgresql.org/wiki/Unnest_multidimensional_array
>>
>>
>>Feel free to edit directly or suggest any changes to it.
>>
>>
>
>+1
>
>
>Pavel
>
> 
>
>>
>>Cheers,
>>
>>Bricklen
>>
>
>

On pre-9.1 versions I use the following:

CREATE OR REPLACE FUNCTION public.part_unnest(anyarray)
RETURNS SETOF anyarray
AS $BODY$
BEGIN
    RETURN QUERY 
    SELECT (SELECT array_agg($1[i][i2])
        FROM generate_series(array_lower($1,2), array_upper($1,2)) i2)
    FROM generate_series(array_lower($1,1), array_upper($1,1)) i;
END;
$BODY$
LANGUAGE plpgsql IMMUTABLE;

Not sure if anyone has a cleaner / quicker example.
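A quick check against the example data above (assuming the function is 
installed):

SELECT part_unnest(ARRAY[ARRAY[1,2],ARRAY[2,3]]);
 part_unnest 
-------------
 {1,2}
 {2,3}
(2 rows)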



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Slony-I installation Help !

2013-11-25 Thread Glyn Astill


> From: Tobadao 
> To: pgsql-general@postgresql.org
> Cc: 
> Sent: Monday, 25 November 2013, 16:40
> Subject: [GENERAL] Slony-I installation Help !
> 
> Hi.
> I have downloaded "postgresql-9.3.1-1-windows.exe" and
> "edb_slony_i_pg93.exe"
> I'm using Windows XP v3.
> installation + set Slony-I path: C:\Program 
> Files\PostgreSQL\9.3\share
> But When I try to initiate a new Slony cluster in the PgAdmin interface, the
> status-bar says "Slony-I creation scripts not available; only joining
> possible"  and the ‘OK’ button in the Dialog is in disabled state.
> Please tell me how to fix the error ?
> 

You need to point pgAdmin at the sql scripts provided by Slony-I; I think you 
may find them in the "share" directory of your postgresql prefix / install 
location.

I'm not sure how up to date pgAdmin's Slony functionality is, though; I always 
prefer to use slonik...



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to find transaction ID

2013-08-08 Thread Glyn Astill


> From: Glyn Astill 
> To: "ascot.m...@gmail.com" ; PostgreSQL general 
> 
> Cc: 
> Sent: Thursday, 8 August 2013, 15:20
> Subject: Re: [GENERAL] How to find transaction ID
> 
> 
> 
>>  From: "ascot.m...@gmail.com" 
>>  To: PostgreSQL general 
>>  Cc: ascot.m...@gmail.com
>>  Sent: Thursday, 8 August 2013, 14:52
>>  Subject: [GENERAL] How to find transaction ID
>> 
>>  Hi,
>> 
>>  I am trying some restore tools,  can you advise how to find the latest 
>>  transaction ID in PostgreSQL and the transaction ID at a particular 
>>  "Point-In-Time"?
>> 
> 
> Hmm, it's not entirely clear what you want.  AFAIK "select 
> txid_current()" should get you the current transaction id. The column xmin 
> for a row from any table will get you the inserting transaction id, and xmax 
> will get you the deleting (or attempted delete) transaction id.
> 
> See http://www.postgresql.org/docs/9.0/static/functions-info.html
> 

I guess I shouldn't have said that about xmin and xmax; you can't rely on those 
columns in various circumstances, e.g. after a vacuum.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to find transaction ID

2013-08-08 Thread Glyn Astill


> From: "ascot.m...@gmail.com" 
> To: PostgreSQL general 
> Cc: ascot.m...@gmail.com
> Sent: Thursday, 8 August 2013, 14:52
> Subject: [GENERAL] How to find transaction ID
> 
> Hi,
> 
> I am trying some restore tools,  can you advise how to find the latest 
> transaction ID in PostgreSQL and the transaction ID at a particular 
> "Point-In-Time"?
> 

Hmm, it's not entirely clear what you want.  AFAIK "select txid_current()" 
should get you the current transaction id. The column xmin for a row from any 
table will get you the inserting transaction id, and xmax will get you the 
deleting (or attempted delete) transaction id.

See http://www.postgresql.org/docs/9.0/static/functions-info.html
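For example (the table name is hypothetical):

SELECT txid_current();
SELECT xmin, xmax, * FROM mytable LIMIT 1;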


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Function tracking

2013-06-07 Thread Glyn Astill


> From: Rebecca Clarke 
>To: pgsql-general@postgresql.org 
>Sent: Friday, 7 June 2013, 11:30
>Subject: [GENERAL] Function tracking
> 
>
>
>Hi all
>
>
>I'm looking for suggestions on the best way to track the updates to a function.
>
>
>We have two databases, Dev & Live, so I want to update Live with just the 
>functions that have been modified in the Dev database. 
>Is there another, easier way to track the updates than manually recording it 
>in a document? (I'm trying to eliminate human interference).
>


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Function tracking

2013-06-07 Thread Glyn Astill


> From: Pavel Stehule 
> To: Rebecca Clarke 
> Cc: pgsql-general@postgresql.org
> Sent: Friday, 7 June 2013, 11:44
> Subject: Re: [GENERAL] Function tracking
> 
> Hello
> 
> 2013/6/7 Rebecca Clarke :
>>  Hi all
>> 
>>  I'm looking for suggestions on the best way to track the updates to a
>>  function.
>> 
>>  We have two databases, Dev & Live, so I want to update Live with just the
>>  functions that have been modified in the Dev database.
>>  Is there another, easier way to track the updates than manually recording it
>>  in a document? (I'm trying to eliminate human interference).
>> 
> 
> There is a few tools
> 
> http://stackoverflow.com/questions/4804779/how-to-check-difference-between-two-databases-in-postgressql
> http://pgdiff.sourceforge.net/
> 
> But I prefer editing files for storing schema and function
> definitions. And I use a git. I dislike direct object modifying via
> tools like pgAdmin and similar.
> 

I agree, things can get a bit chaotic with everyone using pgAdmin. We do 
something similar with a set of script files in source control. In addition, 
some sort of automated deployment process helps.

My solution is probably overkill, but we have to deploy over a number of slony 
nodes in a big two-phase commit.  I have a controlled deployment process that 
checks the changes against a small list of things I don't want the devs doing, 
checks for any errors by testing against a special clone, and then records the 
actual effects of the changes in the scripts (i.e. drops that cascaded, etc.) 
before anything is actually deployed.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Newer kernels and CFS scheduler again

2013-04-30 Thread Glyn Astill
>
> From: Glyn Astill 
>To: "pgsql-general@postgresql.org"  
>Sent: Tuesday, 30 April 2013, 16:58
>Subject: [GENERAL] Newer kernels and CFS scheduler again
> 
>Hi All,
>
>
>As I'll soon be looking at migrating some of our debian servers onto the new 
>stable release, I've started doing a bit of basic pgbench testing.
>
>
>Initially I've seen a little performance regression with higher concurrent 
>clients when going from the 2.6.32 kernel to 3.2.14 (select only and tpc-b).  
>After trying the suggestions made by Shaun Thomas a while back (here: 
>http://www.postgresql.org/message-id/50e4aab1.9040...@optionshouse.com) and 
>getting nowhere, I'm seeing big improvements instead increasing the 
>

Slight correction, I meant 3.2.41

>defaults for sched_min_granularity_ns and sched_wakeup_granularity_ns (As 
>described here: 
>https://www.kernel.org/doc/Documentation/scheduler/sched-design-CFS.txt) from 
>Debian's defaults of 300 and 400 respectively.
>
>
>
>On my initial test setup (which admittedly is far from cutting edge) of 
>2xE5320 / 32Gb the following seem pretty optimal:
>
>
>kernel.sched_min_granularity_ns=900
>kernel.sched_wakeup_granularity_ns=1200
>
>
>I've yet to do any testing on our larger machines, but as there have been a 
>few posts here about performance vs newer kernels I was just wondering what 
>other people's findings are regarding CFS?
>
>
>Glyn
>
>


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Newer kernels and CFS scheduler again

2013-04-30 Thread Glyn Astill
    

Hi All,

As I'll soon be looking at migrating some of our Debian servers onto the new 
stable release, I've started doing a bit of basic pgbench testing.

Initially I've seen a little performance regression with higher concurrent 
clients when going from the 2.6.32 kernel to 3.2.14 (select only and tpc-b).  
After trying the suggestions made by Shaun Thomas a while back (here: 
http://www.postgresql.org/message-id/50e4aab1.9040...@optionshouse.com) and 
getting nowhere, I'm seeing big improvements instead increasing the 

defaults for sched_min_granularity_ns and sched_wakeup_granularity_ns (As 
described here: 
https://www.kernel.org/doc/Documentation/scheduler/sched-design-CFS.txt) from 
Debian's defaults of 300 and 400 respectively.


On my initial test setup (which admittedly is far from cutting edge) of 2xE5320 
/ 32Gb the following seem pretty optimal:

kernel.sched_min_granularity_ns=900
kernel.sched_wakeup_granularity_ns=1200

I've yet to do any testing on our larger machines, but as there have been a few 
posts here about performance vs newer kernels I was just wondering what other 
people's findings are regarding CFS?

Glyn

Re: [GENERAL] Why does slony use a cursor? Anyone know?

2013-03-07 Thread Glyn Astill
> From: Shaun Thomas 

> To: Glyn Astill 
> Cc: PostgreSQL General 
> Sent: Wednesday, 6 March 2013, 14:35
> Subject: Re: [GENERAL] Why does slony use a cursor? Anyone know?
> 
> On 03/06/2013 04:49 AM, Glyn Astill wrote:
> 
>>  What version of slony are you on?  The specifics of what you mention
>>  don't sound quite right, but it sounds very much like bug 167 which
>>  was fixed in 2.1.2 if I remember correctly.
> 
> We're on 2.1.2. Presumably, anyway. I didn't encounter the problem in 
> stage when I set up a testbed. But it also might not be related. The problem 
> I 
> can tell from the logs, is that it was closing the cursor pretty much right 
> as 
> soon as it got the results. 75 seconds to set up a cursor of that size and 
> then 
> an hour to sync all the data isn't a problem. 75 seconds for every 500 rows 
> *is*.
> 
> The stage test I did didn't do that when I deleted 20M rows from a 50M row 
> table, but I also only set it up with a single replication set. My next test 
> will be to test with two or three replication sets that all get big deletes 
> like 
> that. My guess is that it can't adequately swap between them on SYNC events, 
> so it has to rebuild the cursor every time.
> 


Yeah, you'd expect the reason for using the cursor would be to pull those 500 
lines into memory, process them, and then get the next 500, etc.  I've not seen 
any such lags on our systems, but that doesn't mean it's not happening with much 
milder symptoms.


You say it happened on your production setup but not when you tried to 
reproduce it in your test environment, so is there anything useful in the slony 
logs to suggest things were not quite right at the time? I'm guessing your 
slons were running and generating syncs.


I'd definitely be asking on the slony lists about this; either something isn't 
right with your setup or it's something they can resolve.

Re: [GENERAL] Why does slony use a cursor? Anyone know?

2013-03-06 Thread Glyn Astill
> From: Shaun Thomas  To: PostgreSQL General 
> Cc: 
> Sent: Tuesday, 5 March 2013, 14:51
> Subject: [GENERAL] Why does slony use a cursor? Anyone know?
> 
> Hey everyone,
> 
> Frankly, I'm shocked at what I just found.
> 
> We did a delete last night of a few million rows, and come back this morning 
> to 
> find that slony is 9-hours behind. After some investigation, it became 
> apparent 
> that slony opens up a cursor and orders it by the log_actionseq column. Then 
> it 
> fetches 500 rows, and closes the cursor. So it's fetching several million 
> rows into a cursor, to fetch 500, and then throw the rest away.
> 
> That is quite possibly the least efficient manner I could think of to build a 
> sync system, so maybe someone knows why they did it that way?
> 
> At least with a regular query, it could sort by the column it wanted, and put 
> a 
> nifty index on it for those 500-row chunks it's grabbing. I must be missing 
> something...

What version of slony are you on?  The specifics of what you mention don't 
sound quite right, but it sounds very much like bug 167, which was fixed in 
2.1.2 if I remember correctly.

Glyn



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Recommendations on plpgsql debugger?

2013-01-15 Thread Glyn Astill
Hi Chris

> From: Chris Travers 

>To: Postgres General  
>Sent: Tuesday, 15 January 2013, 7:59
>Subject: [GENERAL] Recommendations on plpgsql debugger?
> 
>
>Hi all;
>
>
>I have a client who needs a way to step through a PL/PGSQL function and 
>ideally see what one is doing at present.  I noticed that there used to be an 
>EDB Debugger module for this purpose but I can't seem to find one for 9.1 and 
>I can't seem to pull from CVS to try.
>
>
>Any alternatives? Am I missing something?


Yeah, the CVS repository appears to have been replaced with git instead; try 
doing a git clone from:


git://git.postgresql.org/git/pldebugger.git
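Something like this should work (the PGXS build steps are from memory, so check 
the README in the repository):

$ git clone git://git.postgresql.org/git/pldebugger.git
$ cd pldebugger
$ make USE_PGXS=1 && make USE_PGXS=1 install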

Glyn



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Vacuum analyze verbose output

2012-12-19 Thread Glyn Astill
> From: Anjali Arora 

>To: pgsql-general@postgresql.org 
>Sent: Wednesday, 19 December 2012, 9:14
>Subject: [GENERAL] Vacuum analyze verbose output
> 
>
>Hi all,
>
>
>I ran the following command on PostgreSQL 8.2.2:
>
>
> psql -p port dbname -c "vacuum analyze verbose"
>
>
>last few lines from "vacuum analyze verbose" output:
>
>
>DETAIL:  A total of 2336 page slots are in use (including overhead).
>2336 page slots are required to track all free space.
>Current limits are:  153600 page slots, 1000 relations, using 965 kB.
>
>
>After upgrading to PostgreSQL 9.0.4 I am not getting this as part of the 
>"vacuum analyze verbose" output.
>
>
>Can anybody please let me know where I can find this information in PostgreSQL 
>9.0.4.


I'm not sure you can; as of 8.4, free space tracking was made automatic, so you 
no longer need to set max_fsm_pages and max_fsm_relations.

See: http://www.postgresql.org/docs/8.4/static/release-8-4.html



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Npgsql

2012-11-26 Thread Glyn Astill
An actual error message would be useful, but did you add a reference to the 
assembly in your project?





>
> From: Peter Kroon 
>To: "pgsql-general@postgresql.org"  
>Sent: Friday, 23 November 2012, 18:13
>Subject: [GENERAL] Npgsql
> 
>
>I've installed Npgsql via Application Stack Builder without a problem (I guess; 
>no error messages seen).
>
>
>
>http://npgsql.projects.pgfoundry.org/docs/manual/UserManual.html
>
>The project page tells me I have to add:
><%@ Assembly name="System.Data" %>
><%@ Assembly name="Npgsql" %>
>
>
>When I do this I get an error:
>Line 1:  <%@ Page Language="VB" Debug="true" %>
>Line 2:  <%@ Import Namespace="System.Data.Odbc" %>
>Line 3:  <%@ Assembly name="Npgsql" %>
>Line 4:  <%
>Line 5:
>It's unable to find/load dependencies.
>Has anyone ever encountered this problem?
>If so, what was your solution.
>
>
>Thanks,
>
>
>Peter
>
>
>
>

Re: [GENERAL] Type Name / Internal name returned by pg_catalog.format_type with/without prepended schema name?

2012-06-27 Thread Glyn Astill

>
> From: Tom Lane 
>To: Glyn Astill  
>Cc: "pgsql-general@postgresql.org"  
>Sent: Wednesday, 27 June 2012, 14:31
>Subject: Re: [GENERAL] Type Name / Internal name returned by 
>pg_catalog.format_type with/without prepended schema name?
> 
>Glyn Astill  writes:
>> I was wondering if anyone could shed some light on why type names returned by 
>> pg_catalog.format_type sometimes have the schema name prepended and 
>> sometimes not? I'm calling it like format_type(pg_type.oid, NULL).
>
>General case: the schema name is used if the type wouldn't be found
>without it, according to your current search_path.
>
>There are some specific built-in types for which schema names are
>never used, because their names are actually keywords according to SQL
>(which a fortiori means there's no ambiguity on the lookup side).
>

However, none of that explains why one of the types is returned with the schema 
name and the other is not; both are in the same schema, which is in the current 
search_path.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Type Name / Internal name returned by pg_catalog.format_type with/without prepended schema name?

2012-06-27 Thread Glyn Astill
Hi Guys,

I was wondering if anyone could shed some light on why type names returned by 
pg_catalog.format_type sometimes have the schema name prepended and sometimes 
not? I'm calling it like format_type(pg_type.oid, NULL).


I'm using pg9.0, but I remember seeing this years ago on older versions too - I 
just can't remember what I did about it.

I've got two types that appear to have been created in the same way, except 
that pg_catalog.format_type returns the schema name for one, but not the 
other.  I can't reproduce this just by running the sql used to create the types 
now though.

CREATE TYPE website.block_selection AS
   (block character varying,
    description character varying,
    amount numeric,
    "left" integer,
    best integer,
    type integer,
    singles_only boolean);

CREATE TYPE website.buy_with_detail AS
   (price numeric,
    must_buy_with integer[],
    limit_type text,
    multiplier integer);

SEE=# \dT+ website.block_selection
                       List of data types
 Schema  |      Name       |  Internal name  | Size  | Elements | Description
---------+-----------------+-----------------+-------+----------+-------------
 website | block_selection | block_selection | tuple |          |



SEE=# \dT+ website.buy_with_detail
                           List of data types
 Schema  |          Name           |  Internal name  | Size  | Elements | Description
---------+-------------------------+-----------------+-------+----------+-------------
 website | website.buy_with_detail | buy_with_detail | tuple |          |



Any ideas how this could have come about? All the types were created on 9.0, 
and I've tried with and without the website schema in the search path etc, but 
I'm sure I'm just missing something simple?
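For reference, the lookup boils down to something like this (a rough 
approximation of the query I'm running):

SELECT n.nspname, t.typname, pg_catalog.format_type(t.oid, NULL)
FROM pg_type t
JOIN pg_namespace n ON n.oid = t.typnamespace
WHERE t.typname IN ('block_selection', 'buy_with_detail');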

Thanks
Glyn

Re: [GENERAL] PostgreSQL DBA in SPAAAAAAAACE

2011-12-06 Thread Glyn Astill

> From: Merlin Moncure 
>To: Joe Miller  
>Cc: pgsql-general@postgresql.org 
>Sent: Tuesday, 6 December 2011, 17:30
>Subject: Re: [GENERAL] PostgreSQL DBA in SPAAAAAAAACE
> 
>On Tue, Dec 6, 2011 at 10:56 AM, Joe Miller  wrote:
>> You may have seen this, but RedGate software is sponsoring a contest
>> to send a DBA on a suborbital space flight.
>>
>> And there is a PostgreSQL representative... me!
>>
>> https://www.dbainspace.com/finalists/joe-miller
>>
>> Voting is open for 7 days. Don't let one of those Oracle or SQL Server
>> punks win :p
>
>so jealous -- I didn't make the cut.  Well, you'll have my vote.
>
>merlin
>

Me neither, voted. Good luck.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] delete query taking way too long

2010-08-12 Thread Glyn Astill
What's the output of explain?
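i.e. something like (without ANALYZE this won't actually execute the delete):

EXPLAIN
DELETE FROM catalog_items
WHERE ItemID IN (SELECT id FROM import.Articoli_delete);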

--- On Thu, 12/8/10, Ivan Sergio Borgonovo  wrote:

> From: Ivan Sergio Borgonovo 
> Subject: [GENERAL] delete query taking way too long
> To: pgsql-general@postgresql.org
> Date: Thursday, 12 August, 2010, 12:14
> I've run:
> delete from catalog_items where ItemID in (select id from
> import.Articoli_delete);
> 
> id and ItemID have an index.
> 
> catalog_items is ~1M rows
> Articoli_delete is less than 2K rows.
> 
> This query has been running for roughly 50min.
> Right now it is the only query running.
> 
> PostgreSQL 8.3.4 on x86_64-pc-linux-gnu, compiled by GCC cc
> (GCC)
> 4.1.2 20061115 (prerelease) (Debian 4.1.1-21)
> 
> -- 
> Ivan Sergio Borgonovo
> http://www.webthatworks.it
> 
> 
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
> 




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_restore: [custom archiver] dumping a specific TOC data block out of order is not supported without ID on this input stream (fseek required)

2010-05-21 Thread Glyn Astill
--- On Fri, 21/5/10, Alban Hertroys  wrote:

> On 21 May 2010, at 11:58, Glyn Astill
> wrote:
> 
> > Well I've only just gotten round to taking another look
> > at this; response inline below:
> > 
> > --- On Fri, 30/4/10, Tom Lane 
> wrote:
> > 
> >> Glyn Astill 
> >> writes:
> >>> The schema is fairly large, but I will try.
> >> 
> >> My guess is that you can reproduce it with not a
> lot of
> >> data, if you can
> >> isolate the trigger condition.
> >> 
> > 
> > Hmm, tried reducing the amount of data and the issue
> > goes away. Could this indicate some issue with the file,
> > like an issue with its size (~5GB)? Or could it be an
> > issue with the data itself?
> 
> The file-size in combination with an "out of order" error
> smells of a 32-bit integer wrap-around problem.
> 
> And indeed, from the documentation 
> (http://www.postgresql.org/docs/8.4/interactive/lo-intro.html):
> "One remaining advantage of the large object facility is
> that it allows values up to 2 GB in size"
> 
> So I guess your large object is too large.

Hmm, we don't use any large objects though; all our data is pretty much just 
date, text and numeric fields, etc.

Glyn.





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_restore: [custom archiver] dumping a specific TOC data block out of order is not supported without ID on this input stream (fseek required)

2010-05-21 Thread Glyn Astill
Well I've only just gotten round to taking another look at this; response inline 
below:

--- On Fri, 30/4/10, Tom Lane  wrote:

> Glyn Astill 
> writes:
> > The schema is fairly large, but I will try.
> 
> My guess is that you can reproduce it with not a lot of
> data, if you can
> isolate the trigger condition.
> 

Hmm, tried reducing the amount of data and the issue goes away. Could this 
indicate some issue with the file, like an issue with its size (~5 GB)? Or 
could it be an issue with the data itself?

> > One thing I forgot to mention is that in the restore
> script I drop the indexes off my tables between restoring
> the schema and the data. I've always done this to speed up
> the restore, but is there any chance this could be causing
> the issue?
> 
> Possibly.  I think there must be *something* unusual
> triggering the
> problem, and maybe that is it or part of it.

I've removed this faffing with indexes in between, but the problem still persists.

> 
> > I guess what would help is some insight into what the
> error message means. 
> 
> It's hard to tell.  The likely theories are (1) we're
> doing things in an
> order that requires seeking backwards in the file, and for
> some reason
> pg_restore thinks it can't do that; (2) there's a bug
> causing the code
> to search for a item number that isn't actually in the
> file.
> 
> One of the previous reports actually turned out to be pilot
> error: the
> initial dump had failed after emitting a partially complete
> file, and
> so the error from pg_restore was essentially an instance of
> (2).  But
> with three or so reports I'm thinking there's something
> else going on.
> 

So I'm still at a loss as to why it's happening.

I've tried to dig a little deeper (and I may just be punching thin air here) by 
adding the value of id into the error message at die_horribly(), and it gives me 
id 7550, which is the first table in the TOC entry list when I do a pg_restore 
-l; everything above it is a sequence. 

Here's a snip of pg_restore -l:

7775; 0 0 SEQUENCE SET website ui_content_id_seq pgcontrol
7550; 0 22272 TABLE DATA _main_replication sl_archive_counter slony

And the output if run it under gdb:

GNU gdb 6.8-debian
Copyright (C) 2008 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu"...
(gdb) set args -U postgres --disable-triggers -j 4 -c -d SEE 
Way5a-pgsql-SEE-data.gz
(gdb) break die_horribly
Breakpoint 1 at 0x4044b0: file pg_backup_archiver.c, line 1384.
(gdb) run
Starting program: /usr/local/pgsql/bin/pg_restore -U postgres 
--disable-triggers -j 4 -c -d SEE Way5a-pgsql-SEE-data.gz
[Thread debugging using libthread_db enabled]
[New Thread 0x7f72480eb700 (LWP 4335)]
pg_restore: [custom archiver] dumping a specific TOC data block out of order is 
not supported without ID on this input stream (fseek required)
hasSeek = 1 dataState = 1 id = 7550
[Switching to Thread 0x7f72480eb700 (LWP 4335)]

Breakpoint 1, die_horribly (AH=0x61c210, modulename=0x4171f6 "archiver", 
fmt=0x4167d8 "worker process failed: exit code %d\n") at 
pg_backup_archiver.c:1384
1384    {
(gdb) pg_restore: [custom archiver] dumping a specific TOC data block out of 
order is not supported without ID on this input stream (fseek required)
hasSeek = 1 dataState = 1 id = 7550
pg_restore: [custom archiver] dumping a specific TOC data block out of order is 
not supported without ID on this input stream (fseek required)
hasSeek = 1 dataState = 1 id = 7550
pg_restore: [custom archiver] dumping a specific TOC data block out of order is 
not supported without ID on this input stream (fseek required)
hasSeek = 1 dataState = 1 id = 7550

(gdb) bt
#0  die_horribly (AH=0x61c210, modulename=0x4171f6 "archiver", fmt=0x4167d8 
"worker process failed: exit code %d\n") at pg_backup_archiver.c:1384
#1  0x00408f14 in RestoreArchive (AHX=0x61c210, ropt=0x61c0d0) at 
pg_backup_archiver.c:3586
#2  0x00403737 in main (argc=10, argv=0x7fffd5b8) at 
pg_restore.c:380
(gdb) step
pg_restore: [archiver] worker process failed: exit code 1

Program exited with code 01.


Any further ideas of where I should dig would be appreciated.

Thanks
Glyn






Re: [GENERAL] 8.3.7, 'cache lookup failed' for a table

2010-05-12 Thread Glyn Astill
--- On Wed, 12/5/10, Grzegorz Jaśkiewicz  wrote:

> Alban Hertroys wrote:
> > On 12 May 2010, at 12:01, Glyn Astill wrote:
> >
> >> Did you not mention that this server was a slony
> slave at some point though?
> >>
> >> Just because you have removed slony, and the error
> comes from postgresql itself does not mean the corruption
> was not caused by misuse of slony.
> >
> > Indeed. I wonder if "when we ere adding/removing slony
> to the system for Nth time (due to it sometimes going out of
> sync)" may be caused by that as well.
> >
> 
> ok, so either upgrade to newer version of slony, or drop
> all tables,
> and recreate them every time slony is removed and readded
> to the
> database.
> 

Upgrading to slony 2.0.3 would prevent this from happening, but no, there's no 
need to drop and recreate all tables every time slony is removed and re-added 
to the database - you just need to make sure you use slonik SET DROP TABLE 
*before* dropping any table in postgresql. Look here: 
http://www.slony.info/documentation/stmtsetdroptable.html
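
For reference, the slonik fragment would look something along these lines (the
cluster name, conninfo and table id below are made up for illustration):

cluster name = mycluster;
node 1 admin conninfo = 'dbname=mydb host=master user=slony';
set drop table (origin = 1, id = 42);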


> And I guess the only reason postgresql doesn't like it, is
> due to
> slony's behavior.
> 

Nope, due to slony not being used correctly!






Re: [GENERAL] 8.3.7, 'cache lookup failed' for a table

2010-05-12 Thread Glyn Astill
--- On Wed, 12/5/10, Grzegorz Jaśkiewicz  wrote:
> Glyn Astill 
> wrote:
> > Hi Grzegorz,
> >
> > Is it always the same OID(s)?
> >
> > Usually this means something somewhere has a link to
> an OID that has been removed.
> >
> > You could try digging through pg_catalog lookng for an
> oid column that refers to the OID in question.
> >
> > In my experience, when a slony 1.2.x slave is
> involved, this usually means a relation was dropped without
> first dropping it from replication using DROP TABLE.  In
> this case it may be a trigger on a table that has been
> "disabled" by slony, it does this by changing
> pg_trigger.tgrelid to point to an index on the table in
> question rather than the table itself.  Thus when the table
> is dropped the trigger is left behind, pointing to an index
> that isn't there.  I'd probably start with "select * from
> "pg_catalog".pg_trigger where tgrelid = <oid that doesn't exist>", and prune from there.
> 
> It only happened to me once.
> You think it is because slony is poking around pg_catalog.
> schema, and
> it shouldn't , basically ?
> 

No, Slony 1.2.x pokes around in pg_catalog because in versions of postgres 
prior to 8.3 (which 1.2.x has to support) there was no built-in way to disable 
the triggers. So it's not that slony shouldn't be poking around there, it's 
that if you choose to use slony you should make sure you drop the relation from 
replication before dropping it - else you'll make a mess.






Re: [GENERAL] 8.3.7, 'cache lookup failed' for a table

2010-05-12 Thread Glyn Astill
Did you not mention that this server was a slony slave at some point though?

Just because you have removed slony, and the error comes from postgresql itself 
does not mean the corruption was not caused by misuse of slony.

--- On Wed, 12/5/10, Grzegorz Jaśkiewicz  wrote:

> From: Grzegorz Jaśkiewicz 
> Subject: Re: [GENERAL] 8.3.7, 'cache lookup failed' for a table
> To: "Alban Hertroys" 
> Cc: pgsql-general@postgresql.org
> Date: Wednesday, 12 May, 2010, 10:57
> no it is not slony related.
> It is a postgresql problem.
> 
> my original post:
> http://archives.postgresql.org/pgsql-general/2010-05/msg00402.php
> 
> 






Re: [GENERAL] 8.3.7, 'cache lookup failed' for a table

2010-05-12 Thread Glyn Astill
Hi Grzegorz,

Is it always the same OID(s)?

Usually this means something somewhere has a link to an OID that has been 
removed.

You could try digging through pg_catalog looking for an oid column that refers 
to the OID in question.

In my experience, when a slony 1.2.x slave is involved, this usually means a 
relation was dropped without first dropping it from replication using DROP 
TABLE.  In this case it may be a trigger on a table that has been "disabled" by 
slony; it does this by changing pg_trigger.tgrelid to point to an index on the 
table in question rather than the table itself.  Thus when the table is dropped 
the trigger is left behind, pointing to an index that isn't there.  I'd probably 
start with "select * from "pg_catalog".pg_trigger where tgrelid = <oid that doesn't exist>", and prune from there.
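
Something like this should turn up any orphans (a sketch only; it just looks
for tgrelid values with no matching pg_class row):

select t.oid, t.tgname, t.tgrelid
from "pg_catalog".pg_trigger t
left join "pg_catalog".pg_class c on c.oid = t.tgrelid
where c.oid is null;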

Glyn

--- On Wed, 12/5/10, Grzegorz Jaśkiewicz  wrote:

> From: Grzegorz Jaśkiewicz 
> Subject: Re: [GENERAL] 8.3.7, 'cache lookup failed' for a table
> To: pgsql-general@postgresql.org
> Date: Wednesday, 12 May, 2010, 10:33
> anyone please ?
> 
> 






Re: [GENERAL] pg_restore: [custom archiver] dumping a specific TOC data block out of order is not supported without ID on this input stream (fseek required)

2010-04-30 Thread Glyn Astill
--- On Fri, 30/4/10, Alvaro Herrera  wrote:
> 
> Uh.  Why are you doing that?  pg_restore is
> supposed to restore the
> schema, then data, finally indexes and other stuff. 
> Are you using
> separate schema/data dumps?  If so, don't do that --
> it's known to be
> slower.

Yes, I'm restoring the schema first, then the data.  

The reason being that the data can come from different slony 1.2 slaves, but 
the schema always comes from the origin server due to modifications slony makes 
to schemas on the slaves.






Re: [GENERAL] pg_restore: [custom archiver] dumping a specific TOC data block out of order is not supported without ID on this input stream (fseek required)

2010-04-30 Thread Glyn Astill


--- On Fri, 30/4/10, Tom Lane  wrote:

> Glyn Astill 
> writes:
> > I've just upgraded a server from 8.3 to 8.4, and when
> trying to use the parallel restore options I get the
> following error:
> 
> > "pg_restore: [custom archiver] dumping a specific TOC
> data block out of order is not supported without ID on this
> input stream (fseek required)"
> 
> This is the second or third report we've gotten of that,
> but nobody's
> been able to offer a reproducible test case.  Can
> you?
> 

Hi Tom,

The schema is fairly large, but I will try.

One thing I forgot to mention is that in the restore script I drop the indexes 
off my tables between restoring the schema and the data. I've always done this 
to speed up the restore, but is there any chance this could be causing the 
issue?

I guess what would help is some insight into what the error message means. 

It appears to originate in _PrintTocData in pg_backup_custom.c, but I don't 
really understand what's happening here at all; a wild guess is that it's trying 
to seek to a particular TOC entry in the file, or to process the file sequentially?

http://doxygen.postgresql.org/pg__backup__custom_8c.html#6024b8108422e69062072df29f48506f
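
(For anyone else digging into this: the TOC can be dumped out and fed back in,
which at least makes the entry ordering visible and controllable; file names
here are illustrative:

pg_restore -l dump.Fc > dump.toc          # list the TOC entries
pg_restore -L dump.toc -d SEE dump.Fc     # restore using that listing
)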

Glyn






[GENERAL] pg_restore: [custom archiver] dumping a specific TOC data block out of order is not supported without ID on this input stream (fseek required)

2010-04-30 Thread Glyn Astill
Hi chaps,

I've just upgraded a server from 8.3 to 8.4, and when trying to use the 
parallel restore options I get the following error:

"pg_restore: [custom archiver] dumping a specific TOC data block out of order 
is not supported without ID on this input stream (fseek required)"

The dump I'm trying to restore is purely a data dump, and the schema is 
separate (due to the way our setup works).

These are the options I'm using for the dump and the restore:

pg_dump -Fc  -U postgres -h localhost -a --disable-triggers

pg_restore -U postgres --disable-triggers -j 4 -c -d 

can anyone tell me what I'm doing wrong, or why my files are not supported by 
parallel restore?

Thanks
Glyn



   



Re: [GENERAL] Wikipedia entry - AmigaOS port - error?

2010-04-01 Thread Glyn Astill
--- On Thu, 1/4/10, Tom Lane  wrote:

> > But I do remember there was a set of libs called
> ixemul (http://aminet.net/package/dev/gg/ixemul-bin) that a lot
> of people used to port unix apps to the Amiga with, probably
> not enough to port postgres though.
> 
> Ah, I wondered if there might not be such a thing. 
> However, according
> to http://en.wikipedia.org/wiki/Ixemul.library it doesn't
> have support
> for fork(), which would have been a complete showstopper
> back in the day
> (though since the Windows port in 8.0 we can live without
> it).  So that
> pretty much kills any thought that it might've once worked
> and we just
> forgot.
> 

And now you mention it, I remember that was exactly the reason why the last 
version of perl for the Amiga was 5.7.2; it says there was support for vfork() 
but not fork():

http://theory.uwinnipeg.ca/CPAN/perl/pod/perlamiga.html#perl_5_8_0_broken_in_amigaos

Glyn






Re: [GENERAL] Wikipedia entry - AmigaOS port - error?

2010-04-01 Thread Glyn Astill
--- On Thu, 1/4/10, Tom Lane  wrote:

> > Just noticed on the wikipedia page under rdbms, it
> lists postgresql as available on AmigaOS.
> 
> > http://en.wikipedia.org/wiki/Comparison_of_relational_database_management_systems
> 
> > Is this just an error, as I see edb advanced server is
> also listed as available, or was there some working code at
> some point in time?
> 
> I think it's probably bogus.  There's some mention in
> very old versions
> of the ports list of being able to run on top of NetBSD on
> Amiga
> hardware.  But unless somebody had code to duplicate
> the Unix syscall set
> on AmigaOS, there'd have been no way to make PG run on
> that.
> 

Thanks Tom, I thought as much, I recall the Amiga community being full of 
vaporware.

But I do remember there was a set of libs called ixemul 
(http://aminet.net/package/dev/gg/ixemul-bin) that a lot of people used to port 
unix apps to the Amiga with, probably not enough to port postgres though.

Glyn










[GENERAL] Wikipedia entry - AmigaOS port - error?

2010-04-01 Thread Glyn Astill
Hi Chaps,

Just noticed on the wikipedia page under rdbms, it lists postgresql as 
available on AmigaOS.

http://en.wikipedia.org/wiki/Comparison_of_relational_database_management_systems

Is this just an error, as I see edb advanced server is also listed as 
available, or was there some working code at some point in time?

I'm merely amused/interested; I've got a stupidly modified Amiga somewhere 
from my teenage years, but I doubt I'll be pulling it out to try.

Glyn








Re: [GENERAL] when a table was last vacuumed

2010-02-10 Thread Glyn Astill
--- On Wed, 10/2/10, AI Rumman  wrote:

> If it possible to find out when a table
> was last vacuumed?

Try:

select pg_stat_get_last_vacuum_time(oid) from "pg_catalog".pg_class where 
relname = 'tablename';

select pg_stat_get_last_autovacuum_time(oid) from "pg_catalog".pg_class where 
relname = 'tablename';
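
Or, if memory serves (8.2 and later), via the stats view:

select relname, last_vacuum, last_autovacuum 
from "pg_catalog".pg_stat_user_tables where relname = 'tablename';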






Re: [GENERAL] Server name in psql prompt

2010-01-08 Thread Glyn Astill
--- On Fri, 8/1/10, Mark Morgan Lloyd wrote:

> Is there any way of getting psql to
> display the name of the currently-connected server in its
> prompt, and perhaps a custom string identifying e.g. a disc
> set, without having to create a psqlrc file on every client
> system that's got a precompiled psql installed?

You could use the psql -v option to set the PROMPT variables (or set them as 
ENV)

see:

http://www.postgresql.org/docs/8.3/static/app-psql.html
http://www.postgresql.org/docs/8.3/static/app-psql.html#APP-PSQL-PROMPTING
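
For example, something along these lines (prompt string purely illustrative):

psql -h server1 -d mydb -v PROMPT1='%M:%> %n@%/%R%# '

%M expands to the full host name of the server and %> to the port, so the
server name ends up in the prompt without needing a psqlrc on each client.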






Re: [GENERAL] LDAP configuration changes in 8.4?

2009-12-08 Thread Glyn Astill
--- On Tue, 8/12/09, Magnus Hagander  wrote:
> 
> ldapserver="notts.net.mycompany.com"
> 
> exclude the ldap:// part, and the base dn part.
> 

Excellent, that did the trick. Thanks.

Glyn







[GENERAL] LDAP configuration changes in 8.4?

2009-12-08 Thread Glyn Astill
Hi Chaps,

I'm setting up a new server on 8.4, and I'm struggling to get LDAP 
authentication working, even though I've got it working fine on 8.3.

This is the format I'm using in 8.3:

ldap "ldap://notts.net.mycompany.com/My Company/Call Centre Users;CN=;,OU=Call 
Centre Users,OU=My Company,DC=notts,DC=net,DC=mycompany,DC=com"

So I figured for 8.4 I should use:

ldap ldapserver="ldap://notts.net.mycompany.com/My Company/Call Centre Users" 
ldapprefix="CN=" ldapsuffix=",OU=Call Centre Users,OU=My 
Company,DC=notts,DC=net,DC=mycompany,DC=com"

Can anyone spot where I've gone wrong?






Re: [GENERAL] unexpected pageaddr

2009-12-01 Thread Glyn Astill
--- On Tue, 1/12/09, Tom Lane  wrote:

> > I'm doing my usual pull-the-plug tests on some new
> servers, and I'm
> > seeing this in the logs during redo. Is this in any
> way normal?
> 
> Quite, this is one of the expected ways to detect
> end-of-WAL.
> 
> 

Excellent, thanks Tom.






[GENERAL] unexpected pageaddr

2009-12-01 Thread Glyn Astill
Hi Chaps,

I'm doing my usual pull-the-plug tests on some new servers, and I'm seeing this 
in the logs during redo. Is this in any way normal? I'm guessing it is because 
it's just under "LOG" rather than something like "WARNING", but I'm not taking 
any chances.

I've got disk caches off, fsync on, and some new-fangled Zero-Maintenance Cache 
Protection.

These machines are pretty much identical to our current machines, differences 
are 8.4 rather than 8.3 and ZMCP rather than standard BBU.

Logs below
Glyn

2009-12-01 14:19:48 GMT [3510]: [1-1]: [user=]: [host=]: [db=]:: LOG:  database 
system was interrupted; last known up at 2009-12-01 14:16:32 GMT
2009-12-01 14:19:48 GMT [3510]: [2-1]: [user=]: [host=]: [db=]:: LOG:  database 
system was not properly shut down; automatic recovery in progress
2009-12-01 14:19:48 GMT [3510]: [3-1]: [user=]: [host=]: [db=]:: LOG:  redo 
starts at 6/901B8B0
2009-12-01 14:19:56 GMT [3510]: [4-1]: [user=]: [host=]: [db=]:: LOG:  
unexpected pageaddr 5/CC80C000 in log file 6, segment 52, offset 8437760
2009-12-01 14:19:56 GMT [3510]: [5-1]: [user=]: [host=]: [db=]:: LOG:  redo 
done at 6/3480B148
2009-12-01 14:19:56 GMT [3510]: [6-1]: [user=]: [host=]: [db=]:: LOG:  last 
completed transaction was at log time 2009-12-01 14:17:04.986986+00
2009-12-01 14:20:07 GMT [3513]: [1-1]: [user=]: [host=]: [db=]:: LOG:  
autovacuum launcher started
2009-12-01 14:20:07 GMT [3447]: [1-1]: [user=]: [host=]: [db=]:: LOG:  database 
system is ready to accept connections







Re: [GENERAL] Too easy to log in as the "postgres" user?

2009-10-15 Thread Glyn Astill
> From: Thom Brown 
> Subject: [GENERAL] Too easy to log in as the "postgres" user?
> To: "PGSQL Mailing List" 
> Date: Thursday, 15 October, 2009, 11:38 AM
> I've noticed that if I just log in to
> my server, I don't su to root,
> or become the postgres user, I can get straight into the
> database as
> the postgres user merely with "psql -U postgres -h
> localhost".  My
> user account isn't a member of the postgres group.
> 
> It appears I've not applied my security settings
> correctly.  What can
> I do to prevent access this way?  I'd still want to be
> able to su to
> the postgres user and log in that way, but not with the -U
> parameter
> allowing access.

You just need to change the local connections to any authentication method 
other than trust.

http://www.postgresql.org/docs/8.3/interactive/auth-pg-hba-conf.html
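
For example, something like this in pg_hba.conf (8.3 syntax; the md5 line
assumes your roles have passwords set):

# TYPE  DATABASE  USER  CIDR-ADDRESS   METHOD
local   all       all                  ident sameuser
host    all       all   127.0.0.1/32   md5

then reload with pg_ctl reload (or SELECT pg_reload_conf();).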

Glyn






Re: [GENERAL] tar error while running basebackup

2009-10-13 Thread Glyn Astill
> From: Andre Brandt 
> Subject: [GENERAL] tar error while running basebackup
> To: pgsql-general@postgresql.org
> Date: Tuesday, 13 October, 2009, 11:40 AM
> Hi!
> 
> We're using two backup strategies to get consistent backups
> of our postgresql databases. First, we create a complete
> dump every night by running pg_dump, zipping the file and
> writing this backup on tape.
> Second, we create a basebackup every saturday. To create a
> basebackup, we run pg_start_backup. After that, we create a
> tar file of the complete database directory and stop the
> backup mode by running pg_stop_backup. Of course, all
> archived wal logs are also copied ;)
> 
> Well, everything was fine for month. But from time to time,
> I get an error when running tar:
> 
> tar: ./base/208106/209062: File shrank by 262144 bytes;
> padding with zeros
> tar: ./base/208106/210576: file changed as we read it
> tar: ./base/208106/210577: file changed as we read it
> tar: ./base/208106/210431: file changed as we read it
> 
> How can this happen? I always thought that, when in backup
> mode, nothing is able to change the database - so the
> database files shouldn't change. Can autovacuum cause the
> changes?
> I already read somewhere that this kind of error can be
> ignored when creating a basebackup, but I can't believe
> that. Normally the tar file would have to be worthless when an
> error occurs - or do I have an error in reasoning?

My understanding was that when you back up like this you are actually tarring up 
inconsistent database files, but it doesn't matter as you have the WAL logs to 
replay any changes, thus correcting the inconsistencies.
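
I.e. the usual sequence is something like the below (paths illustrative; note
that GNU tar deliberately exits non-zero on "file changed as we read it", which
is why people say those warnings can be ignored for a base backup):

psql -U postgres -c "SELECT pg_start_backup('weekly');"
tar -czf /backup/base.tar.gz -C /var/lib/postgresql data
psql -U postgres -c "SELECT pg_stop_backup();"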






Re: [GENERAL] Eclipse jdbc postgresql

2009-09-01 Thread Glyn Astill
--- On Tue, 1/9/09, Sheepjxx  wrote:

> If I want to use postgres with JDBC
> (I have already downloaded the JDBC driver), do I need an extra option
> to compile postgres (--with-java)? Do I need to change
> postgresql.conf?
> 

No, you just need the postgres JDBC driver (jdbc.postgresql.org) in your 
classpath.

Glyn






Re: [GENERAL]

2009-08-04 Thread Glyn Astill



> From: sw...@opspl.com 
> Subject: Re: [GENERAL]
> To: pgsql-general@postgresql.org
> Date: Tuesday, 4 August, 2009, 11:03 AM
> 
> Hello ,
> 
> >>
> >    You can use the "kill <pid>" command to
> > kill the slon daemons,
> > find
> > the pid's of the cluster and kill.
> >
> >
> 
>      But that is not totally right :P .
> If there is no other way we will
> use it.. But I want to stop slony properly using slony
> command. :)
> 
> 

Maybe you should send this to the slony list.

Anyway, that's just sending a SIGTERM, and AFAIK that is the correct way to stop 
a slon...






Re: [GENERAL] Replication

2009-06-23 Thread Glyn Astill

--- On Mon, 22/6/09, Gerry Reno  wrote:

> Have you ever tried any of the postgresql replication
> offerings? The only one that is remotely viable is slony and
> it is so quirky you may as well forget it. The rest are in
> some stage of decay/abandonment. There is no real
> replication available for postgresql. Postgresql needs to
> develop a real replication offering for postgresql. Builtin
> or a separate module.
> 

There was a similar thread on the Ferrari mailing list last week; some chap 
asking why the FFX didn't have a big red button to "make the steering go 
light". Apparently it is too hard to drive, whereas the Fiat Punto is easy and 
has this magic technology.

Seriously though, we use slony here in production and whilst it can be a pain 
in the arse at times it's a solid piece of kit. And bucardo, mammoth, 
londiste, pgpool are all good solutions - as long as you make yourself 
familiar with the one(s) you choose.

I've used the binlog streaming replication in mysql before, but I wouldn't 
trust it with my employer’s data.







Re: [GENERAL] DB Migration 8.4 -> 8.3

2009-06-15 Thread Glyn Astill



--- On Mon, 15/6/09, Eoghan Murray  wrote:

> From: Eoghan Murray 
> Subject: [GENERAL] DB Migration 8.4 -> 8.3
> To: pgsql-general@postgresql.org
> Date: Monday, 15 June, 2009, 10:19 PM
> I unintentionally installed 8.4beta2
> on a server (using yum), while I
> run 8.3.7 on my dev machine.
> The 8.3 version of pg_dump does not work with the server,
> even with
> the -i option:
>    8.3: pg_dump: Error message from server:
> ERROR:  column
> "reltriggers" does not exist
> the 8.4 pg_dump works okay, but then the 8.3 version of
> pg_restore is
> not able to restore the resultant databases:
>    8.3: pg_restore: [archiver] input file
> does not appear to be a
> valid archive
> 

Try the 8.4 pg_restore against the 8.3 server
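
i.e. point the newer client binary at the old server, something like this
(path illustrative):

/usr/lib/postgresql/8.4/bin/pg_restore -h dev83host -d mydb mydb.dump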






Re: [GENERAL] Could not open file "pg_clog/...."

2009-05-12 Thread Glyn Astill




--- On Tue, 12/5/09, Glyn Astill  wrote:

> I'm going to duck out of this now though, and I think
> you should probably wait until someone a little more
> knowledgeable replies.
> 

Also see here:

http://archives.postgresql.org/pgsql-general/2006-07/msg01147.php
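
(If I remember rightly, the workaround in that thread amounts to padding out
the missing clog segment with zeros, along the lines of the below - entirely
at your own risk, and only after taking a file-level copy of the cluster:

dd if=/dev/zero of=$PGDATA/pg_clog/0D00 bs=256k count=1
)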


 



Re: [GENERAL] Could not open file "pg_clog/...."

2009-05-12 Thread Glyn Astill

--- On Tue, 12/5/09, Markus Wollny  wrote:

> From: Markus Wollny 
> Subject: AW: [GENERAL] Could not open file "pg_clog/"
> To: glynast...@yahoo.co.uk, pgsql-general@postgresql.org
> Date: Tuesday, 12 May, 2009, 11:52 AM
> Hi! 
> 
> > -Original Message-
> > From: Glyn Astill [mailto:glynast...@yahoo.co.uk] 
> > Sent: Tuesday, 12 May 2009 12:33
> > To: pgsql-general@postgresql.org; Markus Wollny
> 
> > The first thing I would have done if I'd been
> forced to do 
> > that (if there was no other option?) would be a dump /
> 
> > restore directly afterwards, then pick through for any
> 
> > inconsistencies.
> 
> That's a lot of data - somewhere around 43GB at the
> moment. And pg_dump seems to fail altogether on the affected
> databases, so the pg_clog issue actually means that I cannot
> make any current backups.
>  
> > Probably wait for the big-wigs to reply but perhaps a
> reindex 
> > may get you going.
> 
> Tried that, but it also makes PostgreSQL crash, so no luck
> there either. I also dropped template0, recreated it from
> template1, did a VACUUM FREEZE on it, marked it as template
> again and disallowed connections.
>  
> > I'd definitely be starting with a fresh database
> once I got 
> > out of the hole though...
> 
> Yes, but that'll be a nightshift and I need some way to
> actually get at a working dump now...
> 

It appears to be failing on the pcaction.article table. Could you get away 
without that? Perhaps, and it'd be a long shot, you'd be able to dump the rest 
of the data with it gone?

I'm going to duck out of this now though, and I think you should probably wait 
until someone a little more knowledgeable replies.






Re: [GENERAL] Could not open file "pg_clog/...."

2009-05-12 Thread Glyn Astill

--- On Tue, 12/5/09, Markus Wollny  wrote:

> From: Markus Wollny 
> Subject: [GENERAL] Could not open file "pg_clog/"
> To: pgsql-general@postgresql.org
> Date: Tuesday, 12 May, 2009, 11:04 AM
> Hello!
> 
> Recently one of my PostgreSQL servers has started throwing
> error
> messages like these:
> 
> ERROR:  could not access status of transaction 3489956864
> DETAIL:  Could not open file "pg_clog/0D00":
> Datei oder Verzeichnis
> nicht gefunden. (file not found)
> 
> The machine in question doesn't show any signs of a
> hardware defect,
> we're running a RAID-10 over 10 disks for this
> partition on a 3Ware
> hardware RAID controller with battery backup unit, the
> controller
> doesn't show any defects at all. We're running
> PostgreSQL 8.3.5 on that
> box, kernel is 2.6.18-6-amd64 of Debian Etch, the
> PostgreSQL binaries
> were compiled from source on that machine.

Apart from not being on the latest release, you have a very similar setup to mine.

> 
> I know that I'd be losing some transactions, but in
> our use case this
> is not critical. Anyway, this made the problem go away for
> a while but
> now I'm getting those messages again - and indeed the
> clog-files in
> question appear to be missing altogether. And what's
> worse, the
> workaround no longer works properly but makes PostgreSQL
> crash:
> 

The first thing I would have done if I'd been forced to do that (if there was 
no other option?) would be a dump / restore directly afterwards, then pick 
through for any inconsistencies.

> Now what exactly is causing those missing clog files, what
> can I do to
> prevent this and what can I do to recover my database
> cluster, as this
> issue seems to prevent proper dumps at the moment?

Probably wait for the big-wigs to reply, but perhaps a reindex may get you going.

I'd definitely be starting with a fresh database once I got out of the hole 
though...






[GENERAL] OLE DB

2009-05-10 Thread Glyn Astill

Hi Chaps,

I was just wondering about the state of OLE DB connectivity for postgresql.  
From what I can see my options are:

http://pgfoundry.org/projects/oledb/

Which doesn't seem to have been updated for 3 years - anyone using it?

Or

http://www.pgoledb.com/

Any others?






Re: [GENERAL] Power outage and funny chars in the logs

2009-05-07 Thread Glyn Astill

--- On Thu, 7/5/09, Massa, Harald Armin  wrote:

> >
> > mentioning those @ symbols ...
> 
> 1.5 weeks ago the problem "postgres service
> not starting on windows" was reported on this list; after consulting
> the event log, the user reported the message
> "bogus data in postmaster.pid". After
> deleting postmaster.pid the
> service started up fine.
> 
> Soon after a customer of mine reported the same error, also
> on windows; and
> before deleting postmaster.pid I got a copy of that
> "bogus one". AND: there
> were also a lot of @ symbols in postmaster.pid (hex 0)
> 
> After reading the answers to the funny chars in the logs
> and no fsync on the
> logs: is there a fsync on postmaster.pid? Or is that file
> not considered
> important enough?
> 
> (just digging for the reason for corrupted data in
> postmaster.pid)...
> 

Aha, nice one Harald,

So the @ symbols are hex 0. Perhaps the @ symbols follow the pattern of the 
text that was written to the log - but since ext3 is in data=writeback mode, it 
knows there should be some data there but not what that data is, so it just 
ends up as zeros.

With regard to your question: if the .pid is not fsynced, I agree doing so 
would perhaps be a good idea; is there any reason not to?


 



Re: [GENERAL] Power outage and funny chars in the logs

2009-05-07 Thread Glyn Astill

> From: Albe Laurenz 
> Subject: RE: [GENERAL] Power outage and funny chars in the logs
> To: glynast...@yahoo.co.uk, pgsql-general@postgresql.org
> Date: Thursday, 7 May, 2009, 2:44 PM
> Glyn Astill wrote:
> > We had a power outage today when a couple of computer
> 
> > controlled power strips crashed (my secondary
> psu's will stay 
> > firmly in the wall sockets now though).
> > 
> > I'd had a lot of fun pulling plugs out under load
> before we 
> > went into production so I wasn't particularly
> worried, and 
> > the databases came back up and applied the redo logs as
> expected.
> > 
> > What did make me scratch my head was a short stream of
> @ 
> > symbols (well they show up as @ symbols in vi) in the
> log 
> > file of the main server (others are slony
> subscribers).
> > 
> > My only reasoning so far is that it's just garbage
> from 
> > postgres as the power died? The controllers have BBU
> cache 
> > and drive caches are off. The only other thing I can
> think is 
> > it's something to do with me using data=writeback
> on the data 
> > partition, and relying on the wal for journaling of
> the data. 
> > The logs are on that same partition...
> > 
> > Just wondered what you chaps thought about this?
> 
> You mean the error log and not the transaction log, right?
> 

Yes, just the text-based server logs.

> I would say that the file system suffered data loss in the
> system crash, and what you see is something that happened
> during file system recovery.
> 
> The strange characters are towards the end of the file,
> right?

Yeah right at the end

> Can you find anything about file system recovery in the
> operating system log files?

As Tom said in his post, I think this is just down to OS caching of the server 
log etc. - it's not actually flushed to disk with fsync like the WAL.






Re: [GENERAL] Upgrading from postgres 8.1 to 8.3

2009-05-07 Thread Glyn Astill

> From: S Arvind 
> Subject: [GENERAL] Upgrading from postgres 8.1 to 8.3
> To: pgsql-general@postgresql.org
> Date: Thursday, 7 May, 2009, 11:42 AM
> Our 600GB data was currently loaded in postgres 8.1 , we
> want to upgrade
> from postgres 8.1 to 8.3 . Can we able to point the data
> directly or should
> we have to do any porting work for transfering data from
> 8.1 to 8.3.
> 

You need to do a dump and reload. There was a project that did an in-place 
upgrade of datafiles (pg_migrator???) but I think it only supports -> 8.2
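
The dump and reload itself is along these lines, using the 8.3 binaries to
dump the old 8.1 cluster (paths illustrative):

/usr/lib/postgresql/8.3/bin/pg_dumpall -h old81host > all.sql
/usr/lib/postgresql/8.3/bin/psql -f all.sql -d postgres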






[GENERAL] Power outage and funny chars in the logs

2009-05-07 Thread Glyn Astill

Hi chaps,

We had a power outage today when a couple of computer controlled power strips 
crashed (my secondary psu's will stay firmly in the wall sockets now though).

I'd had a lot of fun pulling plugs out under load before we went into 
production so I wasn't particularly worried, and the databases came back up and 
applied the redo logs as expected.

What did make me scratch my head was a short stream of @ symbols (well they 
show up as @ symbols in vi) in the log file of the main server (others are 
slony subscribers).

My only reasoning so far is that it's just garbage from postgres as the power 
died? The controllers have BBU cache and drive caches are off. The only other 
thing I can think is it's something to do with me using data=writeback on the 
data partition, and relying on the WAL for journaling of the data. The logs are 
on that same partition...

Just wondered what you chaps thought about this?

Glyn







[GENERAL] bizgres

2009-05-05 Thread Glyn Astill

Hi chaps,

I'm looking at building an OLAP reporting environment and I came across the 
Bizgres project on pgFoundry. However, it was last updated over 3 years ago; am 
I correct in assuming that this probably isn't something I should be looking at?

Can anyone point me at interesting tools they've used?

Thanks
Glyn






Re: [GENERAL] Trigger function cost

2009-04-09 Thread Glyn Astill

> From: Tom Lane 
> 
> > Is there any reason to mess with this?
> 
> No.  The planner doesn't actually bother to figure the
> cost of triggers
> anyway, since presumably every correct plan will fire the
> same set of
> triggers.  So even if you had a more accurate cost estimate
> than that
> one, it wouldn't get used for anything.
> 

Excellent, that's good with me.

> Now, for ordinary non-trigger functions, it might be worth
> paying
> some attention to the cost estimate.  "1" is
> intended to denote the
> cost of a reasonably simple C function, so PL functions
> should pretty
> much always have costs that are large multiples of that. 
> 100 is a
> reasonable default, but if you know better you can put
> something else.
> 

Cool, I'll leave it alone for now then, interesting stuff, thanks Tom.
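
(For the archives: if you ever do want to override it, it's just something
like "ALTER FUNCTION mytrigfn() COST 500;" - function name illustrative.)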






[GENERAL] Trigger function cost

2009-04-09 Thread Glyn Astill

Hi Chaps,

Can anyone point me to docs for trigger function estimated cost?

I see that when I create a volatile plpgsql trigger function it gets given a 
cost of 100 and a c function gets given a cost of 1.

Is there any reason to mess with this?

Thanks
Glyn






Re: [GENERAL] writing c functions for postgres

2009-04-07 Thread Glyn Astill




--- On Tue, 7/4/09, Albe Laurenz  wrote:

> I can find no VARATT_SIZEP in the PostgreSQL 8.3 headers.
> Where did you get that from?
> 
> Yours,
> Laurenz Albe
> 

I think it's deprecated and he should be using SET_VARSIZE instead ...






Re: [GENERAL] debugging in pgadmin

2009-03-23 Thread Glyn Astill


--- On Mon, 23/3/09, josep porres  wrote:

> A lot of time since the last debugging activity.
> I don't remember how to debug. I thought I had to set a
> breaking point in
> the function i want to debug,
> and then call that function.
> I'm doing this, and from another query window, i call
> the function. But it
> returns me the result, but it doesn't stop inside the
> function in order to
> debug it.
> What am i doing wrong?

Perhaps you've not got the pldebugger contrib module installed, or haven't run 
the SQL script to create the debug functions?






Re: [GENERAL] How to configure on a machine with a lot of memory?

2009-03-17 Thread Glyn Astill

Start by looking here: 
http://www.postgresql.org/docs/8.3/static/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-MEMORY

And if you search the lists you'll find whole discussions on this topic that 
have been repeated over and over.

Without generalising too much, for a dedicated machine you should be looking at 
setting effective_cache_size to approx 2/3 of RAM, shared_buffers to something 
like 10-15% of RAM, and work_mem to something reasonable dependent on the number 
of connections and types of queries - but remember to keep this small-ish; mine 
is set at 2MB.
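
On a dedicated 48GB box that might translate to something like this in
postgresql.conf (numbers illustrative only - tune to your workload):

shared_buffers = 6GB          # roughly 12% of RAM
effective_cache_size = 32GB   # roughly 2/3 of RAM
work_mem = 2MB                # per sort/hash operation, per backend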



--- On Tue, 17/3/09, A B  wrote:

> From: A B 
> Subject: [GENERAL] How to configure on a machine with a lot of memory?
> To: pgsql-general@postgresql.org
> Date: Tuesday, 17 March, 2009, 9:20 AM
> Hi there!
> 
> If I have a database that have a size of a few GB and run
> this on a
> machine with 48 GB  of ram, What parameters should I set to
> make
> Postgres use all the memory that is available? For a not so
> very
> technical person it is a little cryptic to find out.
> 
> Thanks.
> 






Re: [GENERAL] Query palns and tug-of-war with enable_sort

2009-02-19 Thread Glyn Astill
--- On Thu, 19/2/09, Tom Lane  wrote:
> 
> Also, it'd be worth revisiting the question of whether
> you really still
> need enable_sort off ... personally, I'd think that
> reducing
> random_page_cost is a much saner way of nudging the planner
> in the
> direction of preferring indexscans.
> 

We have relatively quick storage and most of our data fits in ram, so I've 
dropped random_page_cost a little more and at some point I'll flick enable_sort 
back on and see how it goes.
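
i.e. in postgresql.conf, something like this (value illustrative; the default
is 4.0):

random_page_cost = 2.0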

> BTW, it might be a bit late for this, but you'd be a
> lot better off
> performance-wise with bigint join keys instead of
> numeric(8,0).
> Numeric is slow, and at that field width it's not
> buying you anything at
> all.
> 

This may be a little out of my control; there are a lot of things wrong with how 
our tables are set up, and I generally have to swim through lots of 20+ year old 
code to discover how changes will affect it.  That said, there are a lot of 
these numeric(8,0) fields and I doubt switching them for bigint would cause any 
problems.

Thanks Tom.






Re: [GENERAL] Query palns and tug-of-war with enable_sort

2009-02-18 Thread Glyn Astill
> 
> No, those aren't the same plans.  In particular
> what's bothering me is
> the lack of any sort in the first plan you showed (the one
> with
> HashAggregate at the top).  That shouldn't be possible
> because of the
> ORDER BY --- a hash aggregate will deliver unsorted output
> so there
> should be a sort step above it.

Ah, retracing my steps, forget that; there's no sort because it's not the same 
query at all.

explain Select
   mult_ord.mult_ref
 From
   credit Inner Join
   mult_ord On mult_ord.transno = credit.transno
 Where
   (credit.show = 45 Or
 credit.show = 450001) And
   credit."date" >= '2009-02-16' And
   credit."date" <= '2009-02-16' And
   credit.cancelled = ' ' group by mult_ref;


Lets just say it's been a long day.

Going back to my original point though, is there any way to get the planner to 
choose a better plan for the original distinct query? Or is it just a simple no 
because I set enable_sort to off?






Re: [GENERAL] Query palns and tug-of-war with enable_sort

2009-02-18 Thread Glyn Astill

> >  Group  (cost=0.00..11149194.48 rows=1 width=9)
> 
> That's just bizarre.  Can you put together a
> self-contained test case
> for this?  Also, what version is it exactly? 
> ("8.3" is the wrong
> answer.)
> 

Thanks Tom,

It's 8.3.5, and I get the same results on all my servers (3 replicated servers 
and one "daily restore" server).

I'll be moving forward to 8.3.6 as soon as I get time...

I've slapped together a quick test case that gives the same results with 
explain even when I have no data in the tables and haven't analyzed them.  I'm 
not sure how silly it is to test with empty tables; however, seeing as it gave 
me the same explain output, what I did follows:

1) Create my database and schema

# su postgres
$ /usr/local/pgsql/bin/createdb test --encoding='LATIN1'
$ exit
# psql -U postgres -d test

CREATE SCHEMA customers;
ALTER DATABASE "test" SET enable_sort TO off;

2) Create my tables

CREATE TABLE customers.credit
(
  recnum bigint NOT NULL DEFAULT 
nextval(('"customers"."credit_dfseq"'::text)::regclass),
  transno numeric(8) NOT NULL DEFAULT 0,
  "number" character varying(20) NOT NULL DEFAULT ' '::character varying,
  exmon character varying(2) NOT NULL DEFAULT ' '::character varying,
  exyear character varying(2) NOT NULL DEFAULT ' '::character varying,
  oldtick numeric(2) NOT NULL DEFAULT 0,
  coaches numeric(2) NOT NULL DEFAULT 0,
  "value" numeric(10,2) NOT NULL DEFAULT 0,
  postage numeric(6,2) NOT NULL DEFAULT 0,
  deposit numeric(6,2) NOT NULL DEFAULT 0,
  paid numeric(6,2) NOT NULL DEFAULT 0,
  amt_due numeric(6,2) NOT NULL DEFAULT 0,
  insur numeric(6,2) NOT NULL DEFAULT 0,
  sing_supp numeric(6,2) NOT NULL DEFAULT 0,
  date date NOT NULL DEFAULT '0001-01-01'::date,
  "time" character varying(5) NOT NULL DEFAULT ' '::character varying,
  seconds numeric(4) NOT NULL DEFAULT 0,
  due_by date NOT NULL DEFAULT '0001-01-01'::date,
  "user" character varying(10) NOT NULL DEFAULT ' '::character varying,
  "show" numeric(8) NOT NULL DEFAULT 0,
  show_name character varying(25) NOT NULL DEFAULT ' '::character varying,
  venue numeric(4) NOT NULL DEFAULT 0,
  tbook numeric(4) NOT NULL DEFAULT 0,
  printed character varying(1) NOT NULL DEFAULT ' '::character varying,
  source numeric(2) NOT NULL DEFAULT 0,
  source2 numeric(2) NOT NULL DEFAULT 0,
  tickets_all character varying(21) NOT NULL DEFAULT ' '::character varying,
  allocated_by character varying(10) NOT NULL DEFAULT ' '::character varying,
  allocated_date date NOT NULL DEFAULT '0001-01-01'::date,
  narrative character varying(30) NOT NULL DEFAULT ' '::character varying,
  title character varying(4) NOT NULL DEFAULT ' '::character varying,
  forename character varying(5) NOT NULL DEFAULT ' '::character varying,
  "name" character varying(15) NOT NULL DEFAULT ' '::character varying,
  add1 character varying(25) NOT NULL DEFAULT ' '::character varying,
  add2 character varying(25) NOT NULL DEFAULT ' '::character varying,
  add3 character varying(25) NOT NULL DEFAULT ' '::character varying,
  town character varying(15) NOT NULL DEFAULT ' '::character varying,
  postcode character varying(4) NOT NULL DEFAULT ' '::character varying,
  postcode2 character varying(4) NOT NULL DEFAULT ' '::character varying,
  county character varying(15) NOT NULL DEFAULT ' '::character varying,
  country_code character varying(2) NOT NULL DEFAULT ' '::character varying,
  phone character varying(20) NOT NULL DEFAULT ' '::character varying,
  authourisation numeric(8) NOT NULL DEFAULT 0,
  vat numeric(2,2) NOT NULL DEFAULT 0,
  ticonly numeric(2) NOT NULL DEFAULT 0,
  origin numeric(2) NOT NULL DEFAULT 0,
  price_type numeric(2) NOT NULL DEFAULT 0,
  show_date date NOT NULL DEFAULT '0001-01-01'::date,
  hole character varying(3) NOT NULL DEFAULT ' '::character varying,
  msort_code character varying(6) NOT NULL DEFAULT ' '::character varying,
  marker character varying(1) NOT NULL DEFAULT ' '::character varying,
  alloc_time numeric(4,2) NOT NULL DEFAULT 0,
  recorded_number character varying(10) NOT NULL DEFAULT ' '::character varying,
  allocated_mark character varying(1) NOT NULL DEFAULT ' '::character varying,
  tickets numeric(6) NOT NULL DEFAULT 0,
  date_posted date NOT NULL DEFAULT '0001-01-01'::date,
  cancelled character varying(1) NOT NULL DEFAULT ' '::character varying,
  date_printed date NOT NULL DEFAULT '0001-01-01'::date,
  shop_code numeric(2) NOT NULL DEFAULT 0,
  agent_code numeric(4) NOT NULL DEFAULT 0,
  pc character varying(8) NOT NULL DEFAULT ' '::character varying,
  spareasc1 character varying(20) NOT NULL DEFAULT ' '::character varying,
  spareasc2 character varying(20) NOT NULL DEFAULT ' '::character varying,
  sparenum1 numeric(10,2) NOT NULL DEFAULT 0,
  sparenum2 numeric(10,2) NOT NULL DEFAULT 0,
  sparedat1 date NOT NULL DEFAULT '0001-01-01'::date,
  sparedat2 date NOT NULL DEFAULT '0001-01-01'::date,
  CONSTRAINT credit_index01 PRIMARY KEY (number, transno, recnum)
)
WITH (OIDS=TRUE);

CREATE UNIQUE INDE

[GENERAL] Query palns and tug-of-war with enable_sort

2009-02-18 Thread Glyn Astill
Hi Chaps,

We have a legacy application that used to have its own sequential database 
backend, and to fetch data out of its tables commands such as "find gt table 
by index" would be used. 

What we have now is a driver in the middle that constructs SQL to access the 
data on pg 8.3, typically of the form "SELECT ... FROM ... ORDER BY ... LIMIT n", 
and since we always have indexes that match the ORDER BY it creates, I set 
enable_sort to off because in some rare cases the planner would choose a slower 
plan.

Reply with suitable comment about my foot-gun now if you're so inclined. But 
seeing as the purpose of our postgres installation is to replace that legacy 
backend for this application, and seeing as all the other queries I put 
together outside of the application still picked good plans, I really wasn't 
too worried about this. We've been building lots of queries for over 5 months 
now, and this is the first time I've seen a non-ideal plan.

Here's the query:

  SELECT DISTINCT mult_ref
  FROM credit INNER JOIN mult_ord ON mult_ord.transno = credit.transno
  WHERE (credit.show = 45 OR credit.show = 450001) 
  AND credit."date" >= '2009-02-16' 
  AND credit."date" <= '2009-02-16' 
  AND credit.cancelled = ' '
  ORDER BY mult_ref

With enable_sort on this is the plan it chooses:

 HashAggregate  (cost=14.72..14.73 rows=1 width=9)
   ->  Nested Loop  (cost=0.00..14.72 rows=1 width=9)
 ->  Index Scan using credit_index02 on credit  (cost=0.00..7.04 rows=1 
width=9)
   Index Cond: ((date >= '2009-02-16'::date) AND (date <= 
'2009-02-16'::date))
   Filter: (((cancelled)::text = ' '::text) AND ((show = 
45::numeric) OR (show = 450
001::numeric)))
 ->  Index Scan using mult_ord_index02 on mult_ord  (cost=0.00..7.67 
rows=1 width=17)
   Index Cond: (mult_ord.transno = credit.transno)

That's what I want, good. Now with enable_sort off this is the plan it chooses:

 Group  (cost=0.00..11149194.48 rows=1 width=9)
   ->  Nested Loop  (cost=0.00..11149194.47 rows=1 width=9)
 ->  Index Scan using mult_ord_index01 on mult_ord  
(cost=0.00..442888.78 rows=9307812 width=17)
 ->  Index Scan using credit_index02 on credit  (cost=0.00..1.14 rows=1 
width=9)
   Index Cond: ((credit.date >= '2009-02-16'::date) AND 
(credit.date <= '2009-02-16'::date) AND (credit.transno = mult_ord.transno))
   Filter: (((credit.cancelled)::text = ' '::text) AND 
((credit.show = 45::numeric) OR (credit.show = 450001::numeric)))


With enable_sort off, if I get rid of the distinct and swap the order by for a 
group by it picks a good plan; however, once I stick the order by in there to 
try and sort it we go back to the plan above.  Now I know to a degree the 
planner is really just doing what I've told it to do, but is there anything 
else I can tweak to try and get a balance?

I've upped the statistics target from its default of 10 to 100, which I think 
is probably a good idea anyway, but it doesn't affect this query plan.
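
For reference, upping the target per column looks like this (column choice
illustrative):

ALTER TABLE credit ALTER COLUMN "date" SET STATISTICS 100;
ANALYZE credit;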

Any ideas?

My initial testing was done on 8.2; are there any factors that might mean I'm 
better off with enable_sort on in 8.3?

Regards
Glyn






[GENERAL] Inheritance question

2009-01-16 Thread Glyn Astill
Hi chaps,

I've got a question about inheritance here, and I think I may have gotten the 
wrong end of the stick as to how it works, or at least when to use it.

What I intended to do was have a schema "audit" with an empty set of tables in 
it, then each quarter restore our audit data into schemas such as 
"audit_Q1_2009" etc. Then alter the tables in the audit_Q1_2009 schema to 
inherit the audit schema, etc and so on for audit_Q2_2009.

This appears to work: the audit schema then looks as if it contains everything 
in the other schemas.

However this isn't very efficient as soon as I try to order the data; even with 
only one table getting inherited it does a sort rather than using the index on 
the child table.

Is this because the inheritance works like a view, and it basically has to 
build the view before ordering it?

For example in audit_Q1_2009 the table at_price has an index on trigger_id

SEE=# explain select * from audit.at_price order by trigger_id limit 100;
 QUERY PLAN

 Limit  (cost=100095726.71..100095726.96 rows=100 width=820)
   ->  Sort  (cost=100095726.71..100098424.83 rows=1079251 width=820)
 Sort Key: audit.at_price.trigger_id
 ->  Result  (cost=0.00..54478.51 rows=1079251 width=820)
   ->  Append  (cost=0.00..54478.51 rows=1079251 width=820)
 ->  Seq Scan on at_price  (cost=0.00..10.90 rows=90 
width=820)
 ->  Seq Scan on at_price  (cost=0.00..54467.61 
rows=1079161 width=280)


SEE=# explain select * from "audit_Q1_2009".at_price order by trigger_id limit 
100;
   QUERY PLAN

 Limit  (cost=0.00..7.37 rows=100 width=280)
   ->  Index Scan using at_price_pkey on at_price  (cost=0.00..79537.33 
rows=1079161 width=280)
(2 rows)


Any suggestions would be appreciated.
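
One workaround sketch, for the archives: query the child schemas explicitly so
each branch can use its own index (the child list has to be maintained by
hand):

(SELECT * FROM "audit_Q1_2009".at_price ORDER BY trigger_id LIMIT 100)
UNION ALL
(SELECT * FROM "audit_Q2_2009".at_price ORDER BY trigger_id LIMIT 100)
ORDER BY trigger_id LIMIT 100;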








[GENERAL] Diff tool for two schema

2009-01-16 Thread Glyn Astill
Anyone know of a decent diff tool for comparing two schemas?

I Had a go with 

http://apgdiff.sourceforge.net/

but it appears it doesn't quote its SQL properly. A shame, otherwise it'd be 
just what I need.






Re: [GENERAL] Planner picking topsey turvey plan?

2008-12-08 Thread Glyn Astill

> From: Tom Lane <[EMAIL PROTECTED]>
> 
> You've provided no evidence that this is a bad plan.
> 

Looks like I didn't take the time to understand properly what the explains were 
showing.

> In particular, the plan you seem to think would be better
> would involve
> an estimated 153 iterations of the cost-15071 hash
> aggregation, which
> simple arithmetic shows is more expensive than the plan it
> did choose.
> 

I'd totally missed that all the cost was in the view that I'd created.

Thanks Tom




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [ADMIN] [GENERAL] Planner picking topsey turvey plan?

2008-12-06 Thread Glyn Astill
Explain analyze below, 


DB=# explain analyze select a.artist, a.date, b.mult_ref, b.items, b.parts from 
(show a inner join orders b on a.code = b.show) where artist = 'ALKALINE TRIO'; 
 QUERY PLAN
---
 Hash Join  (cost=1583955.94..1794350.36 rows=1552 width=70) (actual 
time=231496.678..243243.711 rows=892 loops=1)
   Hash Cond: (a.show = a.code)
   ->  GroupAggregate  (cost=1583418.91..1737354.68 rows=4104954 width=40) 
(actual time=223204.620..241813.067 rows=2856379 loops=1)
 ->  Sort  (cost=1583418.91..1593681.29 rows=4104954 width=40) (actual 
time=223204.567..231296.896 rows=4104964 loops=1)
   Sort Key: b.mult_ref, a.show
   Sort Method:  external merge  Disk: 224328kB
   ->  Hash Left Join  (cost=321999.38..795776.58 rows=4104954 
width=40) (actual time=14850.320..165804.778 rows=4104964 loops=1)
 Hash Cond: (a.transno = b.transno)
 ->  Seq Scan on credit a  (cost=0.00..268740.54 
rows=4104954 width=31) (actual time=7.563..76901.901 rows=4104954 loops=1)
 ->  Hash  (cost=160885.28..160885.28 rows=8775528 
width=18) (actual time=14831.810..14831.810 rows=8775528 loops=1)
   ->  Seq Scan on mult_ord b  (cost=0.00..160885.28 
rows=8775528 width=18) (actual time=4.716..4952.254 rows=8775528 loops=1)
   ->  Hash  (cost=535.28..535.28 rows=140 width=26) (actual 
time=228.599..228.599 rows=54 loops=1)
 ->  Bitmap Heap Scan on show a  (cost=9.49..535.28 rows=140 width=26) 
(actual time=77.723..228.488 rows=54 loops=1)
   Recheck Cond: ((artist)::text = 'ALKALINE TRIO'::text)
   ->  Bitmap Index Scan on show_index07  (cost=0.00..9.46 rows=140 
width=0) (actual time=62.228..62.228 rows=54 loops=1)
 Index Cond: ((artist)::text = 'ALKALINE TRIO'::text)
 Total runtime: 243367.640 ms
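(Looking at it again, that sort spills ~220MB to disk as an external merge; 
bumping work_mem for the session, e.g. SET work_mem = '512MB' if the RAM is 
spare, might avoid the spill - though the in-memory footprint of a sort is 
usually rather larger than the on-disk figure.)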



--- On Sat, 6/12/08, Scott Marlowe <[EMAIL PROTECTED]> wrote:

> From: Scott Marlowe <[EMAIL PROTECTED]>
> Subject: Re: [ADMIN] [GENERAL] Planner picking topsey turvey plan?
> To: [EMAIL PROTECTED]
> Cc: pgsql-general@postgresql.org, [EMAIL PROTECTED]
> Date: Saturday, 6 December, 2008, 8:35 PM
> what does explain analyze yourqueryhere say?
> 
> On Sat, Dec 6, 2008 at 1:33 PM, Glyn Astill
> <[EMAIL PROTECTED]> wrote:
> > Anyone?
> >
> >
> > --- On Fri, 5/12/08, Glyn Astill
> <[EMAIL PROTECTED]> wrote:
> >
> >> From: Glyn Astill <[EMAIL PROTECTED]>
> >> Subject: [GENERAL] Planner picking topsey turvey
> plan?
> >> To: pgsql-general@postgresql.org
> >> Date: Friday, 5 December, 2008, 2:23 PM
> >> Hi people,
> >>
> >> Does anyone know how I can change what I'm
> doing to get
> >> pgsql to pick a better plan?
> >>
> >> I'll explain what I've done below but
> please
> >> forgive me if I interpret the plans wrong as I try
> to
> >> describe, I've split it into 4 points to try
> and ease
> >> the mess of pasting in the plans..
> >>
> >>
> >> 1) I've created a view "orders" that
> joins
> >> two tables "credit" and
> "mult_ord"
> >> together as below:
> >>
> >> CREATE VIEW orders AS
> >>   SELECT b.mult_ref, a.show, MIN(a.transno) AS
> >> "lead_transno", COUNT(a.transno) AS
> >> "parts", SUM(a.tickets) AS
> "items",
> >> SUM(a.value) AS "value"
> >>   FROM (credit a LEFT OUTER JOIN mult_ord b ON
> a.transno =
> >> b.transno)
> >>   GROUP BY b.mult_ref, a.show;
> >>
> >>
> >>
> >> 2) And an explain on that view comes out as below,
> it's
> >> using the correct index for the field show on
> >> "credit" which doesn't look too bad
> to me:
> >>
> >> DB=# explain select a.artist, a.date, b.mult_ref,
> b.items,
> >> b.parts from (show a inner join orders b on a.code
> = b.show)
> >> where b.show = 357600;
> >>   
> QUERY PLAN
> >>
> 
> >>  Nested Loop  (cost=15050.79..15099.68 rows=1013
> width=70)
> >>->  Index Scan using show_index01 on show a
> >> (cost=0.00..8.37 rows=1 width=26)
> >>  Index Co

Re: [GENERAL] Planner picking topsey turvey plan?

2008-12-06 Thread Glyn Astill
Anyone?


--- On Fri, 5/12/08, Glyn Astill <[EMAIL PROTECTED]> wrote:

> From: Glyn Astill <[EMAIL PROTECTED]>
> Subject: [GENERAL] Planner picking topsey turvey plan?
> To: pgsql-general@postgresql.org
> Date: Friday, 5 December, 2008, 2:23 PM
> Hi people,
> 
> Does anyone know how I can change what I'm doing to get
> pgsql to pick a better plan?
> 
> I'll explain what I've done below but please
> forgive me if I interpret the plans wrong as I try to
> describe, I've split it into 4 points to try and ease
> the mess of pasting in the plans..
> 
> 
> 1) I've created a view "orders" that joins
> two tables "credit" and "mult_ord"
> together as below:
> 
> CREATE VIEW orders AS
>   SELECT b.mult_ref, a.show, MIN(a.transno) AS
> "lead_transno", COUNT(a.transno) AS
> "parts", SUM(a.tickets) AS "items",
> SUM(a.value) AS "value"
>   FROM (credit a LEFT OUTER JOIN mult_ord b ON a.transno =
> b.transno) 
>   GROUP BY b.mult_ref, a.show;
> 
> 
> 
> 2) And an explain on that view comes out as below, it's
> using the correct index for the field show on
> "credit" which doesn't look too bad to me:
> 
> DB=# explain select a.artist, a.date, b.mult_ref, b.items,
> b.parts from (show a inner join orders b on a.code = b.show)
> where b.show = 357600;
>QUERY PLAN
> 
>  Nested Loop  (cost=15050.79..15099.68 rows=1013 width=70)
>->  Index Scan using show_index01 on show a 
> (cost=0.00..8.37 rows=1 width=26)
>  Index Cond: (code = 357600::numeric)
>->  HashAggregate  (cost=15050.79..15071.05 rows=1013
> width=39)
>  ->  Nested Loop Left Join  (cost=0.00..15035.60
> rows=1013 width=39)
>->  Index Scan using credit_index04 on
> credit a  (cost=0.00..4027.30 rows=1013 width=31)
>  Index Cond: (show = 357600::numeric)
>->  Index Scan using mult_ord_index02 on
> mult_ord b  (cost=0.00..10.85 rows=1 width=17)
>  Index Cond: (a.transno = b.transno)
> (9 rows)
> 
> 
> 
> 3) Then I have a table called "show" that is
> indexed on the artist field, and a plan for listing the
> shows for an artist is as below, again this doesn't look
> too bad to me, as it's using the index on artist.
> 
> DB=# explain select * from show where artist =
> 'ALKALINE TRIO';
>  QUERY PLAN
> -
>  Bitmap Heap Scan on show  (cost=9.59..582.41 rows=153
> width=348)
>Recheck Cond: ((artist)::text = 'ALKALINE
> TRIO'::text)
>->  Bitmap Index Scan on show_index07 
> (cost=0.00..9.56 rows=153 width=0)
>  Index Cond: ((artist)::text = 'ALKALINE
> TRIO'::text)
> (4 rows)
> 
> 
> 
> 4) So.. I guess I can join "show" ->
> "orders", expecting an index scan on
> "show" for the artist, then an index scan on
> "orders" for each show.
> 
> However it seems the planner has other ideas, it just looks
> backwards to me:
> 
> DB=# explain select a.artist, a.date, b.mult_ref, b.items,
> b.parts from (show a inner join orders b on a.code = b.show)
> where artist = 'ALKALINE TRIO';
>  QUERY PLAN
> 
>  Hash Join  (cost=1576872.96..1786175.37 rows=1689
> width=70)
>Hash Cond: (a.show = a.code)
>->  GroupAggregate  (cost=1576288.64..1729424.39
> rows=4083620 width=39)
>  ->  Sort  (cost=1576288.64..1586497.69
> rows=4083620 width=39)
>Sort Key: b.mult_ref, a.show
>->  Hash Left Join 
> (cost=321406.05..792886.22 rows=4083620 width=39)
>  Hash Cond: (a.transno = b.transno)
>  ->  Seq Scan on credit a 
> (cost=0.00..267337.20 rows=4083620 width=31)
>  ->  Hash 
> (cost=160588.80..160588.80 rows=8759380 width=17)
>->  Seq Scan on mult_ord b 
> (cost=0.00..160588.80 rows=8759380 width=17)
>->  Hash  (cost=582.41..582.41 rows=153 width=26)
>  ->  Bitmap Heap Scan on show a 
> (cost=9.59..582.41 rows=153 width=26)
>Recheck Cond: ((artist)::text =
> 'ALKALINE TRIO'::text)
>->  Bitmap Index Scan on show_index07 
> (cost=0.00..9.56 rows=153 width=0)
>  Index Cond: ((artist)::text =
> 'ALKALINE TRIO'::text)
> (15 rows)
> 
> Any idea if I can get around this?
> 
> 
> 
> 
> 
> 
> -- 
> Sent via pgsql-general mailing list
> (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] in transaction - safest way to kill

2008-12-05 Thread Glyn Astill

select pg_cancel_backend(<pid>);
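e.g., to find the pid first (pg_stat_activity column names as of 8.3):

select procpid, usename, current_query
from pg_stat_activity
where current_query = '<IDLE> in transaction';

select pg_cancel_backend(12345);  -- the procpid from the first query

Note that pg_cancel_backend() only cancels a running query; if the backend is 
sat idle in an open transaction you may still end up having to kill the 
process, as you did.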


--- On Fri, 5/12/08, William Temperley <[EMAIL PROTECTED]> wrote:

> From: William Temperley <[EMAIL PROTECTED]>
> Subject: [GENERAL]  in transaction - safest way to kill
> To: pgsql-general@postgresql.org
> Date: Friday, 5 December, 2008, 2:08 PM
> Hi all
> 
> Could anyone tell me what's the best thing to with idle
> transactions
> that are holding locks?
> 
> I just killed the process as I wanted to get on with some
> work. I'm
> just not sure this is a good idea when we go into
> production.
> 
> Cheers
> 
> Will T
> 
> -- 
> Sent via pgsql-general mailing list
> (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Planner picking topsey turvey plan?

2008-12-05 Thread Glyn Astill
Hi people,

Does anyone know how I can change what I'm doing to get pgsql to pick a better 
plan?

I'll explain what I've done below, but please forgive me if I interpret the 
plans wrong as I try to describe them; I've split this into 4 points to try 
and ease the mess of pasting in the plans...


1) I've created a view "orders" that joins two tables "credit" and "mult_ord" 
together as below:

CREATE VIEW orders AS
  SELECT b.mult_ref, a.show, MIN(a.transno) AS "lead_transno", COUNT(a.transno) 
AS "parts", SUM(a.tickets) AS "items", SUM(a.value) AS "value"
  FROM (credit a LEFT OUTER JOIN mult_ord b ON a.transno = b.transno) 
  GROUP BY b.mult_ref, a.show;



2) And an explain on that view comes out as below, it's using the correct index 
for the field show on "credit" which doesn't look too bad to me:

DB=# explain select a.artist, a.date, b.mult_ref, b.items, b.parts from (show a 
inner join orders b on a.code = b.show)
where b.show = 357600;
   QUERY PLAN

 Nested Loop  (cost=15050.79..15099.68 rows=1013 width=70)
   ->  Index Scan using show_index01 on show a  (cost=0.00..8.37 rows=1 
width=26)
 Index Cond: (code = 357600::numeric)
   ->  HashAggregate  (cost=15050.79..15071.05 rows=1013 width=39)
 ->  Nested Loop Left Join  (cost=0.00..15035.60 rows=1013 width=39)
   ->  Index Scan using credit_index04 on credit a  
(cost=0.00..4027.30 rows=1013 width=31)
 Index Cond: (show = 357600::numeric)
   ->  Index Scan using mult_ord_index02 on mult_ord b  
(cost=0.00..10.85 rows=1 width=17)
 Index Cond: (a.transno = b.transno)
(9 rows)



3) Then I have a table called "show" that is indexed on the artist field, and a 
plan for listing the shows for an artist is as below, again this doesn't look 
too bad to me, as it's using the index on artist.

DB=# explain select * from show where artist = 'ALKALINE TRIO';
 QUERY PLAN
-
 Bitmap Heap Scan on show  (cost=9.59..582.41 rows=153 width=348)
   Recheck Cond: ((artist)::text = 'ALKALINE TRIO'::text)
   ->  Bitmap Index Scan on show_index07  (cost=0.00..9.56 rows=153 width=0)
 Index Cond: ((artist)::text = 'ALKALINE TRIO'::text)
(4 rows)



4) So.. I guess I can join "show" -> "orders", expecting an index scan on 
"show" for the artist, then an index scan on "orders" for each show.

However it seems the planner has other ideas, it just looks backwards to me:

DB=# explain select a.artist, a.date, b.mult_ref, b.items, b.parts from (show a 
inner join orders b on a.code = b.show)
where artist = 'ALKALINE TRIO';
 QUERY PLAN

 Hash Join  (cost=1576872.96..1786175.37 rows=1689 width=70)
   Hash Cond: (a.show = a.code)
   ->  GroupAggregate  (cost=1576288.64..1729424.39 rows=4083620 width=39)
 ->  Sort  (cost=1576288.64..1586497.69 rows=4083620 width=39)
   Sort Key: b.mult_ref, a.show
   ->  Hash Left Join  (cost=321406.05..792886.22 rows=4083620 
width=39)
 Hash Cond: (a.transno = b.transno)
 ->  Seq Scan on credit a  (cost=0.00..267337.20 
rows=4083620 width=31)
 ->  Hash  (cost=160588.80..160588.80 rows=8759380 width=17)
   ->  Seq Scan on mult_ord b  (cost=0.00..160588.80 
rows=8759380 width=17)
   ->  Hash  (cost=582.41..582.41 rows=153 width=26)
 ->  Bitmap Heap Scan on show a  (cost=9.59..582.41 rows=153 width=26)
   Recheck Cond: ((artist)::text = 'ALKALINE TRIO'::text)
   ->  Bitmap Index Scan on show_index07  (cost=0.00..9.56 rows=153 
width=0)
 Index Cond: ((artist)::text = 'ALKALINE TRIO'::text)
(15 rows)

Any idea if I can get around this?
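One workaround that might be worth trying is to bypass the view and push the 
artist restriction into the aggregate by hand, something like:

SELECT s.artist, s.date, o.mult_ref, o.items, o.parts
FROM show s
INNER JOIN (
    SELECT b.mult_ref, a.show,
           COUNT(a.transno) AS parts,
           SUM(a.tickets)  AS items
    FROM credit a LEFT OUTER JOIN mult_ord b ON a.transno = b.transno
    WHERE a.show IN (SELECT code FROM show WHERE artist = 'ALKALINE TRIO')
    GROUP BY b.mult_ref, a.show
) o ON s.code = o.show
WHERE s.artist = 'ALKALINE TRIO';

That way the aggregate only sees the credit rows for the shows in question, 
so the index on credit.show can be used.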






-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 8.3 libpq.dll not working on some versions of windows

2008-11-15 Thread Glyn Astill
--- On Sat, 15/11/08, Tony Caduto <[EMAIL PROTECTED]> wrote:
> Hi,
> We have been running into issues where the 8.3.x versions
> of libpq.dll will not load in certain
> versions of windows and WINE(does not load at all on wine).
> 
> It seems to be hit and miss on Windows XP, mostly seems to
> affect SP3 and some SP2 installs of XP.
> 
> I have only been able to get around this by installing a
> much older version of libpq.dll. 
> And I did have all the dependencies installed along with
> the DLL, it just plain refuses to load.

Pretty sure I've used most 8.3.x versions here on both sp2 and 3.

How have you checked you have all the dependencies? (I like depends.exe)

http://www.dependencywalker.com/




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [Slony1-general] ERROR: incompatible library

2008-11-13 Thread Glyn Astill

--- On Wed, 12/11/08, Tony Fernandez <[EMAIL PROTECTED]> wrote:

> Date: Wednesday, 12 November, 2008, 10:52 PM
> Hello lists,
> 
>  
> 
> I am trying to run Slony on a Master Postgres 8.1.11
> replicating to a
> Slave same version and 2nd Slave Postgres 8.3.4.

> 
> I am getting the following error:
> 
>  
> 
> :14: PGRES_FATAL_ERROR load
> '$libdir/xxid';  - ERROR:
> incompatible library "/usr/lib/pgsql/xxid.so":
> missing magic block
> 
> HINT:  Extension libraries are required to use the
> PG_MODULE_MAGIC
> macro.
> 
> :14: Error: the extension for the xxid data
> type cannot be loaded
> in database 'dbname=hdap host=10.0.100.234 port=6543
> user=myuser
> password=mp'

I think you've probably built slony against one version of postgres and then 
tried to use it with another. You must build against 8.1.11 and then 
separately for 8.3.4, using the same version of slony of course.
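i.e. two separate builds from the same slony tarball, each pointed at the 
right pg_config, along these lines (paths are examples):

./configure --with-pgconfigdir=/usr/lib/postgresql/8.1/bin && make && make install
./configure --with-pgconfigdir=/usr/lib/postgresql/8.3/bin && make && make install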




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] bytea field, a c function and pgcrypto driving me mad

2008-10-30 Thread Glyn Astill

> ISTM that in this line:
> 
> keying = (text *)palloc( keylen + unamelen );
> 
> You forgot to include the length of the header VARHDRSZ.
> 

Aha, that'd be it, it's been a long day.
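For the archive, since the paste link will rot: the corrected function 
probably looks something like this (reconstructed from the description, so 
the details are guesses):

#include "postgres.h"
#include "fmgr.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(testf);

Datum
testf(PG_FUNCTION_ARGS)
{
    text       *uname = PG_GETARG_TEXT_P(0);
    const char *key = "abcdefghijklmnopqr";
    int32       keylen = strlen(key);
    int32       unamelen = VARSIZE(uname) - VARHDRSZ;
    text       *keying;

    /* the original bug: palloc(keylen + unamelen), omitting VARHDRSZ */
    keying = (text *) palloc(VARHDRSZ + keylen + unamelen);
    SET_VARSIZE(keying, VARHDRSZ + keylen + unamelen);
    memcpy(VARDATA(keying), key, keylen);
    memcpy(VARDATA(keying) + keylen, VARDATA(uname), unamelen);

    PG_RETURN_TEXT_P(keying);
}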

Thanks Martijn




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] bytea field, a c function and pgcrypto driving me mad

2008-10-30 Thread Glyn Astill
Hi chaps,

I think I'm going to struggle to describe this, but hopefully someone can 
squint and see where I'm going wrong.

I've got a C function called "testf"; all it does is take some text and 
prepend "abcdefghijklmnopqr" onto it. I use it to pass a key into 
pgp_sym_encrypt/decrypt working on a bytea field in a table. The problem is 
that once the string I pass to "testf" is longer than 10 characters it stops 
working when I use it with the bytea column and pgp_sym_decrypt, but it 
appears to work fine on its own.

1) The source is here:

http://privatepaste.com/890Bj3FGW0

2) I created a little makefile, as follows:

MODULES = testf
PGXS := $(shell pg_config --pgxs)
include $(PGXS)

3) Then I did make, make install and created the function in the database:

CREATE OR REPLACE FUNCTION
  testf( TEXT )
RETURNS
  TEXT
AS
  'testf.so', 'testf'
LANGUAGE
  C
STRICT
IMMUTABLE;

REVOKE ALL ON FUNCTION testf( TEXT ) FROM PUBLIC;
GRANT EXECUTE ON FUNCTION testf( TEXT ) TO admins;

4) I created a table mytest as follows:

CREATE TABLE mytest(
  username TEXT PRIMARY KEY,
  password BYTEA NOT NULL
);

5) Now with a 10 character string passed to ftest this works:

TEST=# insert into mytest (username,password) values ('short_user', 
pgp_sym_encrypt('testword', testf('short_user')));
INSERT 0 1
TEST=# select pgp_sym_decrypt(password, testf('short_user')) from mytest where 
username = 'short_user';
 pgp_sym_decrypt
-
 testword
(1 row)

6) However if the I make the string longer, the decryption fails:

TEST=# insert into mytest (username,password) values ('longer_user', 
pgp_sym_encrypt('testword', testf('longer_user')));
INSERT 0 1
TEST=# select pgp_sym_decrypt(password, testf('longer_user')) from mytest where 
username = 'longer_user';
ERROR:  Wrong key or corrupt data

But the C function appears to be working on its own:

TEST=# select testf('longer_user');
 testf
---
 abcdefghijklmnopqrlonger_user
(1 row)

7) But, if I insert the data into the table without using my function it works:

TEST=# insert into mytest (username,password) values ('longer_user', 
pgp_sym_encrypt('testword', 'abcdefghijklmnopqrlonger_user'));
INSERT 0 1
TEST=# select pgp_sym_decrypt(password, testf('longer_user')) from mytest where 
username = 'longer_user';
 pgp_sym_decrypt
-
 testword
(1 row)


So it appears that my function only works in conjunction with pgp_sym_encrypt 
on an insert when the text value I pass into it is 10 characters or fewer.

It's driving me nuts, can anyone see what I'm doing wrong?

Thanks
Glyn




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Autovacuum and relfrozenxid

2008-10-29 Thread Glyn Astill

> 
> If there's no update activity on that table, this is to
> be expected.
> 

Hmm, there is activity on the table, so I'm guessing I haven't got autovacuum 
tuned aggressively enough.




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Autovacuum and relfrozenxid

2008-10-29 Thread Glyn Astill
Hi chaps,

I've noticed age(relfrozenxid) of some of our tables approaching 
vacuum_freeze_min_age; am I right in thinking this is nothing to worry about, 
and that autovacuum will just get invoked for those tables?
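For reference, I'm checking with a query along these lines:

SELECT relname, age(relfrozenxid)
FROM pg_class
WHERE relkind = 'r'
ORDER BY 2 DESC
LIMIT 20;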

Even if it isn't, should I be tuning autovacuum so that those tables should 
have been vacuumed before it comes to this?

Glyn




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Tips on how to efficiently debugging PL/PGSQL

2008-10-23 Thread Glyn Astill
> From: [EMAIL PROTECTED] <[EMAIL PROTECTED]>
> Subject: [GENERAL] Tips on how to efficiently debugging PL/PGSQL
> To: pgsql-general@postgresql.org
> Date: Thursday, 23 October, 2008, 6:19 PM
> Just to seek some tips on how to efficiently debug PL/SQL.
> 
> One thing that bugs me in particular is the inability to
> trace a SQL line 
> number in an error message to the line in my PL/PGSQL code.
> 

EDB have a debugger that integrates with pgAdmin:

http://pgfoundry.org/projects/edb-debugger/
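Failing that, the low-tech option is sprinkling RAISE NOTICE through the 
function while you narrow things down, e.g. (v_id is just a stand-in 
variable):

RAISE NOTICE 'after lookup, v_id = %', v_id;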




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Using a variable as tablename ins plpgsql?

2008-10-20 Thread Glyn Astill
Hi people,

Hopefully this is a quick one: I want to pass a table name into a plpgsql 
function and then use that table name in my queries.

Is EXECUTE the only way to do this?
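For context, this is the kind of thing I'm after (count_rows is just a 
made-up example):

CREATE OR REPLACE FUNCTION count_rows(p_table text) RETURNS bigint AS $$
DECLARE
    v_count bigint;
BEGIN
    -- quote_ident guards against odd or malicious table names
    EXECUTE 'SELECT count(*) FROM ' || quote_ident(p_table) INTO v_count;
    RETURN v_count;
END;
$$ LANGUAGE plpgsql;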

Ta
Glyn







Send instant messages to your online friends http://uk.messenger.yahoo.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PQescapestringConn not found in libpq.dll

2008-10-17 Thread Glyn Astill

> > Apart from lacking functionality, is there anything
> else I should be aware of i.e. could this cause us any
> serious problems?
> 
> You really need to have a word with that application
> vendor.
> 

Thanks Tom, I will do.


Send instant messages to your online friends http://uk.messenger.yahoo.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PQescapestringConn not found in libpq.dll

2008-10-17 Thread Glyn Astill

> It sounds like what you're actually using is an 8.1 or
> older libpq.dll.

Sorry to steer this off the topic a bit, but we have a 3rd party app that 
insists on using libpq.dll version 8.0.1.5031 and we're on pgsql v 8.3.4. 

Apart from lacking functionality, is there anything else I should be aware of 
i.e. could this cause us any serious problems?


Send instant messages to your online friends http://uk.messenger.yahoo.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] WARNING: 25P01: there is no transaction in progress

2008-10-01 Thread Glyn Astill
 
> > Ah,
> > 
> > It just hit me that I probably logged all the wrong
> type of stuff there. I should have been logging statements
> shouldn't I?
> > 
> > http://privatepaste.com/6f1LYISojo
> > 
> > I think this shows up that they're sending an
> extra commit transaction on line 36.
> > 
> > Could someone do us a favour and check I've not
> misread that?
> 
> Looks like it to me.
> 
> Notice that they frequently issue two "start
> transaction" in a row.  Seems
> like a flaw in their programming logic somewhere.
> 

Yeah I agree. I think they connect to template1 initially to get a list of 
the databases; then, once you choose a database, their software creates 
another connection to it, and they issue transaction blocks for every 
connection.

A bit mad.




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] WARNING: 25P01: there is no transaction in progress

2008-10-01 Thread Glyn Astill

> 
> If you're using connection pooling it's possible
> that the a connection
> is getting reused and a commit is happening there.
> 
> It's not an uncommon practice to do a rollback when
> first getting a
> shared connection to make sure it's fresh and clean...


That's interesting to hear.

Although we do have a connection pool for our clients to connect through, we 
are not connected through it when we do our conversions. Also, the same 
behaviour occurs when connecting to my test setup, which doesn't have any 
connections going through a connection pool.




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] WARNING: 25P01: there is no transaction in progress

2008-10-01 Thread Glyn Astill

> > I presume it's issuing some sort of commit or
> rollback without a begin, however the programs authors are
> telling me that's not the case and their software is not
> at fault.
> 
> Of course their software can't be at fault, as it is
> entirely bug free ;-)
> 
> You could turn on statement logging in PostgreSQL and
> analyse that.
> That is more useful than trying to guess what the
> proprietary software
> is doing under the hood.

Haha, yes, I just realized I'd logged all the wrong stuff. See the message I 
just posted.

Thanks
Glyn




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] WARNING: 25P01: there is no transaction in progress

2008-10-01 Thread Glyn Astill
Ah,

It just hit me that I probably logged all the wrong type of stuff there. I 
should have been logging statements, shouldn't I?
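(For anyone searching the archives later: that means log_statement = 'all' in 
postgresql.conf; I believe the first paste was taken with log_min_messages 
turned up to debug instead, which is why it was full of StartTransaction / 
CommitTransaction lines.)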

http://privatepaste.com/6f1LYISojo

I think this shows that they're sending an extra COMMIT TRANSACTION on line 
36.

Could someone do us a favour and check I've not misread that?

Ta
Glyn


--- On Wed, 1/10/08, Glyn Astill <[EMAIL PROTECTED]> wrote:

> From: Glyn Astill <[EMAIL PROTECTED]>
> Subject: [GENERAL] WARNING:  25P01: there is no transaction in progress
> To: pgsql-general@postgresql.org
> Date: Wednesday, 1 October, 2008, 5:23 PM
> Hi Chaps,
> 
> I'm getting the aforementioned warning in my logs from
> a closed source piece of software.  The software helps us
> convert over some old proprietary data files, and it's
> basically just done a COPY into a newly created table, after
> the warning it then goes on to create some indexes.
> 
> I presume it's issuing some sort of commit or rollback
> without a begin, however the programs authors are telling me
> that's not the case and their software is not at fault.
> 
> Does anyone know if something else could be causing it?
> 
> I've posted a detailed log here:
> 
> http://privatepaste.com/cb0iywATbd
> 
> .. if someone would be so kind to have a quick look I'd
> appreciate it. I've posted everything, so it's about
> 1000 lines. The warnings are on lines 459 and 475.
> 
> I've tried to follow the StartTransaction /
> Endtransaction lines and I don't think I can see an
> error there. Is there a possibility its something in
> PortalRunUtility ?
> 
> Thanks
> Glyn
> 
> 
> 
> 
> -- 
> Sent via pgsql-general mailing list
> (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] WARNING: 25P01: there is no transaction in progress

2008-10-01 Thread Glyn Astill
Hi Chaps,

I'm getting the aforementioned warning in my logs from a closed source piece of 
software.  The software helps us convert over some old proprietary data files, 
and it's basically just done a COPY into a newly created table, after the 
warning it then goes on to create some indexes.

I presume it's issuing some sort of COMMIT or ROLLBACK without a BEGIN; 
however, the program's authors are telling me that's not the case and their 
software is not at fault.

Does anyone know if something else could be causing it?

I've posted a detailed log here:

http://privatepaste.com/cb0iywATbd

.. if someone would be so kind as to have a quick look I'd appreciate it. 
I've posted everything, so it's about 1000 lines. The warnings are on lines 
459 and 475.

I've tried to follow the StartTransaction / EndTransaction lines and I don't 
think I can see an error there. Is there a possibility it's something in 
PortalRunUtility?

Thanks
Glyn




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

