Re: [HACKERS] planet postgresql issue

2016-10-25 Thread hubert depesz lubaczewski
On Tue, Oct 25, 2016 at 08:36:22AM +0200, hubert depesz lubaczewski wrote:
> Same here. feed url is https://www.depesz.com/tag/postgresql/feed/ and
> as far as I can tell, it works OK.

Magnus is looking into the problem now. Seems to be something related to
networking in the box that hosts planet.

Best regards,

depesz



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] planet postgresql issue

2016-10-25 Thread hubert depesz lubaczewski
On Tue, Oct 25, 2016 at 08:28:00AM +0200, Pavel Stehule wrote:
> Hi
> 
> I got a email about issues with reading feed URL.
> 
> I checked manually URL and it looks well.
> http://okbob.blogspot.com/feeds/posts/default

Same here. feed url is https://www.depesz.com/tag/postgresql/feed/ and
as far as I can tell, it works OK.

Best regards,

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/




Re: [HACKERS] How to use brin indexes?

2014-11-22 Thread hubert depesz lubaczewski
On Sat, Nov 22, 2014 at 3:29 AM, Alvaro Herrera <alvhe...@2ndquadrant.com> wrote:

> I won't push this right away because I want to add the cross-type stuff
> to the tests, to ensure I haven't bollixed anything; I ran a few quick
> manual tests and everything seems to work.  But if Depesz wants to test
> the behavior, be my guest.  Note that you need to initdb after
> rebuilding with this patch.


Tested. Works OK.

depesz


[HACKERS] How to use brin indexes?

2014-11-21 Thread hubert depesz lubaczewski
I tried to add a BRIN index on a very simple table, to test it for
a "Waiting for" blogpost, but I can't get it to work.

I got newest git head, and ran:

$ create table t (id int8);
CREATE TABLE
(depesz@[local]:5930) 20:56:22 [depesz]
$ insert into t select generate_series(1,1000000);
INSERT 0 1000000
(depesz@[local]:5930) 20:56:31 [depesz]
$ create index b on t using brin (id);
CREATE INDEX
(depesz@[local]:5930) 20:56:42 [depesz]
$ vacuum ANALYZE t;
VACUUM
(depesz@[local]:5930) 20:56:49 [depesz]
$ explain analyze select * from t where id = 1224;
                                           QUERY PLAN
-------------------------------------------------------------------------------------------------
 Seq Scan on t  (cost=0.00..16925.00 rows=1 width=8) (actual time=0.278..59.242 rows=1 loops=1)
   Filter: (id = 1224)
   Rows Removed by Filter: 999999
 Planning time: 0.186 ms
 Execution time: 59.272 ms
(5 rows)

(depesz@[local]:5930) 20:56:58 [depesz]
$ drop index b;
DROP INDEX
(depesz@[local]:5930) 20:57:22 [depesz]
$ create index b on t using brin (id) with (pages_per_range=1);
CREATE INDEX
(depesz@[local]:5930) 20:57:35 [depesz]
$ vacuum ANALYZE t;
VACUUM
(depesz@[local]:5930) 20:57:38 [depesz]
$ explain analyze select * from t where id = 1224;
                                           QUERY PLAN
-------------------------------------------------------------------------------------------------
 Seq Scan on t  (cost=0.00..16925.00 rows=1 width=8) (actual time=0.124..56.876 rows=1 loops=1)
   Filter: (id = 1224)
   Rows Removed by Filter: 999999
 Planning time: 0.044 ms
 Execution time: 56.886 ms
(5 rows)

What is the problem? Why isn't the brin index used?

depesz


Re: [HACKERS] How to use brin indexes?

2014-11-21 Thread hubert depesz lubaczewski
> The minmax operator families don't include any cross-type operators. I'm
> not exactly sure why not.. Alvaro?
> Anyway, try: select * from t where id = 1224::int8

It works that way, but it would be great to have it working with non-casted
values too.
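For reference, once the operator matches (i.e. with the ::int8 cast), the pruning a BRIN minmax index buys can be sketched with a toy model. Everything below (the names build_brin, ranges_to_scan, the one-value-per-page simplification) is illustrative, not PostgreSQL internals:

```python
# Toy model of a BRIN minmax index: each range of heap pages is summarized
# by just (min, max) of the indexed column, and a scan must visit a range
# only when the searched value could fall inside its summary.

def build_brin(values, pages_per_range):
    """values: indexed column in physical (heap) order; one value ~ one page."""
    summaries = []
    for start in range(0, len(values), pages_per_range):
        chunk = values[start:start + pages_per_range]
        summaries.append((min(chunk), max(chunk)))
    return summaries

def ranges_to_scan(summaries, value):
    # a range whose [min, max] cannot contain the value is skipped entirely
    return [i for i, (lo, hi) in enumerate(summaries) if lo <= value <= hi]

# Sequentially generated data (like generate_series) summarizes tightly,
# so an equality probe rules out almost every range:
idx = build_brin(list(range(1, 1001)), pages_per_range=100)
print(ranges_to_scan(idx, 224))   # [2] - only 1 of 10 ranges needs scanning
```

This also shows why a smaller pages_per_range gives more, tighter summaries at the price of a bigger index.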

depesz


[HACKERS] Cube distance patch?

2014-06-18 Thread hubert depesz lubaczewski
Hi,

In September 2013, there was a patch sent by Stas Kelvich
(http://www.postgresql.org/message-id/9e07e159-e405-41e2-9889-a04f534fc...@gmail.com)
that adds indexable kNN searches to the cube contrib module.

What is needed so that it could get committed?

Regards,

depesz


[HACKERS] Question about sorting internals

2013-12-11 Thread hubert depesz lubaczewski
Hi,

before I go any further - this is only a thought experiment. I do not
plan to use such queries in real-life applications. I was just presented
with a question that I can't answer in any logical way.

There are two simple queries:

#v+
with rok2005 (miesiac,wynik) as (VALUES (1,1),(2,2)  ,(4,4),(5,NULL),(6,6))
,rok2004 (miesiac,wynik) as (VALUES (1,3)  ,(3,3),(4,5) ,(6,6))
SELECT
distinct on (miesiac) *
FROM (
SELECT miesiac, 2005 as rok, wynik FROM rok2005
union all
SELECT miesiac, 2004 as rok, wynik FROM rok2004
) as polaczone
ORDER BY miesiac, wynik desc;
#v-

#v+
with rok2005 (miesiac,wynik) as (VALUES (1,1),(2,2)  ,(4,4),(5,NULL),(6,6))
,rok2004 (miesiac,wynik) as (VALUES (1,3)  ,(3,3),(4,5) ,(6,6))
SELECT
distinct on (miesiac) *
FROM (
SELECT miesiac, 2004 as rok, wynik FROM rok2004
union all
SELECT miesiac, 2005 as rok, wynik FROM rok2005
) as polaczone
ORDER BY miesiac, wynik desc;
#v-

They differ only in the order of the queries in the union all part.

The thing is that they return the same result. Why isn't one of them
returning 2005 for the 6th miesiac?

I know I'm not sorting on rok, which means I'm getting undefined
behaviour. Fine. But what exactly is happening such that, regardless of
the order of rows in the subquery, I get the same, always lower, rok in
the output?
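The DISTINCT ON semantics at play can be modeled in a few lines; the tie-breaking observation below is a sketch of the mechanism, not PostgreSQL source behaviour:

```python
# Toy DISTINCT ON: sort by (miesiac, wynik DESC), keep the first row seen
# for each miesiac.  Python's sort is *stable*, so rows tied on the sort
# key keep their input order; PostgreSQL's quicksort makes no such promise,
# so for a tied (miesiac, wynik) the surviving rok is simply unspecified.

def distinct_on(rows):
    # rows are (miesiac, rok, wynik) tuples; DESC sorts NULLS FIRST,
    # modeled here by putting None before any value
    key = lambda r: (r[0], r[2] is not None, -(r[2] or 0))
    out, seen = [], set()
    for row in sorted(rows, key=key):
        if row[0] not in seen:
            seen.add(row[0])
            out.append(row)
    return out

union_2005_first = [(6, 2005, 6), (6, 2004, 6)]
union_2004_first = [(6, 2004, 6), (6, 2005, 6)]
print(distinct_on(union_2005_first))  # [(6, 2005, 6)]
print(distinct_on(union_2004_first))  # [(6, 2004, 6)]
# Input order decided the tie only because this sort happens to be stable;
# an unstable sort may legitimately pick either row for miesiac 6.
```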

Best regards,

depesz



Re: [HACKERS] Question about sorting internals

2013-12-11 Thread hubert depesz lubaczewski
On Wed, Dec 11, 2013 at 03:34:38PM +0530, Ashutosh Bapat wrote:
> Hi depesz,
> You might want to see their EXPLAIN VERBOSE outputs. Having one of them
> (the 2004 one) with a lesser number of rows, it might be getting picked as
> the first relation of the union and thus end up having its rows before the
> second one. Explain output would make it more clear. Also, try having the
> same number of rows in both the relations.

Explains:

                                                  QUERY PLAN
---------------------------------------------------------------------------------------------------------------
 Unique  (cost=0.44..0.48 rows=9 width=12) (actual time=0.030..0.035 rows=6 loops=1)
   Output: rok2004.miesiac, (2004), rok2004.wynik
   CTE rok2004
     ->  Values Scan on *VALUES*  (cost=0.00..0.06 rows=5 width=8) (actual time=0.001..0.003 rows=5 loops=1)
           Output: *VALUES*.column1, *VALUES*.column2
   CTE rok2005
     ->  Values Scan on *VALUES*_1  (cost=0.00..0.05 rows=4 width=8) (actual time=0.000..0.001 rows=4 loops=1)
           Output: *VALUES*_1.column1, *VALUES*_1.column2
   ->  Sort  (cost=0.32..0.35 rows=9 width=12) (actual time=0.029..0.031 rows=9 loops=1)
         Output: rok2004.miesiac, (2004), rok2004.wynik
         Sort Key: rok2004.miesiac, rok2004.wynik
         Sort Method: quicksort  Memory: 25kB
         ->  Append  (cost=0.00..0.18 rows=9 width=12) (actual time=0.007..0.018 rows=9 loops=1)
               ->  CTE Scan on rok2004  (cost=0.00..0.10 rows=5 width=12) (actual time=0.006..0.011 rows=5 loops=1)
                     Output: rok2004.miesiac, 2004, rok2004.wynik
               ->  CTE Scan on rok2005  (cost=0.00..0.08 rows=4 width=12) (actual time=0.002..0.004 rows=4 loops=1)
                     Output: rok2005.miesiac, 2005, rok2005.wynik
 Total runtime: 0.077 ms
(18 rows)

                                                  QUERY PLAN
---------------------------------------------------------------------------------------------------------------
 Unique  (cost=0.44..0.48 rows=9 width=12) (actual time=0.024..0.027 rows=6 loops=1)
   Output: rok2005.miesiac, (2005), rok2005.wynik
   CTE rok2004
     ->  Values Scan on *VALUES*  (cost=0.00..0.06 rows=5 width=8) (actual time=0.001..0.003 rows=5 loops=1)
           Output: *VALUES*.column1, *VALUES*.column2
   CTE rok2005
     ->  Values Scan on *VALUES*_1  (cost=0.00..0.05 rows=4 width=8) (actual time=0.001..0.003 rows=4 loops=1)
           Output: *VALUES*_1.column1, *VALUES*_1.column2
   ->  Sort  (cost=0.32..0.35 rows=9 width=12) (actual time=0.023..0.024 rows=9 loops=1)
         Output: rok2005.miesiac, (2005), rok2005.wynik
         Sort Key: rok2005.miesiac, rok2005.wynik
         Sort Method: quicksort  Memory: 25kB
         ->  Append  (cost=0.00..0.18 rows=9 width=12) (actual time=0.004..0.015 rows=9 loops=1)
               ->  CTE Scan on rok2005  (cost=0.00..0.08 rows=4 width=12) (actual time=0.003..0.006 rows=4 loops=1)
                     Output: rok2005.miesiac, 2005, rok2005.wynik
               ->  CTE Scan on rok2004  (cost=0.00..0.10 rows=5 width=12) (actual time=0.001..0.006 rows=5 loops=1)
                     Output: rok2004.miesiac, 2004, rok2004.wynik
 Total runtime: 0.053 ms
(18 rows)

So, it looks like rowcount is the one thing that's different. Not
entirely sure how the logic would be to make rowcount differ.

After some more talk on #postgresql, it looks like I will have to spend
some time with debugger to see what's happening there.

Best regards,

depesz





[HACKERS] What are multixactids?

2013-12-09 Thread hubert depesz lubaczewski
Hi,
when working on fixing the bug related to vacuum freeze, I found out
that there is something called MultiXactId.

Searching docs showed that it is mentioned only once, in release notes
to 9.3.2:
http://www.postgresql.org/search/?u=%2Fdocs%2F9.3%2Fq=multixactid

What's more - I found that Peter Eisentraut already once asked about
them, and lack of documentation:
http://postgresql.1045698.n5.nabble.com/MultiXactId-concept-underdocumented-td5766754.html

So, my question is - what are multixactids, what are they used for,
where can I find any documentation/explanation/whatever?

It seems to be related in some way to the relfrozenxid/vacuum bug, but
I can't comprehend the relation without knowing what multixactid
actually is.

Best regards,

depesz



Re: [HACKERS] What are multixactids?

2013-12-09 Thread hubert depesz lubaczewski
On Mon, Dec 09, 2013 at 07:59:10PM +0100, Andreas Karlsson wrote:
> As you can see from Peter's message it is explained in
> README.tuplock[1]. Basically it is used whenever more than one lock
> is acquired on the same tuples as a reference to where the locks are
> stored. It can store updated/deleted Xid for the tuple so it needs
> to be persisted.
> I recommend you read the section in README.tuplock.
> 1. https://github.com/postgres/postgres/blob/d9250da032e723d80bb0150b9276cc544df6a087/src/backend/access/heap/README.tuplock#L68

Thanks, read that. Still, it would be good to have some information in
the normal docs, but I guess this will have to do for now.
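The core idea from README.tuplock can be condensed into a toy model. All names here are illustrative and this is nothing like the on-disk layout; it only shows the bookkeeping trick of replacing one xid with an id that names a set:

```python
# Toy model of the MultiXactId idea: a tuple header has room for a single
# locker xid, so when a second locker shows up the xid is replaced by a
# multixact id that refers to the full set of member xids, stored elsewhere.

multixacts = {}      # multixact id -> set of member xids
next_mxid = [1]

def lock_tuple(header, new_xid):
    if header is None:                       # first locker: plain xid fits
        return ('xid', new_xid)
    kind, val = header
    members = multixacts[val] if kind == 'mxid' else {val}
    mxid = next_mxid[0]; next_mxid[0] += 1
    multixacts[mxid] = members | {new_xid}   # set persisted outside the tuple
    return ('mxid', mxid)

h = lock_tuple(None, 100)          # ('xid', 100) - single locker
h = lock_tuple(h, 200)             # second locker forces a multixact
print(h, sorted(multixacts[h[1]])) # ('mxid', 1) [100, 200]
```

Since the multixact id can stand in for the updating/deleting xid in the tuple header, the member set has to survive restarts, which is why it is persisted.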

Best regards,

depesz



[HACKERS] Re: [COMMITTERS] pgsql: WITH CHECK OPTION support for auto-updatable VIEWs

2013-07-30 Thread hubert depesz lubaczewski
On Tue, Jul 30, 2013 at 09:23:19AM +0100, Dean Rasheed wrote:
>> create table some_data (id int4 primary key, payload text);
>> create view first as select * from some_data where 0 = id % 2 with
>> local check option;
>> create view second as select * from first where 0 = id with local
>> check option;
> [...]
>> the check is 0 = id % 3 - i.e. id has to be a multiple of 3. Sorry if my
>> way of writing conditionals is confusing.
> Yes it definitely looks like a typo in the test --- the definition of
> first has id % 2, so it is checking for even numbers, not for
> numbers divisible by 3.

Sorry, my bad - must have screwed up the copy/paste.
The second view is:
select * from first where 0 = id % 3 with local check option;

> As for the point about which of the checks should be failing, I
> believe that the current behaviour is correct.

In that case, can you show me what the difference is between the LOCAL
check and the CASCADED check?
Because I assumed, after reading the commit log, that LOCAL checks just
the definition of the view I'm inserting into, while the CASCADED check
checks all the views upstream.

Given the assumption that the current code works correctly - both kinds
of check also check the upstream view.

Best regards,

depesz





[HACKERS] Re: [COMMITTERS] pgsql: WITH CHECK OPTION support for auto-updatable VIEWs

2013-07-30 Thread hubert depesz lubaczewski
On Tue, Jul 30, 2013 at 11:45:47AM +0100, Dean Rasheed wrote:
> Quoting the manual:
>
> LOCAL:
>   New rows are only checked against the conditions defined directly in
> the view itself. Any conditions defined on underlying base views are
> not checked (unless they also specify the CHECK OPTION).
>
> CASCADED:
>   New rows are checked against the conditions of the view and all
> underlying base views. If the CHECK OPTION is specified, and neither
> LOCAL nor CASCADED is specified, then CASCADED is assumed.
>
> In particular, note the part about "unless they also specify the CHECK
> OPTION".

Ah. All clear now. Sorry for misreading.
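The rule quoted from the manual can be modeled in a few lines. The `violated` helper and the option encoding below are hypothetical, for illustration only:

```python
# Toy model of WITH CHECK OPTION: LOCAL checks the view itself plus any
# underlying view that *also declares* a check option; CASCADED checks
# every underlying view regardless.

def violated(views, name, row_id):
    """Return the name of the first view whose check fails, else None.
    views maps name -> (predicate, option, base); option is None,
    'local' or 'cascaded'; base is the underlying view name (or None)."""
    chain, v = [], name
    while v in views:
        pred, opt, base = views[v]
        chain.append((v, pred, opt))
        v = base
    top_option = chain[0][2]
    for i, (vname, pred, opt) in enumerate(chain):
        must_check = (i == 0) or (top_option == 'cascaded') or (opt is not None)
        if must_check and not pred(row_id):
            return vname
    return None

views = {
    'first':  (lambda i: i % 2 == 0, 'local', None),
    'second': (lambda i: i % 3 == 0, 'local', 'first'),
}
print(violated(views, 'second', 15))   # first - "first" also declares an
                                       # option, so LOCAL still enforces it

no_opt = {**views, 'first': (lambda i: i % 2 == 0, None, None)}
print(violated(no_opt, 'second', 15))  # None - without an option on "first",
                                       # LOCAL checks only "second"
```

This reproduces the ERROR from the original test case: id = 15 passes second's check (divisible by 3) but fails against first, because first itself declares a check option.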

Best regards,

depesz





[HACKERS] Re: [COMMITTERS] pgsql: WITH CHECK OPTION support for auto-updatable VIEWs

2013-07-29 Thread hubert depesz lubaczewski
On Mon, Jul 29, 2013 at 07:43:53PM -0400, Stephen Frost wrote:
> depesz,
>
> moved to -hackers..
>
> * hubert depesz lubaczewski (dep...@depesz.com) wrote:
>> create table some_data (id int4 primary key, payload text);
>> create view first as select * from some_data where 0 = id % 2 with local
>> check option;
>> create view second as select * from first where 0 = id with local check
>> option;
>>
>> insert into second (id, payload) values (15, '15 is divisible by 3, but not
>> by 2');
>> ERROR:  new row violates WITH CHECK OPTION for view "first"
>> DETAIL:  Failing row contains (15, 15 is divisible by 3, but not by 2).
>
> Interesting.
>
>> If I read it correctly, insert to second with id = 15 should work,
>> because the where on second matches, and local check shouldn't be
>> checking on first?
>
> I think you're half-right, the check should be against 'second', which
> should fail because of the '0 = id' check in that view, but we're
> reporting the failure as being associated with 'first', which I don't
> believe is correct.  Dean, thoughts..?

the check is 0 = id % 3 - i.e. id has to be a multiple of 3. Sorry if my
way of writing conditionals is confusing.

Best regards,

depesz



[HACKERS] Why are JSON extraction functions STABLE and not IMMUTABLE?

2013-04-15 Thread hubert depesz lubaczewski
In current 9.3, I see:

$ select p.proname, p.provolatile from pg_proc p join pg_namespace n on 
p.pronamespace = n.oid where n.nspname = 'pg_catalog' and p.proname ~ 'json';
  proname  | provolatile 
---+-
 json_in   | s
 json_out  | i
 json_recv | s
 json_send | s
 array_to_json | s
 array_to_json | s
 row_to_json   | s
 row_to_json   | s
 json_agg_transfn  | i
 json_agg_finalfn  | i
 json_agg  | i
 to_json   | s
 json_object_field | s
 json_object_field_text| s
 json_array_element| s
 json_array_element_text   | s
 json_extract_path | s
 json_extract_path_op  | s
 json_extract_path_text| s
 json_extract_path_text_op | s
 json_array_elements   | s
 json_array_length | s
 json_object_keys  | s
 json_each | s
 json_each_text| s
 json_populate_record  | s
 json_populate_recordset   | s
(27 rows)

Is there any particular reason extract functions
(object_field/array_element/...) can't be immutable?

I can't readily imagine a situation where output of these functions would
change for different queries.
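The point being made is that extraction is a pure function of the json text: nothing session- or time-dependent is involved, which is the defining property of an IMMUTABLE function. A Python analogue of `json_object_field` (illustrative, not the backend implementation):

```python
import json

# Extracting a field depends only on the json value itself -- same input,
# same output, on every call and in every "session".

def json_object_field(doc, key):
    return json.loads(doc).get(key)

doc = '{"a": 1, "b": [2, 3]}'
print(json_object_field(doc, 'a'))   # 1
print(json_object_field(doc, 'b'))   # [2, 3]
```

For PostgreSQL that distinction matters practically: only IMMUTABLE functions can be used in expression indexes.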

Best regards,

depesz



Re: [HACKERS] Why are JSON extraction functions STABLE and not IMMUTABLE?

2013-04-15 Thread hubert depesz lubaczewski
On Mon, Apr 15, 2013 at 11:31:39AM -0400, Andrew Dunstan wrote:
> Me either. It's an oversight, really. Unless there is any objection
> I'll change them toot sweet. What about the existing (as of 9.2)
> functions?

I don't think that the 9.2 functions are that interesting, since those
are for building json values, and as such are not really candidates for
building indexes on.

Best regards,

depesz





Re: [HACKERS] VIP: new format for psql - shell - simple using psql in shell

2012-05-27 Thread hubert depesz lubaczewski
On Sat, May 26, 2012 at 05:39:23PM +0200, Pavel Stehule wrote:
> I proposed new psql's format "shell". This format is optimized for
> processing returned result in shell:

While I generally like the idea, please note that safely reading query
output is already possible, with COPY and a proper IFS, like:

=$ psql -c "select * from t"
 a  |  b  | c 
+-+---
 a1 | b 2 | c|3
 a +| b  +| c:| 6
 4  | 5  +| 
| | 
(2 rows)


=$ psql -qAtX -c "copy (select * from t) to stdout" | while IFS=$'\t' read -r a b c; do echo -e "a=[$a] b=[$b] c=[$c]"; done
a=[a1] b=[b 2] c=[c|3]
a=[a
4] b=[b
5
] c=[c:|6]
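The reason the COPY trick is safe to read line-by-line is that COPY TO in text format escapes tab, newline and backslash inside values, so one output line is always exactly one row. A minimal unescaper for the common escapes (a sketch; the full format has more, e.g. \N for NULL and octal/hex escapes):

```python
# Parse one line of COPY ... TO STDOUT text format into its fields,
# handling the basic backslash escapes.

ESCAPES = {'t': '\t', 'n': '\n', 'r': '\r', '\\': '\\'}

def parse_copy_line(line):
    fields, cur, it = [], [], iter(line.rstrip('\n'))
    for ch in it:
        if ch == '\\':
            nxt = next(it)
            cur.append(ESCAPES.get(nxt, nxt))
        elif ch == '\t':                 # unescaped tab = field separator
            fields.append(''.join(cur)); cur = []
        else:
            cur.append(ch)
    fields.append(''.join(cur))
    return fields

# A value containing a literal tab or newline arrives escaped, on one line:
print(parse_copy_line('a1\tb 2\tc\\t3'))  # third field holds a real tab
print(parse_copy_line('a\\nb\tx'))        # first field holds a real newline
```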

That being said - I would love to get a more functional psql.

Best regards,

depesz



Re: [HACKERS] Our regex vs. POSIX on longest match

2012-03-04 Thread hubert depesz lubaczewski
On Sun, Mar 04, 2012 at 12:34:22PM -0500, Tom Lane wrote:
> Well, that's just an arbitrary example.  The cases I remember people
> complaining about in practice were the other way round: greedy
> quantifier followed by non-greedy, and they were unhappy that the
> non-greediness was effectively not respected (because the overall RE was
> taken as greedy).  So you can't fix the issue by pointing to POSIX and
> saying overall greedy is always the right thing.

I was one of those complaining, and my point was that deciding for the
whole regexp whether it's greedy or non-greedy is a bug (well, it might
be documented, but it's still *very* unexpected).

I stand by the position that mixing greedy and non-greedy operators
should be possible, and that it should work according to POLA - i.e. the
greediness of a given atom shouldn't be influenced by other atoms.
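For comparison, Perl-style backtracking engines (e.g. Python's re module) behave the way argued for here: each quantifier's greediness is resolved on its own, left to right, rather than one mode being decided for the whole RE:

```python
import re

# Per-atom greediness: a lazy quantifier grabs its minimum and a greedy one
# its maximum, independently of what the neighbouring atoms do.

m = re.match(r'x(a+?)(a*)y', 'xaaay')
print(m.groups())   # ('a', 'aa') - the lazy a+? keeps its minimum,
                    # the greedy a* takes the rest

m = re.match(r'x(a*)(a+?)y', 'xaaay')
print(m.groups())   # ('aa', 'a') - greedy first, lazy still minimal
```

This is only an illustration of the POLA behaviour discussed; PostgreSQL's regex engine is a different (Spencer/DFA-based) implementation.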

Best regards,

depesz




Re: [HACKERS] check constraint validation takes access exclusive locks

2012-02-27 Thread hubert depesz lubaczewski
On Sun, Feb 26, 2012 at 03:04:28PM +0100, Pavel Stehule wrote:
> Hello
>
> I rechecked Depesz's article -
> http://www.depesz.com/2011/07/01/waiting-for-9-2-not-valid-checks/
>
> The behaviour of current HEAD is different from the behaviour described
> in the article.
>
> "alter table a validate constraint a_a_check" needs an access exclusive
> lock and blocks table modification - I tested inserts.
>
> Is this expected behaviour?
>
> session one:
>
> postgres=# create table a(a int);
> CREATE TABLE
> postgres=# alter table a add check (a > 0) not valid;
> ALTER TABLE
> postgres=# begin;
> BEGIN
> postgres=# alter table a validate constraint a_a_check;
> ALTER TABLE
>
> session two:
>
> postgres=# update a set a = 100; -- it waits for commit in session one

Yes, looks like we have reverted to an access exclusive lock:

$ begin;
BEGIN
Time: 0.352 ms

*$ ALTER TABLE test2 ADD CHECK ( field = 0 ) NOT VALID;
ALTER TABLE
Time: 0.662 ms

*$ select * from pg_locks where pid = pg_backend_pid();
   locktype    │ database │ relation │  page  │ tuple  │ virtualxid │ transactionid │ classid │ objid  │ objsubid │ virtualtransaction │ pid  │        mode         │ granted │ fastpath
───────────────┼──────────┼──────────┼────────┼────────┼────────────┼───────────────┼─────────┼────────┼──────────┼────────────────────┼──────┼─────────────────────┼─────────┼──────────
 relation      │    16387 │    11070 │ [null] │ [null] │ [null]     │        [null] │  [null] │ [null] │   [null] │ 2/174              │ 8975 │ AccessShareLock     │ t       │ t
 virtualxid    │   [null] │   [null] │ [null] │ [null] │ 2/174      │        [null] │  [null] │ [null] │   [null] │ 2/174              │ 8975 │ ExclusiveLock       │ t       │ t
 transactionid │   [null] │   [null] │ [null] │ [null] │ [null]     │           854 │  [null] │ [null] │   [null] │ 2/174              │ 8975 │ ExclusiveLock       │ t       │ f
 relation      │    16387 │    18653 │ [null] │ [null] │ [null]     │        [null] │  [null] │ [null] │   [null] │ 2/174              │ 8975 │ AccessExclusiveLock │ t       │ f
(4 rows)

Time: 0.921 ms

Relation 18653 is table test2, of course.

*$ commit;
COMMIT

$ begin;
BEGIN
Time: 0.271 ms

*$ ALTER TABLE test2 VALIDATE CONSTRAINT test2_field_check;
ALTER TABLE
Time: 286.035 ms

*$ select * from pg_locks where pid = pg_backend_pid();
   locktype    │ database │ relation │  page  │ tuple  │ virtualxid │ transactionid │ classid │ objid  │ objsubid │ virtualtransaction │ pid  │        mode         │ granted │ fastpath
───────────────┼──────────┼──────────┼────────┼────────┼────────────┼───────────────┼─────────┼────────┼──────────┼────────────────────┼──────┼─────────────────────┼─────────┼──────────
 relation      │    16387 │    11070 │ [null] │ [null] │ [null]     │        [null] │  [null] │ [null] │   [null] │ 2/175              │ 8975 │ AccessShareLock     │ t       │ t
 virtualxid    │   [null] │   [null] │ [null] │ [null] │ 2/175      │        [null] │  [null] │ [null] │   [null] │ 2/175              │ 8975 │ ExclusiveLock       │ t       │ t
 transactionid │   [null] │   [null] │ [null] │ [null] │ [null]     │           855 │  [null] │ [null] │   [null] │ 2/175              │ 8975 │ ExclusiveLock       │ t       │ f
 relation      │    16387 │    18653 │ [null] │ [null] │ [null]     │        [null] │  [null] │ [null] │   [null] │ 2/175              │ 8975 │ AccessExclusiveLock │ t       │ f
(4 rows)

Time: 0.631 ms

And it clearly shows that validation of the constraint locked the table
with an AccessExclusiveLock, which kinda defeats the purpose of
NOT VALID/VALIDATE.
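Why the lock level matters: writes take ROW EXCLUSIVE, so whether they block depends on what the DDL holds. Below is a simplified subset of the documented table-lock conflict matrix (the lock names are real; the matrix is abbreviated, and running validation under SHARE UPDATE EXCLUSIVE is the hoped-for behaviour, not what the tested build did):

```python
# For each held lock, the set of requested lock modes it conflicts with
# (abbreviated from the PostgreSQL explicit-locking documentation).

CONFLICTS = {
    'ROW EXCLUSIVE': {'SHARE', 'SHARE ROW EXCLUSIVE',
                      'EXCLUSIVE', 'ACCESS EXCLUSIVE'},
    'SHARE UPDATE EXCLUSIVE': {'SHARE UPDATE EXCLUSIVE', 'SHARE',
                               'SHARE ROW EXCLUSIVE', 'EXCLUSIVE',
                               'ACCESS EXCLUSIVE'},
    'ACCESS EXCLUSIVE': {'ACCESS SHARE', 'ROW SHARE', 'ROW EXCLUSIVE',
                         'SHARE UPDATE EXCLUSIVE', 'SHARE',
                         'SHARE ROW EXCLUSIVE', 'EXCLUSIVE',
                         'ACCESS EXCLUSIVE'},
}

def blocks(held, wanted):
    return wanted in CONFLICTS.get(held, set())

print(blocks('ACCESS EXCLUSIVE', 'ROW EXCLUSIVE'))        # True: UPDATE waits
print(blocks('SHARE UPDATE EXCLUSIVE', 'ROW EXCLUSIVE'))  # False: UPDATE runs
```

So if VALIDATE ran under SHARE UPDATE EXCLUSIVE, concurrent inserts and updates would proceed, which is the whole point of the two-step NOT VALID/VALIDATE dance.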

Best regards,

depesz



Re: [HACKERS] [GENERAL] pg_dump -s dumps data?!

2012-02-02 Thread hubert depesz lubaczewski
On Wed, Feb 01, 2012 at 10:02:14PM +0100, Dimitri Fontaine wrote:
> The case for a table that is partly user data and partly extension data
> is very thin, I think that if I had this need I would use inheritance
> and a CHECK(user_data is true/false) constraint to filter the data.

definitely agree, i.e. I don't really see a case where we'd have data
from both the extension and normal usage in the same table,
and the overhead of tracking the source of data seems excessive.

Best regards,

depesz



Re: [HACKERS] [GENERAL] pg_dump -s dumps data?!

2012-01-31 Thread hubert depesz lubaczewski
On Mon, Jan 30, 2012 at 11:18:31PM -0500, Tom Lane wrote:
> I don't recall that we thought very hard about what should happen when
> pg_dump switches are used to produce a selective dump, but ISTM
> reasonable that if it's user data then it should be dumped only if
> data in a regular user table would be.  So I agree it's pretty broken
> that "pg_dump -t foo" will dump data belonging to a config table not
> selected by the -t switch.  I think this should be changed in both HEAD
> and 9.1 (note that HEAD will presumably return to the 9.1 behavior once
> that --exclude-table-data patch gets fixed).
>
> What's not apparent to me is whether there's an argument for doing more
> than that.  It strikes me that the current design is not very friendly
> towards the idea of an extension that creates a table that's meant
> solely to hold user data --- you'd have to mark it as "config" which
> seems a bit unfortunate terminology for that case.  Is it important to
> do something about that, and if so what?

Currently we are migrating away from using extensions. But - recently
on planet.postgresql.org there were some (more than 2) posts about
schema versioning.
EXTENSIONS, with their versions, upgrades, dependency tracking, would be
*perfect* for storing application structures, if:
1. we could use them from an arbitrary location, and not only
   install-root/share/postgresql/extension/, which usually shouldn't be
   writable by users
2. they did not interfere with pg_dump

2nd point means that I still need to be able to get:
1. full database schema dump - which can use create extension
2. single table schema dump - which, in my opinion, should use create
   table, and only schema of requested table(s) should be shown, no
   schema or data for other tables should be dumped.
3. full database data dump
4. single table data dump - in which case neither structure, nor data of
   other tables (than requested) should be emitted.

personally, I think that the marking of extension tables should be
reversed - by default they should dump data normally, just like any
other table. Only for some specific tables would you mark them with
"do not dump data by default", which would exclude their data from a
normal database-wide pg_dump.

that's how I envision working extensions, and that's how I'd like them
to work. of course your needs/opinions can be different - especially in
case when we consider extensions to be only a tool to simplify
dump/restore of contrib modules.

Best regards,

depesz



Re: [HACKERS] [GENERAL] Why extract( ... from timestamp ) is not immutable?

2012-01-30 Thread hubert depesz lubaczewski
On Wed, Jan 25, 2012 at 11:30:49AM -0500, Tom Lane wrote:
> hubert depesz lubaczewski <dep...@depesz.com> writes:
>> anyway - the point is that in \df date_part(, timestamp) says it's
>> immutable, while it is not.
>
> Hmm, you're right.  I thought we'd fixed that way back when, but
> obviously not.  Or maybe the current behavior of the epoch case
> postdates that.

is there a chance something will happen with/about it?

preferably I would see extract( epoch from timestamp ) to be really
immutable, i.e. (in my opinion) it should treat incoming data as UTC
- for epoch calculation.
Alternatively - perhaps epoch extraction should be moved to a specialized
function, which would have swapped mutability:

get_epoch(timestamptz) would be immutable
while
get_epoch(timestamp) would be stable
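The asymmetry can be illustrated with naive vs. aware datetimes in Python. This is a sketch of the argument (the get_epoch pair above is hypothetical; fixed offsets stand in for real session time zones):

```python
from datetime import datetime, timezone, timedelta

# A timestamp WITHOUT time zone pins no point in time until a zone is
# supplied, and in PostgreSQL that zone comes from session state (the
# TimeZone setting) -- hence STABLE, not IMMUTABLE.

naive = datetime(2012, 1, 30, 12, 0, 0)               # like "timestamp"

# get_epoch(timestamp): the result depends on the session's zone
epoch_in_utc    = naive.replace(tzinfo=timezone.utc).timestamp()
epoch_in_plus01 = naive.replace(tzinfo=timezone(timedelta(hours=1))).timestamp()
print(epoch_in_utc, epoch_in_plus01)   # 1327924800.0 1327921200.0

# get_epoch(timestamptz): the value already pins one instant -> IMMUTABLE
aware = datetime(2012, 1, 30, 12, 0, 0, tzinfo=timezone.utc)
print(aware.timestamp())               # 1327924800.0, everywhere
```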

Best regards,

depesz



Re: [HACKERS] [GENERAL] Why extract( ... from timestamp ) is not immutable?

2012-01-30 Thread hubert depesz lubaczewski
On Mon, Jan 30, 2012 at 10:35:21AM -0800, Josh Berkus wrote:
 
>> preferably I would see extract( epoch from timestamp ) to be really
>> immutable, i.e. (in my opinion) it should treat incoming data as UTC
>> - for epoch calculation.
>> Alternatively - perhaps epoch extraction should be moved to specialized
>> function, which would have swapped mutability:
>
> We can't have functions which are immutable or not depending on their
> inputs.  That way lies madness.

but this is exactly what's happening now.
extract( ... from timestamp ) is marked as immutable, while in some cases
(namely when you want the epoch) it should be stable, because the
function's return value changes.

Best regards,

depesz



[HACKERS] Re: [GENERAL] Strange problem with create table as select * from table;

2011-11-12 Thread hubert depesz lubaczewski
On Sun, Nov 06, 2011 at 09:34:24AM -0500, Tom Lane wrote:
> hubert depesz lubaczewski <dep...@depesz.com> writes:
>> Any chance of getting the fix in patch format so we could test it on
>> this system?
>
> http://git.postgresql.org/gitweb/?p=postgresql.git;a=patch;h=23998fe99c1220ba3a9eefee194e37ec1f14ae07

Hi,
just to close the loop - today I could finally restart the database with
the patched 8.4. Bug fixed, all works fine. Thanks a lot.

Best regards,

depesz



[HACKERS] Re: [GENERAL] Strange problem with create table as select * from table;

2011-11-06 Thread hubert depesz lubaczewski
On Fri, Nov 04, 2011 at 09:04:02PM -0400, Tom Lane wrote:
> that.  And that they are the only rows that, in addition to the above
> conditions, contain data fields wide enough to require out-of-line
> toasting.

checked lengths of the text/varchar columns in database.

there are 16 such columns in the table.
full report of lengths is in
http://www.depesz.com/various/lengths.report.gz

it was obtained using:
select length( first_text_column ) as length_1, count(*) from 
etsy_v2.receipts group by 1 order by 1;
and so on for every text column, and at the end I also made summary of
sum-of-lengths.

there is also:
http://www.depesz.com/various/lengths2.report.gz
which has the same summary, but only of the damaged rows.

As you can see the length of columns is not really special - somewhere
in the middle of all other rows. summarized length is also not special
in any way.

> These conditions together are enough to break the assumption in
> toast_insert_or_update that the old and new tuples must have the same
> value of t_hoff.  But it can only happen when the source tuple is an
> original on-disk tuple, which explains why only INSERT ... SELECT *
> causes the problem, not any variants that require projection of a new
> column set.  When it does happen, toast_insert_or_update correctly
> computes the required size of the new tuple ... but then it tells
> heap_fill_tuple to fill the data part at offset olddata->t_hoff, which
> is wrong (too small) and so the nulls bitmap that heap_fill_tuple
> concurrently constructs will overwrite the first few data bytes.  In
> your example, the table contains 49 columns so the nulls bitmap requires
> 7 bytes, just enough to overwrite the first 6 data bytes as observed.
> (In fact, given the values we see being filled in, I can confidently say
> that you have two added-since-creation null columns, no more, no less.)
>
> I can reproduce the problem with the attached test case (using the
> regression database).  With asserts enabled, the
>   Assert(new_len == olddata->t_hoff);
> fails.  With asserts off, corrupt data.
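The bitmap arithmetic in that analysis is easy to double-check: the heap tuple null bitmap carries one bit per attribute, rounded up to whole bytes:

```python
# "49 columns so the nulls bitmap requires 7 bytes": one bit per column,
# rounded up to a whole number of bytes.

def null_bitmap_bytes(ncols):
    return (ncols + 7) // 8

print(null_bitmap_bytes(49))   # 7 - enough to clobber the first data bytes
                               # when written at a too-small t_hoff
```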

How can I make the "onek" table for the test? Is it a standard table
from something?

> This is trivial to fix, now that we know there's a problem --- the
> function is only using that assumption to save itself a couple lines
> of code.  Penny wise, pound foolish :-(

Any chance of getting the fix in patch format so we could test it on
this system?

Best regards,

depesz



Re: [HACKERS] [GENERAL] pg_upgrade problem

2011-09-07 Thread hubert depesz lubaczewski
On Tue, Sep 06, 2011 at 09:21:02PM -0400, Bruce Momjian wrote:
 Tom Lane wrote:
  hubert depesz lubaczewski dep...@depesz.com writes:
   Worked a bit to get the ltree problem down to smallest possible, 
   repeatable, situation.
  
  I looked at this again and verified that indeed, commit
  8eee65c996048848c20f6637c1d12b319a4ce244 introduced an incompatible
  change into the on-disk format of ltree columns: it widened
  ltree_level.len, which is one component of an ltree on disk.
  So the crash is hardly surprising.  I think that the only thing
  pg_upgrade could do about it is refuse to upgrade when ltree columns
  are present in an 8.3 database.  I'm not sure though how you'd identify
  contrib/ltree versus some random user-defined type named ltree.
 
 It is actually easy to do using the attached patch.  I check for the
 functions that support the data type and check of they are from an
 'ltree' shared object.  I don't check actual user table type names in
 this case.

While it will prevent failures in the future, it doesn't solve my
problem now :(

Will try to do it via:
- drop indexes on ltree
- convert ltree to text
- upgrade
- convert text to ltree
- create indexes on ltree

Best regards,

depesz




Re: [HACKERS] [GENERAL] pg_upgrade problem

2011-09-06 Thread hubert depesz lubaczewski
On Mon, Sep 05, 2011 at 05:26:00PM -0400, Tom Lane wrote:
 Bruce Momjian br...@momjian.us writes:
  Odd it is dying in the memory freeing at executor close --- not in the
  ltree code.
 
 Doesn't seem odd.  The glibc complaint previously shown already
 indicates this is a memory stomp problem.
 
 --enable-cassert might (or might not) provide additional help.

recompiled with cassert.

result:

=# select * from categories where category_id = 177;
The connection to the server was lost. Attempting reset: Succeeded.

which is interesting, as the error is different.

logs show:

2011-09-06 10:28:58 UTC () [21986]: [1-1] user=[unknown],db=[unknown] LOG:  
connection received: host=[local]
2011-09-06 10:28:58 UTC ([local]) [21986]: [2-1] user=postgres,db=xxx LOG:  
connection authorized: user=postgres database=xxx
2011-09-06 10:28:58 UTC () [21977]: [2-1] user=,db= LOG:  server process (PID 
21985) was terminated by signal 11: Segmentation fault
2011-09-06 10:28:58 UTC () [21977]: [3-1] user=,db= LOG:  terminating any other 
active server processes
2011-09-06 10:28:58 UTC ([local]) [21986]: [3-1] user=postgres,db=xxx 
WARNING:  terminating connection because of crash of another server process
2011-09-06 10:28:58 UTC ([local]) [21986]: [4-1] user=postgres,db=xxx 
DETAIL:  The postmaster has commanded this server process to roll back the 
current transaction and exit, because another server process exited abnormally 
and possibly corrupted shared memory.
2011-09-06 10:28:58 UTC ([local]) [21986]: [5-1] user=postgres,db=xxx HINT: 
 In a moment you should be able to reconnect to the database and repeat your 
command.

gdb backtrace is even less helpful:

$ gdb -q -c core* /opt/pgsql-9.0.5a-int/bin/postgres

warning: Can't read pathname for load map: Input/output error.
Reading symbols from /usr/lib/libxml2.so.2...done.
Loaded symbols for /usr/lib/libxml2.so.2
Reading symbols from /lib/libdl.so.2...done.
Loaded symbols for /lib/libdl.so.2
Reading symbols from /lib/libm.so.6...done.
Loaded symbols for /lib/libm.so.6
Reading symbols from /lib/libc.so.6...done.
Loaded symbols for /lib/libc.so.6
Reading symbols from /usr/lib/libz.so.1...done.
Loaded symbols for /usr/lib/libz.so.1
Reading symbols from /lib/ld-linux-x86-64.so.2...done.
Loaded symbols for /lib64/ld-linux-x86-64.so.2
Reading symbols from /lib/libnss_files.so.2...done.
Loaded symbols for /lib/libnss_files.so.2
Core was generated by `postgres: postgres xxx [local] SELECT '.
Program terminated with signal 11, Segmentation fault.
[New process 21985]
#0  0x7fe18c235e4b in memcpy () from /lib/libc.so.6
(gdb) bt
#0  0x7fe18c235e4b in memcpy () from /lib/libc.so.6
#1  0x7fe1897532e4 in ?? ()
#2  0x in ?? ()
(gdb) 

Best regards,

depesz



Re: [HACKERS] [GENERAL] pg_upgrade problem

2011-09-06 Thread hubert depesz lubaczewski
Hi,

Worked a bit to get the ltree problem down to smallest possible, repeatable, 
situation.

Initial setup:

1. PostgreSQL 8.3.11, configured with:
./configure\
--prefix=/opt/pgsql-8.3.11-int \
--disable-rpath\
--without-perl \
--without-python   \
--without-tcl  \
--without-openssl  \
--without-pam  \
--without-krb5 \
--without-gssapi   \
--enable-nls   \
--enable-integer-datetimes \
--enable-thread-safety \
--with-libxml  \
--with-libxslt \
--without-ldap
Built and installed with contrib modules.

2. PostgreSQL 9.0.5 (pre), from git checkout, head of 9.0 branch. Configured 
with:
./configure \
--prefix=/opt/pgsql-9.0.5a-int \
--enable-debug \
--enable-cassert \
--disable-rpath \
--without-perl \
--without-python \
--without-tcl \
--without-openssl \
--without-pam \
--without-krb5 \
--without-gssapi \
--enable-nls \
--enable-integer-datetimes \
--enable-thread-safety \
--with-libxml \
--with-libxslt \
--without-ldap

Now, with these two in place, I can run the test:

=$ mkdir /var/tmp/test
=$ cd /var/tmp/test/
=$ export LD_LIBRARY_PATH=/opt/pgsql-8.3.11-int/lib/
=$ export PATH=/opt/pgsql-8.3.11-int/bin:$PATH
=$ mkdir data-8.3
=$ initdb -D data-8.3
=$ pg_ctl -D data-8.3 -l logfile-8.3 start
=$ psql -d postgres -f /opt/pgsql-8.3.11-int/share/contrib/ltree.sql
=$ psql -d postgres -c create table z (x ltree)
=$ psql -d postgres -c insert into z (x) values ('a.b')
=$ pg_ctl -D data-8.3/ stop
=$ export LD_LIBRARY_PATH=/opt/pgsql-9.0.5a-int/lib/:/opt/pgsql-8.3.11-int/lib/
=$ export 
PATH=/opt/pgsql-9.0.5a-int/bin/:/opt/pgsql-8.3.11-int/bin/:~/bin:/usr/local/bin:/usr/bin:/bin
=$ mkdir data-9.0
=$ initdb -D data-9.0/
=$ perl -pi -e 's/#port = 5432/port=7654/' data-9.0/postgresql.conf
=$ pg_upgrade -v -b /opt/pgsql-8.3.11-int/bin/ -B /opt/pgsql-9.0.5a-int/bin/ -d 
$( pwd )/data-8.3/ -D $( pwd )/data-9.0 -k -l pg_upgrade.log -p 5432 -P 7654
=$ pg_ctl -D data-9.0 -l logfile-9.0 start
=$ psql -p 7654 -d postgres -c select * from z
WARNING:  detected write past chunk end in ExecutorState 0xc500a0
WARNING:  problem in alloc set ExecutorState: req size  alloc size for chunk 
0xc500d8 in block 0xc4dfd0
WARNING:  problem in alloc set ExecutorState: bad single-chunk 0xc500d8 in 
block 0xc4dfd0
WARNING:  problem in alloc set ExecutorState: bogus aset link in block 
0xc4dfd0, chunk 0xc500d8
WARNING:  problem in alloc set ExecutorState: found inconsistent memory block 
0xc4dfd0
connection to server was lost

Hope it helps.

Best regards,

depesz




Re: [HACKERS] [GENERAL] pg_upgrade problem

2011-09-05 Thread hubert depesz lubaczewski
On Thu, Sep 01, 2011 at 08:05:51AM +0200, hubert depesz lubaczewski wrote:
 On Wed, Aug 31, 2011 at 09:54:20PM -0400, Bruce Momjian wrote:
  Working with depesz, I have found the cause.  The code I added to fix
  pg_upgrade in 9.0.4 and earlier releases didn't handle old 8.3 servers
  properly.  I mistakenly processed toast table with the same pg_dump
  query as used for pre-8.4 toast tables, not realizing those were not
  functional because there were no reloptions for toast tables in pre-8.4.
 
 Thanks a lot. Will test and post results (around sunday/monday I guess).

All worked.
pg_upgrade/vacuum didn't raise any errors. Will check some random
queries too, but don't expect anything to break.

thanks again for quick help.

Best regards,

depesz




Re: [HACKERS] [GENERAL] pg_upgrade problem

2011-09-05 Thread hubert depesz lubaczewski
On Mon, Sep 05, 2011 at 05:48:50PM +0200, hubert depesz lubaczewski wrote:
 On Thu, Sep 01, 2011 at 08:05:51AM +0200, hubert depesz lubaczewski wrote:
  On Wed, Aug 31, 2011 at 09:54:20PM -0400, Bruce Momjian wrote:
   Working with depesz, I have found the cause.  The code I added to fix
   pg_upgrade in 9.0.4 and earlier releases didn't handle old 8.3 servers
   properly.  I mistakenly processed toast table with the same pg_dump
   query as used for pre-8.4 toast tables, not realizing those were not
   functional because there were no reloptions for toast tables in pre-8.4.
  
  Thanks a lot. Will test and post results (around sunday/monday I guess).
 
 All worked.
 pg_upgrade/vacuum didn't raise any errors. Will check some random
 queries too, but don't expect anything to break.

Hmm .. got breakage.

I have a table with an ltree column, and any select on it causes:

=# select * from categories limit 1;
The connection to the server was lost. Attempting reset: Failed.

strace shows that the backend read the table, then opened the correct
ltree.so, but then:

29293 17:49:00.667865 stat(/opt/pgsql-9.0.5a-int/lib/ltree, 0x7fffb026ceb0) = 
-1 ENOENT (No such file or directory) 0.13
29293 17:49:00.667935 stat(/opt/pgsql-9.0.5a-int/lib/ltree.so, 
{st_mode=S_IFREG|0755, st_size=72966, ...}) = 0 0.10
29293 17:49:00.668007 stat(/opt/pgsql-9.0.5a-int/lib/ltree.so, 
{st_mode=S_IFREG|0755, st_size=72966, ...}) = 0 0.09
29293 17:49:00.668135 open(/opt/pgsql-9.0.5a-int/lib/ltree.so, O_RDONLY) = 46 
0.12
29293 17:49:00.668181 read(46, 
\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0\0\1\0\0\0\240.\0\0..., 832) = 832 
0.08
29293 17:49:00.668227 fstat(46, {st_mode=S_IFREG|0755, st_size=72966, ...}) = 0 
0.06
29293 17:49:00.668294 mmap(NULL, 2153248, PROT_READ|PROT_EXEC, 
MAP_PRIVATE|MAP_DENYWRITE, 46, 0) = 0x7feba4abc000 0.13
29293 17:49:00.668341 mprotect(0x7feba4aca000, 2093056, PROT_NONE) = 0 
0.12
29293 17:49:00.668381 mmap(0x7feba4cc9000, 4096, PROT_READ|PROT_WRITE, 
MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 46, 0xd000) = 0x7feba4cc9000 0.12
29293 17:49:00.668429 close(46) = 0 0.07
29293 17:49:00.668715 open(/dev/tty, O_RDWR|O_NOCTTY|O_NONBLOCK) = -1 ENXIO 
(No such device or address) 0.17
29293 17:49:00.668771 writev(2, [{*** glibc detected *** , 23}, {postgres: 
postgres xxx [loca..., 41}, {: , 2}, {double free or corruption 
(!prev..., 33}, {: 0x, 4}, {00be67a0, 16}, { ***\n, 5}], 7) = 
124 0.14
29293 17:49:00.668863 open(/opt/pgsql-9.0.5a-int/lib/libgcc_s.so.1, O_RDONLY) 
= -1 ENOENT (No such file or directory) 0.10
29293 17:49:00.668907 open(/opt/pgsql-8.3.11-int/lib/libgcc_s.so.1, O_RDONLY) 
= -1 ENOENT (No such file or directory) 0.12
29293 17:49:00.668952 open(/etc/ld.so.cache, O_RDONLY) = 46 0.10
29293 17:49:00.668990 fstat(46, {st_mode=S_IFREG|0644, st_size=17400, ...}) = 0 
0.06
29293 17:49:00.669044 mmap(NULL, 17400, PROT_READ, MAP_PRIVATE, 46, 0) = 
0x7feba80d7000 0.08
29293 17:49:00.669077 close(46) = 0 0.06
29293 17:49:00.669110 access(/etc/ld.so.nohwcap, F_OK) = -1 ENOENT (No such 
file or directory) 0.09
29293 17:49:00.669156 open(/lib/libgcc_s.so.1, O_RDONLY) = 46 0.12
29293 17:49:00.669197 read(46, 
\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0\0\1\0\0\0\240!\0\0..., 832) = 832 
0.09
29293 17:49:00.669244 mmap(NULL, 134217728, PROT_NONE, 
MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = 0x7feb9cabc000 0.07
29293 17:49:00.669278 munmap(0x7feb9cabc000, 55853056) = 0 0.11
29293 17:49:00.669313 munmap(0x7feba400, 11255808) = 0 0.08
29293 17:49:00.669347 mprotect(0x7feba000, 135168, PROT_READ|PROT_WRITE) = 
0 0.08
29293 17:49:00.669387 fstat(46, {st_mode=S_IFREG|0644, st_size=56072, ...}) = 0 
0.06
29293 17:49:00.669451 mmap(NULL, 2151816, PROT_READ|PROT_EXEC, 
MAP_PRIVATE|MAP_DENYWRITE, 46, 0) = 0x7feba48ae000 0.09
29293 17:49:00.669487 mprotect(0x7feba48bb000, 2097152, PROT_NONE) = 0 
0.09
29293 17:49:00.669522 mmap(0x7feba4abb000, 4096, PROT_READ|PROT_WRITE, 
MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 46, 0xd000) = 0x7feba4abb000 0.09
29293 17:49:00.669565 close(46) = 0 0.06
29293 17:49:00.669614 munmap(0x7feba80d7000, 17400) = 0 0.12
29293 17:49:00.669765 write(2, === Backtrace: =\n, 29) = 29 
0.11
29293 17:49:00.669852 writev(2, [{/lib/libc.so.6, 14}, {[0x, 3}, 
{7feba759908a, 12}, {]\n, 2}], 4) = 31 0.11
29293 17:49:00.669937 writev(2, [{/lib/libc.so.6, 14}, {(, 1}, {cfree, 
5}, {+0x, 3}, {8c, 2}, {), 1}, {[0x, 3}, {7feba759cc1c, 12}, {]\n, 
2}], 9) = 43 0.12
29293 17:49:00.670128 writev(2, [{postgres: postgres xxx [loca..., 41}, 
{[0x, 3}, {6c18c9, 6}, {]\n, 2}], 4) = 52 0.11
29293 17:49:00.670289 writev(2, [{postgres: postgres xxx [loca..., 41}, 
{(, 1}, {MemoryContextDelete, 19}, {+0x, 3}, {54, 2}, {), 1}, {[0x, 
3}, {6c1e54, 6}, {]\n, 2}], 9) = 78 0.12
29293 17:49:00.670453 writev(2, [{postgres: postgres xxx [loca..., 41}, 
{(, 1

Re: [HACKERS] [GENERAL] pg_upgrade problem

2011-09-05 Thread hubert depesz lubaczewski
On Mon, Sep 05, 2011 at 02:18:18PM -0400, Bruce Momjian wrote:
 hubert depesz lubaczewski wrote:
  I'm not sure if it's upgrade thing, or is it because of error in
  ltree/compilation, but it looks bad.
  
  Is there any more info I could show/gather to help debug the issue?
 
 I am confused by the error --- is it not loading, or can you get a
 backtrace of the crash?

The one in logs is not sufficient?
If not - could you tell me how to make the backtrace? I'm by far not a c
programmer, so for this I'd need some tutoring.

 If I had to take a guess, it would be that there is some ltree
 incompatibility from PG 8.3 that we didn't know about.

it's possible.

Best regards,

depesz



Re: [HACKERS] [GENERAL] pg_upgrade problem

2011-09-05 Thread hubert depesz lubaczewski
On Mon, Sep 05, 2011 at 02:51:12PM -0400, Bruce Momjian wrote:
 hubert depesz lubaczewski wrote:
  On Mon, Sep 05, 2011 at 02:18:18PM -0400, Bruce Momjian wrote:
   hubert depesz lubaczewski wrote:
I'm not sure if it's upgrade thing, or is it because of error in
ltree/compilation, but it looks bad.

Is there any more info I could show/gather to help debug the issue?
   
   I am confused by the error --- is it not loading, or can you get a
   backtrace of the crash?
  
  The one in logs is not sufficient?
  If not - could you tell me how to make the backtrace? I'm by far not a c
  programmer, so for this I'd need some tutoring.
 
 I think you want this:
 
   
 http://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD
 
 While strace is useful, it doesn't show us where the C code is failing.

ok.
got this:

(gdb) bt
#0  0x7fdc28605095 in raise () from /lib/libc.so.6
#1  0x7fdc28606af0 in abort () from /lib/libc.so.6
#2  0x7fdc2863fa7b in ?? () from /lib/libc.so.6
#3  0x7fdc2864708a in ?? () from /lib/libc.so.6
#4  0x7fdc2864ac1c in free () from /lib/libc.so.6
#5  0x006c18c9 in AllocSetDelete (context=value optimized out) at 
aset.c:551
#6  0x006c1e54 in MemoryContextDelete (context=0xbdae80) at mcxt.c:196
#7  0x0054913e in standard_ExecutorEnd (queryDesc=0xbbb4f0) at 
execMain.c:360
#8  0x0051c88f in PortalCleanup (portal=0xbb7a70) at portalcmds.c:268
#9  0x006c26fc in PortalDrop (portal=0xbb7a70, isTopCommit=0 '\0') at 
portalmem.c:434
#10 0x005f8c95 in exec_simple_query (query_string=0xb9b980 select * 
from categories limit 1;) at postgres.c:1067
#11 0x005f95de in PostgresMain (argc=value optimized out, argv=value 
optimized out, username=value optimized out) at postgres.c:3936
#12 0x005c94f6 in ServerLoop () at postmaster.c:3555
#13 0x005ca0fe in PostmasterMain (argc=3, argv=0xaf0870) at 
postmaster.c:1092
#14 0x00574070 in main (argc=3, argv=0xaf0870) at main.c:188

Hope it helps.

Best regards,

depesz



Re: [HACKERS] [GENERAL] pg_upgrade problem

2011-09-05 Thread hubert depesz lubaczewski
On Mon, Sep 05, 2011 at 04:43:47PM -0400, Bruce Momjian wrote:
 hubert depesz lubaczewski wrote:
  On Mon, Sep 05, 2011 at 02:51:12PM -0400, Bruce Momjian wrote:
   hubert depesz lubaczewski wrote:
On Mon, Sep 05, 2011 at 02:18:18PM -0400, Bruce Momjian wrote:
 hubert depesz lubaczewski wrote:
  I'm not sure if it's upgrade thing, or is it because of error in
  ltree/compilation, but it looks bad.
  
  Is there any more info I could show/gather to help debug the issue?
 
 I am confused by the error --- is it not loading, or can you get a
 backtrace of the crash?

The one in logs is not sufficient?
If not - could you tell me how to make the backtrace? I'm by far not a c
programmer, so for this I'd need some tutoring.
   
   I think you want this:
   
 
   http://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD
   
   While strace is useful, it doesn't show us where the C code is failing.
  
  ok.
  got this:
  
  (gdb) bt
  #0  0x7fdc28605095 in raise () from /lib/libc.so.6
  #1  0x7fdc28606af0 in abort () from /lib/libc.so.6
  #2  0x7fdc2863fa7b in ?? () from /lib/libc.so.6
  #3  0x7fdc2864708a in ?? () from /lib/libc.so.6
  #4  0x7fdc2864ac1c in free () from /lib/libc.so.6
  #5  0x006c18c9 in AllocSetDelete (context=value optimized out) at 
  aset.c:551
  #6  0x006c1e54 in MemoryContextDelete (context=0xbdae80) at 
  mcxt.c:196
  #7  0x0054913e in standard_ExecutorEnd (queryDesc=0xbbb4f0) at 
  execMain.c:360
  #8  0x0051c88f in PortalCleanup (portal=0xbb7a70) at 
  portalcmds.c:268
  #9  0x006c26fc in PortalDrop (portal=0xbb7a70, isTopCommit=0 '\0') 
  at portalmem.c:434
  #10 0x005f8c95 in exec_simple_query (query_string=0xb9b980 select 
  * from categories limit 1;) at postgres.c:1067
  #11 0x005f95de in PostgresMain (argc=value optimized out, 
  argv=value optimized out, username=value optimized out) at 
  postgres.c:3936
  #12 0x005c94f6 in ServerLoop () at postmaster.c:3555
  #13 0x005ca0fe in PostmasterMain (argc=3, argv=0xaf0870) at 
  postmaster.c:1092
  #14 0x00574070 in main (argc=3, argv=0xaf0870) at main.c:188
 
 Good.  Is it possible to compile with debug symbols, -g?  Odd you are
 crashing in libc.

This build had debug enabled:

./configure \
--prefix=/opt/pgsql-9.0.5a-int \
--enable-debug \
--disable-rpath \
--without-perl \
--without-python \
--without-tcl \
--without-openssl \
--without-pam \
--without-krb5 \
--without-gssapi \
--enable-nls \
--enable-integer-datetimes \
--enable-thread-safety \
--with-libxml \
--with-libxslt \
--without-ldap

Best regards,

depesz



Re: [HACKERS] [GENERAL] pg_upgrade problem

2011-09-01 Thread hubert depesz lubaczewski
On Wed, Aug 31, 2011 at 09:54:20PM -0400, Bruce Momjian wrote:
 Working with depesz, I have found the cause.  The code I added to fix
 pg_upgrade in 9.0.4 and earlier releases didn't handle old 8.3 servers
 properly.  I mistakenly processed toast table with the same pg_dump
 query as used for pre-8.4 toast tables, not realizing those were not
 functional because there were no reloptions for toast tables in pre-8.4.

Thanks a lot. Will test and post results (around sunday/monday I guess).

Best regards,

depesz




Re: [HACKERS] [GENERAL] pg_upgrade problem

2011-08-31 Thread hubert depesz lubaczewski
On Wed, Aug 31, 2011 at 12:16:03PM -0400, Bruce Momjian wrote:
 hubert depesz lubaczewski wrote:
  On Fri, Aug 26, 2011 at 12:18:55AM -0400, Bruce Momjian wrote:
   
   OK, this was very helpful.  I found out that there is a bug in current
   9.0.X, 9.1.X, and HEAD that I introduced recently when I excluded temp
   tables.  (The bug is not in any released version of pg_upgrade.)  The
   attached, applied patches should fix it for you.  I assume you are
   running 9.0.X, and not 9.0.4.
  
  pg_upgrade worked. Now I'm doing reindex and later on vacuumdb -az.
  
  will keep you posted.
 
 FYI, this pg_upgrade bug exists in PG 9.1RC1, but not in earlier betas. 
 Users can either wait for 9.1 RC2 or Final, or use the patch I posted. 
 The bug is not in 9.0.4 and will not be in 9.0.5.

I assume you mean the bug that caused pg_upgrade to fail.

But there is still a bug (also present in 9.0.4) that causes vacuum to
fail.

Best regards,

depesz




Re: [HACKERS] [GENERAL] pg_upgrade problem

2011-08-31 Thread hubert depesz lubaczewski
On Wed, Aug 31, 2011 at 01:23:05PM -0400, Bruce Momjian wrote:
 Can you get me the 9.0.X pg_class.relfrozenxid for the toast and heap
 tables involved?

Sure:

=# select oid::regclass, relfrozenxid from pg_class  where relname in 
('transactions', 'pg_toast_106668498');
 oid | relfrozenxid 
-+--
 pg_toast.pg_toast_106668498 |   3673553926
 transactions|   3623560321
(2 rows)

Best regards,

depesz



Re: [HACKERS] [GENERAL] pg_upgrade problem

2011-08-29 Thread hubert depesz lubaczewski
On Fri, Aug 26, 2011 at 05:28:35PM +0200, hubert depesz lubaczewski wrote:
 On Fri, Aug 26, 2011 at 12:18:55AM -0400, Bruce Momjian wrote:
  
  OK, this was very helpful.  I found out that there is a bug in current
  9.0.X, 9.1.X, and HEAD that I introduced recently when I excluded temp
  tables.  (The bug is not in any released version of pg_upgrade.)  The
  attached, applied patches should fix it for you.  I assume you are
  running 9.0.X, and not 9.0.4.
 
 pg_upgrade worked. Now I'm doing reindex and later on vacuumdb -az.

vacuumdb failed. The fail looks very similar to the one I had on 9.0.4.

After long vacuum I got:
INFO:  vacuuming pg_toast.pg_toast_106668498
vacuumdb: vacuuming of database etsy_v2 failed: ERROR:  could not access 
status of transaction 3429738606
DETAIL:  Could not open file pg_clog/0CC6: No such file or directory.

Unfortunately at the moment, I no longer have the old (8.3) setup, but I do
have the 9.0.X and will be happy to provide any info you might need to help me
debug/fix the problem.

Best regards,

depesz




Re: [HACKERS] [GENERAL] pg_upgrade problem

2011-08-29 Thread hubert depesz lubaczewski
On Mon, Aug 29, 2011 at 06:54:41PM +0200, hubert depesz lubaczewski wrote:
 On Fri, Aug 26, 2011 at 05:28:35PM +0200, hubert depesz lubaczewski wrote:
  On Fri, Aug 26, 2011 at 12:18:55AM -0400, Bruce Momjian wrote:
   
   OK, this was very helpful.  I found out that there is a bug in current
   9.0.X, 9.1.X, and HEAD that I introduced recently when I excluded temp
   tables.  (The bug is not in any released version of pg_upgrade.)  The
   attached, applied patches should fix it for you.  I assume you are
   running 9.0.X, and not 9.0.4.
  
  pg_upgrade worked. Now I'm doing reindex and later on vacuumdb -az.
 
 vacuumdb failed. The fail looks very similar to the one I had on 9.0.4.
 
 After long vacuum I got:
 INFO:  vacuuming pg_toast.pg_toast_106668498
 vacuumdb: vacuuming of database etsy_v2 failed: ERROR:  could not access 
 status of transaction 3429738606
 DETAIL:  Could not open file pg_clog/0CC6: No such file or directory.
 
 Unfortunately at the moment, I no longer have the old (8.3) setup, but I do
 have the 9.0.X and will be happy to provide any info you might need to help me
 debug/fix the problem.

This pg_toast table belongs to the transactions table, which was vacuumed
like this:

INFO:  vacuuming public.transactions
INFO:  index transaction_id_pkey now contains 50141303 row versions in 144437 
pages
DETAIL:  0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 1.08s/0.13u sec elapsed 173.04 sec.
INFO:  index transactions_creation_tsz_idx now contains 50141303 row versions 
in 162634 pages
DETAIL:  0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 1.19s/0.23u sec elapsed 77.45 sec.
INFO:  index fki_transactions_xx_fkey now contains 50141303 row 
versions in 163466 pages
DETAIL:  0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 1.13s/0.29u sec elapsed 65.45 sec.
INFO:  index fki_transactions__fkey now contains 50141303 row 
versions in 146528 pages
DETAIL:  0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 1.15s/0.24u sec elapsed 50.28 sec.
INFO:  index fki_transactions_x_fkey now contains 50141303 row 
versions in 190914 pages
DETAIL:  0 index row versions were removed.
5 index pages have been deleted, 0 are currently reusable.
CPU 1.49s/0.17u sec elapsed 67.95 sec.
INFO:  index transactions_xx_id now contains 50141303 
row versions in 164669 pages
DETAIL:  0 index row versions were removed.
2 index pages have been deleted, 0 are currently reusable.
CPU 1.36s/0.18u sec elapsed 62.83 sec.
INFO:  transactions: found 0 removable, 39644831 nonremovable row versions in 
5978240 out of 7312036 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 8209452 unused item pointers.
0 pages are entirely empty.
CPU 75.75s/18.57u sec elapsed 9268.19 sec.
INFO:  vacuuming pg_toast.pg_toast_106668498
vacuumdb: vacuuming of database etsy_v2 failed: ERROR:  could not access 
status of transaction 3429738606
DETAIL:  Could not open file pg_clog/0CC6: No such file or directory.

Interestingly:

In the old dir there is a pg_clog directory with files:
0AC0 .. 0DAF (including 0CC6, size 262144)
but the new pg_clog has only:
0D2F .. 0DB0

File content: nearly all files that exist in both places are the same, with
the exception of the 2 newest ones in the new datadir:
3c5122f3e80851735c19522065a2d12a  0DAF
8651fc2b9fa3d27cfb5b496165cead68  0DB0

0DB0 doesn't exist in the old dir, and 0DAF has a different md5sum:
7d48996c762d6a10f8eda88ae766c5dd
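As a side note, the clog segment names follow directly from transaction IDs. A minimal sketch of the mapping, assuming PostgreSQL's default CLOG layout (2 status bits per transaction, 8 kB pages, 32 pages per segment file); this is illustrative arithmetic, not the server's SLRU code:

```python
CLOG_XACTS_PER_PAGE = 8192 * 4   # 2 bits per xact -> 4 xacts per byte
SLRU_PAGES_PER_SEGMENT = 32

def clog_segment(xid):
    """Name of the pg_clog segment file holding the status of xid."""
    page = xid // CLOG_XACTS_PER_PAGE
    return "%04X" % (page // SLRU_PAGES_PER_SEGMENT)

print(clog_segment(3429738606))   # 0CC6 -- the file vacuum complained about
print(clog_segment(3673553926))   # 0DAF -- the relfrozenxid reported elsewhere in this thread
```

So the failing transaction maps to segment 0CC6, which the old cluster still had but which falls below the new cluster's oldest remaining segment (0D2F).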

One more thing: I did select count(*) from transactions and it worked.

That's about it. I can probably copy the files from the old datadir to the new
one (in pg_clog/), and will be happy to do it, but I'll wait for your call -
retrying with copied files might destroy some evidence.

Best regards,

depesz



Re: [HACKERS] [GENERAL] pg_upgrade problem

2011-08-26 Thread hubert depesz lubaczewski
On Fri, Aug 26, 2011 at 12:18:55AM -0400, Bruce Momjian wrote:
 
 OK, this was very helpful.  I found out that there is a bug in current
 9.0.X, 9.1.X, and HEAD that I introduced recently when I excluded temp
 tables.  (The bug is not in any released version of pg_upgrade.)  The
 attached, applied patches should fix it for you.  I assume you are
 running 9.0.X, and not 9.0.4.

pg_upgrade worked. Now I'm doing reindex and later on vacuumdb -az.

will keep you posted.

Best regards,

depesz




Re: [HACKERS] [GENERAL] pg_upgrade problem

2011-08-25 Thread hubert depesz lubaczewski
On Thu, Aug 25, 2011 at 04:33:07PM -0400, Bruce Momjian wrote:
 The problem appears to be that the Postgres catalogs think there is a
 toast table for 'actions', while the file system doesn't seem to have
 such a file.  I can you look in pg_class and verify that?
 
   SELECT reltoastrelid FROM pg_class WHERE relname  = 'actions';

$ SELECT reltoastrelid FROM pg_class WHERE relname  = 'actions';
 reltoastrelid 
---
(0 rows)

This was done not on the pg from backup, but on normal production, as the test
pg instance doesn't work anymore.

I can set up the test instance again, but extracting it from backup and making
it apply all xlogs usually takes 2-3 days.

  One more thing - one of earlier tests actually worked through
  pg_upgrade, but when running vacuumdb -az on newly started 9.0.4, I got
  error about missing transaction/clog - don't remember exactly what it
  was, though.
 
 THere was a bug in how how pg_upgrade worked in pre-9.0.4 --- could it
 have been that?

It was done definitely using 9.0.4.

Best regards,

depesz




Re: [HACKERS] [GENERAL] pg_upgrade problem

2011-08-25 Thread hubert depesz lubaczewski
On Thu, Aug 25, 2011 at 04:43:02PM -0400, Bruce Momjian wrote:
 Please check the old cluster.

Sure:

=# SELECT reltoastrelid FROM pg_class WHERE relname  = 'actions';   

  
 reltoastrelid 
---
  82510395
  71637071
(2 rows)

=# SELECT oid::regclass, reltoastrelid FROM pg_class WHERE relname  = 
'actions';  

   
  oid  | reltoastrelid 
---+---
 x.actions |  82510395
 y.actions |  71637071
(2 rows)

=# select oid, relfilenode from pg_class where oid in (SELECT reltoastrelid 
FROM pg_class WHERE relname  = 'actions');
   oid| relfilenode 
--+-
 82510395 |82510395
 71637071 |71637071
(2 rows)

=# select oid from pg_database where datname = current_database();
   oid
--
 71635381
(1 row)

$ ls -l /base/71635381/{71637071,82510395}
-rw--- 1 postgres postgres 0 2009-10-12 06:49 /base/71635381/71637071
-rw--- 1 postgres postgres 0 2010-08-19 14:02 /base/71635381/82510395

One more thing - one of earlier tests actually worked through
pg_upgrade, but when running vacuumdb -az on newly started 9.0.4, I got
error about missing transaction/clog - don't remember exactly what it
was, though.
   THere was a bug in how how pg_upgrade worked in pre-9.0.4 --- could it
   have been that?
  It was done definitely using 9.0.4.
 Good.

Not sure if it's good, since it was after the clog error was fixed, and
I still got it :/

But anyway, the problem with 71637071 is more important now.

Best regards,

depesz




[HACKERS] Enhanced psql in core?

2011-07-09 Thread hubert depesz lubaczewski
Hi,
Would it be possible to incorporate
http://www.postgres.cz/index.php/Enhanced-psql in core PostgreSQL/psql?

This patch adds a lot of nice functionality that we could definitely
use.

Best regards,

depesz



Re: [HACKERS] Problem with streaming replication, backups, and recovery (9.0.x)

2011-03-29 Thread hubert depesz lubaczewski
On Mon, Mar 28, 2011 at 05:29:22PM -0500, Kevin Grittner wrote:
 I have a theory.  Can you try it in what would be the failure case,
 but run an explicit a CHECKPOINT on the master, wait for
 pg_controldata to show that checkpoint on the slave, and (as soon as
 you see that) try to trigger the slave to come up in production?

=$ ( pg_controldata master/; pg_controldata slave2/ ) | grep "Latest checkpoint location:"
Latest checkpoint location:   0/2D58
Latest checkpoint location:   0/2C58

=$ psql -p 54001 -c checkpoint
CHECKPOINT

=$ ( pg_controldata master/; pg_controldata slave2/ ) | grep "Latest checkpoint location:"
Latest checkpoint location:   0/2E58
Latest checkpoint location:   0/2C58

... ~ 1.5 minute later

=$ ( pg_controldata master/; pg_controldata slave2/ ) | grep "Latest checkpoint location:"
Latest checkpoint location:   0/2E58
Latest checkpoint location:   0/2E58

=$ touch /home/depesz/slave2/finish.recovery

it worked. now the slave2 is working as stand alone.

what does it tell us? will any work happening after the checkpoint break it anyway?

Best regards,

depesz




Re: [HACKERS] Problem with streaming replication, backups, and recovery (9.0.x)

2011-03-29 Thread hubert depesz lubaczewski
On Tue, Mar 29, 2011 at 11:20:48AM +0900, Fujii Masao wrote:
 On Tue, Mar 29, 2011 at 12:11 AM, hubert depesz lubaczewski
 dep...@depesz.com wrote:
  On Mon, Mar 28, 2011 at 01:48:13PM +0900, Fujii Masao wrote:
  In 9.0, recovery doesn't read a backup history file. That FATAL error happens
  if recovery ends before it reads the WAL record which was generated by
  pg_stop_backup(). IOW, recovery gets the recovery ending location from WAL
  record not backup history file. Since you didn't run pg_stop_backup() and there
  is no WAL record containing the recovery ending location, you got that error.

  If you want to take hot backup from the standby, you need to do the procedure
  explained in
  http://wiki.postgresql.org/wiki/Incrementally_Updated_Backups
 
  one more question. how come that I can use this backup to make
  standalone pg, and it starts without any problem, but when I start it as
  sr slave, let it run for some time, and then promote to standalone, it
  breaks?
 
 Did you use recovery.conf to start standalone PostgreSQL? If not,
 recovery doesn't check whether it reaches the recovery ending position
  or not. So I guess that's why the problem didn't happen.

no, i don't use.

hmm .. i am nearly 100% certain that previous pgs did in fact check whether
the end of recovery was reached.

Best regards,

depesz




Re: [HACKERS] Problem with streaming replication, backups, and recovery (9.0.x)

2011-03-29 Thread hubert depesz lubaczewski
On Tue, Mar 29, 2011 at 11:13:07AM +0900, Fujii Masao wrote:
 Yes, it's intentional. In streaming replication, at first the master must stream
 a backup history file to the standby in order to let it know the recovery ending
 position. But streaming replication doesn't have the ability to send a text file,
 so we changed the code so that the recovery ending position was also written as
 a WAL record which can be streamed.

ok, this makes sense.

 BTW, in my system, I use another trick to take a base backup from the
 standby:
 
 (All of these operations are expected to be performed on the standby)
 (1) Run CHECKPOINT
 (2) Copy pg_control to temporary area
 (3) Take a base backup of $PGDATA
 (4) Copy back pg_control from temporary area to the backup taken in (3).
 (5) Calculate the recovery ending position from current pg_control in
  $PGDATA by using pg_controldata
 
 When recovery starts from that backup, it doesn't automatically check
 whether it has reached the ending position or not. So the user needs to
 check that manually.
 Yeah, this trick is very fragile and complicated. I'd like to improve the way
 in 9.2.

I know about it, but I feel very worried about doing stuff like this -
i.e. meddling with internal files of pg.

Best regards,

depesz




Re: [HACKERS] Problem with streaming replication, backups, and recovery (9.0.x)

2011-03-28 Thread hubert depesz lubaczewski
On Mon, Mar 28, 2011 at 01:48:13PM +0900, Fujii Masao wrote:
 In 9.0, recovery doesn't read a backup history file. That FATAL error happens
 if recovery ends before it reads the WAL record which was generated by
 pg_stop_backup(). IOW, recovery gets the recovery ending location from WAL
 record not backup history file. Since you didn't run pg_stop_backup() and there
 is no WAL record containing the recovery ending location, you got that error.
 
 If you want to take hot backup from the standby, you need to do the procedure
 explained in
 http://wiki.postgresql.org/wiki/Incrementally_Updated_Backups

Is it intentional and/or does it serve some greater good? I mean - the
ability to make backups on the slave without ever bothering the master was
pretty interesting.

Best regards,

depesz




Re: [HACKERS] Problem with streaming replication, backups, and recovery (9.0.x)

2011-03-28 Thread hubert depesz lubaczewski
On Mon, Mar 28, 2011 at 01:48:13PM +0900, Fujii Masao wrote:
 In 9.0, recovery doesn't read a backup history file. That FATAL error happens
 if recovery ends before it reads the WAL record which was generated by
 pg_stop_backup(). IOW, recovery gets the recovery ending location from WAL
 record not backup history file. Since you didn't run pg_stop_backup() and there
 is no WAL record containing the recovery ending location, you got that error.
 
 If you want to take hot backup from the standby, you need to do the procedure
 explained in
 http://wiki.postgresql.org/wiki/Incrementally_Updated_Backups

one more question. how come that I can use this backup to make
standalone pg, and it starts without any problem, but when I start it as
sr slave, let it run for some time, and then promote to standalone, it
breaks?

Best regards,

depesz




Re: [HACKERS] Problem with streaming replication, backups, and recovery (9.0.x)

2011-03-28 Thread hubert depesz lubaczewski
On Mon, Mar 28, 2011 at 04:24:23PM -0500, Kevin Grittner wrote:
 hubert depesz lubaczewski dep...@depesz.com wrote:
  
  how come that I can use this backup to make standalone pg, and it
  starts without any problem, but when I start it as sr slave, let
  it run for some time, and then promote to standalone, it breaks?
  
 We need more detail to make much of a guess about that.

what details can I provide?

I can provide scripts that I use to test it, and also access to test
machine that I was testing it on.

if you'd need something else - just tell me what, i'll do my best to
provide.

Best regards,

depesz




Re: [HACKERS] Problem with streaming replication, backups, and recovery (9.0.x)

2011-03-28 Thread hubert depesz lubaczewski
On Mon, Mar 28, 2011 at 04:53:37PM -0500, Kevin Grittner wrote:
 hubert depesz lubaczewski dep...@depesz.com wrote:
  On Mon, Mar 28, 2011 at 04:24:23PM -0500, Kevin Grittner wrote:
  hubert depesz lubaczewski dep...@depesz.com wrote:
   
  how come that I can use this backup to make standalone pg, and
  it starts without any problem, but when I start it as sr slave,
  let it run for some time, and then promote to standalone, it
  breaks?
   
  We need more detail to make much of a guess about that.
  
  what details can I provide?
  
  I can provide scripts that I use to test it, and also access to
  test machine that I was testing it on.
  
 For starters, what do you mean by it breaks?  What, exactly
 happens?  What is in the logs?  What version of PostgreSQL?  Are you
 using pg_standby or custom scripts?

hmm ... i thought that all details are in the first mail in the thread.

I can probably repost it, but it seems to me that it includes all of the
information - which scripts, how it fails, in what cases, and what
exactly i'm doing.

have you seen this mail -
http://archives.postgresql.org/pgsql-hackers/2011-03/msg01490.php ?

Best regards,

depesz




Re: [HACKERS] Problem with streaming replication, backups, and recovery (9.0.x)

2011-03-28 Thread hubert depesz lubaczewski
On Mon, Mar 28, 2011 at 05:29:22PM -0500, Kevin Grittner wrote:
 hubert depesz lubaczewski dep...@depesz.com wrote:
  
  have you seen this mail -
  http://archives.postgresql.org/pgsql-hackers/2011-03/msg01490.php
  
 Ah, OK.
  
 I have a theory.  Can you try it in what would be the failure case,
 but run an explicit a CHECKPOINT on the master, wait for
 pg_controldata to show that checkpoint on the slave, and (as soon as
 you see that) try to trigger the slave to come up in production?

yes. will check, but it will happen in ~ 10 hours.

Best regards,

depesz




Re: [HACKERS] Problem with streaming replication, backups, and recovery (9.0.x)

2011-03-28 Thread hubert depesz lubaczewski
On Mon, Mar 28, 2011 at 05:43:15PM -0500, Kevin Grittner wrote:
 hubert depesz lubaczewski dep...@depesz.com wrote:
  
  have you seen this mail -
  http://archives.postgresql.org/pgsql-hackers/2011-03/msg01490.php
  
 One more thing: Am I correct in understanding that you are trying to
 do a PITR-style backup without using pg_start_backup() and
 pg_stop_backup()?  If so, why?

because this is a backup on the slave, and the point was to make the backup
work without bothering the master *at all*.

so far it worked fine. and generally even with 9.0 it still works, and the
backup *can* be used to set up a new pg instance. but it cannot be used to
make an sr slave, which we could later on promote.

Best regards,

depesz




[HACKERS] Problem with streaming replication, backups, and recovery (9.0.x)

2011-03-25 Thread hubert depesz lubaczewski
hi,

So, I hit a strange problem with Streaming Replication that I cannot explain.

Executive summary:

when using a hot backup made on a streaming replication slave, sometimes
(depending on load) the generated backup is created in such a way that while it
can be brought back as a standalone Pg, and it can be brought back as a streaming
slave, such a slave (created off the backup) cannot be promoted to standalone.

Disclaimer:

I know that making hot backups on the slave is not the suggested way, yet I was
doing it without any problem on earlier Postgres versions (8.2, 8.3, 8.4), and do
not have this problem with backups generated from the masters, so the problem I
hit now is so peculiar that I thought it might be just an effect of some
underlying, more serious, condition.

Longer explanation:

First, let me explain how omnipitr-backup-slave works, because it's the tool
that I use to make backups on slave.

Steps that it does:

1. gets pg_controldata for $PGDATADIR
2. compresses $PGDATA to data tar.gz, putting inside backup_label file,
   which contains:
START WAL LOCATION: %s (file %s)
CHECKPOINT LOCATION: %s
START TIME: %s
LABEL: OmniPITR_Slave_Hot_Backup
   where START WAL LOCATION uses value from Latest checkpoint's REDO
   location from pg_controldata from step #1, CHECKPOINT LOCATION is
   taken from Latest checkpoint location from pg_controldata taken in step
   #1, and START TIME is based on current (before starting compression of
   $PGDATA) timestamp.
3. gets another copy of pg_controldata for $PGDATA
4. repeats step #3 until the value in Latest checkpoint location changes
5. waits until the WAL file that contains the location from Minimum recovery
   ending location from pg_controldata from step #4 is available
6. creates .backup file which is named based on START WAL
   LOCATION (from step #2), and contains the same lines as backup_label file
   from step #2, plus two more lines:
STOP WAL LOCATION: %s (file %s)
STOP TIME: %s
   where STOP WAL LOCATION is taken from Minimum recovery ending location
   from pg_controldata from step #4, and STOP time is current timestamp as
   of before starting compression of wal segments.
7. compresses xlogs plus the .backup file generated in step #6.
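
For illustration, steps 1-6 boil down to scraping pg_controldata output and
deriving WAL file names from LSNs. A rough Python sketch (this is not the
actual OmniPITR code; the sample control-file values, the label names, and the
default 16MB segment size are assumptions for the example):

```python
import re

def controldata_value(output, key):
    """Pull one value out of pg_controldata output by its label."""
    m = re.search(r"^%s:\s*(.+)$" % re.escape(key), output, re.M)
    if m is None:
        raise KeyError(key)
    return m.group(1).strip()

def wal_file_name(timeline, lsn, seg_size=16 * 1024 * 1024):
    """Map an LSN like '0/A000020' to its WAL segment file name:
    timeline, xlogid and segment number, each as 8 hex digits."""
    xlogid, xrecoff = (int(part, 16) for part in lsn.split("/"))
    return "%08X%08X%08X" % (timeline, xlogid, xrecoff // seg_size)

# Hypothetical pg_controldata snippet (values made up for the example).
sample = """Latest checkpoint location:           0/B803050
Latest checkpoint's REDO location:    0/A000020
"""
redo = controldata_value(sample, "Latest checkpoint's REDO location")
print(redo, wal_file_name(1, redo))  # 0/A000020 00000001000000000000000A
```

Step 4's wait loop is then just re-running pg_controldata until the
"Latest checkpoint location" value changes.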

This approach worked for a long time on various hosts, systems, versions, etc.

But now, it fails.

I'm using for tests PostgreSQL 9.0.2 and 9.0.3 (mostly 9.0.2 as this is the
most critical for me, but I tested on 9.0.3 too, and the problem is the same),
on linux (ubuntu), 64bit.  

I do the procedure as always, and it produces a backup. With this backup I can
set up a new standalone server, and it works.

I can also set up a streaming slave, and it also works, but when I create a
trigger file to promote this slave to master it fails with error:
2011-03-24 21:01:58.051 CET @ 9680  LOG:  trigger file found: /home/depesz/slave2/finish.recovery
2011-03-24 21:01:58.051 CET @ 9930  FATAL:  terminating walreceiver process due to administrator command
2011-03-24 21:01:58.151 CET @ 9680  LOG:  redo done at 0/1F58
2011-03-24 21:01:58.151 CET @ 9680  LOG:  last completed transaction was at log time 2011-03-24 20:58:25.836333+01
2011-03-24 21:01:58.238 CET @ 9680  FATAL:  WAL ends before consistent recovery point

Which is interesting, because this particular backup was done using a .backup
file containing:

START WAL LOCATION: 0/A20 (file 0001000A)
STOP WAL LOCATION: 0/12C9D7E8 (file 00010012)
CHECKPOINT LOCATION: 0/B803050
START TIME: 2011-03-24 20:52:46 CET
STOP TIME: 2011-03-24 20:53:41 CET
LABEL: OmniPITR_Slave_Hot_Backup

Which means that the minimum recovery ending location was in fact reached (it
was at 0/12C9D7E8, and recovery continued till 0/1F58).
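
The "reached or not" check itself is just a comparison of two 64-bit
positions; a small sketch of the LSN arithmetic (my own helper, not
PostgreSQL code):

```python
def parse_lsn(lsn):
    """Convert an 'X/Y' LSN string into one comparable integer:
    high 32 bits from X, low 32 bits from Y."""
    hi, lo = (int(part, 16) for part in lsn.split("/"))
    return (hi << 32) | lo

# From the .backup file above: recovery may only end once the replayed
# location has reached the minimum recovery ending location.
min_recovery_point = parse_lsn("0/12C9D7E8")
assert parse_lsn("0/12C9D7E8") >= min_recovery_point  # reached: OK to end
assert parse_lsn("0/B803050") < min_recovery_point    # not yet reached
print("comparison ok")
```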

I have a set of scripts that can be used to replicate the problem, but the test
takes some time (~30 minutes).

What's most interesting is that this problem does not happen always. It
happens only when there was non-trivial load on the db server - this is in my
tests, where both master and slave are the same machine. I think that in normal
cases load on the slave is more important.

If anyone is able to help, I can give you access to a test machine
and/or provide the set of scripts which (usually) replicate the problem.

Alternatively - if there is anything I can do to help you solve the mystery
- I'd be very willing to.

Best regards,

depesz




Re: [HACKERS] REVIEW: EXPLAIN and nfiltered

2011-01-20 Thread hubert depesz lubaczewski
On Thu, Jan 20, 2011 at 02:48:59PM -0500, Stephen Frost wrote:
 * Robert Haas (robertmh...@gmail.com) wrote:
  How much has that code been updated from one release to the next?
 
 Just an FYI, I talked to depesz on IRC (please chime in if you disagree
 with any of this) and he indicated that he's had to update the code
 from time to time, mostly because the parser was too strict.
 
 He also mentioned that he didn't feel it was terribly complicated or
 that it'd be difficult to update for this.  Looking over the code, it's
 got a simple regex for matching that line which would have to be
 updated, but I don't think it'd require much more than that.

i'll be happy to update Pg::Explain to handle new elements of
textual plans, so if this is a concern - please don't treat
compatibility with explain.depesz.com as your responsibility/problem.

I'll fix the parser (have to add json/xml parsing too anyway), and I,
too, would love to get more information.

Best regards,

depesz

-- 
Linkedin: http://www.linkedin.com/in/depesz  /  blog: http://www.depesz.com/
jid/gtalk: dep...@depesz.com / aim:depeszhdl / skype:depesz_hdl / gg:6749007



Re: [HACKERS] Generating Lots of PKs with nextval(): A Feature Proposal

2010-05-14 Thread hubert depesz lubaczewski
On Fri, May 14, 2010 at 02:07:27PM -0500, Kenneth Marshall wrote:
 Hi Peter,
 
 All you need to do is define your own sequence with an
 increment of 500. Look at:
 
 http://www.postgresql.org/docs/8.4/static/sql-createsequence.html

This is often not enough. For example - I want a standard increment of 1,
but right now I'm importing 1 objects, and it would be simpler for
me to get 1 ids. Preferably in one block.

This is not achievable now. I know I can 'alter sequence set increment
by' - but this will also affect concurrent sessions, which might not be
a problem, but it's a side effect that I don't want.
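
For what it's worth, the block arithmetic behind the increment trick looks
like this (a toy model only - FakeSequence stands in for a real sequence,
and with a real one the temporary ALTER SEQUENCE change is visible to
concurrent sessions, which is exactly the side effect mentioned above):

```python
class FakeSequence:
    """Toy stand-in for a Postgres sequence: nextval() advances by
    `increment` and returns the new value."""
    def __init__(self, start=0, increment=1):
        self.value = start
        self.increment = increment

    def nextval(self):
        self.value += self.increment
        return self.value

def reserve_block(seq, n):
    """Reserve n consecutive ids with a single sequence call by
    temporarily switching the increment to n; the n values ending at
    the returned nextval() belong to the caller."""
    old = seq.increment
    seq.increment = n
    last = seq.nextval()
    seq.increment = old
    return range(last - n + 1, last + 1)

seq = FakeSequence()
block = reserve_block(seq, 10000)
print(block[0], block[-1])  # 1 10000
```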

+1 for original proposition, would love to get it.

depesz



Re: [HACKERS] Dyamic updates of NEW with pl/pgsql

2010-03-10 Thread hubert depesz lubaczewski
On Tue, Mar 09, 2010 at 06:59:31PM +0100, Pavel Stehule wrote:
 2010/3/9 strk s...@keybit.net:
  How can a pl/pgsql trigger change the
  values of dynamic fields in NEW record ?
 
  By dynamic I mean that the field name
  is a variable in the trigger context.
 
  I've been told it's easy to do with pl/perl but
  I'd like to delive a pl/pgsql solution to have
  less dependencies.
 
 It isn't possible yet

well, it's possible. it's just not nice.

http://www.depesz.com/index.php/2010/03/10/dynamic-updates-of-fields-in-new-in-plpgsql/

depesz




Re: [HACKERS] postgresql regular expr bug?

2009-06-14 Thread hubert depesz lubaczewski
On Sun, Jun 14, 2009 at 08:15:55AM +0200, Pavel Stehule wrote:
 postgres=# select '10' ~ '[0..9]+$';
  ?column?
 --
  t
 (1 row)

the regexp '[0..9]+$' tests whether a given string contains at the end a
substring built only from the characters 0, 9 and ..
and yes, it does - the last character is 0, so it matches.

 postgres=# select '10' ~ '^[0..9]+$';
  ?column?
 --
  f
 (1 row)

this regexp checks whether the whole string is built only from the characters
0, 9, and .. and it is not - the first character is 1, which is not in
the character list you provided.

basically . has no special meaning within [] in regexps - it is just
a literal dot, so .. is simply a dot written twice.
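
The same behaviour can be checked with any regex engine; for example in
Python, where bracket expressions treat the dot the same way:

```python
import re

# Inside a bracket expression, '.' is a literal character, so
# '[0..9]' matches only the characters '0', '.' and '9'.
assert re.search(r"[0..9]+$", "10")        # '0' at the end matches
assert not re.fullmatch(r"[0..9]+", "10")  # '1' is not in the class
assert re.fullmatch(r"[0-9]+", "10")       # a range needs '-', not '..'
print("ok")
```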

Best regards,

depesz




Re: [HACKERS] generate_series from now to infinity...

2009-05-17 Thread hubert depesz lubaczewski
On Sun, May 17, 2009 at 01:38:35PM -0300, Dickson S. Guedes wrote:
 I suggested him to use LIMIT.

interesting. I just tested (on beta1) that while limit works in this
query:
SELECT generate_series(now(), CAST('infinity'::date AS timestamp), interval '1 hour') limit 3;

i.e. it returns 3 rows instantly - it doesn't work for this query:

SELECT i from generate_series(now(), CAST('infinity'::date AS timestamp), interval '1 hour') as x (i) limit 3;

which (as far as i understand it) should be the same.

why is it not limiting generate_series in the second example? is it
intentional?
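
The observed difference looks like lazy vs eager evaluation of the
set-returning function; a loose Python analogy (purely an illustration of why
a LIMIT applied only after the whole set is materialized can never finish for
an endless series - not a claim about how the executor is actually
implemented):

```python
import itertools

def endless_series(start, step):
    """Lazy analogue of generate_series(now(), 'infinity', step)."""
    value = start
    while True:
        yield value
        value += step

# "LIMIT 3" applied lazily: stops after pulling three rows.
print(list(itertools.islice(endless_series(0, 1), 3)))  # [0, 1, 2]

# Eagerly materializing first - list(endless_series(0, 1))[:3] -
# would never return, which mirrors the hanging second query.
```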

Best regards,

depesz




Re: [HACKERS] display previous query string of idle-in-transaction

2009-03-25 Thread hubert depesz lubaczewski
On Wed, Mar 25, 2009 at 06:08:43PM +0900, Tatsuhito Kasahara wrote:
 So, I sometimes want to know what query (the main cause) was run before a
 transaction which has been open for a long time.
 Thoughts?

I would love to get it, but when I suggested it some time in the past
Tom shot it down as bad idea.

http://archives.postgresql.org/message-id/20071016132131.ga4...@depesz.com

To be honest - I have yet to see the case described by Tom (commit; begin;).

Best regards,

depesz




[HACKERS] 8.4 - psql output for \l

2009-01-04 Thread hubert depesz lubaczewski
is it going to stay that way? i find it actually worse than 8.3
behaviour:

(dep...@[local]:5840) 11:43:40 [depesz]
# \l
List of databases
   Name|  Owner   | Encoding |  Collation  |Ctype| Access privileges
---+--+--+-+-+---
 depesz| depesz   | UTF8 | pl_PL.UTF-8 | pl_PL.UTF-8 |
 postgres  | pgdba| UTF8 | pl_PL.UTF-8 | pl_PL.UTF-8 |
 template0 | pgdba| UTF8 | pl_PL.UTF-8 | pl_PL.UTF-8 | =c/pgdba
 : pgdba=CTc/pgdba
 template1 | pgdba| UTF8 | pl_PL.UTF-8 | pl_PL.UTF-8 | pgdba=CTc/pgdba
 : =c/pgdba
 test  | depesz   | UTF8 | pl_PL.UTF-8 | pl_PL.UTF-8 |
(7 rows)

do we really have to introduce newlines even when it would (without any
problem) fit on a single line?

I mean - I understand that if the grants are long and complex it makes sense
to display them like this, but displaying extra lines just for superuser
privileges looks weird.

Best regards,

depesz




[HACKERS] visibility map - what do i miss?

2008-12-06 Thread hubert depesz lubaczewski
--- repost to hackers as suggested by RhodiumToad ---

hi,
i tried to test the new visibility map feature.

to do so i:
1. fetched postgresql sources from cvs
2. compiled
3. turned autovacuum off
4. started pg
5. ran this queries:
  - CREATE TABLE test_1 (i INT4);
  - CREATE TABLE test_2 (i INT4);
  - CREATE TABLE test_3 (i INT4);
  - CREATE TABLE test_4 (i INT4);
  - INSERT INTO test_1 SELECT generate_series(1, 1);
  - INSERT INTO test_2 SELECT generate_series(1, 1);
  - INSERT INTO test_3 SELECT generate_series(1, 1);
  - INSERT INTO test_4 SELECT generate_series(1, 1);
  - UPDATE test_2 SET i = i + 1 WHERE i  1000;
  - UPDATE test_3 SET i = i + 1 WHERE i  5000;
  - UPDATE test_4 SET i = i + 1 WHERE i  9000;
  - VACUUM test_1;
  - VACUUM test_2;
  - VACUUM test_3;
  - VACUUM test_4;

I did it 2 times, first with sources of pg from 1st of november, and
second - with head from yesterday evening (warsaw, poland time).

results puzzled me.

First run - without visibility maps, timing of vacuums:
Time: 267844.822 ms
Time: 138854.592 ms
Time: 305467.950 ms
Time: 487133.179 ms

Second run - on head:

Time: 252218.609 ms
Time: 234388.763 ms
Time: 334016.413 ms
Time: 575698.750 ms

Now - as I understand the change - visibility maps should make the second run
much faster?

Tests were performed on a laptop. During the first test I used it to browse the
web and read mail. During the second test - nobody used the laptop.

Relation forks seem to exist:
# select oid from pg_database where datname = 'depesz';
  oid
---
 16389
(1 row)
# select relfilenode from pg_class where relname ~ 'test_';
 relfilenode
-
   26756
   26759
   26762
   26765
(4 rows)

= ls -l {26756,26759,26762,26765}*
-rw--- 1 pgdba pgdba 1073741824 2008-12-06 01:31 26756
-rw--- 1 pgdba pgdba 1073741824 2008-12-06 01:33 26756.1
-rw--- 1 pgdba pgdba 1065066496 2008-12-06 01:34 26756.2
-rw--- 1 pgdba pgdba 811008 2008-12-06 01:34 26756_fsm
-rw--- 1 pgdba pgdba  57344 2008-12-06 01:34 26756_vm
-rw--- 1 pgdba pgdba 1073741824 2008-12-06 01:35 26759
-rw--- 1 pgdba pgdba 1073741824 2008-12-06 01:36 26759.1
-rw--- 1 pgdba pgdba 1073741824 2008-12-06 01:37 26759.2
-rw--- 1 pgdba pgdba  312582144 2008-12-06 01:39 26759.3
-rw--- 1 pgdba pgdba 892928 2008-12-06 01:39 26759_fsm
-rw--- 1 pgdba pgdba  57344 2008-12-06 01:39 26759_vm
-rw--- 1 pgdba pgdba 1073741824 2008-12-06 01:39 26762
-rw--- 1 pgdba pgdba 1073741824 2008-12-06 01:49 26762.1
-rw--- 1 pgdba pgdba 1073741824 2008-12-06 01:41 26762.2
-rw--- 1 pgdba pgdba 1073741824 2008-12-06 01:42 26762.3
-rw--- 1 pgdba pgdba  523862016 2008-12-06 01:43 26762.4
-rw--- 1 pgdba pgdba1204224 2008-12-06 01:43 26762_fsm
-rw--- 1 pgdba pgdba  81920 2008-12-06 01:53 26762_vm
-rw--- 1 pgdba pgdba 1073741824 2008-12-06 02:01 26765
-rw--- 1 pgdba pgdba 1073741824 2008-12-06 02:08 26765.1
-rw--- 1 pgdba pgdba 1073741824 2008-12-06 02:18 26765.2
-rw--- 1 pgdba pgdba 1073741824 2008-12-06 01:50 26765.3
-rw--- 1 pgdba pgdba 1073741824 2008-12-06 01:51 26765.4
-rw--- 1 pgdba pgdba  735141888 2008-12-06 02:00 26765.5
-rw--- 1 pgdba pgdba1523712 2008-12-06 02:00 26765_fsm
-rw--- 1 pgdba pgdba  98304 2008-12-06 02:18 26765_vm
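
As a side note, the _vm fork sizes above are roughly what one would expect
from one bit per heap page; a back-of-the-envelope check (assuming a BLCKSZ
of 8192 and ignoring per-page header overhead, which is why the estimate
comes out slightly below the real size):

```python
import math

BLCKSZ = 8192

def approx_vm_bytes(heap_bytes):
    """Rough lower bound for a visibility map fork: about one bit per
    heap page, rounded up to whole blocks (per-page headers ignored)."""
    heap_pages = heap_bytes // BLCKSZ
    bitmap_bytes = math.ceil(heap_pages / 8)
    return math.ceil(bitmap_bytes / BLCKSZ) * BLCKSZ

# First relation in the listing: three heap segments, 57344-byte _vm fork.
heap = 1073741824 + 1073741824 + 1065066496
print(approx_vm_bytes(heap))  # 49152 - a bit under the observed 57344,
                              # since real VM pages also carry a header
```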

What am I missing?

Best regards,

depesz




[HACKERS] trivial patch for pg_freespacemap documentation

2008-10-02 Thread hubert depesz lubaczewski
while reading the documentation for the pg_freespacemap contrib module i found
a small mistake - the functions are named pg_freespace and not
pg_freespacemap.

the attached patch changes the sgml documentation file accordingly.

best regards,

depesz

*** pgfreespacemap.sgml.orig	2008-10-02 11:55:44.0 +0200
--- pgfreespacemap.sgml	2008-10-02 11:56:09.0 +0200
***
*** 10,16 
   para
The filenamepg_freespacemap/ module provides a means for examining the
free space map (FSM). It provides a function called
!   functionpg_freespacemap/function, or two overloaded functions, to be
precise. The functions show the value recorded in the free space map for
a given page, or for all pages in the relation.
   /para
--- 10,16 
   para
The filenamepg_freespacemap/ module provides a means for examining the
free space map (FSM). It provides a function called
!   functionpg_freespace/function, or two overloaded functions, to be
precise. The functions show the value recorded in the free space map for
a given page, or for all pages in the relation.
   /para
***
*** 26,32 
variablelist
 varlistentry
  term
!  functionpg_freespacemap(rel regclass IN, blkno bigint IN) returns int2/function
  /term
  
  listitem
--- 26,32 
variablelist
 varlistentry
  term
!  functionpg_freespace(rel regclass IN, blkno bigint IN) returns int2/function
  /term
  
  listitem
***
*** 41,47 
  
 varlistentry
  term
!  functionpg_freespacemap(rel regclass IN, blkno OUT int4, avail OUT int2)/function
  /term
  
  listitem
--- 41,47 
  
 varlistentry
  term
!  functionpg_freespace(rel regclass IN, blkno OUT int4, avail OUT int2)/function
  /term
  
  listitem



Re: [HACKERS] Fast REVERSE() function?

2008-09-08 Thread hubert depesz lubaczewski
On Mon, Sep 08, 2008 at 11:20:18AM -0400, Chris Browne wrote:
 I've got a case where I need to reverse strings, and find that, oddly
 enough, there isn't a C-based reverse() function.
 A search turns up pl/pgsql and SQL implementations:

just for completeness - there are also pl/perl and c versions freely
available:
http://www.depesz.com/index.php/2007/07/30/indexable-field-like-something/
(pl/perl)
http://blog.frosties.org/post/2007/08/28/Fonction-reverse-C-avec-PostgreSQL
(c)
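
The reason the C version needs care is that a naive byte-wise reverse()
breaks multibyte encodings, which the pl/perl version avoids by working on
characters; a Python illustration of the difference (not taken from either
linked implementation):

```python
def reverse_chars(s):
    """Reverse by characters - safe for multibyte text."""
    return s[::-1]

def reverse_bytes_utf8(s):
    """Reverse the raw UTF-8 bytes - what a naive C reverse() does;
    breaks as soon as a character needs more than one byte."""
    return bytes(reversed(s.encode("utf-8")))

print(reverse_chars("zab"))       # baz
print(reverse_chars("za\u0107"))  # 'ćaz' - still valid text
try:
    reverse_bytes_utf8("za\u0107").decode("utf-8")
except UnicodeDecodeError:
    print("byte-wise reverse produced invalid UTF-8")
```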

Best regards,

depesz




Re: [HACKERS] [GENERAL] plperl and regexps with accented characters - incompatible?

2007-11-29 Thread hubert depesz lubaczewski
On Thu, Nov 29, 2007 at 12:39:30AM -0500, Andrew Dunstan wrote:
 The attached patch works for me to eliminate the errors. Please test ASAP.

tested, works for me:
#v+
# CREATE OR REPLACE FUNCTION test(TEXT) RETURNS bool language plperl as $$
return (shift =~ /[a-ząćęłńóśźżĄĆĘŁŃŚÓŹŻ0-9_-]+/i) || 0;
$$;
CREATE FUNCTION

# select test('depesz');
 test
--
 t
(1 row)

# select test('depesząćęł');
 test
--
 t
(1 row)

# select test('depesząćęł$');
 test
--
 t
(1 row)

# select test('dePEsząĆęł$');
 test
--
 t
(1 row)
#v-

depesz

-- 
quicksil1er: postgres is excellent, but like any DB it requires a
highly paid DBA.  here's my CV! :)
http://www.depesz.com/ - blog dla ciebie (i moje CV)

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] Stats collector on rampage (8.2.3)

2007-10-29 Thread hubert depesz lubaczewski
On Mon, Oct 29, 2007 at 09:52:24AM +0100, Gaetano Mendola wrote:
 it seems that the stats collector on my box is using more CPU than
 it did in the past.

it's a well-known bug, and it was fixed in 8.2.4:
http://www.postgresql.org/docs/current/interactive/release-8-2-4.html
...
Prevent the statistics collector from writing to disk too frequently
(Tom)
...

best regards,

depesz




Re: [HACKERS] stored procedure stats in collector

2007-09-26 Thread hubert depesz lubaczewski
On Thu, Sep 20, 2007 at 05:34:32PM -0700, Neil Conway wrote:
 That seems a confusing set of values. Perhaps off, pl, and all
 would be clearer?
 I'm curious if you've measured the performance overhead of enabling this
 functionality.

i'm quite worried about the "all" setting.
all operators are functions as well, so tracking literally all function
calls might be too heavy.

on the other hand i would like to be able to track some non-pl functions
like RI%.

depesz




Re: [HACKERS] crypting prosrc in pg_proc

2007-08-09 Thread hubert depesz lubaczewski
On Thu, Aug 09, 2007 at 03:39:06PM +0200, Hans-Juergen Schoenig wrote:
 so the idea we had was to add one more column to pg_proc telling us whether
 prosrc is encrypted or not. people could then choose whether to encrypt the
 code there or not (speed of decryption can be an issue).
 should not be hard to implement ...
 what do people think about this feature?

how would you decrypt the source before using the function?

depesz




Re: [HACKERS] request for feature: psql 'DSN' option

2006-07-08 Thread hubert depesz lubaczewski
On 7/7/06, Andrew Dunstan [EMAIL PROTECTED] wrote:
 Why is this better than using the service file?

the service file is not well described in the manual. a quick grep showed
that we have a PGSERVICE variable, and a pg_service.conf.sample which
doesn't even include all possible parameter names.

and - the comments in this file suggest it is to be used globally only
("Copy this to your sysconf directory (typically /usr/local/pgsql/etc)
and..."), and not per user.

i would really love to see pg_service.conf extended in such a way that i
could:
1. store it in $HOME/.pg_service.conf
2. use the name of the service on psql invocation (not as an env
variable), preferably as "psql service_name". this will conflict with
current behaviour (treating the argument as dbname), but with some basic
logic, i believe it can be done.

just my $0.02.

depesz


[HACKERS] inclusion of hstore software in main tarball

2006-05-03 Thread hubert depesz lubaczewski
hi

some time ago i was told on postgresql-general about the existence of the
hstore package for postgresql. as i understand it, it is being developed
by the same guys that are behind tsearch2, gin and (at least) recent
changes in gist.

would it be possible to include this software in the main postgresql
tarball? it would be best if it came as a standard feature, but a contrib
module would also work.

if you are not familiar with hstore - this is a new datatype for
postgresql which allows you to store (and browse and search) any number
of pairs of (key, value) in one field.

i have a number of projects coming to mind which could take advantage of
something like this. the simplest is an e-commerce site, where all custom
fields (fields dependent on product type) can be stored in one, easily
retrievable, field - thus reducing database load by skipping additional
table scans for custom field table(s).

if you would like to check it, the readme is at:
http://www.sai.msu.su/~megera/postgres/gist/hstore/README.hstore
and the software itself is downloadable from
http://www.sai.msu.su/~megera/postgres/gist/

best regards

hubert


Re: [HACKERS] problem with large maintenance_work_mem settings and

2006-03-04 Thread hubert depesz lubaczewski
On 3/4/06, Stefan Kaltenbrunner [EMAIL PROTECTED] wrote:
 forgot to mention that this is 8.1.3 compiled from source. Further
 testing shows that not only CREATE INDEX has some issue with large
 maintenance_work_mem settings :

what does it show:
cat /proc/sys/kernel/shmmax
?

depesz



[HACKERS] when can we get better partitioning?

2006-01-05 Thread hubert depesz lubaczewski
hi

i was really more than happy when i saw table partitioning in the release
info for 8.1. then i tried to use it, and hit some serious problems
(described on pgsql-general). basically the question is - is anybody at
the moment working on improving the partitioning capabilities? like
improving queries to partitioned tables with ORDER BY and LIMIT
statements?

if yes, when can we start thinking about a release with this
functionality? i, for my part, do not have enough C knowledge to do it
myself, yet i would *really* like to have this feature improved, as it
would save me and my coworkers a lot of work and hassle that we face
right now with rewriting queries to ask partitions directly instead of
the master table that got partitioned.

best regards,

depesz


Re: [HACKERS] inet to bigint?

2005-12-05 Thread hubert depesz lubaczewski
On 12/6/05, Michael Fuhr [EMAIL PROTECTED] wrote:
 How about:
 CREATE FUNCTION inet2num(inet) RETURNS numeric AS $$
 use Socket;
 return unpack("N", inet_aton($_[0]));
 $$ LANGUAGE plperlu IMMUTABLE STRICT;
you can use this one:
... AS $$
return unpack("N", pack("C4", split(/\./, $_[0])));
$$ language plperl IMMUTABLE STRICT;

to avoid the need to use untrusted languages.
it is less readable though :(

depesz
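The pack/unpack trick in both Perl one-liners above translates directly to other languages. As an illustration only (not part of the original thread), the same conversion in Python, both with and without a socket helper:

```python
import socket
import struct

def inet2num(ip: str) -> int:
    # Same idea as the Perl pack("C4")/unpack("N") one-liner above:
    # pack the four dotted-quad octets as bytes, then read them back
    # as a single big-endian unsigned 32-bit integer.
    return struct.unpack('!I', bytes(int(o) for o in ip.split('.')))[0]

def inet2num_socket(ip: str) -> int:
    # Variant analogous to inet_aton + unpack("N", ...) from Socket
    return struct.unpack('!I', socket.inet_aton(ip))[0]

print(inet2num('192.168.1.1'))          # 3232235777
print(inet2num_socket('192.168.1.1'))   # 3232235777
```

Both routines agree because `socket.inet_aton` produces exactly the four-octet big-endian byte string that the manual `bytes(...)` construction builds.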