Re: [HACKERS] pg_basebackup --progress output for batch execution

2017-11-10 Thread Martín Marqués
Hi,

Thanks for having a look at this patch.

2017-11-09 20:55 GMT-03:00 Jeff Janes :
> On Fri, Sep 29, 2017 at 4:00 PM, Martin Marques
>  wrote:
>>
>> Hi,
>>
>> Some time ago I had to work on a system where I was cloning a standby
>> using pg_basebackup, that didn't have screen or tmux. For that reason I
>> redirected the output to a file and ran it with nohup.
>>
>> I normally (always actually ;) ) run pg_basebackup with --progress and
>> --verbose so I can follow how much has been done. When done on a tty you
>> get a nice progress bar with the percentage that has been cloned.
>>
>> The problem came with the execution and redirection of the output, as
>> the --progress option will write a *very* long line!
>>
>> Back then I thought of writing a patch (actually someone suggested I do
>> so) to add a --batch-mode option which would change the behavior
>> pg_basebackup has when printing the output messages.
>
>
>
> While separate lines in the output file is better than one very long line,
> it still doesn't seem so useful.  If you aren't watching it in real time,
> then you really need to have a timestamp on each line so that you can
> interpret the output.  The lines are about one second apart, but I don't
> know how robust that timing is under all conditions.

I kind of disagree with your view here.

If the cloning process takes many hours to complete (in my case it
was around 12 hours, IIRC) you might want to peek at the log every now
and then with tail.

I do agree on adding a timestamp prefix to each line, as it's not
clear from the code how often progress_report() is called.

> I think I agree with Arthur that I'd rather have the decision made by
> inspecting whether output is going to a tty, rather than by adding another
> command line option.  But maybe that is not detected robustly enough across
> all platforms and circumstances?

In this case, Arthur's idea is good, but it would make the patch less
generic/flexible for the end user.

That's why I tried to reproduce what top does when executed with -b
(batch mode operation). There, it's the end user who decides how the
output is formatted (well, saying they decide on formatting is a bit
of an overstatement, but you get the idea ;) )

An example where using isatty() might fail is running pg_basebackup
from a tty but redirecting the output to a file. I believe isatty()
will return true in that case, yet it's very likely that the user
wants batch mode output.

But maybe we should also add Arthur's idea anyway (when not in batch
mode), as it seems pretty lame to output progress on one line if you
are not on a tty.
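The decision logic discussed above can be sketched in a few lines of C. This is a hedged illustration, not pg_basebackup's actual code: `use_batch_progress` and `report_progress` are hypothetical names, `wants_batch_flag` stands in for the proposed --batch-mode option, and the isatty() test is Arthur's fallback idea.

```c
#include <stdio.h>
#include <stdbool.h>
#include <unistd.h>

/* Hypothetical sketch: decide whether to use batch-style progress.
 * An explicit flag (the proposed --batch-mode) wins; otherwise fall
 * back to detecting whether progress output goes to a terminal. */
static bool use_batch_progress(bool wants_batch_flag)
{
    if (wants_batch_flag)
        return true;
    /* progress output goes to stderr, so test that stream */
    return !isatty(fileno(stderr));
}

/* In batch mode, emit one full line per report (easy to tail or
 * timestamp); on a tty, rewrite the same line with '\r' to get the
 * familiar single-line progress bar. */
static void report_progress(long done_kb, long total_kb, bool batch)
{
    fprintf(stderr, "%ld/%ld kB (%d%%)%c",
            done_kb, total_kb,
            (int) ((done_kb * 100) / total_kb),
            batch ? '\n' : '\r');
}
```

With this split, redirecting output to a file automatically yields tail-friendly lines, while the explicit flag still lets the user force batch output from a tty.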

Thoughts?

-- 
Martín Marqués http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bug with pg_basebackup and 'shared' tablespace

2017-09-26 Thread Martín Marqués
El 13/09/17 a las 14:17, Pierre Ducroquet escribió:
> + boolfirstfile = 1;

You are still assigning 1 to a bool (which is not incorrect) instead
of true or false, as Michael mentioned before.
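For illustration, the idiomatic form the review asks for looks like this. This is a hypothetical sketch, not the patch's actual code:

```c
#include <stdbool.h>

/* Hypothetical illustration of the review comment: prefer true/false
 * over 1/0 when initializing a bool. Assigning 1 compiles and behaves
 * the same, which is why it is "not incorrect", just not the
 * project's preferred style. */
static bool make_first_file_flag(void)
{
    bool firstfile = true;   /* rather than: bool firstfile = 1; */
    return firstfile;
}
```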

P.S.: I didn't go through the whole thread and patch in depth, so I
won't comment further.

-- 
Martín Marqués http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Index corruption with CREATE INDEX CONCURRENTLY

2017-02-06 Thread Martín Marqués
El 05/02/17 a las 21:57, Tomas Vondra escribió:
> 
> +1 to not rushing fixes into releases. While I think we now finally
> understand the mechanics of this bug, the fact that we came up with
> three different fixes in this thread, only to discover issues with each
> of them, warrants some caution.

I also agree with Robert on not rushing the patch. My point was about
whether we had to rush the release.

> OTOH I disagree with the notion that bugs that are not driven by user
> reports are somehow less severe. Some data corruption bugs cause quite
> visible breakage - segfaults, immediate crashes, etc. Those are pretty
> clear bugs, and are reported by users.
> 
> Other data corruption bugs are much more subtle - for example this bug
> may lead to incorrect results to some queries, failing to detect values
> violating UNIQUE constraints, issues with foreign keys, etc.

Just yesterday, after sending the mail, I was recalling a logical
replication setup we did on a customer's 9.3 server which brought up
data inconsistencies on the primary key of one of the tables. The
table had duplicate values.

As Tomas says, it's subtle and hard to find unless you constantly run
index checks (query a sample of the data from the heap and from the
index and check that they match). In our case, the customer was not
aware of the duplicates until we found them.

Regards,

-- 
Martín Marqués http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Index corruption with CREATE INDEX CONCURRENTLY

2017-02-05 Thread Martín Marqués
El 05/02/17 a las 10:03, Michael Paquier escribió:
> On Sun, Feb 5, 2017 at 6:53 PM, Pavel Stehule  wrote:
>> I agree with Pavan - a release with known important bug is not good idea.
> 
> This bug has been around for some time, so I would recommend taking
> the time necessary to make the best fix possible, even if it means
> waiting for the next round of minor releases.

The fact that the bug has been around for a long time doesn't mean we
shouldn't take it seriously.

IMO any kind of corruption (heap or index) should be prioritized.
There have also been comments suggesting this might be the cause of
old reports about index corruption.

I ask myself whether it's a good idea to make a point release with a
known corruption bug in it.

Regards,

-- 
Martín Marqués http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Sample configuration files

2016-09-02 Thread Martín Marqués
El 02/09/16 a las 04:19, Vik Fearing escribió:
> 
>> 2. But I'm not sure that this will actually be useful to people.  It
>> seems like it might just be one more thing for patch authors to
>> maintain.  I think that if somebody wants to set a parameter defined
>> for a contrib module, it's easy enough to just add an entry into
>> postgresql.conf, or use ALTER SYSTEM .. SET.  Going and finding the
>> sample file (which only sets the value to the default) and then
>> putting that into your postgresql.conf seems like an extra step.
> 
> I was imagining just using the "include" directive.  I have heard the
> desire for annotated sample conf files for the contrib modules twice now
> from different people which is why I wrote the patch.  If we decide that
> the extra documentation is too much of a burden, I can understand that,
> also.

I think having a sample configuration file which we can include, and
in which we make the configuration changes specific to that contrib
module, has its logic, and I believe it's a good idea to include it.

We suggest having a separate bdr.conf file with all the BDR
configuration parameters, so this makes total sense.

I will take a closer look at the patch and get back (I didn't find
anything wrong at first glance).

Regards,

-- 
Martín Marqués http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] pg_dump with tables created in schemas created by extensions

2016-08-30 Thread Martín Marqués
2016-08-30 2:02 GMT-03:00 Michael Paquier :
> On Tue, Aug 30, 2016 at 5:43 AM, Martín Marqués  
> wrote:
>> This is v4 of the patch, which is actually a cleaner version from the
>> v2 one Michael sent.
>>
>> I stripped off the external index created from the tests as that index
>> shouldn't be dumped (table it belongs to isn't dumped, so neither
>> should the index). I also took off a test which was duplicated.
>>
>> I think this patch is a very good first approach. Future improvements
>> can be made for indexes, but we need to get the extension dependencies
>> right first. That could be done later, on a different patch.
>>
>> Thoughts?
>
> Let's do as you suggest then, and just focus on the schema issue. I
> just had an extra look at the patch and it looks fine to me. So the
> patch is now switched as ready for committer.

That's great. Thanks for everything, Michael.


-- 
Martín Marqués http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] pg_dump with tables created in schemas created by extensions

2016-08-29 Thread Martín Marqués
Hi,

This is v4 of the patch, which is actually a cleaner version from the
v2 one Michael sent.

I stripped off the external index created from the tests as that index
shouldn't be dumped (table it belongs to isn't dumped, so neither
should the index). I also took off a test which was duplicated.

I think this patch is a very good first approach. Future improvements
can be made for indexes, but we need to get the extension dependencies
right first. That could be done later, on a different patch.

Thoughts?

-- 
Martín Marqués http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


pgdump-extension-v4.patch
Description: invalid/octet-stream



Re: [HACKERS] pg_dump with tables created in schemas created by extensions

2016-08-29 Thread Martín Marqués
2016-08-29 4:51 GMT-03:00 Michael Paquier :
>
>> I see the current behavior is documented, and I do understand why global
>> objects can't be part of the extension, but for indexes it seems to violate
>> POLA a bit.
>>
>> Is there a reason why we don't want the extension/index dependencies?
>
> I think that we could do a better effort for indexes at least, in the
> same way as we do for sequences as both are referenced in pg_class. I
> don't know the effort to get that done for < 9.6, but if we can do it
> at least for 9.6 and 10, which is where pg_dump is a bit smarter in
> the way it deals with dependencies, we should do it.

ATM I don't have a strong opinion one way or the other regarding the
dependency between indexes and extensions. I believe we have to put
more thought into it, and in the end we might just leave it as it is.

What I do believe is that this requires a separate thread, and if
agreed, a separate patch from this issue.

I'm going to prepare another patch in which I strip out the tests for
external indexes which are failing now. They actually fail correctly,
as the table they depend on will not be dumped, so it's the
developer/DB designer who has to take care of these things.

If in the near or not so near future we provide a patch to deal with
these missing dependencies, we can easily patch pg_dump so it deals
with this correctly.

Regards,

-- 
Martín Marqués http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] pg_dump with tables created in schemas created by extensions

2016-08-26 Thread Martín Marqués
2016-08-26 19:37 GMT-03:00 Tom Lane :
> Martín Marqués  writes:
>> Looking at this issue today, I found that we are not setting a
>> dependency for an index created inside an extension.
>
> Surely the index has a dependency on a table, which depends on the
> extension?
>
> If you mean that you want an extension to create an index on a table that
> doesn't belong to it, but that it assumes pre-exists, I think that's just
> stupid and we need not support it.

Well, there's still the second pattern I mentioned before (which
actually came up while working on this patch).

Extension creates a table and an index over one of the columns:

CREATE TABLE regress_pg_dump_schema.test_table (
col1 int,
col2 int
);

CREATE INDEX test_extension_index ON regress_pg_dump_schema.test_table (col2);


Later, some application (or a user, it doesn't really matter) creates
a second index over col1:

CREATE INDEX test_index ON regress_pg_dump_schema.test_table (col1);

What we are doing (or at least what I understand from the code) is
checking whether the table depends on an extension, and if so we don't
dump it.

We should be able to use the same procedure (and reuse the code we
already have) to decide if an index should be dumped or not. But we
are missing the dependency, and so it's not possible to know that
regress_pg_dump_schema.test_extension_index depends on the extension
and regress_pg_dump_schema.test_index doesn't.

Or is this something we shouldn't support? (In that case we should document it.)

-- 
Martín Marqués http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] pg_dump with tables created in schemas created by extensions

2016-08-26 Thread Martín Marqués
Hi,

2016-08-26 10:53 GMT-03:00 Martín Marqués :
>
> There's still one issue, which I'll add a test for as well, which is
> that if the index was created by the extension, it will be dumped
> anyway. I'll have a look at that as well.

Looking at this issue today, I found that we are not setting a
dependency for an index created inside an extension. I don't know if
it's deliberate or an oversight.

The thing is that we can't check pg_depend for dependency of an index
and the extension that creates it.

I was talking with other developers, and we kind of agree this is a
bug, for two reasons:

*) If the extension only creates an index over an existing table, a
drop extension will not drop that index

*) We need to have the dependency for this patch as well, or else
we'll end up with an inconsistent dump, or at least one that could
restore with a non-zero return code.

I'll open a separate bug report for this.

Regards,

-- 
Martín Marqués http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] pg_dump with tables created in schemas created by extensions

2016-08-26 Thread Martín Marqués
Hi,

2016-08-25 8:10 GMT-03:00 Michael Paquier :
> On Thu, Aug 25, 2016 at 10:25 AM, Martín Marqués  
> wrote:
>> 2016-08-24 21:34 GMT-03:00 Michael Paquier :
>>>
>>> Yes, you are right. If I look at the diffs this morning I am seeing
>>> the ACLs being dumped for this aggregate. So we could just fix the
>>> test and be done with it. I did not imagine the index issue though,
>>> and this is broken for some time, so that's not exclusive to 9.6 :)
>>
>> Do you see any easier way than what I mentioned earlier (adding a
>> selectDumpableIndex() function) to fix the index dumping issue?
>
> Yes, we are going to need something across those lines. And my guess
> is that this is going to be rather close to getOwnedSeqs() in terms of
> logic.

I was able to get this fixed without any further new functions (just
using the dump/dump_contains and applying the same fix on
selectDumpableTable).

The main problem lay here, in getIndexes():

@@ -6158,7 +6167,7 @@ getIndexes(Archive *fout, TableInfo tblinfo[], int numTables)
 			continue;
 
 		/* Ignore indexes of tables whose definitions are not to be dumped */
-		if (!(tbinfo->dobj.dump & DUMP_COMPONENT_DEFINITION))
+		if (!(tbinfo->dobj.dump_contains & DUMP_COMPONENT_DEFINITION))
 			continue;
 
 		if (g_verbose)

But we have to set dump_contains with correct values.

There's still one issue, which I'll add a test for as well: if the
index was created by the extension, it will be dumped anyway. I'll
have a look at that too.

One other thing I found was that one of the CREATE INDEX tests had
like and unlike incorrectly set for pre_data and post_data (indexes
are dumped in the post_data section).

That's been fixed as well.

I've cleaned up the patch a bit, so this is v3 with all checks passing.

I'll add that new test regarding dumping an index created by the
extension (which will fail) and look for ways to fix it.

Regards,

-- 
Martín Marqués http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


pgdump-extension-v3.patch
Description: invalid/octet-stream



Re: [HACKERS] pg_dump with tables created in schemas created by extensions

2016-08-24 Thread Martín Marqués
2016-08-24 21:34 GMT-03:00 Michael Paquier :
>
> Yes, you are right. If I look at the diffs this morning I am seeing
> the ACLs being dumped for this aggregate. So we could just fix the
> test and be done with it. I did not imagine the index issue though,
> and this is broken for some time, so that's not exclusive to 9.6 :)

Hi Michael,

Do you see any easier way than what I mentioned earlier (adding a
selectDumpableIndex() function) to fix the index dumping issue?

Regards,

-- 
Martín Marqués http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] pg_dump with tables created in schemas created by extensions

2016-08-24 Thread Martín Marqués
2016-08-24 17:01 GMT-03:00 Martín Marqués :
> 2016-08-24 11:15 GMT-03:00 Stephen Frost :
>> Michael,
>>
>> * Michael Paquier (michael.paqu...@gmail.com) wrote:
>>> The patch attached includes all those tests and they are failing. We
>>> are going to need a patch able to pass all that, and even for master
>>> this is going to need more thoughts, and let's focus on HEAD/9.6
>>> first.
>>
>> Are you sure you have the tests correct..?   At least for testagg(), it
>> looks like you're testing for:
>>
>> GRANT ALL ON FUNCTION test_agg(int2) TO regress_dump_test_role;
>>
> but what's in the dump is (equivalently):
>>
>> GRANT ALL ON FUNCTION test_agg(smallint) TO regress_dump_test_role;
>
> Yes, that was the problem there.
>
>> I've not looked into all the failures, but at least this one seems like
>> an issue in the test, not an issue in pg_dump.
>
> I see the other 12 failures regarding the CREATE INDEX tests that
> Michael reported, but can't quite find where they originate (or
> actually where the problem is).

OK, I see where the problem is.

Indexes don't have a selectDumpableIndex() function to decide whether
we dump them or not. We just don't gather indexes from tables whose
definitions are not being dumped:

/* Ignore indexes of tables whose definitions are not to be dumped */
if (!(tbinfo->dobj.dump & DUMP_COMPONENT_DEFINITION))
    continue;

This means we have to make the same change we did in
selectDumpableNamespace for selectDumpableTable, and also assign the
correct value to dump_contains, which is not set there.

The problem will come when we have to decide on which indexes were
created by the extension (primary key indexes, other indexes created
by the extension) and which were created afterwards over a table which
depends on the extension (the test_table from the extension).

Right now I'm in an intermediate state, where I got getIndexes() to
query the indexes of these tables, but dumpIndexes is not dumping the
indexes that were queried.

I wonder if we should have a selectDumpableIndex() to set the
appropriate dobj.dump for the indexes.
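The dump vs. dump_contains distinction driving this discussion can be sketched as follows. This is a hedged illustration: the flag names mirror pg_dump's, but the struct and function are invented for this example and are not pg_dump's actual code.

```c
#include <stdbool.h>

/* Hypothetical sketch of pg_dump's component flags: dump controls the
 * object itself, dump_contains controls the objects inside it. */
enum
{
    DUMP_COMPONENT_NONE = 0,
    DUMP_COMPONENT_DEFINITION = 1 << 0
};

typedef struct
{
    int dump;           /* components of this object to dump */
    int dump_contains;  /* components of contained objects to dump */
} NamespaceSketch;

/* A schema created by an extension: its own definition is skipped
 * (CREATE EXTENSION recreates it), but user tables created inside it
 * afterwards should still be dumped, hence checking dump_contains
 * rather than dump. */
static bool should_dump_contained_table(const NamespaceSketch *ns)
{
    return (ns->dump_contains & DUMP_COMPONENT_DEFINITION) != 0;
}
```

Under this model, the getIndexes() bug is exactly a check against the wrong field: testing the table's own dump flag where the containing schema's dump_contains semantics should propagate.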

-- 
Martín Marqués http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] pg_dump with tables created in schemas created by extensions

2016-08-24 Thread Martín Marqués
2016-08-24 11:15 GMT-03:00 Stephen Frost :
> Michael,
>
> * Michael Paquier (michael.paqu...@gmail.com) wrote:
>> The patch attached includes all those tests and they are failing. We
>> are going to need a patch able to pass all that, and even for master
>> this is going to need more thoughts, and let's focus on HEAD/9.6
>> first.
>
> Are you sure you have the tests correct..?   At least for testagg(), it
> looks like you're testing for:
>
> GRANT ALL ON FUNCTION test_agg(int2) TO regress_dump_test_role;
>
> but what's in the dump is (equivalently):
>
> GRANT ALL ON FUNCTION test_agg(smallint) TO regress_dump_test_role;

Yes, that was the problem there.

> I've not looked into all the failures, but at least this one seems like
> an issue in the test, not an issue in pg_dump.

I see the other 12 failures regarding the CREATE INDEX tests that
Michael reported, but can't quite find where they originate (or
actually where the problem is).

-- 
Martín Marqués http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] pg_dump with tables created in schemas created by extensions

2016-08-23 Thread Martín Marqués
Hi,

2016-08-23 16:46 GMT-03:00 Martín Marqués :
>
> I will add tests for sequence and functions as you mention and test again.
>
> Then I'll check if other tests should be added as well.

I found quite a few other objects we should be checking as well, but
this will add some duplication to the tests, as I'd just copy (with
minor changes) what's in src/bin/pg_dump/t/002_pg_dump.pl.

I can't think of a way to avoid this duplication, not that it really
hurts. We would have to make sure that any new objects added to one
test are, if needed, added to the other (which is a bit cumbersome).

Other things to check:

CREATE AGGREGATE
CREATE DOMAIN
CREATE FUNCTION
CREATE TYPE
CREATE MATERIALIZED VIEW
CREATE POLICY

Maybe even CREATE INDEX over a table created in the schema.

Also, ACLs have to be tested against objects in the schema.

I hope I didn't miss anything there.

-- 
Martín Marqués http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] pg_dump with tables created in schemas created by extensions

2016-08-23 Thread Martín Marqués
Hi Michael,

2016-08-23 5:02 GMT-03:00 Michael Paquier :
> On Sat, Aug 13, 2016 at 6:58 AM, Martín Marqués  
> wrote:
>> I believe the fix will be simple after the back and forth mails with
>> Michael, Stephen and Tom. I will work on that later, but preferred to
>> have the tests the show the problem which will also make testing the fix
>> easier.
>>
>> Thoughts?
>
> It seems to me that we are going to need a bit more coverage for more
> object types depending on the code paths that are being changed. For
> example, sequences or functions should be checked for as well, and not
> only tables. By the way, do you need help to build a patch or should I
> jump in?

I wanted to test what I had in mind with one object, and then see if
any replication was needed for other objects.

I was struggling over the last few days, as what I was reading in my
patched pg_dump.c should have worked as expected and dumped the tables
not created by the test_pg_dump extension but located inside the
schema regress_pg_dump_schema.

Today I decided to go over the test I wrote, and found a bug there,
which was why I couldn't get a successful make check.

Here go two patches. One is a fix for the test I sent earlier. The
other is the idea Tom proposed, using the dump_contains field that
Stephen committed in 9.6.

So far I've checked that it fixes the dumpable flag for tables, but I
think it should work for all other objects as well, as all this patch
does is move the execution of checkExtensionMembership to the end of
selectDumpableNamespace, leaving dump_contains untouched.

Checks pass ok.

I will add tests for sequence and functions as you mention and test again.

Then I'll check if other tests should be added as well.

-- 
Martín Marqués http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
From 6cbba9e87d1733ccb03d3d705e135e607be63cba Mon Sep 17 00:00:00 2001
From: Martin 
Date: Tue, 23 Aug 2016 16:17:38 -0300
Subject: [PATCH 1/2] New tests for pg_dump which make sure that tables from a
 schema depending on an extension will get dumped when they don't depend on
 the extension.

The only objects we shouldn't dump are the ones created by the extension.

We will provide a patch that fixes this bug.
---
 src/test/modules/test_pg_dump/t/001_base.pl| 25 ++
 .../modules/test_pg_dump/test_pg_dump--1.0.sql |  9 
 2 files changed, 34 insertions(+)

diff --git a/src/test/modules/test_pg_dump/t/001_base.pl b/src/test/modules/test_pg_dump/t/001_base.pl
index fb4f573..7bb92e7 100644
--- a/src/test/modules/test_pg_dump/t/001_base.pl
+++ b/src/test/modules/test_pg_dump/t/001_base.pl
@@ -283,6 +283,31 @@ my %tests = (
 			schema_only=> 1,
 			section_pre_data   => 1,
 			section_post_data  => 1, }, },
+	'CREATE TABLE regress_test_schema_table' => {
+		create_order => 3,
+		create_sql   => 'CREATE TABLE regress_pg_dump_schema.test_schema_table (
+		   col1 serial primary key,
+		   CHECK (col1 <= 1000)
+	   );',
+		regexp => qr/^
+			\QCREATE TABLE test_schema_table (\E
+			\n\s+\Qcol1 integer NOT NULL,\E
+			\n\s+\QCONSTRAINT test_schema_table_col1_check CHECK \E
+			\Q((col1 <= 1000))\E
+			\n\);/xm,
+		like => {
+			binary_upgrade => 1,
+			clean  => 1,
+			clean_if_exists=> 1,
+			createdb   => 1,
+			defaults   => 1,
+			no_privs   => 1,
+			no_owner   => 1,
+			schema_only=> 1,
+			section_pre_data   => 1, },
+		unlike => {
+			pg_dumpall_globals => 1,
+			section_post_data  => 1, }, },
 	'CREATE ACCESS METHOD regress_test_am' => {
 		regexp => qr/^
 			\QCREATE ACCESS METHOD regress_test_am TYPE INDEX HANDLER bthandler;\E
diff --git a/src/test/modules/test_pg_dump/test_pg_dump--1.0.sql b/src/test/modules/test_pg_dump/test_pg_dump--1.0.sql
index c2fe90d..3f88e6c 100644
--- a/src/test/modules/test_pg_dump/test_pg_dump--1.0.sql
+++ b/src/test/modules/test_pg_dump/test_pg_dump--1.0.sql
@@ -10,6 +10,15 @@ CREATE TABLE regress_pg_dump_table (
 
 CREATE SEQUENCE regress_pg_dump_seq;
 
+-- We want to test that schemas and objects created in the schema by the
+-- extension are not dumped, yet other objects created afterwards will be
+-- dumped.
+CREATE SCHEMA regress_pg_dump_schema
+   CREATE TABLE regress_pg_dump_schema_table (
+  col1 serial,
+  col2 int
+	);
+
 GRANT USAGE ON regress_pg_dump_seq TO regress_dump_test_role;
 
 GRANT SELECT ON regress_pg_dump_table TO regress_dump_test_role;
-- 
2.5.5

From 0dfe2224fae86ab2b6f8da4fc25b96fb13ca0f6c Mon Sep 17 00:00:00 2001
From: Martin 
Date: Tue, 23 Aug 2016 16:23:44 -0300
Subject: [PATCH 2/2] This patch fixes a bug reported against pg_dump, and
 makes pg_dump dump the objects contained in schemas depending on an
 extension.

Schema will not be dumped, a

[HACKERS] pg_dump with tables created in schemas created by extensions

2016-08-12 Thread Martín Marqués
Hi,

About a month or two ago I reported a pg_dump bug regarding tables
(and other objects) created inside a schema that comes from an
extension.

Objects created by the extension are not dumped, as they will be
created again by the CREATE EXTENSION call, but other objects which
might live inside an object created by the extension should be dumped
so they get created inside the same schema.

The problem showed up when dumping a DB with PgQ installed as an
extension. Check here:

https://www.postgresql.org/message-id/d86dd685-1870-cfa0-e5e4-def1f918bec9%402ndquadrant.com

and here:

https://www.postgresql.org/message-id/409fe594-f4cc-89f5-c0d2-0a921987a864%402ndquadrant.com

Some discussion came up on the bugs list on how to fix the issue, and
the fact that new tests were needed.

I'm attaching a patch to provide such tests; applied now, it returns
failures on a number of runs, all expected due to the bug we have at
hand.

I believe the fix will be simple after the back-and-forth mails with
Michael, Stephen and Tom. I will work on that later, but preferred to
first have tests that show the problem, which will also make testing
the fix easier.

Thoughts?

Regards,

-- 
Martín Marqués http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
diff --git a/src/test/modules/test_pg_dump/t/001_base.pl b/src/test/modules/test_pg_dump/t/001_base.pl
new file mode 100644
index fb4f573..6086317
*** a/src/test/modules/test_pg_dump/t/001_base.pl
--- b/src/test/modules/test_pg_dump/t/001_base.pl
*** my %tests = (
*** 283,288 
--- 283,313 
  			schema_only=> 1,
  			section_pre_data   => 1,
  			section_post_data  => 1, }, },
+ 	'CREATE TABLE regress_test_schema_table' => {
+ 		create_order => 3,
+ 		create_sql   => 'CREATE TABLE regress_pg_dump_schema.test_schema_table (
+ 		   col1 serial primary key,
+ 		   CHECK (col1 <= 1000)
+ 	   );',
+ 		regexp => qr/^
+ 			\QCREATE TABLE test_schema_table (\E
+ 			\n\s+\Qcol1 integer NOT NULL,\E
+ 			\n\s+\QCONSTRAINT test_table_col1_check CHECK \E
+ 			\Q((col1 <= 1000))\E
+ 			\n\);/xm,
+ 		like => {
+ 			binary_upgrade => 1,
+ 			clean  => 1,
+ 			clean_if_exists=> 1,
+ 			createdb   => 1,
+ 			defaults   => 1,
+ 			no_privs   => 1,
+ 			no_owner   => 1,
+ 			schema_only=> 1,
+ 			section_pre_data   => 1, },
+ 		unlike => {
+ 			pg_dumpall_globals => 1,
+ 			section_post_data  => 1, }, },
  	'CREATE ACCESS METHOD regress_test_am' => {
  		regexp => qr/^
  			\QCREATE ACCESS METHOD regress_test_am TYPE INDEX HANDLER bthandler;\E
diff --git a/src/test/modules/test_pg_dump/test_pg_dump--1.0.sql b/src/test/modules/test_pg_dump/test_pg_dump--1.0.sql
new file mode 100644
index c2fe90d..3f88e6c
*** a/src/test/modules/test_pg_dump/test_pg_dump--1.0.sql
--- b/src/test/modules/test_pg_dump/test_pg_dump--1.0.sql
*** CREATE TABLE regress_pg_dump_table (
*** 10,15 
--- 10,24 
  
  CREATE SEQUENCE regress_pg_dump_seq;
  
+ -- We want to test that schemas and objects created in the schema by the
+ -- extension are not dumped, yet other objects created afterwards will be
+ -- dumped.
+ CREATE SCHEMA regress_pg_dump_schema
+CREATE TABLE regress_pg_dump_schema_table (
+   col1 serial,
+   col2 int
+ 	);
+ 
  GRANT USAGE ON regress_pg_dump_seq TO regress_dump_test_role;
  
  GRANT SELECT ON regress_pg_dump_table TO regress_dump_test_role;



Re: [HACKERS] [GENERAL] PgQ and pg_dump

2016-06-21 Thread Martín Marqués
2016-06-21 13:08 GMT-03:00 Robert Haas :
> On Thu, Jun 16, 2016 at 1:46 PM, Martín Marqués  
> wrote:
>> The comment is accurate about what is going to be dumpable and what's not
>> from the code. In our case, as the pgq schema is not dumpable because
>> it comes from an extension, the other objects it contains will not be
>> dumpable either.
>>
>> That's the reason why the PgQ event tables created by
>> pgq.create_queue() are not dumped.
>
> That sucks.

Yes, and I'm surprised we haven't had any bug reports yet about
inconsistent dumps. The patch that changed pg_dump's behavior on
extension objects is more than a year old.

I'll find some time today to add tests and check for other objects
that are not dumped for the same reason.

Cheers,

-- 
Martín Marqués    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] [GENERAL] PgQ and pg_dump

2016-06-16 Thread Martín Marqués
Hi,

2016-06-16 9:48 GMT-03:00 Michael Paquier :
> On Thu, Jun 16, 2016 at 8:37 PM, Martín Marqués  
> wrote:
>> On 16/06/16 at 00:08, Michael Paquier wrote:
>>> On Wed, Jun 15, 2016 at 7:19 PM, Martín Marqués  
>>> wrote:
>>>>
>>>> How would the recovery process work? We expect the schema to be there
>>>> when restoring the tables?
>>>
>>> pg_dump creates the schema first via the CREATE EXTENSION command,
>>> then tables dependent on this schema that are not created by the
>>> extension are dumped individually.
>>
>> That's not the behavior I'm seeing here:
>> [long test]
>
> Yes, that's why I completely agree that this is a bug :)
> I am seeing the same behavior as you do.

That's nice, we agree to agree! :)

So, after reading back and forth, the reason why the tables are not
being dumped is noted here in the code:

/*
 * If specific tables are being dumped, dump just those tables; else,
 * dump according to the parent namespace's dump flag.
 */
if (table_include_oids.head != NULL)
	tbinfo->dobj.dump = simple_oid_list_member(&table_include_oids,
											   tbinfo->dobj.catId.oid) ?
		DUMP_COMPONENT_ALL : DUMP_COMPONENT_NONE;
else
	tbinfo->dobj.dump = tbinfo->dobj.namespace->dobj.dump_contains;


The comment is accurate about what is going to be dumpable and what's not
from the code. In our case, as the pgq schema is not dumpable because
it comes from an extension, the other objects it contains will not be
dumpable either.

That's the reason why the PgQ event tables created by
pgq.create_queue() are not dumped.
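
For illustration, a minimal sketch of the same failure mode without PgQ. Everything here (the extension name `myext`, its schema, and the table) is hypothetical, made up for the example:

```sql
-- Assume CREATE EXTENSION myext ran an extension script containing:
--     CREATE SCHEMA myext_schema;
-- so the schema belongs to the extension and is marked not dumpable.

-- A user later creates their own table inside that schema:
CREATE TABLE myext_schema.user_events (id serial PRIMARY KEY, payload text);

-- Since the table is dumped "according to the parent namespace's dump
-- flag", and myext_schema's flag is DUMP_COMPONENT_NONE, pg_dump skips
-- user_events too: it is silently missing after a dump/restore.
```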

-- 
Martín Marqués    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] [GENERAL] PgQ and pg_dump

2016-06-16 Thread Martín Marqués
On 16/06/16 at 09:48, Michael Paquier wrote:
> On Thu, Jun 16, 2016 at 8:37 PM, Martín Marqués  
> wrote:
> 
>> This problem came up due to a difference between pg_dump on 9.1.12 and
>> 9.1.22 (I believe it was due to a patch on pg_dump that excluded the
>> dependent objects from being dumped), but here I'm using 9.5.3:
> 
> Hm. I don't recall anything in pg_dump lately except ebd092b, but that
> fixed another class of problems.

I believe it was this one:

commit 5108013dbbfedb5e5af6a58cde5f074d895c46bf
Author: Tom Lane 
Date:   Wed Jan 13 18:55:27 2016 -0500

Handle extension members when first setting object dump flags in
pg_dump.

Regards,

-- 
Martín Marqués    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] [GENERAL] PgQ and pg_dump

2016-06-16 Thread Martín Marqués
On 16/06/16 at 00:08, Michael Paquier wrote:
> On Wed, Jun 15, 2016 at 7:19 PM, Martín Marqués  
> wrote:
>>
>> How would the recovery process work? We expect the schema to be there
>> when restoring the tables?
> 
> pg_dump creates the schema first via the CREATE EXTENSION command,
> then tables dependent on this schema that are not created by the
> extension are dumped individually.

That's not the behavior I'm seeing here:

pruebas=# create extension pgq;
CREATE EXTENSION

pruebas=# select pgq.create_queue('personas');
 create_queue
--------------
            1
(1 row)

pruebas=# select pgq.create_queue('usuarios');
 create_queue
--------------
            1
(1 row)

pruebas=# select pgq.create_queue('usuarios_activos');
 create_queue
--------------
            1
(1 row)

pruebas=# select pgq.create_queue('usuarios_inactivos');
 create_queue
--------------
            1
(1 row)

pruebas=# select count(*) from pgq.tick;
 count
-------
     4
(1 row)

pruebas=# \dt pgq.*
            List of relations
 Schema |      Name      | Type  |  Owner
--------+----------------+-------+----------
 pgq    | consumer       | table | postgres
 pgq    | event_1        | table | postgres
 pgq    | event_1_0      | table | postgres
 pgq    | event_1_1      | table | postgres
 pgq    | event_1_2      | table | postgres
 pgq    | event_2        | table | postgres
 pgq    | event_2_0      | table | postgres
 pgq    | event_2_1      | table | postgres
 pgq    | event_2_2      | table | postgres
 pgq    | event_3        | table | postgres
 pgq    | event_3_0      | table | postgres
 pgq    | event_3_1      | table | postgres
 pgq    | event_3_2      | table | postgres
 pgq    | event_4        | table | postgres
 pgq    | event_4_0      | table | postgres
 pgq    | event_4_1      | table | postgres
 pgq    | event_4_2      | table | postgres
 pgq    | event_template | table | postgres
 pgq    | queue          | table | postgres
 pgq    | retry_queue    | table | postgres
 pgq    | subscription   | table | postgres
 pgq    | tick           | table | postgres
(22 rows)

And just to add to the whole annoyance, I'll create a user table:

pruebas=# create table pgq.test_pgq_dumpable (id int primary key);
CREATE TABLE
pruebas=# \dt pgq.test_pgq_dumpable
            List of relations
 Schema |       Name        | Type  |  Owner
--------+-------------------+-------+----------
 pgq    | test_pgq_dumpable | table | postgres
(1 row)


To check that all objects are dumped, I just pipe the pg_dump to psql on
a new DB:

-bash-4.3$ pg_dump  pruebas | psql -d pruebas_pgq

Now, let's check what we have on this new DB:

pruebas_pgq=# \dt pgq.test_pgq_dumpable
No matching relations found.
pruebas_pgq=# \dt pgq.*
            List of relations
 Schema |      Name      | Type  |  Owner
--------+----------------+-------+----------
 pgq    | consumer       | table | postgres
 pgq    | event_template | table | postgres
 pgq    | queue          | table | postgres
 pgq    | retry_queue    | table | postgres
 pgq    | subscription   | table | postgres
 pgq    | tick           | table | postgres
(6 rows)


This problem came up due to a difference between pg_dump on 9.1.12 and
9.1.22 (I believe it was due to a patch on pg_dump that excluded the
dependent objects from being dumped), but here I'm using 9.5.3:

pruebas_pgq=# select version();
                               version
----------------------------------------------------------------------
 PostgreSQL 9.5.3 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 5.3.1
 20160406 (Red Hat 5.3.1-6), 64-bit
(1 row)


I'll file a bug report in a moment.
-- 
Martín Marqués    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] 10.0

2016-05-14 Thread Martín Marqués
On 13/05/16 at 15:36, Josh berkus wrote:
> On 05/13/2016 11:31 AM, Alvaro Herrera wrote:
>> Josh berkus wrote:
>>  
>>> Anyway, can we come up with a consensus of some minimum changes it will
>>> take to make the next version 10.0?
>>
>> I think the next version should be 10.0 no matter what changes we put
>> in.
>>
> 
> Well, if we adopt 2-part version numbers, it will be.  Maybe that's the
> easiest thing?  Then we never have to have this discussion again, which
> certainly appeals to me ...

Wasn't there some controversy in -advocacy about switching to major.minor
versioning?

http://www.postgresql.org/message-id/ee13fd2bb44cb086b457be34e81d5...@biglumber.com

IMO, the current versioning is pretty good and people understand it well,
while with the other we will be using Postgres 13 by 2020, which isn't far
away. ;)

-- 
Martín Marqués    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] 10.0

2016-05-14 Thread Martín Marqués
On 13/05/16 at 15:31, Alvaro Herrera wrote:
> Josh berkus wrote:
>  
>> Anyway, can we come up with a consensus of some minimum changes it will
>> take to make the next version 10.0?
> 
> I think the next version should be 10.0 no matter what changes we put
> in.

+1

And another +1 on Tom's opinion on it being too late after beta1 has
been released.

-- 
Martín Marqués    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




[HACKERS] Minor documentation patch

2016-05-11 Thread Martín Marqués
Hi,

Yesterday, while doing some consultancy work, I went to check some
syntax for CREATE FUNCTION, particularly the SECURITY DEFINER part.

Reading there I saw a paragraph which had a sentence that wasn't very
clear at first.

The patch description gives a better idea of the change and how I got
there, and I believe it gives the sentence in question a clearer meaning.

I applied the same change on another part which had the same phrase.

Cheers,

-- 
Martín Marqués    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
From fbf6b9f6df20d38b5f16c6af94424042b41d7fad Mon Sep 17 00:00:00 2001
From: Martin 
Date: Tue, 10 May 2016 21:31:24 -0300
Subject: [PATCH] While reading the CREATE FUNCTION reference docs for some
 reference on SECURITY DEFINER usage I ran on this phrase:

Particularly important in this regard is the temporary-table schema,
which is searched first by default, and is normally writeable by anyone.
A secure arrangement can be had by forcing the temporary schema to be
searched last.

The last sentence there was not clear at first. I feel that the word
*obtained* instead of *had* gives a clearer understanding.

I found a similar phrase in the PL/pgSQL documentation as well, and
so applied the same fix.
---
 doc/src/sgml/plpgsql.sgml | 2 +-
 doc/src/sgml/ref/create_function.sgml | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/doc/src/sgml/plpgsql.sgml b/doc/src/sgml/plpgsql.sgml
index a27bbc5..4ecd9e3 100644
--- a/doc/src/sgml/plpgsql.sgml
+++ b/doc/src/sgml/plpgsql.sgml
@@ -528,7 +528,7 @@ $$ LANGUAGE plpgsql;
  
 
  
-  The same effect can be had by declaring one or more output parameters as
+  The same effect can be obtained by declaring one or more output parameters as
   polymorphic types.  In this case the
   special $0 parameter is not used; the output
   parameters themselves serve the same purpose.  For example:
diff --git a/doc/src/sgml/ref/create_function.sgml b/doc/src/sgml/ref/create_function.sgml
index bd11d2b..583cdf5 100644
--- a/doc/src/sgml/ref/create_function.sgml
+++ b/doc/src/sgml/ref/create_function.sgml
@@ -715,7 +715,7 @@ SELECT * FROM dup(42);
 malicious users from creating objects that mask objects used by the
 function.  Particularly important in this regard is the
 temporary-table schema, which is searched first by default, and
-is normally writable by anyone.  A secure arrangement can be had
+is normally writable by anyone.  A secure arrangement can be obtained
 by forcing the temporary schema to be searched last.  To do this,
 write pg_temp as the last entry in search_path.
 This function illustrates safe usage:
-- 
2.5.5




Re: [HACKERS] about lob(idea)

2015-05-26 Thread Martín Marqués
On 25/05/15 at 06:13, alex2010 wrote:
>  Maybe it makes sense to add ability to store large objects in the same table 
> space as the table. 
> Or an opportunity - to specify table space for a large object.
> Do you have anything in todolists about it? 

This is something that has come up more than once when I give talks about
storing files in PostgreSQL (at the last PgDay Argentina there was quite a
debate about it, particularly around the bytea <-> LO comparison). The
concerns people raised had different end goals.

One of the main concerns was the fact that all LOs live in a common
catalog table (pg_largeobject).

If LOs were stored per database, in a schema similar to pg_largeobject's,
then they could be placed on any available tablespace, and would even get
dumped in a normal DB dump, which would make administration much
simpler.

Cheers,

-- 
Martín Marqués    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




[HACKERS] postgres messages error

2014-12-17 Thread Martín Marqués
Hi there,

I was doing some translation on postgres.po and found a string which
looks mistaken.

#: libpq/auth.c:1593
#, fuzzy, c-format
msgid "could not to look up local user ID %ld: %s"

It looks like there is an extra *to* there, so the string should be:

"could not look up local user ID %ld: %s"

Cheers,

-- 
Martín Marqués    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Regression test errors

2014-03-09 Thread Martín Marqués
OK, I noticed how horrible this patch was (thanks for the heads-up from
Jaime Casanova). This is what happens when you try to fetch changes made
on a test copy back into a git repository after a long day of work: you
make very silly mistakes.

Well, now I've got the changes right (and tested the patch, because silly
changes should be tested as well ;)).

2014-03-07 21:46 GMT-03:00 Martín Marqués :
> I was testing some builds I was doing and found that the regression
> tests fail when run against a Hot Standby server:
>
> $ make standbycheck
> [...]
> == running regression test queries==
> test hs_standby_check ... ok
> test hs_standby_allowed   ... FAILED
> test hs_standby_disallowed... FAILED
> test hs_standby_functions ... ok
>
> ==
>  2 of 4 tests failed.
> ==
>
> The differences that caused some tests to fail can be viewed in the
> file "/usr/local/postgresql-9.3.3/src/test/regress/regression.diffs".
> A copy of the test summary that you see
> above is saved in the file
> "/usr/local/postgresql-9.3.3/src/test/regress/regression.out".
>
> The regression.diffs and patch attached.
>
> I haven't checked how far back those go. I don't think it's even
> important to back patch this, but it's nice for future testing.
>
> Regards,
>
> --
> Martín Marqués http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Training & Services



-- 
Martín Marqués http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
From 82b4d69d3980ad8852bbf2de67abd4105b328d3e Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Mart=C3=ADn=20Marqu=C3=A9s?= 
Date: Sun, 9 Mar 2014 08:58:17 -0300
Subject: [PATCH] Two Hot Standby regression tests failed for various reasons.

- One error was due to the fact that it was checking for a VACUUM error
  on an ANALYZE call in src/test/regress/expected/hs_standby_disallowed.out
- Serializable transactions won't work on a Hot Standby.
---
 src/test/regress/expected/hs_standby_allowed.out| 2 +-
 src/test/regress/expected/hs_standby_disallowed.out | 2 +-
 src/test/regress/sql/hs_standby_allowed.sql | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/test/regress/expected/hs_standby_allowed.out b/src/test/regress/expected/hs_standby_allowed.out
index 1abe5f6..c26c982 100644
--- a/src/test/regress/expected/hs_standby_allowed.out
+++ b/src/test/regress/expected/hs_standby_allowed.out
@@ -49,7 +49,7 @@ select count(*)  as should_be_1 from hs1;
 (1 row)
 
 end;
-begin transaction isolation level serializable;
+begin transaction isolation level repeatable read;
 select count(*) as should_be_1 from hs1;
  should_be_1 
 -
diff --git a/src/test/regress/expected/hs_standby_disallowed.out b/src/test/regress/expected/hs_standby_disallowed.out
index e7f4835..bc11741 100644
--- a/src/test/regress/expected/hs_standby_disallowed.out
+++ b/src/test/regress/expected/hs_standby_disallowed.out
@@ -124,7 +124,7 @@ unlisten *;
 ERROR:  cannot execute UNLISTEN during recovery
 -- disallowed commands
 ANALYZE hs1;
-ERROR:  cannot execute VACUUM during recovery
+ERROR:  cannot execute ANALYZE during recovery
 VACUUM hs2;
 ERROR:  cannot execute VACUUM during recovery
 CLUSTER hs2 using hs1_pkey;
diff --git a/src/test/regress/sql/hs_standby_allowed.sql b/src/test/regress/sql/hs_standby_allowed.sql
index 58e2c01..7fc2214 100644
--- a/src/test/regress/sql/hs_standby_allowed.sql
+++ b/src/test/regress/sql/hs_standby_allowed.sql
@@ -28,7 +28,7 @@ begin transaction read only;
 select count(*)  as should_be_1 from hs1;
 end;
 
-begin transaction isolation level serializable;
+begin transaction isolation level repeatable read;
 select count(*) as should_be_1 from hs1;
 select count(*) as should_be_1 from hs1;
 select count(*) as should_be_1 from hs1;
-- 
1.8.3.1




[HACKERS] Regression test errors

2014-03-07 Thread Martín Marqués
I was testing some builds I was doing and found that the regression
tests fail when run against a Hot Standby server:

$ make standbycheck
[...]
== running regression test queries==
test hs_standby_check ... ok
test hs_standby_allowed   ... FAILED
test hs_standby_disallowed... FAILED
test hs_standby_functions ... ok

==
 2 of 4 tests failed.
==

The differences that caused some tests to fail can be viewed in the
file "/usr/local/postgresql-9.3.3/src/test/regress/regression.diffs".
A copy of the test summary that you see
above is saved in the file
"/usr/local/postgresql-9.3.3/src/test/regress/regression.out".

The regression.diffs and patch attached.

I haven't checked how far back those go. I don't think it's even
important to back patch this, but it's nice for future testing.

Regards,

-- 
Martín Marqués http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


regression.diffs
Description: Binary data
From b6db8388e37f6afaa431e31239fd972d10140cc1 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Mart=C3=ADn=20Marqu=C3=A9s?= 
Date: Fri, 7 Mar 2014 21:29:29 -0300
Subject: [PATCH] Standby regression checks failed.

Two Hot Standby regression tests failed for various reasons.

- An error in src/test/regress/expected/hs_standby_disallowed.out
  made regression fail (VACUUM should be ANALYZE).
- Serializable transactions won't work on a Hot Standby.
---
 src/test/regress/expected/hs_standby_allowed.out| 2 +-
 src/test/regress/expected/hs_standby_disallowed.out | 2 +-
 src/test/regress/sql/hs_standby_allowed.sql | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/test/regress/expected/hs_standby_allowed.out b/src/test/regress/expected/hs_standby_allowed.out
index 1abe5f6..9d18d77 100644
--- a/src/test/regress/expected/hs_standby_allowed.out
+++ b/src/test/regress/expected/hs_standby_allowed.out
@@ -49,7 +49,7 @@ select count(*)  as should_be_1 from hs1;
 (1 row)
 
 end;
-begin transaction isolation level serializable;
+begin transaction isolation level readrepeatable;
 select count(*) as should_be_1 from hs1;
  should_be_1 
 -
diff --git a/src/test/regress/expected/hs_standby_disallowed.out b/src/test/regress/expected/hs_standby_disallowed.out
index e7f4835..bc11741 100644
--- a/src/test/regress/expected/hs_standby_disallowed.out
+++ b/src/test/regress/expected/hs_standby_disallowed.out
@@ -124,7 +124,7 @@ unlisten *;
 ERROR:  cannot execute UNLISTEN during recovery
 -- disallowed commands
 ANALYZE hs1;
-ERROR:  cannot execute VACUUM during recovery
+ERROR:  cannot execute ANALYZE during recovery
 VACUUM hs2;
 ERROR:  cannot execute VACUUM during recovery
 CLUSTER hs2 using hs1_pkey;
diff --git a/src/test/regress/sql/hs_standby_allowed.sql b/src/test/regress/sql/hs_standby_allowed.sql
index 58e2c01..5cd450d 100644
--- a/src/test/regress/sql/hs_standby_allowed.sql
+++ b/src/test/regress/sql/hs_standby_allowed.sql
@@ -28,7 +28,7 @@ begin transaction read only;
 select count(*)  as should_be_1 from hs1;
 end;
 
-begin transaction isolation level serializable;
+begin transaction isolation repeatable read;
 select count(*) as should_be_1 from hs1;
 select count(*) as should_be_1 from hs1;
 select count(*) as should_be_1 from hs1;
-- 
1.8.3.1




Re: [HACKERS] yum psycopg2 doc package not signed

2014-01-21 Thread Martín Marqués
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 21/01/14 20:11, Devrim GÜNDÜZ wrote:
> 
> Hi,
> 
> On Tue, 2014-01-21 at 20:19 -0200, Martín Marqués wrote:
>> I was updating the packages from one of my servers and I got
>> this message:
>> 
>> Package python-psycopg2-doc-2.5.2-1.f19.x86_64.rpm is not signed
>> 
>> If I remove the package (I thought it might be that package
>> alone) I get errors from other packages:
>> 
>> Package python-psycopg2-2.5.2-1.f19.x86_64.rpm is not signed
>> 
>> Something wrong with the packages?
> 
> I thought I fixed that -- but apparently not. Please run
> 
> yum clean metadata
> 
> and try updating the package again.

Thanks, updates are applying now.

- -- 
Martín Marqués    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
-BEGIN PGP SIGNATURE-
Version: GnuPG v1
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQEcBAEBAgAGBQJS3ybiAAoJEHsDtEgBAFTS6LYH/RfMF+E6MCpiz3uTYB1ou0F/
cjeudgW/WuMs/thxPCyf0PE6wgJqiXFh3ZDZrPJ1QSL6poYOLWT80ZW8vyxVMZlr
474HMQPasAm3fgnofkvA9W8XKXRiYTjzQakE/sod5dqcQ5E+L9OKPQ6VOx4XAOQw
zJeggD9LVMm11u5VhDh/2L3eKE/29yOhnI7Ir+sxIVU8H/W7KLYp7N8swUphJUV6
z5uru9gveIrNl2Z0Q3CCmm3PxVUXPU3VRzwCOwXhUZMPi4C+OXQvRhCKL69AOK9E
55e+Qr7L4imA8cFmhGkX8LFKAPbJPP9ZiRMUDTIrdZCJ0MphVFwVR8gj671AFVQ=
=Mair
-END PGP SIGNATURE-




[HACKERS] yum psycopg2 doc package not signed

2014-01-21 Thread Martín Marqués
I was updating the packages from one of my servers and I got this message:

Package python-psycopg2-doc-2.5.2-1.f19.x86_64.rpm is not signed

If I remove the package (I thought it might be that package alone) I
get errors from other packages:

Package python-psycopg2-2.5.2-1.f19.x86_64.rpm is not signed

Something wrong with the packages?

-- 

Martín Marqués http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] problem with commitfest redirection

2013-06-21 Thread Martín Marqués

On 21/06/13 23:47, Jaime Casanova wrote:

On Fri, Jun 21, 2013 at 8:56 PM, Martín Marqués  wrote:

Whenever I try to view the patch from this commit, it never loads:

https://commitfest.postgresql.org/action/patch_view?id=1129

Some problem there? I can see other patches, from other commits.



Yes, the URL is wrong. right URL is
http://www.postgresql.org/message-id/CAFjNrYuh=4Vwnv=2n7cj0jjuwc4hool1epxsoflj6s19u02...@mail.gmail.com


Yes, I found that out later. Could there be other URLs like that?

--
Martín Marqués    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




[HACKERS] problem with commitfest redirection

2013-06-21 Thread Martín Marqués

Whenever I try to view the patch from this commit, it never loads:

https://commitfest.postgresql.org/action/patch_view?id=1129

Some problem there? I can see other patches, from other commits.

--
Martín Marqués    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Integrated autovacuum

2005-07-27 Thread Martín Marqués
On Wed 27 Jul 2005 18:23, Alvaro Herrera wrote:
> On Wed, Jul 27, 2005 at 06:05:26PM -0300, Martín Marqués wrote:
> > 
> > Will there be a way to balance the amount of stats the autovacuum gets?
> > Something like the analyze parameters that the contrib version has, but
> > integrated in postgresql.conf?
> 
> I'm not sure I understand your question.  If it means what I think, then
> yes, you can set the threshold and scale values per table.

Yes, that's what I was asking. Will those values be in flat files or in the
catalog?[1] From what I see, it looks like flat files (presumably
postgresql.conf).

> > I had a select on my development server that took several minutes to
> > complete, and after manually running analyze on the tables involved the
> > time was reduced dramatically.
> 
> I think everybody mostly agreed that contrib's pg_autovacuum default
> values were too conservative.

Yes, I noticed that. Anyway, the main application we are working on has two
main data-alteration patterns.

1) Mass data updates (INSERTs and UPDATEs) on 3 or 4 tables. This doesn't
happen very frequently, so I'm thinking about adding an ANALYZE at the end
of the transaction.
2) Constant data updates and inserts, still at a low rate, on one table. This
could get analyzed every night with the backup.

The other tables hold so little data, and are updated so rarely, that
there's nothing to worry about.

[1]: Yes I know Alvaro, I should be testing 8.1beta, but thank God I have 
8.0.3 now. ;-)


-- 
 18:23:45 up 25 days,  3:09,  1 user,  load average: 1.12, 1.01, 1.19
-
Lic. Martín Marqués   | SELECT 'mmarques' || 
Centro de Telemática  | '@' || 'unl.edu.ar';
Universidad Nacional  | DBA, Programador, 
del Litoral   | Administrador
-

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [HACKERS] Integrated autovacuum

2005-07-27 Thread Martín Marqués
On Wed 27 Jul 2005 17:24, Alvaro Herrera wrote:
> On Wed, Jul 27, 2005 at 12:53:40PM -0700, Joshua D. Drake wrote:
> 
> > Just for clarification, will the new integrated autovacuum require that 
> > statistics are on?
> 
> Yes.  Row-level stats too.

Will there be a way to balance the amount of stats the autovacuum gets?
Something like the analyze parameters that the contrib version has, but
integrated in postgresql.conf?

I had a select on my development server that took several minutes to
complete, and after manually running analyze on the tables involved the
time was reduced dramatically.

Running on a 8.0.3 server with autovacuum running every 5 minutes.

-- 
 17:52:04 up 25 days,  2:37,  1 user,  load average: 0.90, 1.00, 0.97
---------
Lic. Martín Marqués   | SELECT 'mmarques' || 
Centro de Telemática  | '@' || 'unl.edu.ar';
Universidad Nacional  | DBA, Programador, 
del Litoral   | Administrador
-



Re: [GENERAL] [HACKERS] plPHP in core?

2005-04-05 Thread Martín Marqués
On Mon 04 Apr 2005 18:00, Doug McNaught wrote:
> Robert Treat <[EMAIL PROTECTED]> writes:
> 
> > If by "stripped down" you mean without postgresql database support then
> > I'll grant you that, but it is no different than other any other pl
> > whose parent language requires postgresql to be installed.  If packagers
> > are able to handle those languages than why can't they do the same with
> > PHP ?
> 
> Other languages don't require PG to be installed in order to compile
> them.  For example, you can build Perl (with no Postgres on the
> system), build Postgres and then build DBD::Pg as a completely
> separate step.

The same thing can be done with PHP.

-- 
 09:25:38 up 3 days, 17:54,  1 user,  load average: 0.45, 0.28, 0.38
-
Martín Marqués| select 'mmarques' || '@' || 'unl.edu.ar'
Centro de Telematica  |  DBA, Programador, Administrador
 Universidad Nacional
  del Litoral
-



Re: [GENERAL] [HACKERS] plPHP in core?

2005-04-05 Thread Martín Marqués
On Mon 04 Apr 2005 17:36, Tom Lane wrote:
> "Joshua D. Drake" <[EMAIL PROTECTED]> writes:
> > Maybe I am just dense, but the argument seems to be completely moot. PHP 
> > is no different than Perl or Python in this case.
> 
> Perl and Python don't have "BuildPrereq: postgresql-devel" in their rpmspecs.
> PHP does.

The header files would not be a problem. The real problem is that you also
need to have postgresql-libs. :-(

Anyway, RH deals with circular dependencies all the time.

P.S.: It would be nice to have plPHP in core, IMHO.

-- 
 09:03:26 up 3 days, 17:32,  1 user,  load average: 0.39, 0.61, 0.64
---------
Martín Marqués| select 'mmarques' || '@' || 'unl.edu.ar'
Centro de Telematica  |  DBA, Programador, Administrador
 Universidad Nacional
  del Litoral
-



Re: [HACKERS] Disaster!

2004-01-23 Thread Martín Marqués
Message quoted from Tom Lane <[EMAIL PROTECTED]>:

> Christopher Kings-Lynne <[EMAIL PROTECTED]> writes:
> > Now I can start it up!  Thanks!
> 
> > What should I do now?
> 
> Go home and get some sleep ;-).  If the WAL replay succeeded, you're up
> and running, nothing else to do.

Tom, could you give some insight into what happened here: why those 8K of
zeroes fixed it, and what is a "WAL replay"?

I am very curious about it.

-- 
select 'mmarques' || '@' || 'unl.edu.ar' AS email;
---
Martín Marqués  |   Programador, DBA
Centro de Telemática| Administrador
   Universidad Nacional
del Litoral
---



Re: [HACKERS] Disaster!

2004-01-23 Thread Martín Marqués
Message quoted from Christopher Kings-Lynne <[EMAIL PROTECTED]>:

> > I'd suggest extending that file with 8K of zeroes (might need more than
> > that, but probably not).
> 
> How do I do that?  Sorry - I'm not sure of the quickest way, and I'm 
> reading man pages as we speak!

# dd if=/dev/zero of=somefile bs=8192 count=1
# cat file1 somefile > newfile
# mv newfile file1

file1 is "/usr/local/pgsql/data/pg_clog/000D"
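
For what it's worth, here is a self-contained sketch of the same zero-extension trick. The file name is a scratch placeholder, not the real clog segment, and appending with `>>` avoids the cat/mv shuffle:

```shell
# Create a stand-in for the clog segment, then append one 8K block of zeroes.
set -e
cd "$(mktemp -d)"
printf 'data' > file1                      # pretend this is pg_clog/000D
before=$(wc -c < file1)
dd if=/dev/zero bs=8192 count=1 >> file1 2>/dev/null
after=$(wc -c < file1)
echo "grew by $((after - before)) bytes"   # prints: grew by 8192 bytes
```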

-- 
select 'mmarques' || '@' || 'unl.edu.ar' AS email;
---
Martín Marqués  |   Programador, DBA
Centro de Telemática| Administrador
   Universidad Nacional
del Litoral
---



[HACKERS] Rules and missing inserts

2001-10-05 Thread Martín Marqués
n't get 
to the database?

webunl=> select version();
                             version
------------------------------------------------------------------
 PostgreSQL 7.1.3 on sparc-sun-solaris2.7, compiled by GCC 2.95.2


TIA!


-- 
Why use just any relational database,
when you can use PostgreSQL?
-
Martín Marqués  |[EMAIL PROTECTED]
Programador, Administrador, DBA |   Centro de Telematica
   Universidad Nacional
del Litoral
-




Re: [HACKERS] Missing inserts

2001-10-03 Thread Martín Marqués

On Wed 03 Oct 2001 16:43, you wrote:
> On Tue 02 Oct 2001 21:59, you wrote:
> > In 7.1.X and earlier the INSERT rules are executed _before_ the INSERT.
> > This is changed to _after_ in 7.2.
>
> This would mean...??? I haven't had much trouble until now, so I can't
> understand why one of the 4 inserts of the rule didn't get through.

Sorry for answering my own mail, but I found some info.

This is my rule:

CREATE RULE admin_insert AS ON 
INSERT TO admin_view
DO INSTEAD (
   INSERT INTO carrera 
  (carrera,titulo,area,descripcion,incumbencia,director,
  matricula,cupos,informes,nivel,requisitos,duracion,
  categoria)
   VALUES 
  (new.carrera,new.titulo,new.id_subarea,new.descripcion,
  new.incumbencia,new.director,new.matricula,new.cupos,
  new.informes,new.nivel,new.requisitos,new.duracion,
  new.car_categ);

   INSERT INTO inscripcion
  (carrera,fecha_ini,fecha_fin,lugar)
   VALUES
  (currval('carrera_id_curso_seq'),new.fecha_ini,new.fecha_fin,
  new.lugar);

   INSERT INTO resol
  (carr,numero,year,fecha)
   VALUES
  (currval('carrera_id_curso_seq'),new.numero,new.year,new.fecha);

   INSERT INTO log_carrera (accion,tabla,id_col) VALUES 
  ('I','carrera',currval('carrera_id_curso_seq'));
);

All inserts to these tables go through the view (so this rule is used), but
for 39 of a total of 142 inserts the second INSERT of the rule never went
through.

The question is why is this happening, and how can I fix it?
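A way to see which rows were hit (just a sketch; it assumes carrera's serial key column is id_curso, as implied by the carrera_id_curso_seq sequence above, and that inscripcion.carrera references it):

```sql
-- Hypothetical diagnostic: carrera rows that never got the matching
-- inscripcion row from the rule's second INSERT.
SELECT c.id_curso
FROM carrera c
WHERE NOT EXISTS (SELECT 1 FROM inscripcion i WHERE i.carrera = c.id_curso);
```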

If you need logs or something, I have no problem at all.

Saludos... :-)


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] Missing inserts

2001-10-03 Thread Martín Marqués

On Tue 02 Oct 2001 21:59, you wrote:
> In 7.1.X and earlier the INSERT rules are executed _before_ the INSERT.
> This is changed to _after_ in 7.2.

This would mean...??? I haven't had much trouble until now, so I can't
understand why one of the 4 inserts of the rule didn't get through.

Is there some logic?

TIA!


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



[HACKERS] Missing inserts

2001-10-02 Thread Martín Marqués

For some reason, I have the feeling that the inserts that should be executed by
a rule are not all getting executed, or at least they are not all getting written.

How can I find out what the rule is really doing? The logs don't say much.

Any help would be great at this moment of stress!!! X->

Saludos... :-)


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [PHP] [HACKERS] [BUGS] PostgreSQL / PHP Overrun Error

2001-09-27 Thread Martín Marqués

On Wed 26 Sep 2001 22:51, Mike Rogers wrote:
> There is a problem in PHP-4.0.6.  Please use PHP4.0.7 or 4.0.8 and the
> problem will be solved.  This can be obtained from CVS

Sorry, but 4.0.6 is the latest version out (there may be some RC of 4.0.7);
how can we get those, and how much can we trust an RC version?

Saludos... :-)


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



[HACKERS] pg_dump bug

2001-09-26 Thread Martín Marqués

Short! :-)

PostgreSQL version: 7.1.3

I do a dump of a database which has some views, rules, and different
permissions on each view.

The dump puts the permissions first and the view creation after that, so when
I import the dump back into the server (or another server) I get lots of
errors and have to change the permissions by hand.
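The failure mode in the restored dump looks roughly like this (hypothetical object and user names):

```sql
-- Order as emitted by pg_dump 7.1.3 in this report:
GRANT SELECT ON some_view TO webuser;   -- ERROR: the relation does not exist yet
CREATE VIEW some_view AS SELECT * FROM some_table;
-- For the restore to work, the GRANT must come after the CREATE VIEW.
```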

Has this already been reported?

saludos... :-)


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] Putting timestamps in PostgreSQL log

2001-09-18 Thread Martín Marqués

On Mon 17 Sep 2001 23:28, Christopher Kings-Lynne wrote:
> Would it be an idea to add timestamps to the PostgreSQL error/debug/notice
> log?
>
> Sometimes I would really like to know when an event has occurred!

Use syslog and you'll get timestamps in your log.
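For reference, the 7.1-era knobs (a sketch from memory; it assumes the server was built with --enable-syslog, and the facility and log path are site choices):

```
# postgresql.conf
syslog = 2                      # 0 = stdout only, 2 = syslog only
syslog_facility = 'LOCAL0'
syslog_ident = 'postgres'

# /etc/syslog.conf
local0.*        /var/log/postgresql
```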

Saludos... :-)


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



[HACKERS] chunk size problem

2001-09-14 Thread Martín Marqués

I started getting these error messages

webunl=> \dt 
NOTICE:  AllocSetFree: detected write past chunk end in TransactionCommandContext 3a4608
pqReadData() -- backend closed the channel unexpectedly.
This probably means the backend terminated abnormally
before or while processing the request.
The connection to the server was lost.
Attempting reset: Failed.
!>

The logs on the first times today I had these problems said this:

Sep 14 10:55:41 bugs postgres[1318]: [ID 748848 local0.debug] [12-1] DEBUG:  
query: SELECT c.relname as "Name", 'table'::text as "Type", u.usename as 
"Owner"
Sep 14 10:55:41 bugs postgres[1318]: [ID 748848 local0.debug] [12-2] FROM 
pg_class c, pg_user u
Sep 14 10:55:41 bugs postgres[1318]: [ID 748848 local0.debug] [12-3] WHERE 
c.relowner = u.usesysid AND c.relkind = 'r'
Sep 14 10:55:41 bugs postgres[1318]: [ID 748848 local0.debug] [12-4]   AND 
c.relname !~ '^pg_'
Sep 14 10:55:41 bugs postgres[1318]: [ID 748848 local0.debug] [12-5] UNION
Sep 14 10:55:41 bugs postgres[1318]: [ID 748848 local0.debug] [12-6] SELECT 
c.relname as "Name", 'table'::text as "Type", NULL as "Owner"
Sep 14 10:55:41 bugs postgres[1318]: [ID 748848 local0.debug] [12-7] FROM 
pg_class c
Sep 14 10:55:41 bugs postgres[1318]: [ID 748848 local0.debug] [12-8] WHERE 
c.relkind = 'r'
Sep 14 10:55:41 bugs postgres[1318]: [ID 748848 local0.debug] [12-9]   AND 
not exists (select 1 from pg_user where usesysid = c.relowner)
Sep 14 10:55:41 bugs postgres[1318]: [ID 748848 local0.debug] [12-10]   AND 
c.relname !~ '^pg_'
Sep 14 10:55:41 bugs postgres[1318]: [ID 748848 local0.debug] [12-11] ORDER 
BY "Name"
Sep 14 10:55:42 bugs postgres[1318]: [ID 553393 local0.notice] [13] NOTICE:  
AllocSetFree: detected write past chunk end in TransactionCommandContext 
3a4608
Sep 14 10:55:42 bugs postgres[1318]: [ID 553393 local0.notice] [14] NOTICE:  
AllocSetFree: detected write past chunk end in TransactionCommandContext 
3a4608
Sep 14 10:55:42 bugs postgres[1318]: [ID 553393 local0.notice] [15] NOTICE:  
AllocSetFree: detected write past chunk end in TransactionCommandContext 
3a4608
Sep 14 10:55:42 bugs postgres[1318]: [ID 553393 local0.notice] [16] NOTICE:  
AllocSetFree: detected write past chunk end in TransactionCommandContext 
3a4608
Sep 14 10:55:42 bugs postgres[1318]: [ID 553393 local0.notice] [17] NOTICE:  
AllocSetFree: detected write past chunk end in TransactionCommandContext 
3aadf0
Sep 14 10:55:42 bugs postgres[1318]: [ID 553393 local0.notice] [18] NOTICE:  
AllocSetFree: detected write past chunk end in TransactionCommandContext 
3aadf0
Sep 14 10:55:42 bugs postgres[1318]: [ID 553393 local0.notice] [19] NOTICE:  
AllocSetFree: detected write past chunk end in TransactionCommandContext 
3aadf0
Sep 14 10:55:42 bugs postgres[1318]: [ID 553393 local0.notice] [20] NOTICE:  
AllocSetFree: detected write past chunk end in TransactionCommandContext 
3aadf0

Any ideas? Some databases are screwed up.

Saludos... :-)


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] syslog by default?

2001-09-11 Thread Martín Marqués

On Tue 11 Sep 2001 02:07, Bruce Momjian wrote:
> > There was a discussion about --enable-syslog by default. What was the
> > consensus? I think this is a good one.
>
> Yes, I thought we decided it should be the default too.

There was a discussion about log rotation last week, so where are we going?
Pipe the output of the postmaster to a log rotator like Apache's rotatelogs,
or are we going to use syslog and have the syslog log rotator do the rotation?

Just a doubt I had.

Saludos... :-)


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://www.postgresql.org/search.mpl



Re: [HACKERS] Re: All's quiet ... RC3 packaging ...

2001-04-04 Thread Martín Marqués

On Thursday 05 April 2001 00:41, Thomas Lockhart wrote:
> I've got patches for the regression tests to work around the "time with
> time zone" DST problem. Will apply to the tree asap, and will post a
> message when that is done.

Is RC3 going out or should I think about RC2?

Saludos... ;-)

-- 
The best operating system is the one that feeds you.
Watch your diet.
-
Martin Marques  |[EMAIL PROTECTED]
Programmer, Administrator  |   Centro de Telematica
   Universidad Nacional
del Litoral
-

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://www.postgresql.org/search.mpl



Re: [HACKERS] Configure problems on Solaris 2.7, pgsql 7.02 and 7.03

2001-04-04 Thread Martín Marqués

On Wednesday 04 April 2001 22:42, Ciaran Johnston wrote:
> Hi,
>
> Sorry to bother you's but I am currently doing a database comparison and
> have been trying to get postgresql installed. I'm running Solaris 2.7. I
> downloaded pgsql 7.03 and ran ./configure in the src/ directory. This
> was fine until the very end when this error appeared:

Why are you running configure inside src/? I don't remember whether 7.0.x had
configure in the src/ dir or at the root.

You could take a look at 7.1RC[2-3], which look pretty stable; I have RC1
compiled and working on Solaris 8 SPARC.

Saludos... :-)



---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Re: Final call for platform testing

2001-04-04 Thread Martín Marqués

On Wednesday 04 April 2001 13:29, Pete Forman wrote:
>  > Solaris 2.7-8 Sparc7.1 2001-03-22, Marc Fournier
>
> I've reported Solaris 2.6 Sparc as working on a post-RC1 snapshot.

Same for Solaris 8 Sparc, but only tested with RC1.


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



[HACKERS] regression test

2001-03-29 Thread Martín Marqués

I'm currently installing PostgreSQL 7.1RC1 on Solaris 7 and 8 on UltraSPARC.
What do I have to do to report regression test results?
Where should I look for info on this?

Saludos... :-)


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])