Re: [GENERAL] pg_dump on hot standby canceled despite hot_standby_feedback=on

2012-09-06 Thread Stuart Bishop
I'm still getting my pg_dumps on the 9.1 hot standby cancelled
occasionally, despite hot_standby_feedback being set.
pg_stat_replication tells me the replication connection is not being
reset or anything.

The last one was:
pg_dump: Error message from server: ERROR:  canceling statement due to
conflict with recovery
DETAIL:  User was holding a relation lock for too long.

Can anyone shed some insight? My understanding of hot_standby_feedback
is that it should make this sort of query cancellation never happen.
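
(For reference, the standby-side settings involved look like this in
postgresql.conf -- illustrative values, not a recommendation:

    hot_standby = on
    hot_standby_feedback = on          # reports the standby's oldest xmin
                                       # upstream, so vacuum cleanup
                                       # conflicts should not occur
    max_standby_streaming_delay = 30s  # how long replay waits before
                                       # canceling conflicting queries

Note that feedback only covers snapshot/cleanup conflicts; a "relation
lock" conflict like the one above comes from replaying an exclusive lock
taken on the master, e.g. by vacuum truncating empty pages, which
feedback does not prevent.)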



On Tue, Aug 14, 2012 at 6:34 PM, Stuart Bishop  wrote:
> Hi.
>
> I've found a situation on one of my PG 9.1 servers where pg_dump
> running on a hot standby gets terminated when a table on the master is
> vacuumed. This is PostgreSQL 9.1.4, and purely streaming replication.
>
> pg_dump: Error message from server: ERROR:  canceling statement due to
> conflict with recovery
> DETAIL:  User was holding shared buffer pin for too long.
> pg_dump: The command was: COPY public.webcatalog_machine (id,
> owner_id, uuid, hostname, packages_checksum, package_list,
> logo_checksum) TO stdout;
> pg_dump: *** aborted because of error
>
> hot_standby_feedback is on, and my understanding is that this should
> instruct the master that there is still an open transaction and vacuum
> should not clean stuff up that is still in use on the hot standby.
> Replication is otherwise working flawlessly, and I've confirmed that
> the walstreamer has been alive the whole time.
>
> The pg_dump works when no vacuum kicks in, but I have reproduced the
> fault by manually running vacuum on the master once the pg_dump has
> started on this larger table.
>
> I think I must be missing something, as I don't see this on my other
> servers. This database isn't particularly large, with pg_dump
> finishing in a few minutes. I'm successfully using pg_dump on other
> hot standbys that take half a day to dump with tables active enough
> that they certainly should have triggered autovacuums.


-- 
Stuart Bishop 
http://www.stuartbishop.net/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL server embedded in NAS firmware?

2012-09-06 Thread Scott Marlowe
On Thu, Sep 6, 2012 at 6:06 PM, Adrian Klaver  wrote:
> On 09/06/2012 04:19 PM, Scott Marlowe wrote:
>>
>> That shouldn't really matter.  Either the db is just on the NAS in
>> which case as long as pg compiles on it then the client on the main
>> unit shouldn't matter, or the data is just stored there and the db is
>> on the main unit, client and all and again it wouldn't matter.
>>
>> But the client and server do NOT have to be the same architecture to
>> work for sure.
>
>
> If I understood the OP, it is not client <--> server, it is:
> main server <--> replication server
>
> In that case architecture would matter.

Ahh, I thought he'd be moving both ends of the replication onto the embedded NAS.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Moving several databases into one database with several schemas

2012-09-06 Thread Edson Richter

Em 06/09/2012 15:40, John R Pierce escreveu:
> On 09/06/12 5:30 AM, Edson Richter wrote:
>>> You could change the default setting for the user with
>>>
>>> ALTER ROLE someuser SET search_path=...
>> That is perfect! I can have separate users for each application, and
>> then they will have the correct search path.
>> You saved my day,
>
> the default search_path is "$user",public, so by naming your schemas
> after the usernames, you don't even need to alter role...

Wonderful, this would have the effect I expect: that the connection
defines the path. Then I'll use the user to select the specific schema,
and the "public" schema as the main schema.


Thanks to you all, I think I have everything needed to put my migration 
project into practice.


Regards,

Edson.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL server embedded in NAS firmware?

2012-09-06 Thread Amitabh Kant
On Fri, Sep 7, 2012 at 4:10 AM, Andrew Barnham wrote:

> Scratch that. An immediate show stopping pitfall occurs to me: the
> necessity to match CPU/OS Architecture between primary server and replicate
> target.  Doubtful that there are any consumer NAS products out there
> running linux on 64bit/intel
>
>

FreeNAS is based on FreeBSD 8.2 and is available for 64-bit architectures.


Amitabh


Re: [GENERAL] regexp_matches question SOLVED

2012-09-06 Thread Sergio Basurto
On Wed, 2012-09-05 at 21:15 -0400, David Johnston wrote:
> On Sep 5, 2012, at 19:02, Sergio Basurto 
> wrote:
> 
> 
> 
> > On Tue, 2012-09-04 at 21:58 -0400, David Johnston wrote: 
> > 
> > > On Sep 4, 2012, at 21:39, Sergio Basurto  wrote:
> > > 
> > > > I am using regexp_matches in a function like this
> > > > 
> > > > create or replace function test (v_string   in text) returns 
> > > > varchar as $$
> > > > declare
> > > > i_strings   text[];
> > > > i_string    text[];
> > > > 
> > > > i_strings := 
> > > > regexp_matches(v_string,E'[a-zA-Z0-9:\\s\\-\\.#%]*:[A-Za-z0-9\\s\\-\\.#%]+','g');
> > > 
> > > You can store a single array value into i_strings.  It does not magically 
> > > convert a multi-row result into an array.  You can use ARRAY_AGG to do so 
> > > or execute the query directly as part of the loop while using a "record" 
> > > variable to store the current row's value(s). 
> > > 
> > > > 
> > > > -- Then I use  the results
> > > > foreach i_string slice 1 in array i_strings
> > > > loop
> > > > raise notice 'row = %',i_string;
> > > > end loop;
> > > > 
> > > > when I run the function like this:
> > > > 
> > > > select test('1:Warehouse1;2:Warehouse2;');
> > > > 
> > > > postgresql complains:
> > > > ERROR:  query "SELECT 
> > > > regexp_matches(v_string,E'[a-zA-Z0-9:\\s\\-\\.#%]*:[A-Za-z0-9\\s\\-\\.#%]+','g')"
> > > >  returned more than one row
> > > > 
> > > > Why is postgres sending the ERROR?
> > > > 
> > > > Of course I am expecting more than one row! That's why it is in a 
> > > > foreach loop in the first place.
> > > > 
> > > > If I run:
> > > > select 
> > > > regexp_matches('1:Warehouse1;2:Warehouse2;',E'[a-zA-Z0-9:\\s\\-\\.#%]*:[A-Za-z0-9\\s\\-\\.#%]+','g');
> > > > regexp_matches 
> > > > 
> > > > {1:Warehouse1}
> > > > {2:Warehouse2}
> > > > (2 rows)
> > > > 
> > > > Am I doing something wrong?
> > > 
> > > Note that because you do not use grouping in your expression there is 
> > > only a single array "cell" in each row - but there could be more than one 
> > > in which case your for-each above would effectively loop through each 
> > > sub-component of the match.
> > > 
> > > > 
> > > > Regards,
> > > >
> > > 
> > > David J.
> > > 
> > 
> > Thanks for your response David, but my doubt arises because if I use
> > this
> > 
> > i_strings text[] := array [[1:Warehouse1],[2:Warehouse2]];
> > 
> > it loops without problem. Is it not the same thing?
> > 
> > it prints:
> > 
> > NOTICE: row = {1:Warehouse1}
> > NOTICE: row = {2:Warehouse2}
> 
> 
> 
> A 2-dimensional array is not the same as a set of 1-dimensional
> arrays.
> 
> 
> David J.

Thank you David for all your help,

I got it working finally, thanks to your explanation; the code that works
for me is:

create or replace function test (v_string in text) returns varchar
as $$
declare
i_strings  refcursor := null;
i_string   text[];
i_query    text;

begin

i_query := 'select regexp_matches(''' || v_string ||
''',E''[a-zA-Z0-9:\\s\\-\\.#%]*:[A-Za-z0-9\\s\\-\\.#%]+'',''g'')';
open i_strings for execute i_query;

if i_strings is not null then
    loop
        fetch i_strings into i_string;
        exit when not found;
        raise notice 'row = %', i_string;
    end loop;
    close i_strings;
end if;
return 0;
end;
$$ LANGUAGE plpgsql;
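
For comparison, the query can also be run directly in the loop, as David
suggested, without a cursor or dynamic SQL -- a minimal sketch with a
hypothetical function name:

    create or replace function test2 (v_string in text) returns varchar
    as $$
    declare
        i_string text[];
    begin
        -- each iteration receives one match row from the set-returning call
        for i_string in
            select regexp_matches(v_string,
                E'[a-zA-Z0-9:\\s\\-\\.#%]*:[A-Za-z0-9\\s\\-\\.#%]+', 'g')
        loop
            raise notice 'row = %', i_string;
        end loop;
        return 0;
    end;
    $$ LANGUAGE plpgsql;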

Thanks again David.

Kind Regards,


Re: [GENERAL] Multiple indexes, huge table

2012-09-06 Thread Tom Lane
Jeff Janes  writes:
>> That sounds like you lack an index on the referencing column of the
>> foreign key constraint.  Postgres doesn't require you to keep such
>> an index, but it's a really good idea if you ever update the referenced
>> column.

> For updating 20 million out of 500 million rows, wouldn't a full table
> scan generally be preferable to an index scan anyway?

Foreign key triggers do their checks retail, though, so you really want
the probe for any referencing rows for a particular row-being-updated
to be able to use an index.

(It would be nice if we could replace that with a mass revalidation
once it got to be a big fraction of the table, but we don't have a
mechanism for that.  Yet.)

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Multiple indexes, huge table

2012-09-06 Thread Alan Hodgson
On Thursday, September 06, 2012 05:06:27 PM Jeff Janes wrote:
> For updating 20 million out of 500 million rows, wouldn't a full table
> scan generally be preferable to an index scan anyway?
> 

Not one table scan for each row updated ...



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL server embedded in NAS firmware?

2012-09-06 Thread Adrian Klaver

On 09/06/2012 04:19 PM, Scott Marlowe wrote:
> That shouldn't really matter.  Either the db is just on the NAS in
> which case as long as pg compiles on it then the client on the main
> unit shouldn't matter, or the data is just stored there and the db is
> on the main unit, client and all and again it wouldn't matter.
>
> But the client and server do NOT have to be the same architecture to
> work for sure.

If I understood the OP, it is not client <--> server, it is:
main server <--> replication server

In that case architecture would matter.

> On Thu, Sep 6, 2012 at 4:40 PM, Andrew Barnham wrote:
>> Scratch that. An immediate show stopping pitfall occurs to me: the
>> necessity to match CPU/OS Architecture between primary server and
>> replicate target.  Doubtful that there are any consumer NAS products
>> out there running linux on 64bit/intel
>>
>> On Fri, Sep 7, 2012 at 8:23 AM, Andrew Barnham wrote:
>>> Hi
>>>
>>> I currently run a modest streaming replication target on a cheap, single
>>> disk ASUS media center; replicating a 100GB PG database.
>>>
>>> I want to add RAID via a consumer grade NAS device.
>>>
>>> As far as I can tell consumer grade NAS devices these days appear to be
>>> fairly rich & flexible embedded linux/freebsd systems.
>>>
>>> Has anyone had any experience with running postgresql on the NAS device
>>> itself?  Which products?  Any traps or pitfalls or integrity concerns
>>> about such an arrangement?
>>>
>>> Andrew

--
Adrian Klaver
adrian.kla...@gmail.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Multiple indexes, huge table

2012-09-06 Thread Jeff Janes
>
>> There are also rare cases where I might want to make a correction.  For 
>> example, one of the columns is sample name which is a foreign key to a 
>> samples table defined with " ON UPDATE CASCADE."  I decided to change a 
>> sample name in the samples table which should affect about 20 million rows 
>> out of the previously mentioned 500 million.  That query has now been 
>> running for five days and isn't finished yet.
>
> That sounds like you lack an index on the referencing column of the
> foreign key constraint.  Postgres doesn't require you to keep such
> an index, but it's a really good idea if you ever update the referenced
> column.

For updating 20 million out of 500 million rows, wouldn't a full table
scan generally be preferable to an index scan anyway?

But, if he doesn't drop those other indexes during this process, the
maintenance on them is going to kill his performance anyway, just like
it does for bulk loading.  If you figure 20,000,000 * (1 table + 5
index) / 15,000 rpm, it comes out to around 5 days.
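
For reference, roughly how that estimate falls out (assuming one random
I/O per row per structure, and about one random I/O per disk rotation):

    20,000,000 rows * 6 structures = 120,000,000 random I/Os
    15,000 rpm / 60 = 250 rotations/sec, so ~250 random I/Os/sec
    120,000,000 / 250 = 480,000 seconds, or roughly 5.5 days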

Cheers,

Jeff


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL server embedded in NAS firmware?

2012-09-06 Thread Scott Marlowe
That shouldn't really matter.  Either the db is just on the NAS in
which case as long as pg compiles on it then the client on the main
unit shouldn't matter, or the data is just stored there and the db is
on the main unit, client and all and again it wouldn't matter.

But the client and server do NOT have to be the same architecture to
work for sure.

On Thu, Sep 6, 2012 at 4:40 PM, Andrew Barnham  wrote:
> Scratch that. An immediate show stopping pitfall occurs to me: the necessity
> to match CPU/OS Architecture between primary server and replicate target.
> Doubtful that there are any consumer NAS products out there running linux on
> 64bit/intel
>
>
> On Fri, Sep 7, 2012 at 8:23 AM, Andrew Barnham 
> wrote:
>>
>> Hi
>>
>> I currently run a modest streaming replication target on a cheap, single
>> disk ASUS media center; replicating a 100GB PG database.
>>
>> I want to add RAID via a consumer grade NAS device.
>>
>> As far as I can tell consumer grade NAS devices these days appear to be
>> fairly rich & flexible embedded linux/freebsd systems.
>>
>> Has anyone had any experience with running postgresql on the NAS device
>> itself?  Which products?  Any traps or pitfalls or integrity concerns about
>> such an arrangement?
>>
>> Andrew
>
>



-- 
To understand recursion, one must first understand recursion.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Multiple indexes, huge table

2012-09-06 Thread Aram Fingal

On Sep 6, 2012, at 5:54 PM, Tom Lane wrote:

> That sounds like you lack an index on the referencing column of the
> foreign key constraint.  Postgres doesn't require you to keep such
> an index, but it's a really good idea if you ever update the referenced
> column.


Thanks.  You're right.  That column (which is a foreign key) is a component of
a multi-column index, but I don't have an index just for it.

-Aram

Re: [GENERAL] PostgreSQL server embedded in NAS firmware?

2012-09-06 Thread Tom Lane
Andrew Barnham  writes:
> Scratch that. An immediate show stopping pitfall occurs to me: the
> necessity to match CPU/OS Architecture between primary server and replicate
> target.  Doubtful that there are any consumer NAS products out there
> running linux on 64bit/intel

Maybe not, but there are with 32-bit Intel ... if you really want to do
this, there's nothing to stop you from running a 32-bit build on your
primary machine and then replicating to the NAS.  This would limit what
you could crank shared_buffers up to, but otherwise should work fine.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] regexp_matches question

2012-09-06 Thread Sergio Basurto
On Tue, 2012-09-04 at 21:58 -0400, David Johnston wrote:

> On Sep 4, 2012, at 21:39, Sergio Basurto  wrote:
> 
> > I am using regexp_matches in a function like this
> > 
> > create or replace function test (v_string   in text) returns varchar as 
> > $$
> > declare
> > i_strings   text[];
> > i_string    text[];
> > 
> > i_strings := 
> > regexp_matches(v_string,E'[a-zA-Z0-9:\\s\\-\\.#%]*:[A-Za-z0-9\\s\\-\\.#%]+','g');
> 
> You can store a single array value into i_strings.  It does not magically 
> convert a multi-row result into an array.  You can use ARRAY_AGG to do so or 
> execute the query directly as part of the loop while using a "record" 
> variable to store the current row's value(s). 
> 
> > 
> > -- Then I use  the results
> > foreach i_string slice 1 in array i_strings
> > loop
> > raise notice 'row = %',i_string;
> > end loop;
> > 
> > when I run the function like this:
> > 
> > select test('1:Warehouse1;2:Warehouse2;');
> > 
> > postgresql complains:
> > ERROR:  query "SELECT 
> > regexp_matches(v_string,E'[a-zA-Z0-9:\\s\\-\\.#%]*:[A-Za-z0-9\\s\\-\\.#%]+','g')"
> >  returned more than one row
> > 
> > Why is postgres sending the ERROR?
> > 
> > Of course I am expecting more than one row! That's why it is in a foreach 
> > loop in the first place.
> > 
> > If I run:
> > select 
> > regexp_matches('1:Warehouse1;2:Warehouse2;',E'[a-zA-Z0-9:\\s\\-\\.#%]*:[A-Za-z0-9\\s\\-\\.#%]+','g');
> > regexp_matches 
> > 
> > {1:Warehouse1}
> > {2:Warehouse2}
> > (2 rows)
> > 
> > Am I doing something wrong?
> 
> Note that because you do not use grouping in your expression there is only a 
> single array "cell" in each row - but there could be more than one in which 
> case your for-each above would effectively loop through each sub-component of 
> the match.
> 
> > 
> > Regards,
> >
> 
> David J.
> 

Thank you David for all your help,

I got it working finally, thanks to your explanation; the code that
works for me is:

create or replace function test (v_string   in text) returns varchar
as $$
declare
i_strings  refcursor := null;
i_string   text[];
i_query    text;

begin

i_query := 'select regexp_matches(''' || v_string ||
''',E''[a-zA-Z0-9:\\s\\-\\.#%]*:[A-Za-z0-9\\s\\-\\.#%]+'',''g'')';
open i_strings for execute i_query;

if i_strings is not null then
    loop
        fetch i_strings into i_string;
        exit when not found;
        raise notice 'row = %', i_string;
    end loop;
    close i_strings;
end if;
return 0;
end;
$$ LANGUAGE plpgsql;

Thanks again David.

Kind Regards,


Re: [GENERAL] recovering databases

2012-09-06 Thread Yvon Thoraval
2012/9/4 Albe Laurenz 

> Yvon Thoraval wrote:
> > on my computer I had a disk problem, then I had to reinstall the system
> (Xubuntu 12.04).
> > I've backuped some parts of the disk, namely /etc, /var and /home.
> > I wonder if I'm able to recover my past databases in the
> /var/lib/postgresql/9.1/ backup.
> > If yes, how ?
>
> If you have the complete data directory, there should be
> no problem.
> Install PostgreSQL 9.1 and start the server against the
> data directory.
>
> Yours,
> Laurenz Albe
>


fine, thanks

-- 
Yvon


Re: [GENERAL] PostgreSQL server embedded in NAS firmware?

2012-09-06 Thread Andrew Barnham
Scratch that. An immediate show stopping pitfall occurs to me: the
necessity to match CPU/OS Architecture between primary server and replicate
target.  Doubtful that there are any consumer NAS products out there
running linux on 64bit/intel

On Fri, Sep 7, 2012 at 8:23 AM, Andrew Barnham wrote:

> Hi
>
> I currently run a modest streaming replication target on a cheap, single
> disk ASUS media center; replicating a 100GB PG database.
>
> I want to add RAID via a consumer grade NAS device.
>
> As far as I can tell consumer grade NAS devices these days appear to be
> fairly rich & flexible embedded linux/freebsd systems.
>
> Has anyone had any experience with running postgresql on the NAS device
> itself?  Which products?  Any traps or pitfalls or integrity concerns about
> such an arrangement?
>
> Andrew
>


[GENERAL] PostgreSQL server embedded in NAS firmware?

2012-09-06 Thread Andrew Barnham
Hi

I currently run a modest streaming replication target on a cheap, single
disk ASUS media center; replicating a 100GB PG database.

I want to add RAID via a consumer grade NAS device.

As far as I can tell consumer grade NAS devices these days appear to be
fairly rich & flexible embedded linux/freebsd systems.

Has anyone had any experience with running postgresql on the NAS device
itself?  Which products?  Any traps or pitfalls or integrity concerns about
such an arrangement?

Andrew


Re: [GENERAL] Multiple indexes, huge table

2012-09-06 Thread Tom Lane
Aram Fingal  writes:
> I have a table which currently has about 500 million rows.  For the most 
> part, the situation is going to be that I will import a few hundred million 
> more rows from text files once every few months but otherwise there won't be 
> any insert, update or delete queries.  I have created five indexes, some of 
> them multi-column, which make a tremendous difference in performance for the 
> statistical queries which I need to run frequently (seconds versus hours.)  
> When adding data to the table, however, I have found that it is much faster 
> to drop all the indexes, copy the data to the table and then create the 
> indexes again (hours versus days.)  So, my question is whether this is really 
> the best way.  Should I write a script which drops all the indexes, copies 
> the data and then recreates the indexes or is there a better way to do this?  

Yes, that's actually recommended practice for such cases.
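
In outline, such a load script might look like this (a sketch with
hypothetical index/table names; COPY from a server-side file also
requires superuser):

    DROP INDEX idx_sample, idx_sample_pos;
    COPY measurements FROM '/data/new_batch.txt';
    CREATE INDEX idx_sample ON measurements (sample_name);
    CREATE INDEX idx_sample_pos ON measurements (sample_name, position);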

> There are also rare cases where I might want to make a correction.  For 
> example, one of the columns is sample name which is a foreign key to a 
> samples table defined with " ON UPDATE CASCADE."  I decided to change a 
> sample name in the samples table which should affect about 20 million rows 
> out of the previously mentioned 500 million.  That query has now been running 
> for five days and isn't finished yet.  

That sounds like you lack an index on the referencing column of the
foreign key constraint.  Postgres doesn't require you to keep such
an index, but it's a really good idea if you ever update the referenced
column.
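
Concretely, that means a dedicated index on the referencing side, e.g.
(hypothetical names):

    CREATE INDEX measurements_sample_name_idx ON measurements (sample_name);

so that the per-row FK trigger probe is an index lookup rather than a
sequential scan.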

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pivot functions with variable number of columns

2012-09-06 Thread Misa Simic
Hi Pavel,

Hm... To me the workaround looks like exactly the same thing:

1) it uses Dynamic SQL to build the query (but returns a refcursor instead
of text)

2) the client still needs to execute 2 commands (the second is a fetch
instead of executing the 'result') -- see the sketch below
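
A minimal sketch of that two-step refcursor pattern (hypothetical function
name; a real version would assemble the pivot query string from the data):

    create or replace function do_crosstab() returns refcursor as $$
    declare
        c refcursor := 'crosstab_cur';
    begin
        -- in practice the query text is built dynamically at this point
        open c for execute 'select generate_series(1,3)';
        return c;
    end;
    $$ language plpgsql;

    -- client side, two commands within one transaction:
    begin;
    select do_crosstab();
    fetch all in crosstab_cur;
    commit;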

However, based on your name, and the name of the blog author :) I have
concluded you are working on Stored Procedures things?

I have a few questions about that... but will send another mail so as not
to mix subjects...

Cheers,

Misa

On Thursday, September 6, 2012, Pavel Stehule wrote:

> Hello
>
> 2012/9/6 Misa Simic:
> > That is one of the most wanted features of PostgreSQL, which is not solved
> > yet...
> >
> > But it seems it will be solved soon, with the introduction of Stored Procedures...
> >
>
> I wish :)
>
> > For now, you must "know" what result (columns) you expect...
> >
> > So the only option for now is to use Dynamic SQL - to build your query
> > dynamically based on data in your table (based on the column that should
> > be pivoted)..
> >
> > And then execute that query...
> >
> > You can use your client side language to build the SQL, or inside the DB
> > you could make a function that returns "text" as your Dynamic SQL and then
> > execute it from your client...
> >
>
> there is a some workaround
>
> http://okbob.blogspot.cz/2008/08/using-cursors-for-generating-cross.html
>
> Pavel
>
> > Kind Regards,
> >
> > Misa
> >
> > 2012/9/6 punnoose:
> >>
> >> I want to have a pivot-like function in which I should have a variable
> >> number of columns. I went for crosstab but it does not support a variable
> >> number of columns. Can anybody suggest an alternative? Like, if I have an
> >> event at a particular time of the day, say one at 02:35 and one at 11:34,
> >> then I should have column names 02:35 and 11:34.
> >> Please do help me.
> >> Punnoose
> >>
> >>
> >>
> >>
> >> --
> >> View this message in context:
> >>
> http://postgresql.1045698.n5.nabble.com/pivot-functions-with-variable-number-of-columns-tp5723013.html
> >> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
> >>
> >>
> >> --
> >> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> >> To make changes to your subscription:
> >> http://www.postgresql.org/mailpref/pgsql-general
> >
> >
>


[GENERAL] Multiple indexes, huge table

2012-09-06 Thread Aram Fingal
I have a table which currently has about 500 million rows.  For the most part, 
the situation is going to be that I will import a few hundred million more rows 
from text files once every few months but otherwise there won't be any insert, 
update or delete queries.  I have created five indexes, some of them 
multi-column, which make a tremendous difference in performance for the 
statistical queries which I need to run frequently (seconds versus hours.)  
When adding data to the table, however, I have found that it is much faster to 
drop all the indexes, copy the data to the table and then create the indexes 
again (hours versus days.)  So, my question is whether this is really the best 
way.  Should I write a script which drops all the indexes, copies the data and 
then recreates the indexes or is there a better way to do this?  

There are also rare cases where I might want to make a correction.  For 
example, one of the columns is sample name which is a foreign key to a samples 
table defined with " ON UPDATE CASCADE."  I decided to change a sample name in 
the samples table which should affect about 20 million rows out of the 
previously mentioned 500 million.  That query has now been running for five 
days and isn't finished yet.  

-Aram

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_dump slow on windows

2012-09-06 Thread John R Pierce

On 09/06/12 1:34 PM, Kobus Wolvaardt wrote:
> Something that is curious is that if a DB takes long, it really takes
> horribly long, like some kind of a lock is holding it. It would sit at
> a few kb dump size for 20 minutes and then run a bit and get stuck
> again (as far as we can tell); what we do know is that it is way too
> slow for some IO or cpu starvation. We have seen a 5GB backup sitting
> at 1GB after 12 hours and then we stop it.
>
> Any suggestions? Can autovacuum or lack thereof cause this? It seems
> no one has been doing any maintenance on the DB (it does look like
> autovacuum is running), so any suggestions would be nice.

try...

select * from pg_stat_activity;
select * from pg_locks;

next time it's hung
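
For example, a join along these lines matches locks to queries (a sketch;
on 8.4 the process id column in pg_stat_activity is procpid):

    SELECT l.locktype, l.mode, l.granted, a.procpid, a.current_query
    FROM pg_locks l
    JOIN pg_stat_activity a ON a.procpid = l.pid
    ORDER BY l.granted, a.procpid;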



--
john r pierceN 37, W 122
santa cruz ca mid-left coast



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] pg_dump slow on windows

2012-09-06 Thread Kobus Wolvaardt
Hi,

I hope this is the right list. I have read through a few pg_dump slow posts
but none of them seem to apply to our case. We have a nice big server
running windows server 2008 and postgres 8.4. The machine does nothing else
and every so often the nightly backup takes 10 to 12 hours to
complete. Most nights it completes in 1 to 2 hours. Just for some context,
5 rather large DBs get backed up. All of them have many schemas (hundreds)
and many, many tables.


A few things I cannot change. I cannot switch to Linux even though I want
to. I cannot upgrade to a newer postgres just yet (though 9.0 should be
installed before year end). One of the issues holding a newer postgres up
is the fact that backups aren't running right. :-)

So my first thought was the machine or DB being overworked, but apart from
a lone vacuum at 2am there is little else going on; the cpu seems fine. I
thought maybe IO, but running the command as a user during peak working
hours seems to run well (< hour per DB).

Something that is curious is that if a DB takes long, it really takes
horribly long, like some kind of a lock is holding it. It would sit at a few
kb dump size for 20 minutes and then run a bit and get stuck again (as far
as we can tell); what we do know is that it is way too slow for some IO or
cpu starvation. We have seen a 5GB backup sitting at 1GB after 12 hours and
then we stop it.

Any suggestions? Can autovacuum or lack thereof cause this? It seems no one
has been doing any maintenance on the DB (it does look like autovacuum is
running), so any suggestions would be nice.

Thanks,
Kobus

P.S. Would we see performance improvements on windows going from 8.4 to
9.0? Any comment on the difference between 32 and 64? Is it a safe
migration?


Re: [GENERAL] return text from explain

2012-09-06 Thread Tom Lane
Bruce Momjian  writes:
> On Thu, Sep  6, 2012 at 07:18:50PM +0200, Willy-Bas Loos wrote:
>> Is it possible to use the output of explain as text values?

> I think you have to do EXPLAIN in a function and call the function.

Yeah, IIRC you can use EXPLAIN as the source statement in a plpgsql
FOR loop, ie
FOR text_variable IN EXPLAIN ... LOOP
which gets you the output one line at a time.
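
For a dynamic query, the same idea works through EXECUTE; a minimal sketch
(hypothetical function name):

    create or replace function explain_query(q text) returns setof text as $$
    declare
        line text;
    begin
        -- each row of EXPLAIN output arrives as one line of text
        for line in execute 'EXPLAIN ' || q loop
            return next line;
        end loop;
    end;
    $$ language plpgsql;

    -- usage: select * from explain_query('select 1');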

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pivot functions with variable number of columns

2012-09-06 Thread Pavel Stehule
Hello

2012/9/6 Misa Simic :
> That is one of the most wanted features of PostgreSQL, which is not solved yet...
>
> But it seems it will be solved soon, with the introduction of Stored Procedures...
>

I wish :)

> For now, you must "know" what result (columns) you expect...
>
> So the only option for now is to use Dynamic SQL - to build your query
> dynamically based on data in your table (based on the column that should be
> pivoted)..
>
> And then execute that query...
>
> You can use your client side language to build the SQL, or inside the DB you
> could make a function that returns "text" as your Dynamic SQL and then
> execute it from your client...
>

there is a workaround

http://okbob.blogspot.cz/2008/08/using-cursors-for-generating-cross.html

Pavel

> Kind Regards,
>
> Misa
>
> 2012/9/6 punnoose 
>>
>> I want to have a pivot-like function in which I should have a variable
>> number of columns. I went for crosstab but it does not support a variable
>> number of columns. Can anybody suggest an alternative? Like, if I have an
>> event at a particular time of the day, say one at 02:35 and one at 11:34,
>> then I should have column names 02:35 and 11:34.
>> Please do help me.
>> Punnoose
>>
>>
>>
>>
>> --
>> View this message in context:
>> http://postgresql.1045698.n5.nabble.com/pivot-functions-with-variable-number-of-columns-tp5723013.html
>> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>>
>>
>> --
>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
>
>


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Moving several databases into one database with several schemas

2012-09-06 Thread John R Pierce

On 09/06/12 5:30 AM, Edson Richter wrote:
>> You could change the default setting for the user with
>>
>> ALTER ROLE someuser SET search_path=...
> That is perfect! I can have separate users for each application, and
> then they will have the correct search path.
> You saved my day,

the default search_path is "$user",public, so by naming your schemas
after the usernames, you don't even need to alter role...




--
john r pierceN 37, W 122
santa cruz ca mid-left coast



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] return text from explain

2012-09-06 Thread Willy-Bas Loos
Bruce, thanks for answering.
I can't find the example you mean.
I tried a function, but it won't work...

create or replace function test() returns setof record as $$
declare
t_rec record;
begin
for t_rec in (
explain
 select *
from (values ('a'),('b'), ('c')) foo(x)
where x > 'a'
) loop
return next t_rec;
end loop;
end;
$$ language plpgsql;


select *
from test() as (x text)

ERROR:  syntax error at or near "explain"
LINE 6: explain

When I comment out the "explain", then it works.
You can also run the query from "explain" to "x > 'a'".

Cheers,

WBL

On Thu, Sep 6, 2012 at 8:03 PM, Bruce Momjian  wrote:

> On Thu, Sep  6, 2012 at 07:18:50PM +0200, Willy-Bas Loos wrote:
> > Hi,
> >
> > Is it possible to use the output of explain as text values?
> > This won't work:
> >
> > explain select *
> > from (values (1),(2),(3)) foo(x)
> > where x > 2
> >
> > What i really want is to explain analyze a dynamic query that i build up
> in a
> > function.
> > If it returns a value i can do stuff with it, but i can't find out how
> to grasp
> > the query plan as a value.
> >
> > pgAdmin shows it as text values in the data output tab, but that might
> be a
> > hack outside the database realm.
>
> I think you have to do EXPLAIN in a function and call the function.  My
> CTE presentation has an example of that:
>
> http://momjian.us/main/presentations/features.html#cte
>
> --
>   Bruce Momjian  http://momjian.us
>   EnterpriseDB http://enterprisedb.com
>
>   + It's impossible for everything to be true. +
>



-- 
"Quality comes from focus and clarity of purpose" -- Mark Shuttleworth


Re: [GENERAL] pivot functions with variable number of columns

2012-09-06 Thread Misa Simic
That is one of the most wanted features of PostgreSQL, which is not solved yet...

But it seems it will be solved soon, with the introduction of Stored Procedures...

For now, you must "know" what result (columns) you expect...

So the only option for now is to use Dynamic SQL - to build your query
dynamically based on data in your table (based on the column that should be
pivoted)..

And then execute that query...

You can use your client side language to build the SQL, or inside the DB you
could make a function that returns "text" as your Dynamic SQL and then
execute it from your client...

Kind Regards,

Misa

2012/9/6 punnoose 

> I want to have a pivot-like function in which I should have a variable
> number of columns. I went for crosstab but it does not support a variable
> number of columns. Can anybody suggest an alternative? Like, if I have an
> event at a particular time of the day, say one at 02:35 and one at 11:34,
> then I should have column names 02:35 and 11:34.
> Please do help me.
> Punnoose
>
>
>
>
> --
> View this message in context:
> http://postgresql.1045698.n5.nabble.com/pivot-functions-with-variable-number-of-columns-tp5723013.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] return text from explain

2012-09-06 Thread Bruce Momjian
On Thu, Sep  6, 2012 at 07:18:50PM +0200, Willy-Bas Loos wrote:
> Hi,
> 
> Is it possible to use the output of explain as text values?
> This won't work:
> 
> explain select *
> from (values (1),(2),(3)) foo(x)
> where x > 2
> 
> What i really want is to explain analyze a dynamic query that i build up in a
> function.
> If it returns a value i can do stuff with it, but i can't find out how to 
> grasp
> the query plan as a value.
> 
> pgAdmin shows it as text values in the data output tab, but that might be a
> hack outside the database realm.

I think you have to do EXPLAIN in a function and call the function.  My
CTE presentation has an example of that:

http://momjian.us/main/presentations/features.html#cte

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] return text from explain

2012-09-06 Thread Willy-Bas Loos
correction.
What won't work is:

select y||'--some text'
from
(
explain select *
from (values (1),(2), (3)) foo(x)
where x > 2
) bar(y)

Cheers,

WBL

On Thu, Sep 6, 2012 at 7:18 PM, Willy-Bas Loos  wrote:

> Hi,
>
> Is it possible to use the output of explain as text values?
> This won't work:
>
> explain select *
> from (values (1),(2),(3)) foo(x)
> where x > 2
>
> What i really want is to explain analyze a dynamic query that i build up
> in a function.
> If it returns a value i can do stuff with it, but i can't find out how to
> grasp the query plan as a value.
>
> pgAdmin shows it as text values in the data output tab, but that might be
> a hack outside the database realm.
>
> Cheers,
>
> WBL
> --
> "Quality comes from focus and clarity of purpose" -- Mark Shuttleworth
>
>


-- 
"Quality comes from focus and clarity of purpose" -- Mark Shuttleworth


[GENERAL] return text from explain

2012-09-06 Thread Willy-Bas Loos
Hi,

Is it possible to use the output of explain as text values?
This won't work:

explain select *
from (values (1),(2),(3)) foo(x)
where x > 2

What i really want is to explain analyze a dynamic query that i build up in
a function.
If it returns a value i can do stuff with it, but i can't find out how to
grasp the query plan as a value.

pgAdmin shows it as text values in the data output tab, but that might be a
hack outside the database realm.

Cheers,

WBL
-- 
"Quality comes from focus and clarity of purpose" -- Mark Shuttleworth


Re: [GENERAL] pivot functions with variable number of columns

2012-09-06 Thread Willy-Bas Loos
a very nice way is to use a cursor.
http://okbob.blogspot.cz/2008/08/using-cursors-for-generating-cross.html

HTH

WBL

On Thu, Sep 6, 2012 at 12:40 PM, Vincent Veyron  wrote:

> Le jeudi 06 septembre 2012 à 00:40 -0700, Chris Travers a écrit :
> >
> >
> > On Wed, Sep 5, 2012 at 10:14 PM, punnoose wrote:
> >> I want to have a pivot-like function in which I should have a variable
> >> number of columns. I went for crosstab but it does not support a
> >> variable number of columns. Can anybody suggest an alternative? Like,
> >> if I have an event at a particular time of the day, say one at 02:35
> >> and one at 11:34, then I should have column names 02:35 and 11:34.
> >
> > You could detect the columns you want to return and use a plpgsql
> > function that returns a refcursor, I suppose.
>
> Below is an example in Perl : it selects the values in column
> 'time_of_day' from 'your_table' and builds a table named 'crosstab' with
> the proper column names. You can start from this and adjust to your
> needs.
>
> If at all possible, I find a good solution to these problems is to
> provide an easy way for your users to download the data in csv format;
> that way they can import it into their office suite for processing there
> (MS-Access, OpenOffice have crosstab queries)
>
>
> CREATE OR REPLACE FUNCTION build_crosstab ( ) RETURNS VOID AS $$
>
> my @field_names;
> my $field_list;
>
> # the query that fetches the data
> my $rv = spi_exec_query("SELECT time_of_day FROM your_table GROUP BY
> time_of_day ORDER BY 1");
>
> # run the query, count the rows
> my $nrows = $rv->{processed};
>
> # for each row, collect the column name
> foreach my $rn (0 .. $nrows - 1) {
>
> my $row = $rv->{rows}[$rn];
>
> push @field_names, '"' . $row->{time_of_day} . '"' ;
>
>}
>
> for ( @field_names ) {
>
> $field_list .= ', ' . $_ . ' text';
>
> }
>
> my $create_table = 'CREATE TABLE crosstab (' . substr($field_list, 1) .
> ')';
>
> my $action = spi_exec_query($create_table);
>
> $$ LANGUAGE plperlu;
>
>
>
>
>
>
>
> --
> Vincent Veyron
> http://marica.fr/
> Gestion informatisée des dossiers contentieux et des sinistres assurances
> pour le service juridique
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
"Quality comes from focus and clarity of purpose" -- Mark Shuttleworth


[GENERAL] RE: [GENERAL] UPDATE RULE to be invoked when UPDATE .. WHERE fails the WHERE predicate ?

2012-09-06 Thread John Lumby




> Date: Mon, 3 Sep 2012 09:31:21 +0100
> Subject: Re: [GENERAL] UPDATE RULE to be invoked when UPDATE .. WHERE fails 
> the WHERE predicate ?
> From: dean.a.rash...@gmail.com
> To: johnlu...@hotmail.com
> CC: pgsql-general@postgresql.org; pavan.deola...@gmail.com
>
> On 2 September 2012 22:42, johnlumby  wrote:
> > On 09/01/12 03:46, Dean Rasheed wrote:
> >> What you are trying to do cannot be achieved rules, and doing it this
> >> way with triggers is likely to be messy. I think you need to consider
> >> a different approach.
> >>
> >> It sounds like what you really want is finer-grained control over the
> >> Hibernate optimistic locking check. One way of doing that would be to
> >> do the check yourself in a BEFORE UPDATE ROW trigger, with something
> >> to the effect of:
> >>
> >> if new.version != old.version+1:
> >> raise concurrency error (will cause the entire transaction to be
> >> rolled back)
> >
> >
> > Thanks Dean. A nice suggestion but my reading of the rules for a BEFORE
> > row-level trigger
> > is that it cannot see the NEW tuple :
> >
> > "The data change (insertion, update, or deletion) causing the trigger to
> > fire
> > is naturally not visible to SQL commands executed in a row-level
> > BEFORE trigger,
> > because it hasn't happened yet."
> >
>
> What it's saying is that if you run a SELECT statement inside the
> BEFORE trigger function, you won't see the new values because the
> table hasn't been updated yet. However, a BEFORE UPDATE trigger has
> access to variables called OLD and NEW which are designed specifically
> for that purpose (you don't need to do a SELECT in the trigger). OLD
> is the value currently in the table (before the update) and NEW is the
> value about to be set on the table (modulo the caveat below).
>
> So you can implement optimistic locking as follows:
> 1). SELECT the original data from the table, including the original
> value of version
> 2). Work out the new values to set
> 3). UPDATE the table with the new values, and set version=original_version+1
>

Thanks Dean.  I tried it out and it works perfectly using the BEFORE row 
trigger.
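
For reference, a minimal sketch of such a version-check trigger
(hypothetical table/column names, not the exact code used here):

    create or replace function check_version() returns trigger as $$
    begin
        -- NEW carries original_version+1; OLD is the row as it is now,
        -- so any concurrent update makes the two sides disagree
        if new.version is distinct from old.version + 1 then
            raise exception 'concurrent update detected on %', tg_table_name;
        end if;
        return new;
    end;
    $$ language plpgsql;

    create trigger entity_version_check
        before update on entity
        for each row execute procedure check_version();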

Incidentally: testing this also brought to light a strange aspect of
postgresql locking that I had never seen before - the "transactionid" lock
and a deadlock involving same in certain circumstances.  I mention this
just in case someone comes along and tries out this trigger idea for
enforcing serialization.  Because of the transactionid lock, deadlocks 
can arise even when the two deadlocked transactions are both updating
the very same database row and hold no other contested row/table locks,
which is hardly intuitive. Some discussion of this here
   http://archives.postgresql.org/pgsql-novice/2010-05/msg00065.php
Here is what my particular deadlock looked like

STATEMENT:  update ENTITY set version=$1, inherit_right=$2, name=$3, 
parent_id=$4, association_id=$5, association2_id=$6, long1=$7, long2=$8 where 
id=$9
LOG:  0: execute : update ENTITY set version=$1, inherit_right=$2, 
name=$3, parent_id=$4, association_id=$5, association2_id=$6, long1=$7, 
long2=$8 where id=$9
DETAIL:  parameters: $1 = '3', $2 = 't', $3 = NULL, $4 = '5', $5 = '5', $6 = 
NULL, $7 = '100663296', $8 = '117440511', $9 = '909'
LOCATION:  exec_execute_message, postgres.c:1976
ERROR:  40P01: deadlock detected
DETAIL:  Process 11251 waits for ExclusiveLock on tuple (0,91) of relation 
16416 of database 16384; blocked by process 11246.
    Process 11246 waits for ShareLock on transaction 3196; blocked by process 
11251.
    Process 11251: update ENTITY set version=$1, inherit_right=$2, name=$3, 
parent_id=$4, association_id=$5, association2_id=$6, long1=$7, long2=$8 where 
id=$9
    Process 11246: update ENTITY set version=$1, inherit_right=$2, name=$3, 
parent_id=$4, association_id=$5, association2_id=$6, long1=$7, long2=$8 where 
id=$9


> Then in the BEFORE UPDATE trigger NEW.version will be equal to
> original_version+1. So if you compare NEW.version with OLD.version+1,
> you are really comparing OLD.version with original_version, i.e.,
> testing that the value in the table immediately before the update is
> same as in step (1). Thus it traps the case where another process has
> modified the row under your feet. By that point, postgresql has a lock
> on the row about to be modified, so you are guarded against race
> conditions.
>
>
> Regards,
> Dean
  


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Moving several databases into one database with several schemas

2012-09-06 Thread Edson Richter

Em 06/09/2012 09:21, Albe Laurenz escreveu:
> Edson Richter wrote:
>>>> 2) Is there a way to specify the default schema in JDBC url
>>>> (or command I can issue to change
>>>> the default schema at runtime, like "set path...")?
>>>
>>> SET search_path=schema1,schema2,public;
>>
>> Problem is that my application uses JDBC and Connection Pooling. After a
>> connection is closed, I'll have to set the search path again, and again...
>> Nevertheless, the connection pool allows me to have one command to test if
>> a connection is available; I'll try to put SET search_path in there, and
>> see the results.
>
> I see.
> You could change the default setting for the user with
>
> ALTER ROLE someuser SET search_path=...

That is perfect! I can have separate users for each application, and
then they will have the correct search path.

You saved my day,

Thank you very much!

Edson

> Yours,
> Laurenz Albe






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] "Too far out of the mainstream"

2012-09-06 Thread Edson Richter

Em 06/09/2012 02:34, Chris Travers escreveu:
> On Wed, Sep 5, 2012 at 7:56 PM, Edson Richter wrote:
>>
>> Em 05/09/2012 23:49, Chris Travers escreveu:
>>> Regarding MySQL vs PostgreSQL:
>>>
>>> MySQL is what you get when app developers build a database server.
>>> PostgreSQL is what you get when db developers build a
>>> development platform.
>>>
>>> There really isn't anything more to say about it.
>>
>> This kind of claim is just to feed flame wars. Don't think I need
>> to state that a "db developer" becomes an "app developer" as soon
>> as he starts to develop any database server code, right?
>
> I don't mean it that way.

Ok, understood. My point here was just to return focus to the main
question, and avoid feeding the trolls :-)

> The basic thing is that MySQL's view of data integrity is extremely
> application centric.  Even today, applications get to tell the server
> whether to throw an error when you try to insert 0000-00-00 into a
> date field (this is via the sql_mode setting and admins can't restrict
> what an app can do there).  MySQL makes perfect sense when you are an
> application developer looking at the database as a place to store
> information for your own private use.  In essence, MySQL makes
> perfect sense when you realize that "my" = "private" in OO terms.

Yes, I agree. Nothing professional can run this way, but for personal
purposes, you can even call "access", "dbf" or "Isis txt format" a
database.

> This isn't necessarily a bad thing if that's what you are using it
> for, and because of ways the db market has developed there are a huge
> number of developers who are very happy with a lowest common
> denominator RDBMS where you can assume one app writing to the db (or
> at least any given relation), and possibly other apps reading.  In
> short if you want an easy db to port SQL code that was intended to be
> portable to, MySQL is the RDBMS for you.  For people who want to avoid
> putting business logic in the db, and want to put all the API's for
> interoperability and integration in their app logic, it's a good
> RDBMS.  In fact, I can't actually think of a better one.  This is
> *especially true* if you want to make it dangerous for other apps to
> write to the db, perhaps in order to say this is not supported and ask
> people to purchase more client access licenses.

Actually, for web based applications, developers are forced to add
validation at several levels. But it is still the database's
responsibility to accept or reject the data that will persist for a
lifetime (sometimes less).

> MySQL behavior that seems "incorrect" is not necessarily "incorrect"
> in that context.  It's a data store for one app to write to and
> optionally other apps to read from.  The app can be trusted to not do
> crazy things with sql_mode settings or the like, and if it does,
> whatever the app tells the db is correct behavior, the db is supposed
> to do.

It is incorrect in a way, though: with the MySQL behavior, data will get
corrupt in a very short time. I know because I tried with an application
that runs perfectly well in PostgreSQL and got corrupted in a very short
time when using MySQL. The same statement is true for Access and DBF in
any multi-user scenario.

> PostgreSQL on the other hand has been engineered from the beginning
> (as I understand it) with the idea that you have multiple applications
> writing to the same relations.  So a lot of the things like sql_mode
> settings, which are great for porting applications to MySQL, would be
> dangerous in a PostgreSQL context.  The relations are a public API,
> while in MySQL they are at least semi-private.  Additionally from the
> beginning you have had a very strong emphasis on being able to do
> advanced data modelling in PostgreSQL, perhaps to an extent even today
> unparalleled elsewhere.  If you are going to do db-level programming
> in PostgreSQL, you shouldn't IMO think like an application developer
> but rather like a database developer.
>
> What I am getting at is that if you are an app developer looking at
> databases, MySQL looks fine, and the warts more or less match how you
> would tend to think a db should act anyway.  If you are a db
> developer, PostgreSQL tries hard where we all agree on correct db
> behavior to do the right thing without respect to what the app might
> have intended.  On the other hand, this is mostly a platform for data
> modelling, and if you are an app developer a lot of things will seem
> weird in that context until you get used to it.
>
> Like it or not, the perspectives are very different.  If all you want
> is an information store for your app with reporting capabilities, then
> you end up with a different solution than if you want to manage data
> in a centralized way.

Of course. But remember that writing wrong applications is wrong by
design, and its cause is the wrong decisions. The main problem is the
people behind and their behavior, not the role they play. Remember that
there is no software without bugs.

Re: [GENERAL] Moving several databases into one database with several schemas

2012-09-06 Thread Albe Laurenz
Edson Richter wrote:
>>> 2) Is there a way to specify the default schema in JDBC url
>>> (or command I can issue to change
>>> the default schema at runtime, like "set path...")?

>> SET search_path=schema1,schema2,public;

> Problem is that my application uses JDBC and Connection Pooling. After a
> connection is closed, I'll have to set the search path again, and again...
> Nevertheless, the connection pool allows me to have one command to test if
> a connection is available; I'll try to put SET search_path in there, and
> see the results.

I see.
You could change the default setting for the user with

ALTER ROLE someuser SET search_path=...

Yours,
Laurenz Albe


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Moving several databases into one database with several schemas

2012-09-06 Thread Edson Richter

Em 06/09/2012 05:12, Albe Laurenz escreveu:
> Edson Richter wrote:
>> That's what I want to do now: I would like to consolidate these 4
>> separate databases in 1 database with 5 schemas:
>>
>> - Main schema: will have all shared tables, that will be
>>   read only most of time;
>> - Schema1 to Schema4: will have their own tables, read write.
>>
>> Now the questions:
>>
>> 1) Is there a way to "backup" database1 and "restore" in the
>> consolidated database, but in "schema1" (not overwriting everything)?
>
> There is no simple way.
> You could pg_dump in plain format (-F p) and edit the SQL file,
> but that's cumbersome and error-prone.
>
> What I would try to do is restore the dump as it is in
> a new database, rename the schema, e.g.
>
> ALTER SCHEMA public RENAME TO schema1;
>
> Then pg_dump that and restore it into the destination database.
> Adjust the schema permissions as desired.

Ok, seems the way to go. No big deal, just a few hours of work for the
cycle "restore in a tempdb", "rename schema", "backup schema", "restore
in consolidated".

>> 2) Is there a way to specify the default schema in JDBC url
>> (or command I can issue to change
>> the default schema at runtime, like "set path...")?
>
> SET search_path=schema1,schema2,public;

Problem is that my application uses JDBC and Connection Pooling. After a
connection is closed, I'll have to set the search path again, and again...
Nevertheless, the connection pool allows me to have one command to test if
a connection is available; I'll try to put SET search_path in there, and
see the results. The search path for schema1 will be

SET search_path=schema1,main,public;

>> I've tried following command (on Windows platform), but the command
>> returns without any import, and "exit code 0" (output translated,
>> because I do use PT-BR):
>>
>> pg_restore.exe --host localhost --port 5432 --username "postgres"
>> --dbname "consolidado" --role "MyUser" --no-password --schema main
>> --verbose "E:\backups\maindatabase.bk"
>> pg_restore: connecting to database for restore
>>
>> Process returned exit code 0.
>
> That will try to restore schema "main" from the dump.
> If there is no such schema in the dump (in the original
> database), it will do nothing.

Ok, thanks for the clarification.
I'll share my experience and results after I finish this...

Regards,

Edson.

> Yours,
> Laurenz Albe






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pivot functions with variable number of columns

2012-09-06 Thread Vincent Veyron
Le jeudi 06 septembre 2012 à 00:40 -0700, Chris Travers a écrit :
> 
> 
> On Wed, Sep 5, 2012 at 10:14 PM, punnoose wrote:
>> I want to have a pivot-like function in which I should have a variable
>> number of columns. I went for crosstab but it does not support a
>> variable number of columns. Can anybody suggest an alternative? Like,
>> if I have an event at a particular time of the day, say one at 02:35
>> and one at 11:34, then I should have column names 02:35 and 11:34.
> 
> You could detect the columns you want to return and use a plpgsql
> function that returns a refcursor, I suppose.

Below is an example in Perl : it selects the values in column
'time_of_day' from 'your_table' and builds a table named 'crosstab' with
the proper column names. You can start from this and adjust to your
needs. 

If at all possible, I find a good solution to these problems is to
provide an easy way for your users to download the data in csv format;
that way they can import it into their office suite for processing there
(MS-Access, OpenOffice have crosstab queries)


CREATE OR REPLACE FUNCTION build_crosstab ( ) RETURNS VOID AS $$

my @field_names;
my $field_list;

# the query that fetches the data
my $rv = spi_exec_query("SELECT time_of_day FROM your_table GROUP BY
time_of_day ORDER BY 1");

# run the query, count the rows
my $nrows = $rv->{processed};

# for each row, collect the column name
foreach my $rn (0 .. $nrows - 1) {
 
my $row = $rv->{rows}[$rn];

push @field_names, '"' . $row->{time_of_day} . '"' ;

   }

for ( @field_names ) {

$field_list .= ', ' . $_ . ' text';

}

my $create_table = 'CREATE TABLE crosstab (' . substr($field_list, 1) .
')';

my $action = spi_exec_query($create_table);

$$ LANGUAGE plperlu;







-- 
Vincent Veyron
http://marica.fr/
Gestion informatisée des dossiers contentieux et des sinistres assurances pour 
le service juridique



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] max_connections

2012-09-06 Thread Albe Laurenz
Sireesha Modumudi wrote:
> I am using postgres 8.3.9 on SUSE 64 bit. By default max_connections
> is 100, but I want to know if this can be increased, and if so, what
> we should take into consideration.

It can be increased, but you have to restart the server for
the change to take effect.
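
For example, in postgresql.conf (the new value is picked up only after a
server restart):

    max_connections = 200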

It is not a good idea to increase the setting without thought.
There is a Wiki article about it:
http://wiki.postgresql.org/wiki/Number_Of_Database_Connections

If you need more than 100 connections, consider the use of a
connection pool.

Yours,
Laurenz Albe


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Moving several databases into one database with several schemas

2012-09-06 Thread Albe Laurenz
Edson Richter wrote:
> That's what I want to do now: I would like to consolidate these 4
separate databases in 1
> database with 5 schemas:
> 
> - Main schema: will have all shared tables, that will be
>   read only most of time;
> - Schema1 to Schema4: will have their own tables, read write.
> 
> Now the questions:
> 
> 1) Is there a way to "backup" database1 and "restore" in the
> consolidated database, but in
> "schema1" (not overwriting everything)?

There is no simple way.
You could pg_dump in plain format (-F p) and edit the SQL file,
but that's cumbersome and error-prone.

What I would try to do is restore the dump as it is in
a new database, rename the schema, e.g.

ALTER SCHEMA public RENAME TO schema1;

Then pg_dump that and restore it into the destination database.
Adjust the schema permissions as desired.
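
For illustration, the whole cycle could look like this (a sketch with
hypothetical database names and file paths):

    createdb tempdb
    pg_restore -d tempdb database1.backup
    psql -d tempdb -c "ALTER SCHEMA public RENAME TO schema1"
    pg_dump -F c -n schema1 -f schema1.backup tempdb
    pg_restore -d consolidado schema1.backup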

> 2) Is there a way to specify the default schema in JDBC url
> (or command I can issue to change
> the default schema at runtime, like "set path...")?

SET search_path=schema1,schema2,public;

> I've tried following command (on Windows platform), but the command
> returns without any import, and "exit
> code 0" (output translated, because I do use PT-BR):
>
> pg_restore.exe --host localhost --port 5432 --username "postgres"
> --dbname "consolidado" --role
> "MyUser" --no-password  --schema main --verbose
> "E:\backups\maindatabase.bk"
> pg_restore: connecting to database for restore
> 
> Process returned exit code 0.

That will try to restore schema "main" from the dump.
If there is no such schema in the dump (in the original
database), it will do nothing.

Yours,
Laurenz Albe


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pivot functions with variable number of columns

2012-09-06 Thread Chris Travers
On Wed, Sep 5, 2012 at 10:14 PM, punnoose wrote:

> I want to have a pivot-like function in which I should have a variable
> number of columns. I went for crosstab but it does not support a variable
> number of columns. Can anybody suggest an alternative? Like, if I have an
> event at a particular time of the day, say one at 02:35 and one at 11:34,
> then I should have column names 02:35 and 11:34.

You could detect the columns you want to return and use a plpgsql function
that returns a refcursor, I suppose.

Best Wishes,
Chris Travers


Re: [GENERAL] pivot functions with variable number of columns

2012-09-06 Thread Raghavendra
On Thu, Sep 6, 2012 at 10:44 AM, punnoose wrote:

> I want to have a pivot-like function in which I should have a variable
> number of columns. I went for crosstab but it does not support a variable
> number of columns. Can anybody suggest an alternative? Like, if I have an
> event at a particular time of the day, say one at 02:35 and one at 11:34,
> then I should have column names 02:35 and 11:34.
> Please do help me.
> Punnoose
>
>
I am not sure how your table structure is designed to make the best use of
crosstab. Here is a simple example to shed some light on how to work with
crosstab (which comes from the contrib module tablefunc, so that must be
installed).

*Table & Data:*

CREATE TABLE pivot_test (id integer, customer_id integer, product_code
VARCHAR, quantity integer);

INSERT INTO pivot_test VALUES (1, 1, 'A', 10);
INSERT INTO pivot_test VALUES (2, 1, 'B', 20);
INSERT INTO pivot_test VALUES (3, 1, 'C', 30);
INSERT INTO pivot_test VALUES (4, 2, 'A', 40);
INSERT INTO pivot_test VALUES (5, 2, 'C', 50);
INSERT INTO pivot_test VALUES (6, 3, 'A', 60);
INSERT INTO pivot_test VALUES (7, 3, 'B', 70);
INSERT INTO pivot_test VALUES (8, 3, 'C', 80);
INSERT INTO pivot_test VALUES (9, 3, 'D', 90);
INSERT INTO pivot_test VALUES (10, 4, 'A', 100);


postgres=# select * from pivot_test;

 id | customer_id | product_code | quantity
----+-------------+--------------+----------
  1 |           1 | A            |       10
  2 |           1 | B            |       20
  3 |           1 | C            |       30
  4 |           2 | A            |       40
  5 |           2 | C            |       50
  6 |           3 | A            |       60
  7 |           3 | B            |       70
  8 |           3 | C            |       80
  9 |           3 | D            |       90
 10 |           4 | A            |      100
(10 rows)

*Here is a pivot-kind result:*

postgres=# select * from crosstab
   ('select customer_id::text,
            product_code::text,
            quantity::text
     from pivot_test
     where product_code=''A'' or product_code=''B'' or product_code=''C''
     order by 1,2'
   ) as ct(customer_id text, "A" text, "B" text, "C" text);

 customer_id |  A  | B  | C
-------------+-----+----+----
 1           | 10  | 20 | 30
 2           | 40  | 50 |
 3           | 60  | 70 | 80
 4           | 100 |    |
(4 rows)
Someone might have a better example; meanwhile, you can work with the above one.

---
Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/