Re: [GENERAL] Postgres fails to start

2015-04-07 Thread Haiming Zhang
Hi Michael,

Thank you again. It seems hard to recover, so I will go the hard way (lose the
data). Lesson learned.

Regards,
Haiming

-Original Message-
From: Michael Paquier [mailto:michael.paqu...@gmail.com]
Sent: Tuesday, 7 April 2015 1:04 PM
To: Haiming Zhang
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Postgres fails to start

On Tue, Apr 7, 2015 at 10:46 AM, Haiming Zhang haiming.zh...@redflex.com.au 
wrote:
 Thank you for replying. The file was there. Is it OK to remove the 
 corrupted file to recover postgres? base/2008723533/2107262657.2

Be careful here, I would recommend taking a file-level snapshot before going on 
and doing perhaps-stupid things. As that's a btree right split, perhaps you could 
recover your data by ignoring this index...
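A sketch of what "ignoring this index" could look like, assuming the corruption is confined to an index (the filenode comes from the path quoted above; the index name returned is whatever pg_class reports):

```sql
-- Identify which relation the damaged file belongs to. Run this in the
-- database whose OID is 2008723533, per the path base/2008723533/...
SELECT relname, relkind
FROM pg_class
WHERE relfilenode = 2107262657;

-- If relkind is 'i' (an index), the data itself may be intact and the
-- index can simply be rebuilt (substitute the name returned above):
REINDEX INDEX the_index_name;
```

If the filenode belongs to a table rather than an index, REINDEX won't help and a restore from backup is the safer path.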

 Unfortunately, I only have a backup on February. Is there a way I can recover 
 it without losing the recent data?

What is lost is lost. A good backup strategy is essential.
--
Michael






-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [SQL] check data for datatype

2015-04-07 Thread Gerardo Herzig
I guess that could need something like (untested)

delete from bigtable where text_column !~ '^[0-9][0-9]*$';


HTH
Gerardo

- Mensaje original -
 De: Suresh Raja suresh.raja...@gmail.com
 Para: pgsql-general@postgresql.org, pgsql-...@postgresql.org
 Enviados: Viernes, 27 de Marzo 2015 15:08:43
 Asunto: [SQL] check data for datatype
 
 
 
 
 
 
 
 
 Hi All:
 
 
 I have a very large table and the column type is text. I would like
 to convert in numeric. How can I find rows that dont have numbers. I
 would like to delete those rows.
 
 
 Thanks,
 -Suresh Raja




Re: [GENERAL] Postgresql Development Options

2015-04-07 Thread Ray Madigan
On Sun, Apr 5, 2015 at 6:46 PM, Steve Atkins st...@blighty.com wrote:


 On Apr 5, 2015, at 1:21 PM, Ray Madigan raymond.madi...@gmail.com wrote:

  I have been using postgresql in java off and on for many years.   I now
 have an assignment where I have to build a very straightforward networked
 application that needs to be able to insert Array data types from a windows
 environment to a remote Linux Postgresql database.
 
  My first attempt was to use Qt.  There is a postgresql driver, and I found
 out the hard way that it doesn't support the Array data type.  I need a
 small ui so the user can set options for the application.   My question is,
 are there other UI options that I can use to develop this application.

 The Qt database driver is not great for general use. Where it shines is
 when you want to do simple CRUD queries and to have them connected to
 widgets with minimal work. It should support arrays, though, with a little
 data transformation.

 If you're looking to use C++ then Qt is an excellent framework for a GUI
 app - one that you won't beat for cross-platform work - but you might
 consider whether using libpqxx or libpq to connect to the database might
 suit your needs better.

 Cheers,
   Steve


I tried to use libpq this morning and all it can do is crash.  I have
Postgresql running on my local machine and have installed the ODBC drivers,
not that that matters at this point.  I can't seem to get past the call to
connect to the database:

my code looks like:

PGconn *connection;

char conninfo[250];


sprintf(conninfo, "user=%s password=%s dbname=%s hostaddr=%s port=%d",
        user, password, "foo", "192.168.3.3", 5433);

qDebug() << "Foo1: " << conninfo;


connection = PQconnectdb(conninfo);

qDebug() << "Foo1: ";


And I get

The program has unexpectedly finished.


Re: [GENERAL] Problems with casting

2015-04-07 Thread Tom Lane
Jim Nasby jim.na...@bluetreble.com writes:
 On 4/7/15 4:17 PM, Tom Lane wrote:
 I suspect that that's only the tip of the iceberg.  Remember the mess
 we had with implicit casts to text?  And those only existed for a dozen
 or so types, not for everything.  Every function or operator you define
 for variant is going to be a loaded gun just waiting to shoot your foot
 off, if you make all those casts implicit.

 Yeah, that's why I avoided it. But that makes using it in a function a 
 real pain. :( I think this is a bit of a different scenario though, 
 because I don't see why you'd want to overload a function to accept both 
 variant and some other type.

 Really what I want is for casting to variant to be a last-choice option, 
 and even then only for function calls, not operators. I believe that 
 would be safe, because then you'd have to explicitly be calling a 
 function, or explicitly doing something::variant = variant.

Just out of curiosity, what's the point of this type at all, compared
to anyelement and friends?

regards, tom lane




Re: [GENERAL] Problems with casting

2015-04-07 Thread Jim Nasby

On 4/7/15 4:17 PM, Tom Lane wrote:

Jim Nasby jim.na...@bluetreble.com writes:

I've created a variant data type [1]. It seems to work pretty well,
except for some issues with casting.



Since the idea of the type is to allow storing any other data type, it
creates casts to and from all other types. At first these were all
marked as ASSIGNMENT, but that made using variant with functions quite
cumbersome. With functions that accepted a variant, you still had to
explicitly cast it:



SELECT variant_function( some_field::variant.variant ) FROM some_table;



I was reluctant to make the casts to variant IMPLICIT, but it seems like
it actually works rather well... except for arrays:


I suspect that that's only the tip of the iceberg.  Remember the mess
we had with implicit casts to text?  And those only existed for a dozen
or so types, not for everything.  Every function or operator you define
for variant is going to be a loaded gun just waiting to shoot your foot
off, if you make all those casts implicit.


Yeah, that's why I avoided it. But that makes using it in a function a 
real pain. :( I think this is a bit of a different scenario though, 
because I don't see why you'd want to overload a function to accept both 
variant and some other type.


Really what I want is for casting to variant to be a last-choice option, 
and even then only for function calls, not operators. I believe that 
would be safe, because then you'd have to explicitly be calling a 
function, or explicitly doing something::variant = variant.


The other option I thought of was controlling this better by putting the 
variant operators in their own schema, but that didn't work.

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com




[GENERAL] Problems with casting

2015-04-07 Thread Jim Nasby
I've created a variant data type [1]. It seems to work pretty well, 
except for some issues with casting.


Since the idea of the type is to allow storing any other data type, it 
creates casts to and from all other types. At first these were all 
marked as ASSIGNMENT, but that made using variant with functions quite 
cumbersome. With functions that accepted a variant, you still had to 
explicitly cast it:


SELECT variant_function( some_field::variant.variant ) FROM some_table;

I was reluctant to make the casts to variant IMPLICIT, but it seems like 
it actually works rather well... except for arrays:


ERROR:  operator is not unique: regtype[] = regtype[]

This was true for all operators, not something unique to regtype[], 
presumably because array_cmp() does something slightly different than 
the rest of the system. I do have a = operator, but I do not have an 
operator class.


For now, I work around this by leaving casts from arrays to variant as 
ASSIGNMENT, but I'm wondering if there's a better solution to be had.


I could change my = operator to something else, but I believe that will 
break things like IS DISTINCT FROM.


I've wondered if creating an operator class would just fix this, but I'm 
not sure. I'd also need a somewhat different comparison function because 
right now I don't enforce that there's an operator class to do comparison.


I tried putting the operators into a different schema, but operator 
lookup appears to ignore schema.


It's worth noting that the only problem I've seen so far has been 
dealing with function calls. It reminds me of the surprise people run 
into when they define a function that accepts smallint and then they 
can't call it directly. I find myself wondering if there's some way to 
handle this at the function call level.
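The smallint surprise mentioned above, for reference (a made-up trivial function; the behavior follows from integer literals being typed int4, with no implicit cast down to int2):

```sql
CREATE FUNCTION half(smallint) RETURNS int AS 'SELECT $1 / 2' LANGUAGE sql;

SELECT half(10);            -- ERROR: function half(integer) does not exist
SELECT half(10::smallint);  -- works once the argument is cast explicitly
```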

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [GENERAL] Problems with casting

2015-04-07 Thread Tom Lane
Jim Nasby jim.na...@bluetreble.com writes:
 I've created a variant data type [1]. It seems to work pretty well, 
 except for some issues with casting.

 Since the idea of the type is to allow storing any other data type, it 
 creates casts to and from all other types. At first these were all 
 marked as ASSIGNMENT, but that made using variant with functions quite 
 cumbersome. With functions that accepted a variant, you still had to 
 explicitly cast it:

 SELECT variant_function( some_field::variant.variant ) FROM some_table;

 I was reluctant to make the casts to variant IMPLICIT, but it seems like 
 it actually works rather well... except for arrays:

I suspect that that's only the tip of the iceberg.  Remember the mess
we had with implicit casts to text?  And those only existed for a dozen
or so types, not for everything.  Every function or operator you define
for variant is going to be a loaded gun just waiting to shoot your foot
off, if you make all those casts implicit.

regards, tom lane




Re: [GENERAL] [SQL] check data for datatype

2015-04-07 Thread Jim Nasby

On 4/7/15 11:59 AM, Gerardo Herzig wrote:

I guess that could need something like (untested)

delete from bigtable where text_column !~ '^[0-9][0-9]*$';


Won't work for...

.1
-1
1.1e+5
...

Really you need to do something like what Jerry suggested if you want 
this to be robust.
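One robust approach is to let the numeric input parser itself decide, by trapping the cast error instead of enumerating numeric formats with a regex. A sketch (the function name is made up; this handles .1, -1, 1.1e+5, etc.):

```sql
CREATE OR REPLACE FUNCTION is_numeric(txt text) RETURNS boolean AS $$
BEGIN
    -- Attempt the cast; PostgreSQL's own parser is the authority on
    -- what constitutes a valid numeric literal.
    PERFORM txt::numeric;
    RETURN true;
EXCEPTION WHEN invalid_text_representation OR numeric_value_out_of_range THEN
    RETURN false;
END;
$$ LANGUAGE plpgsql IMMUTABLE;

DELETE FROM bigtable WHERE NOT is_numeric(text_column);
```

Note this is slow on a very large table (one subtransaction per row), but it is correct.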

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [GENERAL] Serializable transaction restart/re-execute

2015-04-07 Thread Jim Nasby

On 4/6/15 6:42 AM, Bill Moran wrote:

CREATE OR REPLACE FUNCTION mytest() RETURNS integer AS $$
    UPDATE account SET balance = balance + 10 WHERE id = 1 RETURNING balance;
$$
LANGUAGE SQL;

of course, it's unlikely that you'll ever want to wrap such a
simple query in a function, so I'm supposing that you'd want
to do something else with the old value of balance before
updating it, in which case:

CREATE OR REPLACE FUNCTION mytest() RETURNS integer AS $$
DECLARE
 cc integer;
BEGIN
 SELECT INTO cc balance FROM account WHERE id = 1 FOR UPDATE;

 RAISE NOTICE 'Balance: %', cc;
 perform pg_sleep(3);

 update account set balance = cc+10 where id=1 RETURNING balance INTO cc;

 return cc;
END
$$
LANGUAGE plpgsql;

The FOR UPDATE ensures that no other process can modify the
row while this one is sleeping.

Now, I understand that you don't want to do row locking,
but this is (again) an insistence on your part of trying to
force PostgreSQL to do things the way GTM did instead of
understanding the RDBMS way of doing things.


Actually, the entire point of SERIALIZABLE is to avoid the need to mess 
around with FOR UPDATE and similar. It's a trade-off. If you have a 
large application that has lots of DML paths the odds of getting 
explicit locking correct drop rapidly to zero. That's where SERIALIZABLE 
shines; you just turn it on and stop worrying about locking.


The downside of course is that you need to be ready to deal with a 
serialization failure.


I *think* what Filipe was looking for is some way to have Postgres 
magically re-try a serialization failure. While theoretically possible 
(at least to a degree), that's actually a really risky thing. The whole 
reason you would need any of this is if you're using a pattern where you:


1 BEGIN SERIALIZABLE;
2 Get data from database
3 Do something with that data
4 Put data back in database

If you get a serialization failure, it's because someone modified the 
data underneath you, which means you can't simply repeat step 4, you 
have to ROLLBACK and go back to step 1. If you design your app with that 
in mind it's not a big deal. If you don't... ugh. :)
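The retry loop has to live in the client, because the failed transaction must be rolled back before starting over. The four steps above, sketched as a psql-style session (table and values assumed):

```sql
BEGIN ISOLATION LEVEL SERIALIZABLE;                -- step 1
SELECT balance FROM account WHERE id = 1;          -- step 2: get data
-- step 3: the application computes the new value
UPDATE account SET balance = 110 WHERE id = 1;     -- step 4: put data back
COMMIT;
-- If any statement (including COMMIT) fails with SQLSTATE 40001
-- (serialization_failure), issue ROLLBACK and start again from BEGIN.
```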

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [GENERAL] Problems with casting

2015-04-07 Thread Jim Nasby

On 4/7/15 4:35 PM, Tom Lane wrote:

Jim Nasby jim.na...@bluetreble.com writes:

On 4/7/15 4:17 PM, Tom Lane wrote:

I suspect that that's only the tip of the iceberg.  Remember the mess
we had with implicit casts to text?  And those only existed for a dozen
or so types, not for everything.  Every function or operator you define
for variant is going to be a loaded gun just waiting to shoot your foot
off, if you make all those casts implicit.



Yeah, that's why I avoided it. But that makes using it in a function a
real pain. :( I think this is a bit of a different scenario though,
because I don't see why you'd want to overload a function to accept both
variant and some other type.



Really what I want is for casting to variant to be a last-choice option,
and even then only for function calls, not operators. I believe that
would be safe, because then you'd have to explicitly be calling a
function, or explicitly doing something::variant = variant.


Just out of curiosity, what's the point of this type at all, compared
to anyelement and friends?


The two big differences are that you can store a variant in a table 
(with reasonable protection against things like dropping the underlying 
type out from under it), and you can readily determine what the original 
type was. Well, and you're not limited to a single type in a function as 
you are with polymorphic.


One place I've wanted this in the past is to allow storing settings or 
other configuration in the database. Currently you're stuck either 
casting everything to and from text or having a bunch of fields. With 
variant you just store what you're handed.


The other thing I'm currently working on is a template system that would 
allow you to use whatever type you wanted to pass data to a template 
(and for the template itself), as well as allowing you to store 
templates for later re-use. The nice thing about variant in this context 
is that the framework itself doesn't really need to care about what's 
being passed through it. If it didn't support storing templates I could 
probably get away with anyelement for this; but that kinda defeats the 
purpose.


I think there's a chicken and egg problem here. I've pondered variant 
for several years and never thought of anything better than the case of 
storing settings, which was hardly compelling enough to invest the work. 
I finally decided to do it anyway just to see what would be required. 
Only after I had something working did it occur to me that I could use 
this to build a template system. It's certainly possible that there 
isn't all that compelling of a case for variants after all, but I don't 
think they'll get a fair shake unless there's something available that's 
pretty workable. I suspect there's actually some rather interesting 
things it could be used for if people start thinking about it.


Your question does raise an interesting thought though... is there some 
way I could leverage the polymorphic system here? I did experiment with 
having functions accept anyelement instead of a variant and had some 
success with that (though IIRC plpgsql tended to revolt when trying to 
assign that to a variant in older versions).

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [GENERAL] Basic Question on Point In Time Recovery

2015-04-07 Thread Jim Nasby

On 3/11/15 6:46 AM, Andrew Sullivan wrote:

Is our current frequent pg_dump approach a sensible way to go about
things.  Or are we missing something?  Is there some other way to
restore one database without affecting the others?

Slony-I, which is a PITA to administer, has a mode where you can ship
logs off and restore them in pieces.  The logs are not WAL, but Slony
logs (which are produced by triggers and some explicit event writing
for schema changes).  So they work per-database.  Schema changes are
really quite involved for Slony, and there's overhead resulting from
the triggers, and as I said it's rather clunky to administer.  But it's
been around some time, it still is actively maintained, and it has
this functionality.  The PITR tools were, last I checked, pretty
primitive.  But the tool might work for your case.  I don't know
whether Bucardo or Londiste (two alternative systems that work on
roughly the same principle) have this functionality, but I kind of
doubt it since both were designed to get rid of several of the
complexities that Slony presented.  (Slony had all those complexities
because it was trying to offer all this functionality at once.)


You could do something very similar with londiste by setting up a second 
queue and delaying when you move data to it from the primary queue, 
based on event_time.


Or now that I think about it... you might be able to do that right in 
the replay process itself.


The big reason I prefer londiste over Slony is that it's extremely 
modular, so it's easy to do stuff like this.

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [GENERAL] Asynchronous replication in postgresql

2015-04-07 Thread Chris Winslett
I've recently open sourced this template for managing state for PostgreSQL:
https://github.com/compose/governor

Take it for a test drive.  As long as the old Leader is verifiably dead
or stopped at the forked WAL log point, I've not had issues with inserting
a `recovery.conf` to tail the new Leader.

On Tue, Apr 7, 2015 at 5:16 PM, Jim Nasby jim.na...@bluetreble.com wrote:

 On 3/11/15 5:27 AM, Deole, Pushkar (Pushkar) wrote:

 Hi,

 I am new to postgresql and evaluating the streaming replication for my
 use case. My use case is:

 1.Need to replicate data from primary database (master) to secondary
 database (slave) asynchronously.

 2.If master goes down, the slave should automatically be promoted to
 master.

 3.Later, when the original primary server (original master) is brought
 up again, it should obtain back its master role and the new master
 should assume the slave again as it was with original setup.

 For #1, the streaming replication of postgresql is good enough.

 For #2, we need to create the trigger file. How can we do this
 automatically?


 You'll need to use something else to figure out that the master node died.
 But that's not the big problem... the big problem is you need to be careful
 to ensure you don't get into a 'split brain' state.

 Say you promoted the slave, so now it's responding to all queries. Now the
 master suddenly starts. Not only is the master missing data that's been
 written to the slave, but if you have a load balancer now *both* databases
 are acting as if they're the master.

 That's bad. :)

 Typically, you want some way to Shoot The Other Node In The Head before
 you promote the slave. For example, you could modify the configuration of
 something in your network so it's no longer possible to reach the old
 master.

  For #3, this seems to be quite complicated. Is this even possible using
 streaming replication? If yes, how can this be achieved?


 You basically need to replace the master with a new replica built off the
 new master. There's been some recent work to make this easier/faster to do,
 but it's not terribly trivial, and you have to be careful to do it
 correctly.
 --
 Jim Nasby, Data Architect, Blue Treble Consulting
 Data in Trouble? Get it in Treble! http://BlueTreble.com





Re: [GENERAL] Problems with casting

2015-04-07 Thread Jim Nasby

On 4/7/15 5:56 PM, David G. Johnston wrote:

On Tue, Apr 7, 2015 at 3:09 PM, Jim Nasby jim.na...@bluetreble.com
mailto:jim.na...@bluetreble.comwrote:

On 4/7/15 4:35 PM, Tom Lane wrote:

Jim Nasby jim.na...@bluetreble.com writes:

On 4/7/15 4:17 PM, Tom Lane wrote:

I suspect that that's only the tip of the iceberg.
Remember the mess
we had with implicit casts to text?  And those only
existed for a dozen
or so types, not for everything.  Every function or
operator you define
for variant is going to be a loaded gun just waiting
to shoot your foot
off, if you make all those casts implicit.


Yeah, that's why I avoided it. But that makes using it in a
function a
real pain. :( I think this is a bit of a different scenario
though,
because I don't see why you'd want to overload a function to
accept both
variant and some other type.


Really what I want is for casting to variant to be a
last-choice option,
and even then only for function calls, not operators. I
believe that
would be safe, because then you'd have to explicitly be
calling a
function, or explicitly doing something::variant = variant.


Just out of curiosity, what's the point of this type at all,
compared
to anyelement and friends?


The two big differences are that you can store a variant in a table
(with reasonable protection against things like dropping the
underlying type out from under it), and you can readily determine
what the original type was.
Well, and you're not limited to a single type in a function as you
are with polymorphic.


One place I've wanted this in the past is to allow storing settings
or other configuration in the database. Currently you're stuck
either casting everything to and from text or having a bunch of
fields. With variant you just store what you're handed.

The other thing I'm currently working on is a template system that
would allow you to use whatever type you wanted to pass data to a
template (and for the template itself), as well as allowing you to
store templates for later re-use. The nice thing about variant in
this context is that the framework itself doesn't really need to
care about what's being passed through it. If it didn't support
storing templates I could probably get away with anyelement for
this; but that kinda defeats the purpose.

I think there's a chicken and egg problem here. I've pondered
variant for several years and never thought of anything better than
the case of storing settings, which was hardly compelling enough to
invest the work. I finally decided to do it anyway just to see what
would be required. Only after I had something working did it occur
to me that I could use this to build a template system. It's
certainly possible that there isn't all that compelling of a case
for variants after all, but I don't think they'll get a fair shake
unless there's something available that's pretty workable. I suspect
there's actually some rather interesting things it could be used for
if people start thinking about it.

Your question does raise an interesting thought though... is there
some way I could leverage the polymorphic system here? I did
experiment with having functions accept anyelement instead of a
variant and had some success with that (though IIRC plpgsql tended
to revolt when trying to assign that to a variant in older versions).


I recently posited a use for an anyelement-like pseudo-type that
didn't have all the function restrictions of existing pseudo-types.

http://www.postgresql.org/message-id/cakfquwazck37j7fa4pzoz8m9jskmqqopaftxry0ca_n4r2x...@mail.gmail.com

The idea was to define a function with one pseudo-type and one generic
(any) type that the caller is responsible for supplying a meaningful
specific type that the function can act upon.  But this specific use
would not need an actual type but only another pseudo-type.

Given the nature of SQL, and PostgreSQL's implementation thereof, a
storage variant type seems non-idiomatic and problematic in usage.
Hell, my recollection is that our implementation of Domains has some
meaningful hiccups when dealing with type promotion and base-type
comparisons; and domains are considerably less complicated than Variant...


BTW, to answer Tom's question... I'm definitely NOT trying to use 
variant to do EAV. I'm sure someone that thinks EAV is a good idea (NOT 
me!) might get excited at being able to natively remember what the 
original type was, but they're likely to have much bigger problems than 
variant in the long run... ;)


My 

Re: [GENERAL] Asynchronous replication in postgresql

2015-04-07 Thread Jim Nasby

On 3/11/15 5:27 AM, Deole, Pushkar (Pushkar) wrote:

Hi,

I am new to postgresql and evaluating the streaming replication for my
use case. My use case is:

1.Need to replicate data from primary database (master) to secondary
database (slave) asynchronously.

2.If master goes down, the slave should automatically be promoted to
master.

3.Later, when the original primary server (original master) is brought
up again, it should obtain back its master role and the new master
should assume the slave again as it was with original setup.

For #1, the streaming replication of postgresql is good enough.

For #2, we need to create the trigger file. How can we do this
automatically?


You'll need to use something else to figure out that the master node 
died. But that's not the big problem... the big problem is you need to 
be careful to ensure you don't get into a 'split brain' state.


Say you promoted the slave, so now it's responding to all queries. Now 
the master suddenly starts. Not only is the master missing data that's 
been written to the slave, but if you have a load balancer now *both* 
databases are acting as if they're the master.


That's bad. :)

Typically, you want some way to Shoot The Other Node In The Head 
before you promote the slave. For example, you could modify the 
configuration of something in your network so it's no longer possible to 
reach the old master.



For #3, this seems to be quite complicated. Is this even possible using
streaming replication? If yes, how can this be achieved?


You basically need to replace the master with a new replica built off 
the new master. There's been some recent work to make this easier/faster 
to do, but it's not terribly trivial, and you have to be careful to do 
it correctly.

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [GENERAL] Benchmarking partitioning triggers and rules

2015-04-07 Thread Jim Nasby

On 3/12/15 8:15 AM, Tomas Vondra wrote:

On 12.3.2015 04:57, Tim Uckun wrote:

I am using postgres 9.4, the default install with brew install
postgres, no tuning at all.  BTW if I use the postgres.app application the
benchmarks run twice as slow!


I have no idea what brew or postgres.app is. But I strongly recommend
you to do some tuning.

   https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server



Why do you think there is such a dramatic difference between

EXECUTE  'INSERT INTO ' ||  quote_ident(partition_name) ||  ' SELECT
($1).*' USING NEW ;

and

  EXECUTE  'INSERT INTO ' ||  quote_ident(partition_name) ||  ' VALUES(
($1).*)' USING NEW ;

One is thirty percent faster than the other.  Also, is there an even
better way that I don't know about?


Because processing dynamic SQL commands (i.e. EXECUTE '...') is simply
more expensive, as it needs to do more stuff (on every execution). There
are reasons for that, but you may think of it as regular queries vs.
prepared statements.

Prepared statements are parsed and planned once, regular query needs to
be parsed and planned over and over again.


BTW, if you're that concerned about performance you could probably do a 
lot better than a plpgsql trigger by creating one in C. There's an 
enormous amount of code involved just in parsing and starting a plpgsql 
trigger, and then it's going to have to re-parse the dynamic SQL for 
every single row, whereas a C trigger could avoid almost all of that.
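Short of a C trigger, the dynamic SQL can often be avoided entirely when the partition set is known ahead of time: static statements inside plpgsql are planned once and the plan is cached, unlike EXECUTE. A sketch under assumed partition and column names:

```sql
CREATE OR REPLACE FUNCTION partition_insert() RETURNS trigger AS $$
BEGIN
    -- Static INSERTs: each is planned on first use and cached for the
    -- session, whereas EXECUTE re-parses the statement for every row.
    IF NEW.created_at >= DATE '2015-04-01' THEN
        INSERT INTO bigtable_2015_04 VALUES (NEW.*);
    ELSE
        INSERT INTO bigtable_2015_03 VALUES (NEW.*);
    END IF;
    RETURN NULL;  -- row was routed to a child; suppress insert into parent
END;
$$ LANGUAGE plpgsql;

-- Attach as a BEFORE INSERT trigger on the parent table:
-- CREATE TRIGGER bigtable_partition BEFORE INSERT ON bigtable
--     FOR EACH ROW EXECUTE PROCEDURE partition_insert();
```

The trade-off is that the trigger body must be regenerated whenever partitions are added or dropped.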


Rules are likely to be even faster (at least until you get to a fairly 
large number of partitions), but as Tomas mentioned they're very tricky 
to use. The critical thing to remember with them is they're essentially 
hacking on the original query itself.

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [GENERAL] Working with Array of Composite Type

2015-04-07 Thread Jim Nasby

On 3/28/15 9:36 AM, Jan de Visser wrote:

On March 28, 2015 06:18:49 PM Alex Magnum wrote:

Hello,
I am struggling with finding the right way to deal with arrays of composite
types. Below is an example of the general setup, where I defined an image
type to describe the image properties. A user can have multiple images
stored.


The canonical answer is that in almost all cases where you think you want an
array of composites, you *really* want a table join:

i.e. turn your image *type* into an image *table* with the user_id as a
foreign key.

CREATE TABLE users (
   user_id       serial PRIMARY KEY
);

CREATE TABLE image (
   id            smallint,
   user_id       int REFERENCES users (user_id),
   caption       text,
   is_primary    boolean,
   is_private    boolean
);


Another option is to use unnest() to turn the array into a recordset, 
which you can then use SQL on. If the array is quite small you might get 
away with that. But if you're actually storing images you'll probably be 
pretty unhappy with performance, because every time you make ANY change 
to that array you'll need to completely re-write the *entire* array to disk.
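
If you do stay with the array, the unnest() approach might look like this
(assuming, per the original post, a users table with an images image[] column):

```sql
-- Expand the composite array into rows so ordinary SQL applies.
SELECT u.user_id, img.caption, img.is_primary
FROM users u
CROSS JOIN LATERAL unnest(u.images) AS img
WHERE NOT img.is_private;
```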

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [GENERAL] Problems with casting

2015-04-07 Thread Tom Lane
Jim Nasby jim.na...@bluetreble.com writes:
 On 4/7/15 4:35 PM, Tom Lane wrote:
 Just out of curiosity, what's the point of this type at all, compared
 to anyelement and friends?

 The two big differences are that you can store a variant in a table 
 (with reasonable protection against things like dropping the underlying 
 type out from under it), and you can readily determine what the original 
 type was. Well, and you're not limited to a single type in a function as 
 you are with polymorphic.

I'm fairly skeptical of the idea that you should want to store a variant
in a table --- smells of EAV schema design to me.  What would a unique
index mean on such a column, for instance?  As for the other two, the only
reason you can't do them with polymorphic arguments is nobody has wanted
them bad enough to do something about it.

regards, tom lane




Re: [GENERAL] Problems with casting

2015-04-07 Thread David G. Johnston
On Tue, Apr 7, 2015 at 3:09 PM, Jim Nasby jim.na...@bluetreble.com wrote:

 On 4/7/15 4:35 PM, Tom Lane wrote:

 Jim Nasby jim.na...@bluetreble.com writes:

 On 4/7/15 4:17 PM, Tom Lane wrote:

 I suspect that that's only the tip of the iceberg.  Remember the mess
 we had with implicit casts to text?  And those only existed for a dozen
 or so types, not for everything.  Every function or operator you define
 for variant is going to be a loaded gun just waiting to shoot your
 foot
 off, if you make all those casts implicit.


  Yeah, that's why I avoided it. But that makes using it in a function a
 real pain. :( I think this is a bit of a different scenario though,
 because I don't see why you'd want to overload a function to accept both
 variant and some other type.


  Really what I want is for casting to variant to be a last-choice option,
 and even then only for function calls, not operators. I believe that
 would be safe, because then you'd have to explicitly be calling a
 function, or explicitly doing something::variant = variant.


 Just out of curiosity, what's the point of this type at all, compared
 to anyelement and friends?


 The two big differences are that you can store a variant in a table (with
 reasonable protection against things like dropping the underlying type out
 from under it), and you can readily determine what the original type was.
 Well, and you're not limited to a single type in a function as you are
 with polymorphic.


 One place I've wanted this in the past is to allow storing settings or
 other configuration in the database. Currently you're stuck either casting
 everything to and from text or having a bunch of fields. With variant you
 just store what you're handed.

 The other thing I'm currently working on is a template system that would
 allow you to use whatever type you wanted to pass data to a template (and
 for the template itself), as well as allowing you to store templates for
 later re-use. The nice thing about variant in this context is that the
 framework itself doesn't really need to care about what's being passed
 through it. If it didn't support storing templates I could probably get
 away with anyelement for this; but that kinda defeats the purpose.

 I think there's a chicken and egg problem here. I've pondered variant for
 several years and never thought of anything better than the case of storing
 settings, which was hardly compelling enough to invest the work. I finally
 decided to do it anyway just to see what would be required. Only after I
 had something working did it occur to me that I could use this to build a
 template system. It's certainly possible that there isn't all that
 compelling of a case for variants after all, but I don't think they'll get a
 fair shake unless there's something available that's pretty workable. I
 suspect there's actually some rather interesting things it could be used
 for if people start thinking about it.

 Your question does raise an interesting thought though... is there some
 way I could leverage the polymorphic system here? I did experiment with
 having functions accept anyelement instead of a variant and had some
 success with that (though IIRC plpgsql tended to revolt when trying to
 assign that to a variant in older versions).


I recently posited a use for an anyelement-like pseudo-type that didn't
have all the function restrictions of existing pseudo-types.

http://www.postgresql.org/message-id/cakfquwazck37j7fa4pzoz8m9jskmqqopaftxry0ca_n4r2x...@mail.gmail.com

The idea was to define a function with one pseudo-type and one generic
(any) type that the caller is responsible for supplying a meaningful
specific type that the function can act upon.  But this specific use would
not need an actual type but only another pseudo-type.

Given the nature of SQL, and PostgreSQL's implementation thereof, a storage
variant type seems non-idiomatic and problematic in usage.  Hell, my
recollection is that our implementation of Domains has some meaningful
hiccups when dealing with type promotion and base-type comparisons; and
domains are considerably less complicated than Variant...

Neither settings nor templates screams for a non-text solution; but I
also haven't given the topic much consideration.

The typed-text capability would allow for a simpler UI, but for the limited
cases where it is a valid model (e.g., a settings table), writing a
function-based UI would provide a place to hook in the desired input
validation without introducing an entirely new global concept.

David J.


Re: [GENERAL] autovacuum worker running amok - and me too ;)

2015-04-07 Thread Jim Nasby

On 3/9/15 3:56 AM, wambacher wrote:

Hi paul

just found my system (24 GB Mem, 72 GB Swap) running nearly at its limits:

The current vacuum is using 66.7 GB (sixty-six point seven) of memory and
my system is nearly down.

I'm sorry, but that must be a bug. Remember: it's the ANALYZE of a
GIN index that is causing these problems. Various tables, same problem.

Regards
walter

http://postgresql.nabble.com/file/n5841074/top.png
During the last 10 minutes vacuum took one more GB; now it's using 67.5 GB
of memory.

should i ask at the dev-list? Open a ticket?


Sorry for the late reply.

Yes, that sounds like a bug in the GIN code. Please post to pgsql-bugs 
or hit http://www.postgresql.org/support/submitbug/

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [GENERAL] Would like to know how analyze works technically

2015-04-07 Thread Jim Nasby

On 4/2/15 2:18 PM, TonyS wrote:

On Wed, April 1, 2015 5:50 pm, Tom Lane wrote:
 

 
  TonyS [hidden email] writes:
 
  The analyze function has crashed again while the overcommit entries
  were as above. The last bit of the PostgreSQL log shows:

  MdSmgr: 41934848 total in 14 blocks; 639936 free (0 chunks); 41294912 used
  ident parser context: 0 total in 0 blocks; 0 free (0 chunks); 0 used
  hba parser context: 7168 total in 3 blocks; 2288 free (1 chunks); 4880 used
  LOCALLOCK hash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used
  Timezones: 83472 total in 2 blocks; 3744 free (0 chunks); 79728 used
  ErrorContext: 8192 total in 1 blocks; 8160 free (6 chunks); 32 used
  2015-04-01 14:23:27 EDT ERROR:  out of memory
  2015-04-01 14:23:27 EDT DETAIL:  Failed on request of size 80.
  2015-04-01 14:23:27 EDT STATEMENT:  analyze verbose;
 
 
  We need to see all of that memory map, not just the last six lines of it.
 
 
  regards, tom lane
 


I have used the procedures from this web page to try to get a core dump:
https://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD

If I follow the procedure and kill the postmaster pid while psql is
connected to it, it does generate a core dump; however, no core dump is
generated when the error I have been experiencing occurs.

I guess at this point I am just going to rebuild from the Linux
installation up. I also tried changing the work_mem to 16MB, but that
didn't seem to make a difference.


I don't know that a core dump will be helpful here. What Tom was asking 
for were all those lines in your log file of the form blah context: 
xxx total in xxx blocks;... That's diagnostics about where PG has used 
all its memory. That's what we need here.
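
As a sketch, the relevant lines can be pulled out of the log with a pattern
like the one below (shown here against an inline sample for illustration; in
practice, point grep at your actual server log file):

```shell
# Memory-context lines all share the "N total in N blocks" shape;
# grep for that to collect the whole map, not just the tail.
printf '%s\n' \
  'MdSmgr: 41934848 total in 14 blocks; 639936 free (0 chunks); 41294912 used' \
  '2015-04-01 14:23:27 EDT ERROR:  out of memory' \
  | grep -E 'total in [0-9]+ blocks'
# prints only the MdSmgr line
```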

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [GENERAL] could not split GIN page; no new items fit

2015-04-07 Thread Jim Nasby

On 4/4/15 8:38 AM, Chris Curvey wrote:


On Fri, Apr 3, 2015 at 9:27 PM, Tom Lane t...@sss.pgh.pa.us
mailto:t...@sss.pgh.pa.us wrote:

Chris Curvey ch...@chriscurvey.com mailto:ch...@chriscurvey.com
writes:
 Hmm, I'm trying to create a gin index, thusly:
 create index foo_idx on foo using gin(entry gin_trgm_ops);

 and I'm getting the error could not split GIN page; no new items fit

 Any idea what this means, or how I can get around it?

Looks to me like a bug (ie, the code seems to think this is a
can't-happen
case).  Don't suppose you could supply sample data that triggers this?

 regards, tom lane


I can!  I just copied the data to a new table, obfuscated the sensitive
parts, and was able to reproduce the error.  I can supply the script to
create and populate the table, but that's still clocking in at 250Mb
after being zipped. What's the best way of getting this data out to
someone who can take a look at this?  (Feel free to contact me off-list
to coordinate.)


It would be nice if you could further reduce it, but if not I'd suggest 
posting it to something like DropBox and posting the public link here.

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [GENERAL] could not split GIN page; no new items fit

2015-04-07 Thread Tom Lane
Jim Nasby jim.na...@bluetreble.com writes:
 On 4/4/15 8:38 AM, Chris Curvey wrote:
 I can!  I just copied the data to a new table, obfuscated the sensitive
 parts, and was able to reproduce the error.  I can supply the script to
 create and populate the table, but that's still clocking in at 250Mb
 after being zipped. What's the best way of getting this data out to
 someone who can take a look at this?  (Feel free to contact me off-list
 to coordinate.)

 It would be nice if you could further reduce it, but if not I'd suggest 
 posting it to something like DropBox and posting the public link here.

So far I've been unable to reproduce the failure from Chris' data :-(
Don't know why not.

regards, tom lane




Re: [GENERAL] could not split GIN page; no new items fit

2015-04-07 Thread Jim Nasby

On 4/7/15 11:58 PM, Tom Lane wrote:

Jim Nasby jim.na...@bluetreble.com writes:

On 4/4/15 8:38 AM, Chris Curvey wrote:

I can!  I just copied the data to a new table, obfuscated the sensitive
parts, and was able to reproduce the error.  I can supply the script to
create and populate the table, but that's still clocking in at 250Mb
after being zipped. What's the best way of getting this data out to
someone who can take a look at this?  (Feel free to contact me off-list
to coordinate.)



It would be nice if you could further reduce it, but if not I'd suggest
posting it to something like DropBox and posting the public link here.


So far I've been unable to reproduce the failure from Chris' data :-(
Don't know why not.


Could it be dependent on the order of the data in the heap?

I'm assuming the field being indexed isn't one of the ones Chris had to 
obfuscate...

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [GENERAL] Benchmarking partitioning triggers and rules

2015-04-07 Thread Tim Uckun
I understand that there is overhead involved in parsing the strings and
such.  The amount of overhead was surprising to me but that's another
matter.  What I am really puzzled about is the difference between the
statements

EXECUTE  'INSERT INTO ' ||  quote_ident(partition_name) ||  ' SELECT
($1).*' USING NEW ;

and

  EXECUTE  'INSERT INTO ' ||  quote_ident(partition_name) ||  ' VALUES
(($1).*)' USING NEW ;

They both do string interpolation but one is significantly faster than the
other.   Is there a third and even faster way?

I am using RDS so I can't really do stored procs in C.
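
One alternative worth trying (a sketch only, with assumed table and column
names, following the trigger pattern in the PostgreSQL partitioning docs):
skip EXECUTE entirely and hard-code one static INSERT per partition, so
plpgsql can plan each statement once and cache the plan:

```sql
-- Static INSERTs: no per-row parsing of dynamic SQL. The partition
-- names and the logdate column are assumptions for illustration.
CREATE OR REPLACE FUNCTION measurements_insert_trigger() RETURNS trigger AS $$
BEGIN
    IF NEW.logdate >= DATE '2015-04-01' AND NEW.logdate < DATE '2015-05-01' THEN
        INSERT INTO measurements_2015_04 VALUES (NEW.*);
    ELSIF NEW.logdate >= DATE '2015-03-01' AND NEW.logdate < DATE '2015-04-01' THEN
        INSERT INTO measurements_2015_03 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'no partition for logdate %', NEW.logdate;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;
```

The trade-off is that the function has to be regenerated whenever partitions
are added or dropped.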


Re: [GENERAL] Benchmarking partitioning triggers and rules

2015-04-07 Thread Adrian Klaver

On 04/07/2015 07:49 PM, Tim Uckun wrote:

I understand that there is overhead involved in parsing the strings and
such.  The amount of overhead was surprising to me but that's another
matter.  What I am really puzzled about is the difference between the
statements

EXECUTE  'INSERT INTO ' ||  quote_ident(partition_name) ||  ' SELECT
($1).*' USING NEW ;

and

   EXECUTE  'INSERT INTO ' ||  quote_ident(partition_name) ||  ' VALUES
(($1).*)' USING NEW ;



Offhand I would say because in the first case you are doing a SELECT and 
in the second you are just doing a substitution.




They both do string interpolation but one is significantly faster than
the other.   Is there a third and even faster way?

I am using RDS so I can't really do stored procs in C.




--
Adrian Klaver
adrian.kla...@aklaver.com

