Re: [GENERAL] starting PG command line options vs postgresql.conf

2017-10-30 Thread Michael Paquier
On Mon, Oct 30, 2017 at 2:08 PM, David G. Johnston
 wrote:
> On Mon, Oct 30, 2017 at 6:48 AM, rakeshkumar464 
> wrote:
>>
>> I would prefer using postgresql.conf. What is the consensus in this forum
>> regarding command line vs postgresql.conf?
>
> I suspect that most people administering a PostgreSQL database would expect
> that the configuration file would be changed in lieu of passing options via
> the command line.

I disagree here. For one, it would make pg_upgrade more complicated,
because it would need to track and then rewrite postgresql.conf, or
just copy it temporarily. The current way of doing things gives the
best of both worlds.
-- 
Michael




Re: [GENERAL] starting PG command line options vs postgresql.conf

2017-10-30 Thread David G. Johnston
On Mon, Oct 30, 2017 at 6:48 AM, rakeshkumar464 
wrote:

> I would prefer using postgresql.conf. What is the consensus in this forum
> regarding command line vs postgresql.conf?


I suspect that most people administering a PostgreSQL database would
expect that the configuration file would be changed in lieu of passing
options via the command line.

> Also, if they conflict, which one
> takes priority?

https://www.postgresql.org/docs/9.6/static/config-setting.html#AEN32498

David J.


Re: [GENERAL] starting PG command line options vs postgresql.conf

2017-10-30 Thread Tom Lane
rakeshkumar464  writes:
> I am new to Docker env and I see that PG, as a container is started with
> [ lots of command-line parameters ]

> I would prefer using postgresql.conf. What is the consensus in this forum
> regarding command line vs postgresql.conf? Also, if they conflict, which one
> takes priority?

The command line takes priority, IIRC, which means that nothing set on
the command line can be overridden without a restart.

I like to specify -p on the command line so that it's easy to tell which
postmaster is which in "ps" listings (of course, this only matters if
you're running multiple postmasters).  Otherwise it's better to leave
as much as you can to postgresql.conf.
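As a minimal sketch of that habit (the port number and data directory
here are illustrative, not from the thread):

    pg_ctl -D /var/lib/postgresql/data -o "-p 5433" start
    # each postmaster now shows its port in process listings:
    ps aux | grep '[p]ostgres'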

regards, tom lane




[GENERAL] starting PG command line options vs postgresql.conf

2017-10-30 Thread rakeshkumar464
I am new to the Docker environment and I see that PG, as a container, is
started with parameters like this:
docker run -it \
--detach \
--name name \
--restart=unless-stopped \
-p 5432:5432 \
-e PGDATA=/var/lib/postgresql/data/pg10 \
-N 500 \
-B 3GB \
-S 6291kB \
-c listen_addresses=* \
-c effective_cache_size=9GB \
-c maintenance_work_mem=768MB \
-c min_wal_size=2GB \
-c max_wal_size=4GB \

I would prefer using postgresql.conf. What is the consensus in this forum
regarding command line vs postgresql.conf? Also, if they conflict, which one
takes priority?
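For reference, one way to keep such settings in postgresql.conf under
Docker is to mount a config file and point the server at it. A sketch
using the official postgres image, with illustrative paths:

    docker run --detach --name name \
      -v /srv/pg/postgresql.conf:/etc/postgresql/postgresql.conf \
      postgres:10 -c config_file=/etc/postgresql/postgresql.conf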

thanks







Re: [GENERAL] Can PG replace redis, amqp, s3 in the future?

2017-05-08 Thread Scott Marlowe
On Mon, May 1, 2017 at 2:59 PM, Sven R. Kunze  wrote:
> On 30.04.2017 16:25, Steve Atkins wrote:
>> You can use postgresql for caching, but caches don't require the data
>> durability that a database offers, and can be implemented much more
>> efficiently.
>
> I for one can understand Thomas' need for a single solution.
> Just recently I needed a cache which was supposed to be set up in a
> SERIALIZABLE manner as in
> https://www.postgresql.org/docs/devel/static/transaction-iso.html#xact-serializable
> Available cache mechanisms would have produced erroneous results. So, I went
> for PG.

This brings up another subject: reliability. If PostgreSQL is fast
enough, and on stable hardware, it's often the preferred choice
because of its very good stability. Try running a big production NoSQL
cluster and you'll find plenty of sharp corners in most of them. A lot of
the time it's just easier to set up a pair of VMs (on good hardware) and
toss a PG database at the problem, especially if performance is a secondary
consideration, or not likely to tax PostgreSQL's basic architecture.




Re: [GENERAL] Can PG replace redis, amqp, s3 in the future?

2017-05-05 Thread Bill Moran
On Fri, 5 May 2017 19:52:42 +0100
Tony Finch  wrote:

> Bill Moran  wrote:
> >
> > There's a well-written article I saw recently that directly addresses
> > your question ... I'm too lazy to find it, but google will probably
> > turn it up for you.
> 
> This? http://renesd.blogspot.co.uk/2017/02/is-postgresql-good-enough.html

Oh, hey! You found it, thanks!

-- 
Bill Moran 




Re: [GENERAL] Can PG replace redis, amqp, s3 in the future?

2017-05-05 Thread Tony Finch
Bill Moran  wrote:
>
> There's a well-written article I saw recently that directly addresses
> your question ... I'm too lazy to find it, but google will probably
> turn it up for you.

This? http://renesd.blogspot.co.uk/2017/02/is-postgresql-good-enough.html

Tony.
-- 
f.anthony.n.finch    http://dotat.at/  -  I xn--zr8h punycode
Irish Sea: East or northeast 5 or 6. Slight or moderate. Fair. Good.




Re: [GENERAL] Can PG replace redis, amqp, s3 in the future?

2017-05-02 Thread Thomas Delrue
On April 30, 2017 1:37:02 PM GMT+02:00, "Thomas Güttler" 
 wrote:
>Is it possible that PostgreSQL will replace these building blocks in
>the future?
> 
> - redis (Caching)
> - rabbitmq (amqp)
> - s3 (Blob storage)

These are three very different sets of functionalities, each requiring a 
different approach.  I am curious as to why you are thinking about having a 
single piece of software that does these three very different things. 

>One question is "is it possible?", the next "is it feasible?"

Possible? Sure: p != 0
Probable? No
Desirable? No

>I think it would be great if I could use PG only and if I could
>avoid the other types of servers.

When you're holding a hammer, everything looks like a nail. But hammering 
screws doesn't get you very far.  Sometimes you need a screwdriver and on other 
days a glue gun...

>The benefit is not very obvious at first sight. I think it will save you
>time, money and energy only in the long run.
>
>What do you think?

Do one thing(*) and do that thing well. Don't try to be everything to everyone. 


--
Thomas
(Sent from my mobile device,  please forgive brevity or typos.)




Re: [GENERAL] Can PG replace redis, amqp, s3 in the future?

2017-05-01 Thread Jeff Janes
On Sun, Apr 30, 2017 at 4:37 AM, Thomas Güttler <
guettl...@thomas-guettler.de> wrote:

> Is it possible that PostgreSQL will replace these building blocks in the
> future?
>
>  - redis (Caching)
>

PostgreSQL has its own caching.  It might not be quite as effective as
redis', but you can use it if you are willing to take those trade-offs.
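If you do go the PG-as-cache route (as others in this thread have), one
plain-SQL sketch is an UNLOGGED table, which deliberately gives up crash
durability in exchange for speed; the table and column names here are
made up for illustration:

    CREATE UNLOGGED TABLE cache (
        key        text PRIMARY KEY,
        value      jsonb,
        expires_at timestamptz
    );

    -- upsert an entry
    INSERT INTO cache (key, value, expires_at)
    VALUES ('user:42', '{"name": "x"}', now() + interval '5 minutes')
    ON CONFLICT (key) DO UPDATE
        SET value = EXCLUDED.value, expires_at = EXCLUDED.expires_at;

    -- read, ignoring expired entries
    SELECT value FROM cache WHERE key = 'user:42' AND expires_at > now();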

> - rabbitmq (amqp)

PostgreSQL has its own system for this, and other ones can be layered on
top of fully transactional tables.
Again, you can use one or the other, depending on your needs, if you are
willing to deal with the trade-offs.



> - s3 (Blob storage)

No.  You can certainly use PostgreSQL to store blobs.  But then, you need
to store the PostgreSQL data **someplace**.  If you don't store it in S3,
you have to store it somewhere else.

Cheers,

Jeff


Re: [GENERAL] Can PG replace redis, amqp, s3 in the future?

2017-05-01 Thread Bill Moran
On Mon, 1 May 2017 22:05:03 +0200
Thomas Güttler  wrote:
> > 
> > There's a well-written article I saw recently that directly addresses
> > your question ... I'm too lazy to find it, but google will probably
> > turn it up for you.
> 
> I tried to find it, but failed. Can you give me some keywords to find
> this well-written article?

I can't seem to find it again. Sorry.

-- 
Bill Moran 




Re: [GENERAL] Can PG replace redis, amqp, s3 in the future?

2017-05-01 Thread Bill Moran
On Mon, 1 May 2017 22:02:15 +0200
Thomas Güttler  wrote:
> Leaving two things open:
> 
>  - blob storage
>  - redis/caching

I've used Postgres for both of these purposes, and at the load
level we were experiencing at the time, it worked fine.

We later implemented Redis when our caching requirements exceeded
what Postgres could do in that capacity. We never switched to
anything else for blob storage, as Postgres was always sufficient.

-- 
Bill Moran 




Re: [GENERAL] Can PG replace redis, amqp, s3 in the future?

2017-05-01 Thread Sven R. Kunze

On 30.04.2017 16:25, Steve Atkins wrote:

> You can use postgresql for caching, but caches don't require the data
> durability that a database offers, and can be implemented much more
> efficiently.

I for one can understand Thomas' need for a single solution.
Just recently I needed a cache which was supposed to be set up in a
SERIALIZABLE manner as in
https://www.postgresql.org/docs/devel/static/transaction-iso.html#xact-serializable
Available cache mechanisms would have produced erroneous results. So, I
went for PG.

But it's still a cache, isn't it?

> You can use postgresql to provide message queue services and it
> does so reasonably well, particularly when the messages are generated within
> the database. But it's not going to do so as efficiently, or be as easy to
> monitor, to make highly redundant or to scale across a whole datacenter
> as a dedicated message queue service.
>
> You could use postgresql to store binary blobs, but it'd be a horrifically
> inefficient way to do it. (Using postgresql to store the metadata, while
> the content is stored elsewhere, sure).
>
> Use the right tool for the job.

I think it's not as easy as ads and buzzwords make us believe it is.

Especially when it comes to reinventing the wheel, I prefer a single
solution. With the better JSON support, PG made NoSQL obsolete. I don't
see why this cannot happen with blob storage and massive scale-out. Just
a matter of time, if you ask me.


Regards,
Sven


Re: [GENERAL] Can PG replace redis, amqp, s3 in the future?

2017-05-01 Thread Thomas Güttler
On 30.04.2017 at 17:09, Bill Moran wrote:
> On Sun, 30 Apr 2017 13:37:02 +0200
> Thomas Güttler  wrote:
> 
>> Is it possible that PostgreSQL will replace these building blocks in the
>> future?
>>  
>>  - redis (Caching)
>>  - rabbitmq (amqp)
>>  - s3 (Blob storage)
>>
>> One question is "is it possible?", the next "is it feasible?"
>>
>> I think it would be great if I could use PG only and if I could
>> avoid the other types of servers.
>>
>> The benefit is not very obvious at first sight. I think it will save you
>> time, money and energy only in the long run.
>>
>> What do you think?
> 
> There's a well-written article I saw recently that directly addresses
> your question ... I'm too lazy to find it, but google will probably
> turn it up for you.
> 

I tried to find it, but failed. Can you give me some keywords to find
this well-written article?



> Take a message bus for example. PG's notify works pretty damn well as a
> centralized message bus. But if you need a distributed message bus or you
> need massive throughput, you're almost certainly better off with something
> specifically designed for that purpose.

SELECT ... FOR UPDATE SKIP LOCKED looks nice:

 https://blog.2ndquadrant.com/what-is-select-skip-locked-for-in-postgresql-9-5/




> Of course, if you need structured, relational data to be stored reliably,
> you can't do much better than Postgres.

Yes, PG is our solid central data storage.

Regards,
 Thomas Güttler


-- 
I am looking for feedback for my personal programming guidelines:
https://github.com/guettli/programming-guidelines




Re: [GENERAL] Can PG replace redis, amqp, s3 in the future?

2017-05-01 Thread Thomas Güttler
On 30.04.2017 at 15:39, Thomas Delrue wrote:
> On April 30, 2017 1:37:02 PM GMT+02:00, "Thomas Güttler" 
>  wrote:
>> Is it possible that PostgreSQL will replace these building blocks in
>> the future?
>>
>> - redis (Caching)
>> - rabbitmq (amqp)
>> - s3 (Blob storage)
> 
> These are three very different sets of functionalities, each requiring a 
> different approach.  I am curious as to why you are thinking about having a 
> single piece of software that does these three very different things. 


I love transactions. If you store data in four systems (the three above
plus PG), then you get better performance under high load. But what happens
if a transaction fails (rolls back)? That can leave the other data sinks in
a broken state. Example: a blob in S3 might be updated, but the rollback in
PG does not roll back the change in S3.

One other benefit of having a single system: configuration management is
easier.

I know that if you have very high load, then you need to optimize.

But in my context the load is far from high. Robust transactions (including
rollback in all related systems) are more important to me.

For the rabbitmq/amqp part I found that the new SKIP LOCKED feature can help:

  https://blog.2ndquadrant.com/what-is-select-skip-locked-for-in-postgresql-9-5/
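A minimal job-queue sketch built on that feature (the table and column
names are made up for illustration):

    CREATE TABLE jobs (
        id      bigserial PRIMARY KEY,
        payload jsonb,
        done    boolean NOT NULL DEFAULT false
    );

    -- each worker atomically claims the next unclaimed job; competing
    -- workers skip rows that are already locked instead of blocking
    WITH next AS (
        SELECT id FROM jobs
        WHERE NOT done
        ORDER BY id
        LIMIT 1
        FOR UPDATE SKIP LOCKED
    )
    UPDATE jobs SET done = true
    FROM next WHERE jobs.id = next.id
    RETURNING jobs.id, jobs.payload;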

Leaving two things open:

 - blob storage
 - redis/caching

>> One question is "is it possible?", the next "is it feasible?"
> 
> Possible? Sure: p != 0
> Probable? No
> Desirable? No
> 
>> I think it would be great if I could use PG only and if I could
>> avoid the other types of servers.
> 
> When you're holding a hammer, everything looks like a nail. But hammering 
> screws doesn't get you very far. Sometimes you need a screwdriver and on 
> other days a glue gun...

Yes, you are right.


Regards,
  Thomas Güttler

-- 
I am looking for feedback for my personal programming guidelines:
https://github.com/guettli/programming-guidelines




Re: [GENERAL] Can PG replace redis, amqp, s3 in the future?

2017-04-30 Thread Bill Moran
On Sun, 30 Apr 2017 13:37:02 +0200
Thomas Güttler  wrote:

> Is it possible that PostgreSQL will replace these building blocks in the
> future?
>  
>  - redis (Caching)
>  - rabbitmq (amqp)
>  - s3 (Blob storage)
> 
> One question is "is it possible?", the next "is it feasible?"
> 
> I think it would be great if I could use PG only and if I could
> avoid the other types of servers.
> 
> The benefit is not very obvious at first sight. I think it will save you
> time, money and energy only in the long run.
> 
> What do you think?

There's a well-written article I saw recently that directly addresses
your question ... I'm too lazy to find it, but google will probably
turn it up for you.

The upshot is that Postgres does a lot of things well, but when the need
comes up to do them _REALLY_ well, you're generally better off picking a
tool that's specialized for your needs.

Take a message bus for example. PG's notify works pretty damn well as a
centralized message bus. But if you need a distributed message bus or you
need massive throughput, you're almost certainly better off with something
specifically designed for that purpose.
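A minimal sketch of that centralized-bus idea (the channel name and
payload are made up):

    -- consumer session:
    LISTEN app_events;

    -- producer session:
    NOTIFY app_events, 'order 42 shipped';
    -- equivalent, callable from SQL or PL/pgSQL (e.g. in a trigger):
    SELECT pg_notify('app_events', 'order 43 shipped');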

Of course, if you need structured, relational data to be stored reliably,
you can't do much better than Postgres.

-- 
Bill Moran 




Re: [GENERAL] Can PG replace redis, amqp, s3 in the future?

2017-04-30 Thread Steve Atkins

> On Apr 30, 2017, at 4:37 AM, Thomas Güttler  
> wrote:
> 
> Is it possible that PostgreSQL will replace these building blocks in the
> future?
> 
> - redis (Caching)
> - rabbitmq (amqp)
> - s3 (Blob storage)

No.

You can use postgresql for caching, but caches don't require the data
durability that a database offers, and can be implemented much more
efficiently.

You can use postgresql to provide message queue services and it
does so reasonably well, particularly when the messages are generated within
the database. But it's not going to do so as efficiently, or be as easy to
monitor, to make highly redundant or to scale across a whole datacenter
as a dedicated message queue service.

You could use postgresql to store binary blobs, but it'd be a horrifically
inefficient way to do it. (Using postgresql to store the metadata, while
the content is stored elsewhere, sure).

Use the right tool for the job.

Cheers,
  Steve

> 
> One question is "is it possible?", the next "is it feasible?"
> 
> I think it would be great if I could use PG only and if I could
> avoid the other types of servers.
> 
> The benefit is not very obvious at first sight. I think it will save you
> time, money and energy only in the long run.
> 
> What do you think?
> 
> Regards,
>  Thomas Güttler
> 
> 
> -- 
> I am looking for feedback for my personal programming guidelines:
> https://github.com/guettli/programming-guidelines





[GENERAL] Can PG replace redis, amqp, s3 in the future?

2017-04-30 Thread Thomas Güttler
Is it possible that PostgreSQL will replace these building blocks in the future?
 
 - redis (Caching)
 - rabbitmq (amqp)
 - s3 (Blob storage)

One question is "is it possible?", the next "is it feasible?"

I think it would be great if I could use PG only and if I could
avoid the other types of servers.

The benefit is not very obvious at first sight. I think it will save you
time, money and energy only in the long run.

What do you think?

Regards,
  Thomas Güttler


-- 
I am looking for feedback for my personal programming guidelines:
https://github.com/guettli/programming-guidelines




Re: [GENERAL] NULLIF? PG 9.1

2017-04-09 Thread Patrick B
2017-04-09 20:18 GMT+12:00 Patrick B :

> Hi guys,
>
> I've got this select:
>  SELECT split_part(n.node_full_path::text, '/'::text, 8)::integer AS id,
>
> However, I will not always get the 8th field, and because of that, I may
> get no data sometimes.
>
> Example:
> /filesuser/client/27801123/attachment/4510/main
> /filesuser/client//attachment/21314134/file/12312312312/small/photo.jpg
>
> Note that sometimes the path only goes up to the 7th split part; I will
> not always get the 8th.
>
> How can I tell the select to return NULL if no 8th field is found?
>
> Thanks!
> Patrick
>


Sorry guys.. Got it!
https://www.postgresql.org/docs/9.1/static/functions-conditional.html
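For the archive, the conditional-expression fix looks like this (a sketch
based on the query above; split_part() returns an empty string when the
requested field is missing, which NULLIF turns into NULL before the cast):

    SELECT NULLIF(split_part(n.node_full_path::text, '/'::text, 8), '')::integer AS id
    FROM nodes n;  -- "nodes" is an assumed table name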


[GENERAL] NULLIF? PG 9.1

2017-04-09 Thread Patrick B
Hi guys,

I've got this select:
 SELECT split_part(n.node_full_path::text, '/'::text, 8)::integer AS id,

However, I will not always get the 8th field, and because of that, I may
get no data sometimes.

Example:
/filesuser/client/27801123/attachment/4510/main
/filesuser/client//attachment/21314134/file/12312312312/small/photo.jpg

Note that sometimes the path only goes up to the 7th split part; I will
not always get the 8th.

How can I tell the select to return NULL if no 8th field is found?

Thanks!
Patrick


Re: [GENERAL] Average - Pg 9.2

2017-02-02 Thread David G. Johnston
On Thursday, February 2, 2017, Patrick B  wrote:

> Hi guys,
>
> I've got a table which has id and created date columns.
>
> I want to get the average of inserted rows monthly.
> How can I get this data?
>
> This query is not working, as it is showing me the same data in both columns.
>
> select created_date,
> AVG(id)
> OVER(ORDER BY created_date) AS avr from test
>
> http://sqlfiddle.com/#!15/3289b/1
>
>
Maybe...see GROUP BY...
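A sketch of the GROUP BY approach (column and table names follow the
question; bucketing by month via date_trunc is the assumed intent):

    -- rows inserted per month
    SELECT date_trunc('month', created_date) AS month, count(*) AS inserted
    FROM test
    GROUP BY 1
    ORDER BY 1;

    -- a single average across all months
    SELECT avg(inserted) FROM (
        SELECT count(*) AS inserted
        FROM test
        GROUP BY date_trunc('month', created_date)
    ) per_month;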

David J.


[GENERAL] Average - Pg 9.2

2017-02-02 Thread Patrick B
Hi guys,

I've got a table which has id and created date columns.

I want to get the average of inserted rows monthly.
How can I get this data?

This query is not working, as it is showing me the same data in both columns.

select created_date,
AVG(id)
OVER(ORDER BY created_date) AS avr from test

http://sqlfiddle.com/#!15/3289b/1

Thanks!
Patrick


Re: [GENERAL] ruby pg connection fails on centos - okay on debian and dev machine

2016-05-04 Thread Adrian Klaver

On 05/04/2016 05:26 AM, john.tiger wrote:

On 05/03/2016 06:18 PM, Adrian Klaver wrote:

On 05/03/2016 05:07 PM, john.tiger wrote:

our model.rb runs fine on a dev machine and a debian server  but is
failing on a new centos server - checked the postgres db name and user
name and password - all seem fine


The error message is?



could it be:

host => "localhost"

or maybe CORS ?


CORS as in:

https://en.wikipedia.org/wiki/Cross-origin_resource_sharing

or something else?

Have you tried connecting using psql with same parameters?

Have you looked at the pg_hba.conf files on the different platforms
and see if they differ?


yeah, but we couldn't find any log file - not sure on centos where this
should be  - nothing in /var/log or /var/lib/pgsql/9.5/data


I do not use Centos so I am not sure where the logs go. You might get a 
hint by logging in using psql and doing:


show log_directory ;



running psql dbname="ourdb" works
running psql dbname="ourdb" username="ouruser"  ==> FATAL peer


That is failing because peer authentication is set up for connecting via 
local(per pg_hba.conf below), where local is a socket connection. For 
more information see:


http://www.postgresql.org/docs/9.5/interactive/auth-methods.html#AUTH-PEER

Retry the above with -h localhost or -h 127.0.0.1
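A sketch of that retry, using the connection parameters from this thread:

    psql -h localhost -d ourdb -U ouruser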


authentication failed for user "ouruser"

running \l  => shows db
   \du =>   ouruser {}  ? doesn't show login although we specified that
as when created role - could that be the problem ?? then how to add
login to user   alter role ouruser with login  => still shows {}


That is normal. If you could not log in you would see something like:

test=# \du test_role
List of roles
 Role name |  Attributes  | Member of
---+--+---
 test_role | Cannot login | {}

FYI, {} is for role membership, so something like:

test=# \du aklaver
  List of roles
 Role name | Attributes |Member of
---++-
 aklaver   || {app_admin,enhanced,production}




both pg_hba show:
local   all   all               peer
host    all   all   127.0.0.1
host    all   all               ident


Is that all?

In particular is that the full line for the second line above?

Are there any IPv6 addresses?




re cors - all on same linode server with postgres running on std port so
don't think should be an issue and is not issue on debian server - just
searching for possible answer

here is our model
@conn = PG.connect(
  :host => "localhost",
  :dbname => "ourdb",
  :user  => "ouruser",
  :password => "ourpassword"
)
again, this runs fine on debian and debian dev machine


Is there a chance there is more than one instance of Postgres running on
this machine?

--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] ruby pg connection fails on centos - okay on debian and dev machine

2016-05-04 Thread john.tiger

On 05/03/2016 06:18 PM, Adrian Klaver wrote:

On 05/03/2016 05:07 PM, john.tiger wrote:

our model.rb runs fine on a dev machine and a debian server  but is
failing on a new centos server - checked the postgres db name and user
name and password - all seem fine


The error message is?



could it be:

host => "localhost"

or maybe CORS ?


CORS as in:

https://en.wikipedia.org/wiki/Cross-origin_resource_sharing

or something else?

Have you tried connecting using psql with same parameters?

Have you looked at the pg_hba.conf files on the different platforms 
and see if they differ?


yeah, but we couldn't find any log file - not sure on centos where this 
should be  - nothing in /var/log or /var/lib/pgsql/9.5/data


running psql dbname="ourdb" works
running psql dbname="ourdb" username="ouruser"  ==> FATAL peer 
authentication failed for user "ouruser"


running \l  => shows db
   \du =>   ouruser {}  ? doesn't show login although we specified that 
as when created role - could that be the problem ?? then how to add 
login to user   alter role ouruser with login  => still shows {}


both pg_hba show:
local   all   all               peer
host    all   all   127.0.0.1
host    all   all               ident


re cors - all on same linode server with postgres running on std port so 
don't think should be an issue and is not issue on debian server - just 
searching for possible answer


here is our model
@conn = PG.connect(
  :host => "localhost",
  :dbname => "ourdb",
  :user  => "ouruser",
  :password => "ourpassword"
)
again, this runs fine on debian and debian dev machine



Re: [GENERAL] ruby pg connection fails on centos - okay on debian and dev machine

2016-05-03 Thread Adrian Klaver

On 05/03/2016 05:07 PM, john.tiger wrote:

our model.rb runs fine on a dev machine and a debian server  but is
failing on a new centos server - checked the postgres db name and user
name and password - all seem fine


The error message is?



could it be:

host => "localhost"

or maybe CORS ?


CORS as in:

https://en.wikipedia.org/wiki/Cross-origin_resource_sharing

or something else?

Have you tried connecting using psql with same parameters?

Have you looked at the pg_hba.conf files on the different platforms and 
see if they differ?

--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] ruby pg connection fails on centos - okay on debian and dev machine

2016-05-03 Thread Joshua D. Drake

On 05/03/2016 05:07 PM, john.tiger wrote:

our model.rb runs fine on a dev machine and a debian server  but is
failing on a new centos server - checked the postgres db name and user
name and password - all seem fine

could it be:

host => "localhost"

or maybe CORS ?

What do the PostgreSQL logs say? Does it show a connection trying to be 
made?


--
Command Prompt, Inc.  http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.




[GENERAL] ruby pg connection fails on centos - okay on debian and dev machine

2016-05-03 Thread john.tiger
our model.rb runs fine on a dev machine and a debian server  but is 
failing on a new centos server - checked the postgres db name and user 
name and password - all seem fine


could it be:

host => "localhost"

or maybe CORS ?




Re: [GENERAL] Does PG support in-place upgrade

2016-04-27 Thread Bruce Momjian
On Wed, Apr 27, 2016 at 10:30:41AM -0400, Bruce Momjian wrote:
> On Wed, Apr 27, 2016 at 10:24:36AM -0400, Rakesh Kumar wrote:
> > Compose (@composeio)
> > 4/26/16, 1:24 PM
> > You can now upgrade your #PostgreSQL 9.4 to 9.5 easily at Compose.
> > buff.ly/1WRsFFu #RDBMS
> > 
> > 
> > Based on the above tweet it seems that PG has no native way of doing an
> > in-place upgrade of a db. How do users upgrade databases of terabyte size?
> 
> That web page mentions pg_upgrade, which allows in-place major upgrades
> to happen in several minutes:
> 
>   Whichever backup you go with, it will be restored to a new PostgreSQL
>   deployment where we may, or may not, run the pg_upgrade tool.
> 
> The only _zero-downtime_ upgrade option is with logical-based
> replication like Slony.

Oh, here are the upgrade docs about Postgres:

http://www.postgresql.org/docs/9.5/static/upgrading.html

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +




Re: [GENERAL] Does PG support in-place upgrade

2016-04-27 Thread Bruce Momjian
On Wed, Apr 27, 2016 at 10:24:36AM -0400, Rakesh Kumar wrote:
> Compose (@composeio)
> 4/26/16, 1:24 PM
> You can now upgrade your #PostgreSQL 9.4 to 9.5 easily at Compose.
> buff.ly/1WRsFFu #RDBMS
> 
> 
> Based on the above tweet it seems that PG has no native way of doing an
> in-place upgrade of a db. How do users upgrade databases of terabyte size?

That web page mentions pg_upgrade, which allows in-place major upgrades
to happen in several minutes:

Whichever backup you go with, it will be restored to a new PostgreSQL
deployment where we may, or may not, run the pg_upgrade tool.

The only _zero-downtime_ upgrade option is with logical-based
replication like Slony.
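For reference, a typical pg_upgrade invocation looks roughly like this
(paths and versions are illustrative, not from the thread; --link avoids
copying terabytes of data files by hard-linking them into the new cluster):

    pg_upgrade \
      -b /usr/lib/postgresql/9.4/bin  -B /usr/lib/postgresql/9.5/bin \
      -d /var/lib/postgresql/9.4/data -D /var/lib/postgresql/9.5/data \
      --link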

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +




[GENERAL] Does PG support in-place upgrade

2016-04-27 Thread Rakesh Kumar
Compose (@composeio)
4/26/16, 1:24 PM
You can now upgrade your #PostgreSQL 9.4 to 9.5 easily at Compose. 
buff.ly/1WRsFFu #RDBMS

Based on the above tweet it seems that PG has no native way of doing an
in-place upgrade of a db. How do users upgrade databases of terabyte size?

--
Sent from mobile. 

Re: [GENERAL] Generate PG schemas from the Oracle Data Modeler tool?

2016-03-08 Thread Adrian Klaver

On 03/08/2016 08:58 AM, Ken Winter wrote:

I have established a connection between the Oracle tool and my PG
databases by installing the PG JDBC driver from
https://jdbc.postgresql.org/download.html and following instructions at
http://stackoverflow.com/questions/7592519/oracle-sql-developer-and-postgresql/22299552
and https://www.youtube.com/watch?v=UGG_N9Mlgdw&feature=youtu.be .

While the connection thus established seems solid, it delivers access to
only part of the PG database objects.  Most weirdly, the public schema
is invisible in the Oracle tool browser, though it displays other
schemas I create.  For full details of this weirdness, see
http://stackoverflow.com/questions/35809963/cant-see-postgresql-public-schema-in-oracle-sql-developer.
Any advice from the PG world would be very welcome.


The below would seem to indicate that it can be done:

http://tabdoelhafiezkhan.blogspot.com/2014/05/reverse-engineer-postgresql-with-oracle.html

Though the links you provided seem to indicate that Oracle SQL Developer 
Data Modeler behavior with Postgres is erratic.




~ Thanks, Ken

On Wed, Feb 24, 2016 at 10:52 PM, Ken Winter  wrote:

The best affordable (in this case, free) data modeling tool that I
have found is the "Oracle SQL Developer Data Modeler"

(http://www.oracle.com/technetwork/developer-tools/datamodeler/overview/index.html).


The best DBMS (free or otherwise) that I have found is PostgreSQL.

So of course it would be great to be able to connect the Oracle tool
to a PG database, so one could forward- and reverse-engineer between
the two.  At present, apparently the Oracle tool only natively
connects with Oracle, DB2, and SQL Server.

So I'm wondering if anybody knows of a utility or an Oracle Data
Modeler add-on that will take some dialect of SQL DDL that that tool
generates and turn it into PG-readable SQL.

I get it from the list at

https://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL
that there are tools that do something more ambitious: read schemas
from actual Oracle databases and implement them as schemas in actual
PG databases.  What I need is more modest than that: a tool that
inputs a file of DDL from the Oracle Data Modeler tool and outputs
that DDL in PG syntax.

~ Thanks for any leads you can provide
~ Ken






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Generate PG schemas from the Oracle Data Modeler tool?

2016-03-08 Thread Ken Winter
I have established a connection between the Oracle tool and my PG databases
by installing the PG JDBC driver from
https://jdbc.postgresql.org/download.html and following instructions at
http://stackoverflow.com/questions/7592519/oracle-sql-developer-and-postgresql/22299552
and https://www.youtube.com/watch?v=UGG_N9Mlgdw&feature=youtu.be .

While the connection thus established seems solid, it delivers access to
only part of the PG database objects.  Most weirdly, the public schema is
invisible in the Oracle tool browser, though it displays other schemas I
create.  For full details of this weirdness, see
http://stackoverflow.com/questions/35809963/cant-see-postgresql-public-schema-in-oracle-sql-developer.
Any advice from the PG world would be very welcome.

~ Thanks, Ken

On Wed, Feb 24, 2016 at 10:52 PM, Ken Winter  wrote:

> The best affordable (in this case, free) data modeling tool that I have
> found is the "Oracle SQL Developer Data Modeler" (
> http://www.oracle.com/technetwork/developer-tools/datamodeler/overview/index.html).
>
>
> The best DBMS (free or otherwise) that I have found is PostgreSQL.
>
> So of course it would be great to be able to connect the Oracle tool to a
> PG database, so one could forward- and reverse-engineer between the two.
> At present, apparently the Oracle tool only natively connects with Oracle,
> DB2, and SQL Server.
>
> So I'm wondering if anybody knows of a utility or an Oracle Data Modeler
> add-on that will take some dialect of SQL DDL that that tool generates and
> turn it into PG-readable SQL.
>
> I get it from the list at
> https://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL
> that there are tools that do something more ambitious: read schemas from
> actual Oracle databases and implement them as schemas in actual PG
> databases.  What I need is more modest than that: a tool that inputs a file
> of DDL from the Oracle Data Modeler tool and outputs that DDL in PG syntax.
>
> ~ Thanks for any leads you can provide
> ~ Ken
>
>
>


[GENERAL] Generate PG schemas from the Oracle Data Modeler tool?

2016-02-24 Thread Ken Winter
The best affordable (in this case, free) data modeling tool that I have
found is the "Oracle SQL Developer Data Modeler" (
http://www.oracle.com/technetwork/developer-tools/datamodeler/overview/index.html).


The best DBMS (free or otherwise) that I have found is PostgreSQL.

So of course it would be great to be able to connect the Oracle tool to a
PG database, so one could forward- and reverse-engineer between the two.
At present, apparently the Oracle tool only natively connects with Oracle,
DB2, and SQL Server.

So I'm wondering if anybody knows of a utility or an Oracle Data Modeler
add-on that will take some dialect of SQL DDL that that tool generates and
turn it into PG-readable SQL.

I get it from the list at
https://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL
that there are tools that do something more ambitious: read schemas from
actual Oracle databases and implement them as schemas in actual PG
databases.  What I need is more modest than that: a tool that inputs a file
of DDL from the Oracle Data Modeler tool and outputs that DDL in PG syntax.

~ Thanks for any leads you can provide
~ Ken


Re: [GENERAL] Why PG uses nested-loop join when no indexes are available?

2016-01-20 Thread David Grelaud
Thank you both for your help.

We will test your patch, but we need to understand the code a bit more in
order to follow your discussions.
Actually, your patch helps us find where to start in the code ;).

> The planner is never going to get it right 100% of the time.

Yes, I agree.
In production environments, even if PostgreSQL chooses such a bad plan 1%
of the time, that is enough to make clients angry. My goal is to eradicate
the risk of choosing a nested loop in certain cases, where it freezes
PostgreSQL for many minutes whereas a hash join or something else takes
only 2 seconds to complete. The performance difference is huge.
I mean, even if the plan is not the best one 100% of the time, it should at
least choose a "risk-free" plan, without these "bad" nested loops. It is
maybe easier said than done, but we want to try.

Regards,

*David Grelaud*

2016-01-15 2:16 GMT+01:00 David Rowley :

> On 15 January 2016 at 04:00, Tom Lane  wrote:
>
>> David Rowley  writes:
>> > Perhaps separating out enable_nestloop so that it only disables
>> > non-parameterised nested loops, and add another GUC for parameterised
>> > nested loops would be a good thing to do. Likely setting
>> enable_nestloop to
>> > off in production would be a slightly easier decision to make, if that
>> was
>> > the case.
>> > It looks pretty simple to do this, so I hacked it up, and attached it
>> here.
>> > There's no doc changes and I'm not that interested in fighting for this
>> > change, it's more just an idea for consideration.
>>
>> I'm not terribly excited by this idea either.  If making such a change
>> actually makes things better for someone consistently, I'd argue that
>> the problem is a mistaken cost estimate elsewhere, and we'd be better off
>> to find and fix the real problem.  (There have already been discussions
>> of only believing single-row rowcount estimates when they're provably
>> true, which might help if we can figure out how to do it cheaply enough.)
>>
>
> Actually, it's not very hard to hit a bad underestimate at all. All you
> need is a join on two columns which are co-related. Since PostgreSQL
> multiplies the estimated selectivities the row count is going to come out
> too low. This also tricks the planner into thinking that this is a good
> join to perform early, since (it thinks that) it does not produce many rows
> at all. You only need 1 more join to occur after that to choose a nested
> loop join mistakenly to hit the issue.
>
> FWIW TPC-H Q9 has this exact trip hazard with the partsupp table, which is
> the exact reason why this patch was born:
> https://commitfest.postgresql.org/7/210/
>
> I also think that the attitude that we can *always* fix the costs and
> estimates is not the right one. The planner is never going to get it right
> 100% of the time. If we ever think we can build such a planner then someone
> needs to come along and direct us back into the real world.
>
> --
>  David Rowley   http://www.2ndQuadrant.com/
> 
>  PostgreSQL Development, 24x7 Support, Training & Services
>


Re: [GENERAL] Why PG uses nested-loop join when no indexes are available?

2016-01-14 Thread David Rowley
On 14 January 2016 at 03:48, David Grelaud  wrote:

> 3) Always avoid nested-loop join when no indexes are available?
>
> Tom Lane said "There might be some cases where this would help, but there
> would be many more where it would be useless or counterproductive."
> Who is right, Tom Lane or Viktor Leis's paper above?
>
> We tried disabling nested loops all the time in a production environment
> and observed an overall improvement in all queries where indexes are not
> useful or not available (CTEs), which confirms the paper.
> In fact, one of our production environments is still running with
> "enable_nestloop = off" because the benefits are a lot greater than the
> drawbacks as long as some tables are relatively small (indexes not used).
>

I don't really think any of them are wrong. Tom is simply talking in
general terms with no specific workload, and the paper is dealing with one
specific workload. Of course there are cases when a non-parameterised
nested loop is the fastest way; I mean, what could possibly be faster if
there's only 1 row to be joined, for example? It's just that it's not that
much faster, since such a join is likely to perform very quickly no matter
which join algorithm is used.

On the other hand, if your tables are not tiny, or you're never joining to
just a few rows, and you are suffering from stats underestimations, then
it's quite probable that you'll improve your workload overall by setting
enable_nestloop = off. But you have to remember that if you do this, then
you miss out on parameterised inner scans on nested loops. Quite often
these are the fastest option, even when the number of rows is fairly large,
as they might save building a hash table on a very large relation, or
having to sort that relation for a merge join.
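A sketch of the session-scoped experiment being discussed (the joined
tables are placeholders; RESET restores the default):

    SET enable_nestloop = off;  -- discourage nested loops in this session
    EXPLAIN ANALYZE
    SELECT * FROM t1 JOIN t2 USING (id);  -- t1/t2 are made-up tables
    RESET enable_nestloop;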

Perhaps separating out enable_nestloop so that it only disables
non-parameterised nested loops, and add another GUC for parameterised
nested loops would be a good thing to do. Likely setting enable_nestloop to
off in production would be a slightly easier decision to make, if that was
the case.

It looks pretty simple to do this, so I hacked it up, and attached it here.
There's no doc changes and I'm not that interested in fighting for this
change, it's more just an idea for consideration.

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


enable_paramnestloop.patch
Description: Binary data



Re: [GENERAL] Why PG uses nested-loop join when no indexes are available?

2016-01-14 Thread David Rowley
On 15 January 2016 at 04:00, Tom Lane  wrote:

> David Rowley  writes:
> > Perhaps separating out enable_nestloop so that it only disables
> > non-parameterised nested loops, and add another GUC for parameterised
> > nested loops would be a good thing to do. Likely setting enable_nestloop
> to
> > off in production would be a slightly easier decision to make, if that
> was
> > the case.
> > It looks pretty simple to do this, so I hacked it up, and attached it
> here.
> > There's no doc changes and I'm not that interested in fighting for this
> > change, it's more just an idea for consideration.
>
> I'm not terribly excited by this idea either.  If making such a change
> actually makes things better for someone consistently, I'd argue that
> the problem is a mistaken cost estimate elsewhere, and we'd be better off
> to find and fix the real problem.  (There have already been discussions
> of only believing single-row rowcount estimates when they're provably
> true, which might help if we can figure out how to do it cheaply enough.)
>

Actually, it's not very hard to hit a bad underestimate at all. All you
need is a join on two columns which are correlated. Since PostgreSQL
multiplies the estimated selectivities the row count is going to come out
too low. This also tricks the planner into thinking that this is a good
join to perform early, since (it thinks that) it does not produce many rows
at all. You only need 1 more join to occur after that to choose a nested
loop join mistakenly to hit the issue.

FWIW TPC-H Q9 has this exact trip hazard with the partsupp table, which is
the exact reason why this patch was born:
https://commitfest.postgresql.org/7/210/

I also think that the attitude that we can *always* fix the costs and
estimates is not the right one. The planner is never going to get it right
100% of the time. If we ever think we can build such a planner then someone
needs to come along and direct us back into the real world.

-- 
 David Rowley   http://www.2ndQuadrant.com/

 PostgreSQL Development, 24x7 Support, Training & Services


Re: [GENERAL] Why PG uses nested-loop join when no indexes are available?

2016-01-14 Thread Tom Lane
David Rowley  writes:
> Perhaps separating out enable_nestloop so that it only disables
> non-parameterised nested loops, and add another GUC for parameterised
> nested loops would be a good thing to do. Likely setting enable_nestloop to
> off in production would be a slightly easier decision to make, if that was
> the case.
> It looks pretty simple to do this, so I hacked it up, and attached it here.
> There's no doc changes and I'm not that interested in fighting for this
> change, it's more just an idea for consideration.

I'm not terribly excited by this idea either.  If making such a change
actually makes things better for someone consistently, I'd argue that
the problem is a mistaken cost estimate elsewhere, and we'd be better off
to find and fix the real problem.  (There have already been discussions
of only believing single-row rowcount estimates when they're provably
true, which might help if we can figure out how to do it cheaply enough.)

Having said that, if we did split enable_nestloop like this, what I think
you'd want to discriminate against is nestloops where the inner rel is
not parameterized *by the outer rel*.  This test isn't doing that; it will
happily accept inner rels that are parameterized by some unrelated rel.

regards, tom lane




Re: [GENERAL] Why PG uses nested-loop join when no indexes are available?

2016-01-13 Thread Tom Lane
David Grelaud  writes:
> Statistics are not propagated when Common Table Expressions (CTE) are used.
> The cardinality of a CTE is equal to 1 most of the time

Really?

The rest of this seems to be proceeding from completely false assumptions.

regards, tom lane




[GENERAL] Why PG uses nested-loop join when no indexes are available?

2016-01-13 Thread David Grelaud
Hi,

Statistics are not propagated when Common Table Expressions (CTEs) are used.
The estimated cardinality of a CTE is equal to 1 most of the time, so every
join with previously computed CTEs is done with the nested-loop algorithm.
This seems to be a really risky choice, even without CTEs, according to
this paper and our own experiments:

"How good are query optimizers, really?." Proceedings of the VLDB Endowment
9.3 (2015): 204-215. (Paragraph 4.1) Leis, Viktor, et al.
http://www.vldb.org/pvldb/vol9/p204-leis.pdf

There are interesting discussions on the web about CTEs and bad
performance:

-
http://www.postgresql.org/message-id/flat/col116-w25f8931477407ed7689d69a3...@phx.gbl#col116-w25f8931477407ed7689d69a3...@phx.gbl
- http://blog.2ndquadrant.com/postgresql-ctes-are-optimization-fences/
- ...

So when the problem happens (cost underestimation -> nested loop -> many
rows -> bad performance), we currently have these solutions:

- refactor the query using Subquery Expressions instead of CTEs (see the
sketch after this list), but the query becomes really ugly to read
(increasing maintenance cost), and we may lose some other execution plan
optimisations provided by CTEs
- refactor the query using a temporary table, but then it becomes
impossible to use a single-query prepared statement
- disable nested loops, but then PostgreSQL does not use indexes anymore
when they are available
- use an extension to enable Oracle-style hints
(https://fr.osdn.jp/projects/pghintplan/), but then the system becomes
blind (not data-dependent, potential future algorithms never used, ...)
- is there another existing solution I'm not aware of?
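As an illustration of the first option, the rewrite is purely syntactic (a
schematic example with made-up tables; at the time of this thread a CTE
was an optimization fence, while an inline subquery is planned together
with the outer query):

    -- CTE form: the planner treats "recent" as a fence
    WITH recent AS (
        SELECT customer_id FROM orders
        WHERE created > now() - interval '1 day'
    )
    SELECT c.* FROM customers c JOIN recent r ON r.customer_id = c.id;

    -- subquery form: the planner can estimate and reorder freely
    SELECT c.*
    FROM customers c
    JOIN (SELECT customer_id FROM orders
          WHERE created > now() - interval '1 day') r
      ON r.customer_id = c.id;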

I'm sure PostgreSQL could provide a better solution to solve this problem.


1) Would it be easy to compute and propagate statistics of CTEs, like
subqueries?

The argument usually returned by the PostgreSQL community is:
"CTEs have been created to let the developer control the execution plan, so
the statistics computation is virtually disabled"
Ok, the developer can roughly control the execution plan, but at the same
time Postgres becomes "stupid" inside each CTE and always chooses the
"same" join algorithm (the riskiest) to join previously computed CTEs.
It is like giving somebody the power to fly while removing his eyes ;).

Drawbacks: even if statistics are computed and propagated across CTEs, when
queries are really complex the cost estimator may still fail to estimate
cardinality, and the nested-loop join problem still happens.


2) Would it be possible to let the developer inject cardinality hints in
the query?

As suggested by this paper:
Chaudhuri, Surajit. "Query optimizers: time to rethink the contract?" In:
Proceedings of the 2009 ACM SIGMOD International Conference on Management
of Data. ACM, 2009. p. 961-968.
http://courses.cs.washington.edu/courses/csep544/10au/readings/p961-chaudhuri.pdf

The SQL developer could, for example, inject cardinality in a comment such
as "my_cte:10". The developer is responsible for updating this cardinality
with their own metrics and tools.
Thus, the cardinality is still data-dependent (not constant ad vitam
aeternam) and the planner is able to choose the best join algorithm
according to all other parameters (system I/O...).


3) Always avoid nested-loop join when no indexes are available?

Tom Lane said "There might be some cases where this would help, but there
would be many more where it would be useless or counterproductive."
Who is right, Tom Lane or Viktor Leis's paper above?

We tried disabling nested loops all the time in a production environment
and observed an overall improvement in all queries where indexes are not
useful or not available (CTEs), which confirms the paper.
In fact, one of our production environments is still running with
"enable_nestloop = off" because the benefits are a lot greater than the
drawbacks as long as some tables are relatively small (indexes not used).


4) Do runtime optimizations?

According to research papers, this will be the next challenge. But I think
it would be difficult to implement in a relatively short time frame.



What is the purpose of this message:

We would like to find a "simple" long-term solution to this
cost-underestimation problem, which generates hazardous performance
regressions in our production environments.

We would like to hear critiques or other solutions from PostgreSQL experts.

We would like to help developing and testing the solution.


Thank you very much!

Regards,
---
David Grelaud,
Ideolys.


Re: [GENERAL] Why PG uses nested-loop join when no indexes are available?

2016-01-13 Thread David Grelaud
Thank you for your fast response!

I'm sorry, my vocabulary may not be correct and my "French approach" to
explaining my problem is not efficient ;).

But the underestimation problem in complex queries is still there, isn't
it? And it generates potentially "dangerous" plans (nested loops).

At the beginning we thought we were alone, but it seems to be a problem for
most database systems.
What do you think about paragraph 4.1 of this paper:
http://www.vldb.org/pvldb/vol9/p204-leis.pdf ?

Regards,
---
David Grelaud,
Ideolys.



2016-01-13 16:02 GMT+01:00 Tom Lane :

> David Grelaud  writes:
> > Statistics are not propagated when Common Table Expressions (CTE) are
> used.
> > The cardinality of a CTE is equal to 1 most of the time
>
> Really?
>
> The rest of this seems to be proceeding from completely false assumptions.
>
> regards, tom lane
>


Re: [GENERAL] to pg

2015-09-25 Thread Tom Lane
Alban Hertroys  writes:
> On 25 September 2015 at 13:08, Ramesh T  wrote:
>> CREATE UNIQUE INDEX idx_load_pick ON  pick (case picked when picked='y' then
>> load_id else null end );
>> 
>> How can I convert this CASE expression to Postgres? The above is Oracle.

> BTW, your CASE statement isn't exactly valid, even in Oracle. Your
> comparison is in fact this: picked = picked='y'.

Yeah.  Aside from that confusion, the other reason this command doesn't
work as-is is you need more parentheses.  An expression in an index has
to either look like a function call or be parenthesized.  So:

regression=# create table pick (picked text, load_id int);
CREATE TABLE
regression=# CREATE UNIQUE INDEX idx_load_pick ON  pick (case picked when 
picked='y' then load_id else null end );
ERROR:  syntax error at or near "case"
regression=# CREATE UNIQUE INDEX idx_load_pick ON  pick ((case picked when 
picked='y' then load_id else null end ));
ERROR:  operator does not exist: text = boolean
regression=# CREATE UNIQUE INDEX idx_load_pick ON  pick ((case when picked='y' 
then load_id else null end ));
CREATE INDEX

regards, tom lane




Re: [GENERAL] to pg

2015-09-25 Thread Igor Neyman


From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Ramesh T
Sent: Friday, September 25, 2015 7:09 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] to pg

CREATE UNIQUE INDEX idx_load_pick ON  pick (case picked when picked='y' then 
load_id else null end );

How can I convert this CASE expression to Postgres? The above is Oracle.

any help appreciated...



CREATE UNIQUE INDEX idx_load_pick ON  pick (load_id) where picked='y';

Regards,
Igor Neyman


Re: [GENERAL] to pg

2015-09-25 Thread Geoff Winkless
Surely just

CASE picked WHEN 'y' THEN load_id ELSE NULL END

or

CASE WHEN picked='y' THEN load_id ELSE NULL END

?

On 25 September 2015 at 12:08, Ramesh T 
wrote:

> CREATE UNIQUE INDEX idx_load_pick ON  pick (case picked when picked='y'
> then load_id else null end );
>
> How can I convert this CASE expression to Postgres? The above is Oracle.
>
> any help appreciated...
>


[GENERAL] to pg

2015-09-25 Thread Ramesh T
CREATE UNIQUE INDEX idx_load_pick ON  pick (case picked when picked='y'
then load_id else null end );

How can I convert this CASE expression to Postgres? The above is Oracle.

any help appreciated...


Re: [GENERAL] to pg

2015-09-25 Thread Ladislav Lenart
On 25.9.2015 13:08, Ramesh T wrote:
> CREATE UNIQUE INDEX idx_load_pick ON  pick (case picked when picked='y' then
> load_id else null end );
> 
> How can I convert this CASE expression to Postgres? The above is Oracle.
> 
> any help appreciated... 

Hello.

And what about a partial unique index as documented here:

http://www.postgresql.org/docs/9.4/static/indexes-partial.html

I.e.:

CREATE UNIQUE INDEX ON  pick (load_id) WHERE picked = 'y';

HTH,

Ladislav Lenart





Re: [GENERAL] to pg

2015-09-25 Thread Albe Laurenz
Ramesh T wrote:
> CREATE UNIQUE INDEX idx_load_pick ON  pick (case picked when picked='y' then 
> load_id else null end );
> 
> How can I convert this CASE expression to Postgres? The above is Oracle.

CREATE TABLE pick (picked char(1), load_id integer);

CREATE FUNCTION picked_loadid(character, integer) RETURNS integer
   IMMUTABLE STRICT LANGUAGE sql AS
   $$SELECT CASE WHEN $1 = 'y' THEN $2 ELSE NULL END$$;

CREATE INDEX idx_load_pick ON pick (picked_loadid(picked, load_id));

*but*

It will only work with queries like:

SELECT * FROM pick WHERE picked_loadid(picked, load_id) IS NOT NULL;

Yours,
Laurenz Albe



Re: [GENERAL] to pg

2015-09-25 Thread Alban Hertroys
On 25 September 2015 at 13:08, Ramesh T  wrote:
> CREATE UNIQUE INDEX idx_load_pick ON  pick (case picked when picked='y' then
> load_id else null end );
>
> How can I convert this CASE expression to Postgres? The above is Oracle.

Assuming that your queries are written in such a way that Oracle is
indeed using that index and you want your queries to use the index as
well in PG:

CREATE UNIQUE INDEX idx_load_pick ON pick (load_id) WHERE CASE picked WHEN
'y' THEN load_id ELSE NULL END IS NOT NULL;

That's definitely written a bit redundantly, that's Oracle's fault.

If your queries aren't like that, it's as Ladislav wrote. Much simpler in PG!


To make Oracle use your original index, your queries are probably of a
form containing snippets like:

SELECT *
FROM foo
WHERE CASE picked WHEN 'y' THEN load_id ELSE NULL END IS NOT NULL

BTW, your CASE statement isn't exactly valid, even in Oracle. Your
comparison is in fact this: picked = picked='y'.

-- 
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.




Re: [GENERAL] [PERFORM] pg bouncer issue what does sv_used column means

2015-06-12 Thread Andrew Dunstan


Please do not cross-post on the PostgreSQL lists. Pick the most 
appropriate list to post to and just post there.


cheers

andrew




Re: [GENERAL] [PERFORM] pg bouncer issue what does sv_used column means

2015-06-12 Thread Xenofon Papadopoulos
Unsubscribe

On Fri, Jun 12, 2015 at 8:57 PM, Sheena, Prabhjot 
prabhjot.si...@classmates.com wrote:

  Guys, we see a spike in pgbouncer during peak hours and it was
 slowing down the application. We did bump up the connection limit and it is
 helpful, but now we again notice a small spike in connections. And one thing
 that I notice that is different is a jump in the sv_used value when I run
 "show pools" during problem times.


 Can anyone please explain what the sv_used value means when I run show
 pools?


 Regards

 Prabhjot





Re: [GENERAL] [PERFORM] pg bouncer issue what does sv_used column means

2015-06-12 Thread k...@rice.edu
On Fri, Jun 12, 2015 at 09:37:36PM +, Sheena, Prabhjot wrote:
 Here is some more information
 
 pool_mode | transaction
 
 We have transactional pooling and our application is set up in such a way 
 that we have one query per transaction. We have set  default pool size to 100.
 
 This is the output. As you can see, active connections are 100 and 224 are 
 waiting. We are planning to move the default pool size to 250. Please suggest 
 if you think otherwise.
 
 pgbouncer=# show pools;
  database  |   user    | cl_active | cl_waiting | sv_active | sv_idle | sv_used | sv_tested | sv_login | maxwait
 -----------+-----------+-----------+------------+-----------+---------+---------+-----------+----------+---------
  pgbouncer | pgbouncer |         2 |          0 |         0 |       0 |       0 |         0 |        0 |       0
  site      | feature   |       418 |          0 |        20 |      17 |       0 |         0 |        0 |       0
  site      | service   |       621 |        224 |       100 |       0 |       0 |         0 |        0 |       0
  site      | zabbix    |         0 |          0 |         0 |       0 |       0 |         0 |        0 |       0
 
 Prabhjot Singh
 Database Administrator
 
 CLASSMATES
 1501 4th Ave., Suite 400
 Seattle, WA 98101
 206.301.4937 o
 206.301.5701 f
 
 From: Sheena, Prabhjot
 Sent: Friday, June 12, 2015 10:57 AM
 To: 'pgsql-general@postgresql.org'; 'pgsql-performa...@postgresql.org'
 Subject: pg bouncer issue what does sv_used column means
 
 Guys, we see a spike in pgbouncer during peak hours and it was slowing 
 down the application. We did bump up the connection limit and it is helpful, 
 but now we again notice a small spike in connections. And one thing that I 
 notice that is different is a jump in the sv_used value when I run "show 
 pools" during problem times.
 
 
 Can anyone please explain what the sv_used value means when I run show pools?
 
 
 
 Regards
 Prabhjot
 

Hi Prabhjot,

The spike in pgbouncer during peak hours just indicates that you are busier
then. How many sv_active do you have in non-peak hours? What kind of system is
this on? I suspect that your hardware cannot actually handle 100 simultaneous
processes at once, and if you increase that to 250 processes there is a good
likelihood that your system response will get even worse. A pool size of 1x to
2x the number of CPUs is typical for peak throughput. Are you using a 50-core
system? What do the I/O stats look like? You may be I/O limited.
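
For a quick look at what the server-side connections are actually doing while
clients queue up, something like this against the database itself helps (a
sketch; the state column assumes a 9.2-or-newer server):

SELECT state, count(*) FROM pg_stat_activity GROUP BY state;  -- active vs. idle backends
SELECT count(*) FROM pg_stat_activity WHERE waiting;          -- backends blocked on locks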

Regards,
Ken




Re: [SQL] [GENERAL] Does PG support bulk operation in embedded C

2015-05-19 Thread Ravi Krishna

Not sure whether I am understanding this. I checked embedded C and did not
find any section which describes what I have asked, that is, the ability to do
multiple inserts, updates or deletes in one SQL call. For example, if my
application does the following

BEGIN TRANSACTION
 INSERT INTO TABLE_A
 UPDATE TABLE_B
 INSERT INTO TABLE_C

COMMIT TRANSACTION


DB2 provides a way to combine the three SQL operations into an array and make one call to DB2, which executes the array (that is, all 3 SQLs as one single call).

I am looking for something similar in PG.

thanks



Sent: Tuesday, May 19, 2015 at 8:13 PM
From: Joshua D. Drake j...@commandprompt.com
To: Ravi Krishna srkris...@gmx.com, pgsql-...@postgresql.org
Cc: pgsql-general@postgresql.org
Subject: Re: [SQL] [GENERAL] Does PG support bulk operation in embedded C


On 05/19/2015 04:47 PM, Ravi Krishna wrote:

 To explain pls refer to this for DB2

 http://www-01.ibm.com/support/knowledgecenter/SSEPGG_9.7.0/com.ibm.db2.luw.apdv.cli.doc/doc/r0002329.html


 Essentially in one single sql call, we can do
 -- Add new rows
 -- Update a set of rows where each row is identified by a bookmark
 -- Delete a set of rows where each row is identified by a bookmark
 -- Fetch a set of rows where each row is identified by a bookmark

 This gives tremendous performance benefits as the network round trip is
 avoided for each sql.

 I am looking for an equivalent of this in PG and C language.

For embedded C, I believe you are looking for:

http://www.postgresql.org/docs/9.4/static/ecpg.html



 Thanks.





--
Command Prompt, Inc. - http://www.commandprompt.com/ 503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing I'm offended is basically telling the world you can't
control your own emotions, so everyone else should do it for you.








[GENERAL] Does PG support bulk operation in embedded C

2015-05-19 Thread Ravi Krishna

To explain pls refer to this for DB2

http://www-01.ibm.com/support/knowledgecenter/SSEPGG_9.7.0/com.ibm.db2.luw.apdv.cli.doc/doc/r0002329.html

Essentially in one single sql call, we can do
-- Add new rows
-- Update a set of rows where each row is identified by a bookmark
-- Delete a set of rows where each row is identified by a bookmark
-- Fetch a set of rows where each row is identified by a bookmark

This gives tremendous performance benefits as the network round trip is 
avoided for each sql.


I am looking for an equivalent of this in PG and C language.

Thanks.





Re: [SQL] [GENERAL] Does PG support bulk operation in embedded C

2015-05-19 Thread Joshua D. Drake


On 05/19/2015 05:27 PM, Ravi Krishna wrote:

Not sure whether I am understanding this. I checked embedded C and did
not find any section which describes what I have asked, that is the
ability to do multiple inserts, or updates or deletes in one sql call.
For example, if my application does the following

BEGIN TRANSACTION
INSERT INTO TABLE_A
UPDATE TABLE_B
INSERT INTO TABLE_C
COMMIT TRANSACTION


Well PostgreSQL certainly supports the above.



DB2 provides a way to combine the three SQL operations into an array and make
one call to DB2, which executes the array (that is, all 3 SQLs as one single
call).


You can do this with inserts using multivalue.

INSERT INTO TABLE_A VALUES (), (), ();

I am not sure about UPDATE or DELETE, I know you can use WITH on DELETE 
which gives you some flexibility.
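
For what it's worth, UPDATE and DELETE can each consume a batch of rows in a
single statement by joining against a VALUES list (a sketch with made-up table
and column names):

UPDATE table_b AS b
SET    val = v.val
FROM   (VALUES (1, 'x'), (2, 'y'), (3, 'z')) AS v(id, val)
WHERE  b.id = v.id;

DELETE FROM table_c AS c
USING  (VALUES (4), (5), (6)) AS v(id)
WHERE  c.id = v.id;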


I don't think you will get a one to one comparison but you should be 
able to get close.


JD




--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing I'm offended is basically telling the world you can't
control your own emotions, so everyone else should do it for you.




Re: [GENERAL] Does PG support bulk operation in embedded C

2015-05-19 Thread Joshua D. Drake


On 05/19/2015 04:47 PM, Ravi Krishna wrote:


To explain pls refer to this for DB2

http://www-01.ibm.com/support/knowledgecenter/SSEPGG_9.7.0/com.ibm.db2.luw.apdv.cli.doc/doc/r0002329.html


Essentially in one single sql call, we can do
-- Add new rows
-- Update a set of rows where each row is identified by a bookmark
-- Delete a set of rows where each row is identified by a bookmark
-- Fetch a set of rows where each row is identified by a bookmark

This gives tremendous performance benefits as the network round trip is
avoided for each sql.

I am looking for an equivalent of this in PG and C language.


For embedded C, I believe you are looking for:

http://www.postgresql.org/docs/9.4/static/ecpg.html




Thanks.






--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing I'm offended is basically telling the world you can't
control your own emotions, so everyone else should do it for you.




Re: [GENERAL] Does PG support bulk operation in embedded C

2015-05-19 Thread Michael Paquier
On Wed, May 20, 2015 at 8:47 AM, Ravi Krishna srkris...@gmx.com wrote:
 Essentially in one single sql call, we can do
 -- Add new rows
 -- Update a set of rows where each row is identified by a bookmark
 -- Delete a set of rows where each row is identified by a bookmark
 -- Fetch a set of rows where each row is identified by a bookmark

 This gives tremendous performance benefits as the network round trip is
 avoided for each sql.

 I am looking for an equivalent of this in PG and C language.

What you are looking at could be accomplished with a user-defined function:
http://www.postgresql.org/docs/devel/static/xfunc.html
Perhaps you are looking for something in C, but it would be less
complex to do it with, for example, pl/pgsql or another language, and
call it from a C client with a correct set of arguments satisfying
your needs.
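
A minimal sketch of that idea in plain SQL (the table names here are made up;
the point is a single client call executing several statements server-side):

CREATE FUNCTION apply_changes(_a text, _b_id int, _c text) RETURNS void
LANGUAGE sql AS $$
    INSERT INTO table_a (val) VALUES ($1);
    UPDATE table_b SET val = $3 WHERE id = $2;
    INSERT INTO table_c (val) VALUES ($3);
$$;

SELECT apply_changes('foo', 42, 'bar');  -- one network round trip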
-- 
Michael




Re: [GENERAL] Monitoring Pg servers with Microsoft SCOM

2014-05-06 Thread Glen Eustace

On 7/05/2014, at 3:07 am, pgsql-general-ow...@postgresql.org wrote:

 I would just use check_postgres (perl) from your agent script...

Having downloaded and had a look at this script I would agree.  I'll let 
check_postgres do the heavy lifting and just try to build an MP that 
collects the output from the various actions.

Thanks.
 



Re: [GENERAL] Monitoring Pg servers with Microsoft SCOM

2014-05-05 Thread Magnus Hagander
On Mon, May 5, 2014 at 3:15 AM, Glen Eustace geust...@godzone.net.nz wrote:


 On 5/05/2014, at 12:26 pm, Adrian Klaver adrian.kla...@aklaver.com
 wrote:

  On 05/04/2014 04:17 PM, Glen Eustace wrote:
  I am in the process of deploying Microsoft System Centre Operations
  Manager and was hoping that somebody had either developed or knew of
  where I could get hold of a management pack for PostgreSQL.
 
  I am not sure whether there is an instrumentation interface into the DB
  so haven't yet looked at rolling our own.
 
  Any comments appreciated.
 
  Searches on this all seem to point back to this post:
 
 
 http://blogs.technet.com/b/kevinholman/archive/2012/03/19/opsmgr-how-to-monitor-non-microsoft-sql-databases-in-scom-an-example-using-postgre-sql.aspx

 Yes, I had seen that.  I was more interested in being able to instrument
 the Pg internals, connection counts, transaction rates, RAM usage etc.  In
 a similar sort of way to MS-SQL.

 Running transactions against an individual DB is still useful but I was
 hoping to do better.


As long as you can run arbitrary SQL, you can get all the information out.
Assuming it can do something with it rather than just run a plain query. If
so, I suggest you take a look at the check_postgres nagios plugin or the
munin plugins for some examples of which SQL to run to get the interesting
metrics back that you want. Some quick googling shows several examples of
how to monitor with custom SQL queries, but I don't know enough (or
anything) about SCOM to recommend any of them in particular.
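
For reference, the underlying SQL is mostly one-liners against the statistics
views (a sketch; for rates, sample the cumulative counters twice and diff):

SELECT count(*) FROM pg_stat_activity;                     -- current connections
SELECT datname, xact_commit, xact_rollback
FROM pg_stat_database;                                     -- cumulative transaction counters
SELECT datname, pg_size_pretty(pg_database_size(datname))
FROM pg_database;                                          -- per-database on-disk size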

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [GENERAL] Monitoring Pg servers with Microsoft SCOM

2014-05-05 Thread Glen Eustace

On 6/05/2014, at 1:30 am, Magnus Hagander mag...@hagander.net wrote:

 As long as you can run arbitrary SQL, you can get all the information out. 
 Assuming it can do something with it rather than just run a plain query. If 
 so, I suggest you take a look at the check_postgres nagios plugin or the 
 munin plugins for some examples of which SQL to run to get the interesting 
 metrics back that you want. Some quick googling shows several examples of how 
 to monitor with custom SQL queries, but I don't know enough (or anything) 
 about SCOM to recommend any of them in particular.
 

Thanks, that is the conclusion I had come to as well.  I have written a 
management pack for collecting other Linux data so may have a go at a 
PostgreSQL one if I get some time.



Re: [GENERAL] Monitoring Pg servers with Microsoft SCOM

2014-05-05 Thread John R Pierce

On 5/5/2014 12:43 PM, Glen Eustace wrote:


Thanks, that is the conclusion I had come to as well.  I have written 
a management pack for collecting other Linux data so may have a go at 
a PostgreSQL one if I get some time.


I would just use check_postgres (perl) from your agent script...



--
john r pierce  37N 122W
somewhere on the middle of the left coast





[GENERAL] Monitoring Pg servers with Microsoft SCOM

2014-05-04 Thread Glen Eustace
I am in the process of deploying Microsoft System Centre Operations Manager and 
was hoping that somebody had either developed or knew of where I could get hold 
of a management pack for PostgreSQL.

I am not sure whether there is an instrumentation interface into the DB so 
haven't yet looked at rolling our own.

Any comments appreciated.

Glen


-- 
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Glen and Rosanne Eustace,
GodZone Internet Services, a division of AGRE Enterprises Ltd.,
P.O. Box 8020, Palmerston North, New Zealand 4446
Ph: +64 6 357 8168, Fax: +64 6 357 8165, Mob: +64 27 542 4015

Specialising in providing low-cost professional Internet Services since 1997



Re: [GENERAL] Monitoring Pg servers with Microsoft SCOM

2014-05-04 Thread John R Pierce

On 5/4/2014 4:17 PM, Glen Eustace wrote:
I am in the process of deploying Microsoft System Centre Operations 
Manager and was hoping that somebody had either developed or knew of 
where I could get hold of a management pack for PostgreSQL.


I am not sure whether there is an instrumentation interface into the 
DB so haven't yet looked at rolling our own.


if you have perl (strawberryperl ?) on the postgres server, you could 
use the check_postgres.pl script meant for Nagios from the Bucardo 
folks; it will allow you to poll for 100s of status and performance 
metrics, even things like table bloat.




--
john r pierce  37N 122W
somewhere on the middle of the left coast





Re: [GENERAL] Monitoring Pg servers with Microsoft SCOM

2014-05-04 Thread Adrian Klaver

On 05/04/2014 04:17 PM, Glen Eustace wrote:

I am in the process of deploying Microsoft System Centre Operations
Manager and was hoping that somebody had either developed or knew of
where I could get hold of a management pack for PostgreSQL.

I am not sure whether there is an instrumentation interface into the DB
so haven't yet looked at rolling our own.

Any comments appreciated.


Searches on this all seem to point back to this post:

http://blogs.technet.com/b/kevinholman/archive/2012/03/19/opsmgr-how-to-monitor-non-microsoft-sql-databases-in-scom-an-example-using-postgre-sql.aspx



Glen


--


--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Monitoring Pg servers with Microsoft SCOM

2014-05-04 Thread Glen Eustace

On 5/05/2014, at 12:26 pm, Adrian Klaver adrian.kla...@aklaver.com wrote:

 On 05/04/2014 04:17 PM, Glen Eustace wrote:
 I am in the process of deploying Microsoft System Centre Operations
 Manager and was hoping that somebody had either developed or knew of
 where I could get hold of a management pack for PostgreSQL.
 
 I am not sure whether there is an instrumentation interface into the DB
 so haven't yet looked at rolling our own.
 
 Any comments appreciated.
 
 Searches on this all seem to point back to this post:
 
 http://blogs.technet.com/b/kevinholman/archive/2012/03/19/opsmgr-how-to-monitor-non-microsoft-sql-databases-in-scom-an-example-using-postgre-sql.aspx

Yes, I had seen that.  I was more interested in being able to instrument the Pg 
internals, connection counts, transaction rates, RAM usage etc.  In a similar 
sort of way to MS-SQL.

Running transactions against an individual DB is still useful but I was hoping 
to do better.

Glen





Re: [GENERAL] Monitoring Pg servers with Microsoft SCOM

2014-05-04 Thread Tatsuo Ishii
 I am in the process of deploying Microsoft System Centre Operations Manager 
 and was hoping that somebody had either developed or knew of where I could 
 get hold of a management pack for PostgreSQL.
 
 I am not sure whether there is an instrumentation interface into the DB so 
 haven't yet looked at rolling our own.
 
 Any comments appreciated.
 
 Glen

Have you looked into this?

http://pg-monz.github.io/pg_monz/index-en.html

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp




Re: [GENERAL] Monitoring Pg servers with Microsoft SCOM

2014-05-04 Thread Glen Eustace

On 5/05/2014, at 3:18 pm, Tatsuo Ishii is...@postgresql.org wrote:

 I am in the process of deploying Microsoft System Centre Operations Manager 
 and was hoping that somebody had either developed or knew of where I could 
 get hold of a management pack for PostgreSQL.
 
 I am not sure whether there is an instrumentation interface into the DB so 
 haven't yet looked at rolling our own.
 
 Any comments appreciated.
 
 Glen
 
 Have you looked into this?
 
 http://pg-monz.github.io/pg_monz/index-en.html

Unfortunately the goal is integration of services with SCOM as opposed to 
monitoring PostgreSQL so I don't get a choice of platform.

Glen.





[GENERAL] Largest PG database known to man!

2013-10-01 Thread Mark Jones
Hi all,

We are currently working with a customer who is looking at a database of
between 200-400 TB! They are after any confirmation of PG working at this
size or anywhere near it.
Anyone out there worked on anything like this size in PG please? If so, can
you let me know more details etc..

   Mark Jones
   Principal Sales Engineer Emea


   http://www.enterprisedb.com/
   
   Email: mark.jo...@enterprisedb.com
   Tel: 44 7711217186
   Skype: Mxjones121







Re: [GENERAL] Largest PG database known to man!

2013-10-01 Thread John R Pierce

On 10/1/2013 2:49 PM, Mark Jones wrote:
We are currently working with a customer who is looking at a database 
of between 200-400 TB! They are after any confirmation of PG working 
at this size or anywhere near it.



is that really 200-400TB of relational data, or is it 199-399TB of bulk 
data (blobs or whatever) interspersed with some relational metadata?


what all is the usage pattern of this data?   that determines the 
feasibility of something far more than just the raw size.





--
john r pierce  37N 122W
somewhere on the middle of the left coast





Re: [GENERAL] Largest PG database known to man!

2013-10-01 Thread Mark Jones
Thanks for your quick response John.

From the limited information, it is mostly relational.
As for usage patterns, I do not have that yet.
I was just after a general feel of what is out there size wise.

Regards

 

   Mark Jones
   Principal Sales Engineer Emea


   http://www.enterprisedb.com/
   
   Email: mark.jo...@enterprisedb.com
   Tel: 44 7711217186
   Skype: Mxjones121














On 01/10/2013 22:56, John R Pierce pie...@hogranch.com wrote:

On 10/1/2013 2:49 PM, Mark Jones wrote:
 We are currently working with a customer who is looking at a database
 of between 200-400 TB! They are after any confirmation of PG working
 at this size or anywhere near it.


is that really 200-400TB of relational data, or is it 199-399TB of bulk
data (blobs or whatever) interspersed with some relational metadata?

what all is the usage pattern of this data?   that determines the
feasibility of something far more than just the raw size.




-- 
john r pierce  37N 122W
somewhere on the middle of the left coast





Re: [GENERAL] Largest PG database known to man!

2013-10-01 Thread Jeff Ross

Maybe some of these folks can chime in?

http://cds.u-strasbg.fr/

Simbad (and I think VizieR) runs on PostgreSQL.  A friend of mine is a 
grad student in astronomy and he told me about them.


Jeff Ross
On 10/1/13 3:49 PM, Mark Jones wrote:

Hi all,

We are currently working with a customer who is looking at a database 
of between 200-400 TB! They are after any confirmation of PG working 
at this size or anywhere near it.
Anyone out there worked on anything like this size in PG please? If 
so, can you let me know more details etc..



 Mark Jones
   Principal Sales Engineer Emea

http://www.enterprisedb.com/

   Email: mark.jo...@enterprisedb.com mailto:mark.jo...@enterprisedb.com
   Tel: 44 7711217186
   Skype: Mxjones121






Re: [GENERAL] Largest PG database known to man!

2013-10-01 Thread John R Pierce

On 10/1/2013 3:00 PM, Mark Jones wrote:

From the limited information, it is mostly relational.


phew.   that's going to be a monster.  400TB on 600GB 15000rpm SAS 
drives in raid10 will require around 1400 drives.   at 25 disks per 2U 
drive tray, that's 2 6' racks of nothing but disks, and to maintain a 
reasonable fanout to minimize IO bottlenecks, would require on the order 
of 25 SAS raid cards.   or, a really big SAN with some serious IOPS.   
and naturally, you should have at least 2 of these for availability.


if we assume the tables average 1KB/record (which is a fairly large 
record size even including indexing), you're looking at 400 billion 
records.   if you can populate these at 5000 records/second, it would 
take 2.5 years of 24/7 operation to populate that.


this sort of big data system is probably more suitable for something 
like hadoop+mongo or whatever on a cloud of 1000 nodes, not a monolithic 
SQL relational database.



--
john r pierce  37N 122W
somewhere on the middle of the left coast





Re: [GENERAL] Largest PG database known to man!

2013-10-01 Thread Julian
On 02/10/13 07:49, Mark Jones wrote:
 Hi all,

 We are currently working with a customer who is looking at a database
 of between 200-400 TB! They are after any confirmation of PG working
 at this size or anywhere near it.
 Anyone out there worked on anything like this size in PG please? If
 so, can you let me know more details etc..  
 

Wow that's awesome - but you know the difference between 200TB and 400TB
is quite significant (100%)? Like a whole bunch of cash
significant...unless we are talking GB.
But is that it? This isn't really fair, is this a test?
Jules




Re: [GENERAL] Largest PG database known to man!

2013-10-01 Thread Stephen Frost
* John R Pierce (pie...@hogranch.com) wrote:
 if we assume the tables average 1KB/record (which is a fairly large
 record size even including indexing), you're looking at 400 billion
 records.   if you can populate these at 5000 records/second, it
 would take 2.5 years of 24/7 operation to populate that.

5000 1KB records per second is only 5MB/s or so, which is really quite
slow..  I can't imagine that they'd load all of this data by doing a
commit for each record and you could load a *huge* amount of data *very*
quickly, in parallel, by using either unlogged tables or wal_level =
minimal and creating the tables in the same transaction that's loading
them.
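
A sketch of that trick (hypothetical table and file path; with wal_level =
minimal and no WAL archiving, a COPY into a table created in the same
transaction can skip WAL; server-side COPY needs superuser):

BEGIN;
CREATE TABLE load_chunk (id bigint, payload text);
COPY load_chunk FROM '/data/chunk_0001.csv' WITH (FORMAT csv);
COMMIT;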

 this sort of big data system is probably more suitable for something
 like hadoop+mongo or whatever on a cloud of 1000 nodes, not a
 monolithic SQL relational database.

Or a federated PG database using FDWs..

Sadly, I've not personally worked with a data system on the 100+TB range
w/ PG (we do have a Hadoop environment along that scale) but I've built
systems as large as 25TB which, built correctly, work very well.  Still,
I don't think I'd recommend building a single-image PG database on that
scale but rather would shard it.

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [GENERAL] Largest PG database known to man!

2013-10-01 Thread John R Pierce

On 10/1/2013 6:53 PM, Stephen Frost wrote:

I don't think I'd recommend building a single-image PG database on that
scale but rather would shard it.


sharding only works well if your data has natural divisions and you're 
not doing complex joins/aggregates across those divisions.




--
john r pierce  37N 122W
somewhere on the middle of the left coast





Re: [GENERAL] Largest PG database known to man!

2013-10-01 Thread Chris Travers
On Tue, Oct 1, 2013 at 3:00 PM, Mark Jones mark.jo...@enterprisedb.com wrote:

 Thanks for your quick response John.

 From the limited information, it is mostly relational.
 As for usage patterns, I do not have that yet.
 I was just after a general feel of what is out there size wise.


Usage patterns are going to be critical here.  There is a huge difference
between a large amount of data being used in an OLTP workflow than a
DSS/OLAP workflow.  Additionally, I am concerned your indexes are going to
be very large.  Now, depending on your usage pattern, breaking things down
carefully regarding tablespace and partial indexes may be enough.  However,
keep in mind that no table can be larger than 32TB.  At any rate, no matter
what solution you use, I don't see a generally tuned database being what
you want (which means you are tuning for workflow).

Now, I think your big limits in OLTP are going to be max table size (32TB)
and index size.  These can all be managed (and managed carefully) but they
are limits.
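
(For what it's worth, the usual way to stay under the per-table ceiling is
inheritance-based partitioning -- a sketch with hypothetical names; with
constraint_exclusion = partition, queries filtering on event_time only touch
the matching children:)

CREATE TABLE events (event_time timestamptz, payload text);
CREATE TABLE events_2013_10 (
    CHECK (event_time >= '2013-10-01' AND event_time < '2013-11-01')
) INHERITS (events);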

For OLAP you have a totally different set of concerns, and since you are
talking about aggregating a lot of data, vanilla PostgreSQL is going to be
a pain to get working as it is.  On the other hand OLAP and large db mixed
workloads is where Postgres-XC might really shine.  The complexity costs
there will likely be worth it in removing limitations on disk I/O and lack
of intraquery parallelism.

200TB is a lot of data.  400TB is twice that.  Either way you are going to
have a really complex set of problems to tackle regardless of what solution
you choose.

I have heard of db sizes in the 30-100TB range on PostgreSQL even before
Postgres-XC.  I am not sure beyond that.

Best Wishes,
Chris Travers


 Regards


 
Mark Jones
Principal Sales Engineer Emea


http://www.enterprisedb.com/

Email: mark.jo...@enterprisedb.com
Tel: 44 7711217186
Skype: Mxjones121














 On 01/10/2013 22:56, John R Pierce pie...@hogranch.com wrote:

 On 10/1/2013 2:49 PM, Mark Jones wrote:
  We are currently working with a customer who is looking at a database
  of between 200-400 TB! They are after any confirmation of PG working
  at this size or anywhere near it.
 
 
 is that really 200-400TB of relational data, or is it 199-399TB of bulk
 data (blobs or whatever) interspersed with some relational metadata?
 
 what all is the usage pattern of this data?   that determines the
 feasibility of something far more than just the raw size.
 
 
 
 
 --
 john r pierce  37N 122W
 somewhere on the middle of the left coast
 
 
 








-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more.shtml


[GENERAL] Re: PG 9.3 complains about specified more than once ??? Those views worked in PG 9.1 + 9.2

2013-09-11 Thread David Johnston
Andreas-3-2 wrote
 Hi,
 
 *I ran into a major problem when I tried to import a backup from 9.1 
 into a 9.3 PG.*
 
 I just installed PG 9.3 on a new linux box.
 Then I wanted to import a plaintext dump of a DB that was created by 
 pg_dump of PG 9.1
 
 There are a lot of views that have joins to a subquery in the from-clause.
 
 something like
 
 SELECT ... some columns ...
 FROM
  maintable AS m
  JOIN someflag AS f ON m.flag_1_id = f.id
 LEFT JOIN
 (
  child_table AS   c
  JOIN   someotherflag  AS  f   ON   c.flag_2_id = f.id
 )   AS x  ON m.id = x.main_id
 
 This works with PG 9.1 and PG 9.2 but PG 9.3 complains:
 
 ERROR:  table name f specified more than once
 
 *Are there no separate namespaces for subqueries anymore in PG 9.3 ?*
 
 Do I have to change ALL those views in the old PG 9.1 server before I 
 can import the backup into 9.3 or is there another way to work around 
 this issue ?

Not running 9.3 yet so cannot test this myself.  Ignore the pg_dump and
view.  Does (should) the above query work if executed in psql?

David J.








[GENERAL] Re: PG 9.3 complains about specified more than once ??? Those views worked in PG 9.1 + 9.2

2013-09-11 Thread David Johnston
Adrian Klaver-3 wrote
 My guess you are seeing this:
 
 http://www.postgresql.org/docs/9.3/interactive/release-9-3.html
 
 Improve view/rule printing code to handle cases where referenced tables 
 are renamed, or columns are renamed, added, or dropped (Tom Lane)
 
 Table and column renamings can produce cases where, if we merely 
 substitute the new name into the original text of a rule or view, the 
 result is ambiguous. This change fixes the rule-dumping code to insert 
 manufactured table and column aliases when needed to preserve the 
 original semantics.
 
 
 You would be advised to use the 9.3 version of pg_dump to dump the 9.1 
 database.

Maybe; but the supplied query does not seem to be ambiguous and the dump
phase has already completed.  pg_restore simply issues a CREATE VIEW and
does not perform interpolation of the contents.  If the select statement is
indeed correctly represented then I could very well see creating one like
that by hand and inserting it as part of an external database schema
installation (i.e., not via pg_restore) and would expect it to work. 
According to this such a scenario should also fail with the same message.

David J.









Re: [GENERAL] Re: PG 9.3 complains about specified more than once ??? Those views worked in PG 9.1 + 9.2

2013-09-11 Thread Andreas

On 11.09.2013 21:02, David Johnston wrote:

Andreas-3-2 wrote

Hi,

*I ran into a major problem when I tried to import a backup from 9.1
into a 9.3 PG.*

I just installed PG 9.3 on a new linux box.
Then I wanted to import a plaintext dump of a DB that was created by
pg_dump of PG 9.1

There are a lot of views that have joins to a subquery in the from-clause.

something like

SELECT ... some columns ...
FROM
  maintable AS m
  JOIN someflag AS f ON m.flag_1_id = f.id
LEFT JOIN
(
  child_table AS   c
  JOIN   someotherflag  AS  f   ON   c.flag_2_id = f.id
)   AS x  ON m.id = x.main_id

This works with PG 9.1 and PG 9.2 but PG 9.3 complains:

ERROR:  table name f specified more than once

*Are there no separate namespaces for subqueries anymore in PG 9.3 ?*

Do I have to change ALL those views in the old PG 9.1 server before I
can import the backup into 9.3 or is there another way to work around
this issue ?

Not running 9.3 yet so cannot test this myself.  Ignore the pg_dump and
view.  Does (should) the above query work if executed in psql?

David J.



No, it doesn't work in psql of the PG 9.3 I got from EnterpriseDB.

I did check like this:
Open pgAdmin 1.18 of PG 9.3
Open connections to both servers ... to the remote 9.1 and the 9.3 here 
in my LAN.
Take the SQL definition of one of the problematic views out of the 9.1 
server as pgAdmin displays it.

Paste this into a sql editor window of the 9.3 server and execute it.
The creation of the view is rejected.
When I try to run the query within the view directly PG 9.3 balks too.

It doesn't accept the reused alias within the subquery.
This internal alias doesn't even reference the same table as the one 
outside the subquery.


It appears as if there is just one global namespace for the whole query 
that also covers the names within the subquery.


If this is the case then I can't switch to PG 9.3 at all, because I would 
have to re-evaluate every query my application throws at the DB, and not 
only those 70 views that get rejected during the initial import of the 
sql dump.








[GENERAL] Re: PG 9.3 complains about specified more than once ??? Those views worked in PG 9.1 + 9.2

2013-09-11 Thread David Johnston
Andreas-3-2 wrote
 No, it doesn't work in psql of PG 9.3. I got from EnterpriseDB

Can you please create a minimal self-contained query that exhibits this
behavior and file a bug report?

I quickly cobbled this together - works on 9.0.x

WITH
  tblA (a_id, a_val) AS ( VALUES (1::integer, 'one-a'::varchar) )
, tblB (b_id, b_val) AS ( VALUES (1::integer, 'one-b'::varchar) )
, tblC (c_id, c_val) AS ( VALUES (1::integer, 'one-c'::varchar) )
SELECT *
FROM (
tblA AS dup_tbl JOIN tblB ON (dup_tbl.a_id = tblB.b_id)
) AS refD
JOIN tblC AS dup_tbl ON (dup_tbl.c_id = refD.a_id)

If it fails on 9.3 it would do.

David J.









[GENERAL] Re: PG 9.3 complains about specified more than once ??? Those views worked in PG 9.1 + 9.2

2013-09-11 Thread David Johnston
Try these too, please:

WITH 
  tblA (a_id, a_val) AS ( VALUES (1::integer, 'one-1'::varchar) )
, tblB (b_id, b_val) AS ( VALUES (1::integer, 'one-b'::varchar) )
, tblC (c_id, c_val) AS ( VALUES (1::integer, 'one-c'::varchar) )
SELECT *
FROM (
tblA JOIN tblB ON (tblA.a_id = tblB.b_id)
) AS refD
JOIN tblA ON (tblA.a_id = refD.a_id)
;

WITH 
  tblA (a_id, a_val) AS ( VALUES (1::integer, 'one-1'::varchar) )
, tblB (b_id, b_val) AS ( VALUES (1::integer, 'one-b'::varchar) )
, tblC (c_id, c_val) AS ( VALUES (1::integer, 'one-c'::varchar) )
SELECT *
FROM (
tblA JOIN tblB ON (tblA.a_id = tblB.b_id)
) AS refD
JOIN tblA ON (tblA.a_id = tblB.a_id)
; --this one should fail trying to reference tblB


The reference to tblA is used inside refD and outside of it as well; but no
table aliases are used.

Thanks

David J.







Re: [GENERAL] Re: PG 9.3 complains about specified more than once ??? Those views worked in PG 9.1 + 9.2

2013-09-11 Thread Andreas

Hi David,
your 3 examples work as you expected. That is, 1 and 2 work and 3 throws an 
error.


I tried to figure out an example and found something peculiar.
The issue arises when there is another join in the subquery after the 
one with the reused table alias.

There is no error without this following join.

Look at this rather chunky sample.
If I join flag_2 before flag_1 it works.
It won't with flag_2 after flag_1.

The query works as soon as the reused alias joins last in the subquery.
If there are 2 reused aliases then the query won't work at all without 
renaming one alias.


The error shows with pgAdmin as well as psql (9.3).

So probably it's not a namespace issue but the query-parser screws up.   :(

I have never filed a bug report.  :}
Could you give a hand?


drop table if exists sub_tab;
drop table if exists main_tab;
drop table if exists flag_1;
drop table if exists flag_2;

create temporary table flag_1 ( flag_1_id integer primary key, flag_1_t text );
insert into flag_1 values ( 0, null ), ( 1, 'a' ), ( 2, 'b' ), ( 3, 'c' );

create temporary table flag_2 ( flag_2_id integer primary key, flag_2_t text );
insert into flag_2 values ( 0, null ), ( 1, 'a' ), ( 2, 'b' ), ( 3, 'c' );

create temporary table main_tab ( main_id integer primary key, main_t text,
    flag_1_id integer references flag_1 ( flag_1_id ) );
insert into main_tab values ( 1, 'Main 1', 1 ), ( 2, 'Main 2', 2 ), ( 3, 'Main 3', 3 );

create temporary table sub_tab ( sub_id integer primary key, sub_t text,
    main_id integer references main_tab ( main_id ),
    flag_1_id integer references flag_1 ( flag_1_id ),
    flag_2_id integer references flag_2 ( flag_2_id ) );
insert into sub_tab values ( 1, 'Sub 1', 1, 3, 1 ), ( 2, 'Sub 2', 2, 2, 2 ), ( 3, 'Sub 3', 3, 1, 3 );


select  m.main_id, m.main_t, f.flag_1_t,
        x.sub_id, x.sub_t, x.flag_1_t, x.flag_2_t
from    main_tab as m
join    flag_1 as f using ( flag_1_id )
left join
(
        sub_tab as s
        join flag_2 as f2 using ( flag_2_id )   -- this way it works
        join flag_1 as f  using ( flag_1_id )
--      join flag_2 as f2 using ( flag_2_id )   -- this way it doesn't work
) as x using ( main_id );





Re: [GENERAL] Re: PG 9.3 complains about specified more than once ??? Those views worked in PG 9.1 + 9.2

2013-09-11 Thread Andreas

Just another addition...

If I remove the aliases for the tables flag_1 and flag_2 the problem 
still comes up.


So one either has to mind the order of the joins or use unique aliases.

It's really an issue, as there are bound to be some queries in 
sql-functions or some that get assembled dynamically in my application.
Those won't get executed in the initial import of the db-dump but will 
come up at any time later, when the query gets used for the first time.


This is a no-go.   :(


regards
Andreas


On 12.09.2013 00:33, Andreas wrote:

Hi David,
your 3 examples work as you expected. That is, 1 and 2 work and 3 throws an 
error.


I tried to figure out an example and found something peculiar.
The issue arises when there is another join in the subquery after the 
one with the reused table alias.

There is no error without this following join.

Look at this rather chunky sample.
If I join flag_2 before flag_1 it works.
It won't with flag_2 after flag_1.

The query works as soon as the reused alias joins last in the subquery.
If there are 2 reused aliases then the query won't work at all without 
renaming one alias.


The error shows with pgAdmin as well as psql (9.3).

So probably it's not a namespace issue but the query-parser screws 
up.   :(


I have never filed a bug report.  :}
Could you give a hand?


drop table if exists sub_tab;
drop table if exists main_tab;
drop table if exists flag_1;
drop table if exists flag_2;

create temporary table flag_1 ( flag_1_id integer primary key, flag_1_t text );
insert into flag_1 values ( 0, null ), ( 1, 'a' ), ( 2, 'b' ), ( 3, 'c' );

create temporary table flag_2 ( flag_2_id integer primary key, flag_2_t text );
insert into flag_2 values ( 0, null ), ( 1, 'a' ), ( 2, 'b' ), ( 3, 'c' );

create temporary table main_tab ( main_id integer primary key, main_t text,
    flag_1_id integer references flag_1 ( flag_1_id ) );
insert into main_tab values ( 1, 'Main 1', 1 ), ( 2, 'Main 2', 2 ), ( 3, 'Main 3', 3 );

create temporary table sub_tab ( sub_id integer primary key, sub_t text,
    main_id integer references main_tab ( main_id ),
    flag_1_id integer references flag_1 ( flag_1_id ),
    flag_2_id integer references flag_2 ( flag_2_id ) );
insert into sub_tab values ( 1, 'Sub 1', 1, 3, 1 ), ( 2, 'Sub 2', 2, 2, 2 ), ( 3, 'Sub 3', 3, 1, 3 );


select  m.main_id, m.main_t, f.flag_1_t,
        x.sub_id, x.sub_t, x.flag_1_t, x.flag_2_t
from    main_tab as m
join    flag_1 as f using ( flag_1_id )
left join
(
        sub_tab as s
        join flag_2 as f2 using ( flag_2_id )   -- this way it works
        join flag_1 as f  using ( flag_1_id )
--      join flag_2 as f2 using ( flag_2_id )   -- this way it doesn't work
) as x using ( main_id );









[GENERAL] Re: PG 9.3 complains about specified more than once ??? Those views worked in PG 9.1 + 9.2

2013-09-11 Thread David Johnston
Andreas-3-2 wrote
 I never placed a bug-report.  :}
 Could you give a hand?

Sure.

PostgreSQL homepage (postgresql.org)
Report a Bug link on right-hand side of page.
Fill in the form.

SEND THIS


 drop table if exists sub_tab;
 drop table if exists main_tab;
 drop table if exists flag_1;
 drop table if exists flag_2;
 
 create temporary table flag_1 ( flag_1_id integer primary key, flag_1_t text );
 insert into flag_1 values ( 0, null ), ( 1, 'a' ), ( 2, 'b' ), ( 3, 'c' );
 
 create temporary table flag_2 ( flag_2_id integer primary key, flag_2_t text );
 insert into flag_2 values ( 0, null ), ( 1, 'a' ), ( 2, 'b' ), ( 3, 'c' );
 
 create temporary table main_tab ( main_id integer primary key, main_t text,
     flag_1_id integer references flag_1 ( flag_1_id ) );
 insert into main_tab values ( 1, 'Main 1', 1 ), ( 2, 'Main 2', 2 ), ( 3, 'Main 3', 3 );
 
 create temporary table sub_tab ( sub_id integer primary key, sub_t text,
     main_id integer references main_tab ( main_id ),
     flag_1_id integer references flag_1 ( flag_1_id ),
     flag_2_id integer references flag_2 ( flag_2_id ) );
 insert into sub_tab values ( 1, 'Sub 1', 1, 3, 1 ), ( 2, 'Sub 2', 2, 2, 2 ), ( 3, 'Sub 3', 3, 1, 3 );
 
 
 select  m.main_id, m.main_t, f.flag_1_t,
         x.sub_id, x.sub_t, x.flag_1_t, x.flag_2_t
 from    main_tab as m
 join    flag_1 as f using ( flag_1_id )
 left join
 (
         sub_tab as s
         join flag_2 as f2 using ( flag_2_id )   -- this way it works
         join flag_1 as f  using ( flag_1_id )
 --      join flag_2 as f2 using ( flag_2_id )   -- this way it doesn't work
 ) as x using ( main_id );









Re: [GENERAL] Re: PG 9.3 complains about specified more than once ??? Those views worked in PG 9.1 + 9.2

2013-09-11 Thread Andreas

I took your sample and modified it a bit.

ERROR:  table name tblb specified more than once
Switch tblB and tblC in the subquery and it works.

WITH
  tblA (a_id, a_val) AS ( VALUES (1::integer, 'one-1'::varchar) )
, tblB (b_id, b_val) AS ( VALUES (1::integer, 'one-b'::varchar) )
, tblC (c_id, c_val) AS ( VALUES (1::integer, 'one-c'::varchar) )

select  *
from    tblA
join    tblB on tblA.a_id = tblB.b_id
join    (
        tblB
        join tblC on tblC.c_id = tblB.b_id
        ) as x on tblA.a_id = x.c_id;
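
(For the record, giving the inner reference its own alias makes 9.3 accept it
-- a sketch of the same query with the WITH clause above unchanged; b2 is just
an arbitrary new alias:)

select  *
from    tblA
join    tblB on tblA.a_id = tblB.b_id
join    (
        tblB as b2
        join tblC on tblC.c_id = b2.b_id
        ) as x on tblA.a_id = x.c_id;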




On 12.09.2013 00:52, David Johnston wrote:

Andreas-3-2 wrote

I never placed a bug-report.  :}
Could you give a hand?

Sure.

PostgreSQL homepage (postgresql.org)
Report a Bug link on right-hand side of page.
Fill in the form.


SEND THIS



drop table if exists sub_tab;
drop table if exists main_tab;
drop table if exists flag_1;
drop table if exists flag_2;

create temporary table flag_1 ( flag_1_id integer primary key, flag_1_t text );
insert into flag_1 values ( 0, null ), ( 1, 'a' ), ( 2, 'b' ), ( 3, 'c' );

create temporary table flag_2 ( flag_2_id integer primary key, flag_2_t text );
insert into flag_2 values ( 0, null ), ( 1, 'a' ), ( 2, 'b' ), ( 3, 'c' );

create temporary table main_tab ( main_id integer primary key, main_t text,
    flag_1_id integer references flag_1 ( flag_1_id ) );
insert into main_tab values ( 1, 'Main 1', 1 ), ( 2, 'Main 2', 2 ), ( 3, 'Main 3', 3 );

create temporary table sub_tab ( sub_id integer primary key, sub_t text,
    main_id integer references main_tab ( main_id ),
    flag_1_id integer references flag_1 ( flag_1_id ),
    flag_2_id integer references flag_2 ( flag_2_id ) );
insert into sub_tab values ( 1, 'Sub 1', 1, 3, 1 ), ( 2, 'Sub 2', 2, 2, 2 ), ( 3, 'Sub 3', 3, 1, 3 );


select  m.main_id, m.main_t, f.flag_1_t,
        x.sub_id, x.sub_t, x.flag_1_t, x.flag_2_t
from    main_tab as m
join    flag_1 as f using ( flag_1_id )
left join
(
        sub_tab as s
        join flag_2 as f2 using ( flag_2_id )   -- this way it works
        join flag_1 as f  using ( flag_1_id )
--      join flag_2 as f2 using ( flag_2_id )   -- this way it doesn't work
) as x using ( main_id );













Re: [GENERAL] Re: PG 9.3 complains about specified more than once ??? Those views worked in PG 9.1 + 9.2

2013-09-11 Thread Chris Travers
Here is a minimal query that demonstrates the problem.  In 9.1 it works:

chris=# select * FROM current_user u join (current_user u cross join
current_user v) x on true;
   u   |   u   |   v
---+---+---
 chris | chris | chris
(1 row)

On 9.3 it fails:
ERROR:  table name u specified more than once

It may be a silly example but it works.




On Wed, Sep 11, 2013 at 5:07 PM, Andreas maps...@gmx.net wrote:

 I took your sample and modified it a bit.

 ERROR:  table name tblb specified more than once
 Switch tblB and tblC in the subquery and it works.


 WITH
   tblA (a_id, a_val) AS ( VALUES (1::integer, 'one-1'::varchar) )
 , tblB (b_id, b_val) AS ( VALUES (1::integer, 'one-b'::varchar) )
 , tblC (c_id, c_val) AS ( VALUES (1::integer, 'one-c'::varchar) )

 select  *
 from    tblA
 join    tblB on tblA.a_id = tblB.b_id
 join    (
         tblB
         join tblC on tblC.c_id = tblB.b_id
         ) as x on tblA.a_id = x.c_id;




 Am 12.09.2013 00:52, schrieb David Johnston:

  Andreas-3-2 wrote

 I never placed a bug-report.  :}
 Could you give a hand?

 Sure.

 PostgreSQL homepage (postgresql.org)
 Report a Bug link on right-hand side of page.
 Fill in the form.

  SEND THIS


 drop table if exists sub_tab;
 drop table if exists main_tab;
 drop table if exists flag_1;
 drop table if exists flag_2;
 
 create temporary table flag_1 ( flag_1_id integer primary key, flag_1_t text );
 insert into flag_1 values ( 0, null ), ( 1, 'a' ), ( 2, 'b' ), ( 3, 'c' );
 
 create temporary table flag_2 ( flag_2_id integer primary key, flag_2_t text );
 insert into flag_2 values ( 0, null ), ( 1, 'a' ), ( 2, 'b' ), ( 3, 'c' );
 
 create temporary table main_tab ( main_id integer primary key, main_t text,
     flag_1_id integer references flag_1 ( flag_1_id ) );
 insert into main_tab values ( 1, 'Main 1', 1 ), ( 2, 'Main 2', 2 ), ( 3, 'Main 3', 3 );
 
 create temporary table sub_tab ( sub_id integer primary key, sub_t text,
     main_id integer references main_tab ( main_id ),
     flag_1_id integer references flag_1 ( flag_1_id ),
     flag_2_id integer references flag_2 ( flag_2_id ) );
 insert into sub_tab values ( 1, 'Sub 1', 1, 3, 1 ), ( 2, 'Sub 2', 2, 2, 2 ), ( 3, 'Sub 3', 3, 1, 3 );
 
 
 select  m.main_id, m.main_t, f.flag_1_t,
         x.sub_id, x.sub_t, x.flag_1_t, x.flag_2_t
 from    main_tab as m
 join    flag_1 as f using ( flag_1_id )
 left join
 (
         sub_tab as s
         join flag_2 as f2 using ( flag_2_id )   -- this way it works
         join flag_1 as f  using ( flag_1_id )
 --      join flag_2 as f2 using ( flag_2_id )   -- this way it doesn't work
 ) as x using ( main_id );














-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more.shtml


Re: [GENERAL] Hosting PG on AWS in 2013

2013-04-08 Thread Vincent Veyron
On Sunday, April 7, 2013 at 11:19 -0700, Ben Chobot wrote:

 
 Overall I won't say that you can get amazing DB performance inside
 AWS, but you can certainly get reasonable performance with enough
 PIOPs volumes and memory, and while the on-demand cost is absurd
 compared to what you can build with bare metal, the reserved-instance
 cost is more reasonable (even if not cheap). 

Indeed.

Could someone explain to me the point of using an AWS instance in the
case of the OP, whose site is apparently very busy, versus renting a
bare metal server in a datacenter?

As an example, the site in my sig, which admittedly has much lower
requirements, since I only have a handful of users, has been hosted very
reliably on a rented server at online.net for the past two years on
their smallest server for 15 euros/month. Average load sits at 0.3%.

Using this feels like having a machine in your facility, only better
protected. I use several of those for redundancy.

Is there something I'm missing?


-- 
Regards, Vincent Veyron
http://marica.fr/site/demonstration
Software for managing legal disputes and insurance claims





Re: [GENERAL] Hosting PG on AWS in 2013

2013-04-08 Thread Ben Chobot
On Apr 8, 2013, at 2:15 AM, Vincent Veyron wrote:

 Could someone explain to me the point of using an AWS instance in the
 case of the OP, whose site is apparently very busy, versus renting a
 bare metal server in a datacenter?

Well, at least in my experience, you don't go to AWS because the databases 
there are awesome. You go to AWS because you have highly cyclical load 
patterns, can't predict your future capacity needs, tend to have very large 
batch jobs, etc. So then you have most of your servers living in AWS, and if 
you need low latencies to your database (which most people do) then it often 
makes sense to try to make your database live in AWS as well, instead of 
putting it a VPN hop away.

I wouldn't claim that AWS is the best place to run a database, but for running 
a service, of which a database is just one part, you could do a lot worse if 
you do it right.

Re: [GENERAL] Hosting PG on AWS in 2013

2013-04-08 Thread David Boreham

On 4/8/2013 3:15 AM, Vincent Veyron wrote:
Could someone explain to me the point of using an AWS instance in the 
case of the OP, whose site is apparently very busy, versus renting a 
bare metal server in a datacenter?


I am the OP, but I can't provide a complete answer, since personally 
(i.e. for all the servers that my income depends on) I do not use cloud 
hosting. However, some reasons I have heard mentioned include:


1. The rest of the site is already hosted in AWS so deploying a DB 
outside AWS adds to network costs, latency, and adds yet more moving 
parts and things to worry about.
2. These days many companies just do not have the capability to deploy 
bare metal. The people who understand how to do it are gone, the 
management feel like it is outside their comfort zone vs. Cloud, and so 
on. Conversely, there are plenty of people you can hire who are familiar 
with AWS, its deployment tools, management, monitoring and so on.
3. Peer pressure to use AWS (from finance people, VCs, industry pundits, 
etc).
4. AWS is the new IBM Mainframe (nobody ever got fired for buying 
one...). If 1/2 the Internet is on AWS, and something breaks, then 
well...you can point to the 1/2 the Internet that's also down..





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Hosting PG on AWS in 2013

2013-04-07 Thread Tomas Vondra
Hi David,

On 7.4.2013 03:51, David Boreham wrote:
 
 First I need to say that I'm asking this question on behalf of a
 friend, who asked me what I thought on the subject -- I host all the
 databases important to me and my livelihood, on physical machines I own
 outright. That said, I'm curious as to the current thinking on a)
 whether it is wise, and b) if so how to deploy, PG servers on AWS. As I
 recall, a couple years ago it just wasn't a wise plan because Amazon's
 I/O performance and reliability wasn't acceptable. Perhaps that's no
 longer the case..

That depends on what you mean by reliability and (poor) performance.

Amazon says the AFR for EBS is 0.1-0.5% (under some conditions, see
http://aws.amazon.com/ebs/). I have no reason not to trust them in this
case. Maybe it was much worse a few years ago, but I haven't been
working with AWS back then so I can't compare.

As for the performance, AFAIK the EBS volumes always had, and probably
will have, a 32 MB/s limit. Thanks to caching, built into the EBS, the
performance may seem much better initially (say twice as good), but
after a sustained write workload (say 15-30 minutes), you're back at the
32 MB/s per volume.
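
A quick way to sanity-check a volume's sustained write rate is a direct
dd run (a sketch; point it at a scratch mount, never at a live data
directory, and run it long enough to get past the initial cache burst):

    # oflag=direct bypasses the page cache so the real EBS rate shows up
    sudo dd if=/dev/zero of=/mnt/ebs-test/scratch bs=1M count=4096 oflag=direct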

The main problem with regular EBS is the variability - the numbers above
are for cases where everything operates fine. When something goes wrong,
you can get 1 MB/s for a period of time. And when you create 10 volumes,
each will have a bit different performance.

There are ways to handle this, though - the old way is to build a
RAID10 array on top of regular EBS volumes, the new way is to use EBS
with Provisioned IOPS (possibly with RAID0).
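
For illustration, a minimal sketch of the RAID approach, assuming four
EBS volumes already attached as /dev/xvdf../dev/xvdi (device names vary
by AMI and instance type):

    # RAID10 over plain EBS; use --level=0 instead over PIOPS volumes
    sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 \
        /dev/xvdf /dev/xvdg /dev/xvdh /dev/xvdi
    sudo mkfs.ext4 /dev/md0
    sudo mkdir -p /var/lib/pgsql/9.2/data
    sudo mount /dev/md0 /var/lib/pgsql/9.2/data
    sudo chown postgres:postgres /var/lib/pgsql/9.2/data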

 Just to set the scene -- the application is a very high traffic web
 service where any down time is very costly, processing a few hundred
 transactions/s.

What does "high traffic" mean for the database? Does that mean a lot of
reads or writes, or something else?

 Scanning through the latest list of AWS instance types, I can see two
 plausible approaches:
 
 1. High I/O Instances:  (regular AWS instance but with SSD local
 storage) + some form of replication. Replication would be needed because
 (as I understand it) any AWS instance can be "vanished" at any time due
 to Amazon screwing something up, maintenance on the host, etc (I believe
 the term of art is "ephemeral").

Yes. You'll get great I/O performance with these SSD-based instances
(easily ~1GB/s in), so you'll probably hit CPU bottlenecks instead.

You're right that to handle the instance / ephemeral failures, you'll
have to use some sort of replication - might be your own custom
application-specific solution, or some sort of built-in one (async/sync
streaming, log shipping, Slony, Londiste, whatever suits your needs ...).

If you really value availability, you should deploy the replica in a
different availability zone or data center.
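
As a rough sketch of the built-in streaming option (9.2-era syntax; the
host and user names here are made up), bootstrapping the standby in the
other zone looks like:

    # master needs wal_level = hot_standby and max_wal_senders > 0 in
    # postgresql.conf, plus a 'replication' entry in pg_hba.conf
    pg_basebackup -h master.internal -U replicator \
        -D /var/lib/pgsql/9.2/data -X stream -P
    cd /var/lib/pgsql/9.2/data
    echo "standby_mode = 'on'"                                       >  recovery.conf
    echo "primary_conninfo = 'host=master.internal user=replicator'" >> recovery.conf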

 2. EBS-Optimized Instances: these allow the use of EBS storage (SAN-type
 service) from regular AWS instances. Assuming that EBS is maintained to
 a high level of availability and performance (it doesn't, afaik, feature
 the "vanishing" property of AWS machines), this should in theory work out
 much the same as a traditional cluster of physical machines using a
 shared SAN, with the appropriate voodoo to fail over between nodes.

No, that's not what EBS Optimized instances are for. All AWS instance
types can use EBS, using a SHARED network link. That means that e.g.
HTTP or SSH traffic influences EBS performance, because they use the
same ethernet link. "EBS Optimized" means that the instance has a
network link dedicated to EBS traffic, with guaranteed throughput.

That is not going to fix the variability or EBS performance, though ...

What you're looking for is called Provisioned IOPS (PIOPS) which
guarantees the EBS volume performance, in terms of IOPS with 16kB block.
For example you may create an EBS volume with 2000 IOPS, which is
~32MB/s (with 16kB blocks). It's not much, but it's much easier to build
RAID0 array on top of those volumes. We're using this for some of our
databases and are very happy with it.

Obviously, you want to use PIOPS with EBS Optimized instances. I don't
see much point in using only one of them.
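
For reference, provisioning such a volume looks roughly like this with
the current AWS CLI (the 2013-era ec2-api-tools exposed equivalent
options); 2000 IOPS * 16kB works out to ~31 MB/s:

    aws ec2 create-volume --availability-zone us-east-1a \
        --size 200 --volume-type io1 --iops 2000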

But still, depends on the required I/O performance - you can't really
get above 125MB/s (m2.4xlarge) or 250MB/s (cc2.8xlarge).

And you can't really rely on this if you need quick failover to a
different availability zone or data center, because it's quite likely
the EBS is going to be hit by the issue (read the analysis of AWS outage
from April 2011: http://aws.amazon.com/message/65648/).

 Any thoughts, wisdom, and especially from-the-trenches experience, would
 be appreciated.

My recommendation is to plan for zone/datacenter failures first. That
means build a failover replica in a different zone/datacenter.

You might be able to handle isolated EBS failures e.g. using snapshots
and/or backups and similar recovery procedures, but it may require
unpredictable downtimes (e.g. while we don't see 

Re: [GENERAL] Hosting PG on AWS in 2013

2013-04-07 Thread David Boreham


I thanks very much for your detailed response. A few answers below inline:

On 4/7/2013 9:38 AM, Tomas Vondra wrote:
As for the performance, AFAIK the EBS volumes always had, and probably 
will have, a 32 MB/s limit. Thanks to caching, built into the EBS, the 
performance may seem much better initially (say twice as good), but 
after a sustained write workload (say 15-30 minutes), you're back at 
the 32 MB/s per volume. The main problem with regular EBS is the 
variability - the numbers above are for cases where everything 
operates fine. When something goes wrong, you can get 1 MB/s for a 
period of time. And when you create 10 volumes, each will have a bit 
different performance. There are ways to handle this, though - the 
old way is to build a RAID10 array on top of regular EBS volumes, 
the new way is to use EBS with Provisioned IOPS (possibly with RAID0).

Just to set the scene -- the application is a very high traffic web
service where any down time is very costly, processing a few hundred
transactions/s.

What does "high traffic" mean for the database? Does that mean a lot of
reads or writes, or something else?


I should have been more clear: the transactions/s figure above is all writes. 
The read load is effectively cached. My assessment is that the load is 
high enough that careful attention must be paid to I/O performance, but 
not so high that sharding/partitioning is required (yet).
Part of the site is already using RDS with PIOPS, and runs at a constant 
500 w/s, as viewed in CloudWatch. I don't know for sure how the PG-based 
elements relate to this on load -- they back different functional areas 
of the site.



Scanning through the latest list of AWS instance types, I can see two
plausible approaches:

1. High I/O Instances:  (regular AWS instance but with SSD local
storage) + some form of replication. Replication would be needed because
(as I understand it) any AWS instance can be vanished at any time due
to Amazon screwing something up, maintenance on the host, etc (I believe
the term of art is ephemeral).

Yes. You'll get great I/O performance with these SSD-based instances
(easily ~1GB/s in), so you'll probably hit CPU bottlenecks instead.

You're right that to handle the instance / ephemeral failures, you'll
have to use some sort of replication - might be your own custom
application-specific solution, or some sort of built-in one (async/sync
streaming, log shipping, Slony, Londiste, whatever suits your needs ...).

If you really value availability, you should deploy the replica in a
different availability zone or data center.


2. EBS-Optimized Instances: these allow the use of EBS storage (SAN-type
service) from regular AWS instances. Assuming that EBS is maintained to
a high level of availability and performance (it doesn't, afaik, feature
the "vanishing" property of AWS machines), this should in theory work out
much the same as a traditional cluster of physical machines using a
shared SAN, with the appropriate voodoo to fail over between nodes.

No, that's not what EBS Optimized instances are for. All AWS instance
types can use EBS, using a SHARED network link. That means that e.g.
HTTP or SSH traffic influences EBS performance, because they use the
same ethernet link. "EBS Optimized" means that the instance has a
network link dedicated to EBS traffic, with guaranteed throughput.


Ah, thanks for clarifying that. I knew about PIOPS, but hadn't realized 
that EBS Optimized meant a dedicated SAN cable. Makes sense...




That is not going to fix the variability or EBS performance, though ...

What you're looking for is called Provisioned IOPS (PIOPS) which
guarantees the EBS volume performance, in terms of IOPS with 16kB block.
For example you may create an EBS volume with 2000 IOPS, which is
~32MB/s (with 16kB blocks). It's not much, but it's much easier to build
RAID0 array on top of those volumes. We're using this for some of our
databases and are very happy with it.

Obviously, you want to use PIOPS with EBS Optimized instances. I don't
see much point in using only one of them.

But still, depends on the required I/O performance - you can't really
get above 125MB/s (m2.4xlarge) or 250MB/s (cc2.8xlarge).


I don't foresee this application being limited by bulk data throughput 
(MB/s). It will be limited more by writes/s due to the small 
transaction, OLTP-type workload.
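
If it helps, a write-heavy pgbench run is a cheap way to sanity-check
such a workload on a candidate instance (a sketch; the scale, client
counts, and the 'bench' database name are arbitrary):

    pgbench -i -s 100 bench           # populate a scale-100 dataset
    pgbench -c 16 -j 4 -T 300 bench   # 16 clients, 4 threads, 5 minutes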




And you can't really rely on this if you need quick failover to a
different availability zone or data center, because it's quite likely
the EBS is going to be hit by the issue (read the analysis of AWS outage
from April 2011: http://aws.amazon.com/message/65648/).


Right, assume that there can be cascading and correlated failures. I'm 
not sure I could ever convince myself that a cloud-hosted solution is 
really safe, because honestly I don't trust Amazon to design out their 
single failure points and thermal-runaway problems. However in the 
industry now there seems to be wide acceptance of the view that if 
you're shafted by Amazon, that's ok (you don't get fired). I'm looking 
at this project from that perspective. Netflix-reliable, something 
like that ;)

Re: [GENERAL] Hosting PG on AWS in 2013

2013-04-07 Thread Ben Chobot

On Apr 6, 2013, at 6:51 PM, David Boreham wrote:

 First I need to say that I'm asking this question on behalf of a friend, 
 who asked me what I thought on the subject -- I host all the databases 
 important to me and my livelihood, on physical machines I own outright. That 
 said, I'm curious as to the current thinking on a) whether it is wise, and b) 
 if so how to deploy, PG servers on AWS. As I recall, a couple years ago it 
 just wasn't a wise plan because Amazon's I/O performance and reliability 
 wasn't acceptable. Perhaps that's no longer the case..

Tomas gave you a pretty good run-down, but I should just emphasize that you need 
to view AWS instances as disposable, if only because that's how Amazon views 
them. You have multiple AZs in every region; use them for replication, 
because it's only a matter of time before your master DB goes offline (or the 
entire AZ it's in does). So script up your failover and have it ready to run, 
because you will need to do it. Also, copy data to another region and have a DR 
plan to fail over to it, because history shows AZs aren't always as independent 
as Amazon intends. 

Of course, these are things you should do regardless of if you're in AWS or 
not, but AWS makes it more necessary. (Which arguably pushes you to have a more 
resilient service.)

Also, if you go the route of CC-sized instances, you don't need to bother with 
EBS optimization, because the CC instances have 10Gb network links already. 

Also, if you go the ephemeral instance route, be aware that an instance 
stop/start (not reboot) means you lose your data. There are still too many 
times where we've found an instance needs to be restarted, so you need to be 
really, really ok with your failover if you want those local SSDs. I would say 
synchronous replication would be mandatory. 
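
For what it's worth, the built-in way to make that mandatory (9.1+) is a
pair of master-side settings; 'standby1' here is a hypothetical
application_name configured on the standby:

    # in the master's postgresql.conf
    synchronous_commit = on
    synchronous_standby_names = 'standby1'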


Overall I won't say that you can get amazing DB performance inside AWS, but you 
can certainly get reasonable performance with enough PIOPs volumes and memory, 
and while the on-demand cost is absurd compared to what you can build with bare 
metal, the reserved-instance cost is more reasonable (even if not cheap). 

Re: [GENERAL] Hosting PG on AWS in 2013

2013-04-07 Thread Tomas Vondra
On 7.4.2013 19:43, David Boreham wrote:
 
 I thanks very much for your detailed response. A few answers below inline:
 
 On 4/7/2013 9:38 AM, Tomas Vondra wrote:
 As for the performance, AFAIK the EBS volumes always had, and probably
 will have, a 32 MB/s limit. Thanks to caching, built into the EBS, the
 performance may seem much better initially (say twice as good), but
 after a sustained write workload (say 15-30 minutes), you're back at
 the 32 MB/s per volume. The main problem with regular EBS is the
 variability - the numbers above are for cases where everything
 operates fine. When something goes wrong, you can get 1 MB/s for a
 period of time. And when you create 10 volumes, each will have a bit
 different performance. There are ways to handle this, though - the
 old way is to build a RAID10 array on top of regular EBS volumes,
 the new way is to use EBS with Provisioned IOPS (possibly with RAID0).
 Just to set the scene -- the application is a very high traffic web
 service where any down time is very costly, processing a few hundred
 transactions/s.
 What does "high traffic" mean for the database? Does that mean a lot of
 reads or writes, or something else?
 
 I should have been more clear: the transactions/s figure above is all writes.
 The read load is effectively cached. My assessment is that the load is
 high enough that careful attention must be paid to I/O performance, but
 not so high that sharding/partitioning is required (yet).
 Part of the site is already using RDS with PIOPS, and runs at a constant
 500 w/s, as viewed in CloudWatch. I don't know for sure how the PG-based
 elements relate to this on load -- they back different functional areas
 of the site.

That's 500 * 16kB of writes, i.e. ~8MB/s. Not a big deal, IMHO,
especially if only part of this are writes from PostgreSQL.

 But still, depends on the required I/O performance - you can't really
 get above 125MB/s (m2.4xlarge) or 250MB/s (cc2.8xlarge).
 
 I don't foresee this application being limited by bulk data throughput
 (MB/s). It will be limited more by writes/s due to the small
 transaction, OLTP-type workload.

There's not much difference between random and sequential I/O on EBS.
You may probably get a bit better sequential performance thanks to
coalescing smaller requests (the PIOPS work with 16kB blocks, while
PostgreSQL uses 8kB), but we don't see that in practice.

And the writes to the WAL are sequential anyway.

 And you can't really rely on this if you need quick failover to a
 different availability zone or data center, because it's quite likely
 the EBS is going to be hit by the issue (read the analysis of AWS outage
 from April 2011: http://aws.amazon.com/message/65648/).
 
 Right, assume that there can be cascading and correlated failures. I'm
 not sure I could ever convince myself that a cloud-hosted solution is
 really safe, because honestly I don't trust Amazon to design out their
 single failure points and thermal-runaway problems. However in the
 industry now there seems to be wide acceptance of the view that if
 you're shafted by Amazon, that's ok (you don't get fired). I'm looking
 at this project from that perspective. Netflix-reliable, something
 like that ;)

Well, even if you could prevent all those failures, there's still a
possibility of a human error (as in 2011) or Godzilla eating the data
center (and it's not going to eat a single availability zone).

I believe Amazon is working hard on this and I trust their engineers,
but this simply is not a matter of trust. Mistakes and unexpected
failures do happen all the time. Anyone who believes that moving to
Amazon somehow magically makes them disappear is naive.

The only good thing is that when such crash happens, half of the
internet goes down so no one really notices the smaller sites. If you
can't watch funny cat pictures on reddit, it's all futile anyway.

Tomas



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Hosting PG on AWS in 2013

2013-04-06 Thread David Boreham


First I need to say that I'm asking this question on behalf of a 
friend, who asked me what I thought on the subject -- I host all the 
databases important to me and my livelihood, on physical machines I own 
outright. That said, I'm curious as to the current thinking on a) 
whether it is wise, and b) if so how to deploy, PG servers on AWS. As I 
recall, a couple years ago it just wasn't a wise plan because Amazon's 
I/O performance and reliability wasn't acceptable. Perhaps that's no 
longer the case..


Just to set the scene -- the application is a very high traffic web 
service where any down time is very costly, processing a few hundred 
transactions/s.


Scanning through the latest list of AWS instance types, I can see two 
plausible approaches:


1. High I/O Instances:  (regular AWS instance but with SSD local 
storage) + some form of replication. Replication would be needed because 
(as I understand it) any AWS instance can be "vanished" at any time due 
to Amazon screwing something up, maintenance on the host, etc (I believe 
the term of art is "ephemeral").


2. EBS-Optimized Instances: these allow the use of EBS storage (SAN-type 
service) from regular AWS instances. Assuming that EBS is maintained to 
a high level of availability and performance (it doesn't, afaik, feature 
the "vanishing" property of AWS machines), this should in theory work out 
much the same as a traditional cluster of physical machines using a 
shared SAN, with the appropriate voodoo to fail over between nodes.


Any thoughts, wisdom, and especially from-the-trenches experience, would 
be appreciated.


In the Googlesphere I found this interesting presentation : 
http://www.pgcon.org/2012/schedule/attachments/256_pg-aws.pdf which 
appears to support option #2 with s/w (obviously) RAID on the PG hosts, 
but with replication rather than SAN cluster-style failover, or perhaps 
in addition to.


Note that I'm not looking for recommendations on PG hosting providers 
(in fact my friend is looking to transition off one of them, to bare-AWS 
machines, for a variety of reasons).


Thanks.





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Tcl & PG on Win 7 64 bit - is it working for anyone?

2013-02-12 Thread Carlo Stonebanks
I have found the only certain way to get PG and TCL to talk to each other
with Win 7 x64 (and perhaps all other x64 environments) is to go 32-bit all
the way.

32 bit ActiveState Tcl 8.5.13, 
PG 32 bit (9.1) client interface - libpq.dll
Win32 build Pgtcl driver from the sourceforge pgtclng project v2.0.0 from
https://sourceforge.net/projects/pgtclng/files/pgtclng/

If anyone manages to get a driver that works with Tcl & the PG 64-bit
client, I would love to hear about it.
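
For anyone else debugging this, the 'invalid argument' failure is
typically a 32/64-bit mismatch somewhere in the chain; one quick check
from an MSYS/MinGW shell (the paths below are only examples):

    file /c/Tcl/bin/tclsh85.exe   # PE32 = 32-bit, PE32+ = 64-bit
    file /c/pgtcl/libpgtcl.dll
    file "/c/Program Files (x86)/PostgreSQL/9.1/bin/libpq.dll"
    # all three must report the same architecture or 'load' will fail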

-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Adrian Klaver
Sent: February 11, 2013 6:09 PM
To: Carlo Stonebanks
Cc: 'Postgres General'
Subject: Re: [GENERAL] Tcl & PG on Win 7 64 bit - is it working for anyone?

On 02/11/2013 08:18 AM, Carlo Stonebanks wrote:
 I had seen that thread, and that's how I came in contact with L J 
 Bayuk (the person who owns the pgtclng sourceforge project and who 
 answered the
 questions) and followed his advice, which is how I came to create the 
 64 bit build. I am still experiencing problems.

 It's possible that I have missed something, but I have been scouring 
 the internet for a solution before I posted this. In the meantime, I 
 just need proof that SOMEONE has gotten Tcl to talk to PG in Win 7 x64 
 - then maybe I can work towards a solution.


Well we have moved past my Windows knowledge. Good luck.


--
Adrian Klaver
adrian.kla...@gmail.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Tcl & PG on Win 7 64 bit - is it working for anyone?

2013-02-11 Thread Carlo Stonebanks
I had seen that thread, and that's how I came in contact with L J Bayuk (the
person who owns the pgtclng sourceforge project and who answered the
questions) and followed his advice, which is how I came to create the 64 bit
build. I am still experiencing problems.

It's possible that I have missed something, but I have been scouring the
internet for a solution before I posted this. In the meantime, I just need
proof that SOMEONE has gotten Tcl to talk to PG in Win 7 x64 - then maybe I
can work towards a solution.

-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@gmail.com] 
Sent: February 10, 2013 10:13 AM
To: Carlo Stonebanks
Cc: 'Postgres General'
Subject: Re: [GENERAL] Tcl & PG on Win 7 64 bit - is it working for anyone?

On 02/09/2013 09:39 PM, Carlo Stonebanks wrote:
 I am actually in the same folder as the libpgtcl.dll, and that 
 particular failure would raise a different error in any case:

 'couldn't load library libpgtc: this library or a dependent library 
 could not be found in library path'

I did find this thread:

http://pgtclng.sourceforge.net/thread1.html

I do not see a definitive answer in it, but there are some troubleshooting
tips.






--
Adrian Klaver
adrian.kla...@gmail.com



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Tcl & PG on Win 7 64 bit - is it working for anyone?

2013-02-11 Thread Adrian Klaver

On 02/11/2013 08:18 AM, Carlo Stonebanks wrote:

I had seen that thread, and that's how I came in contact with L J Bayuk (the
person who owns the pgtclng sourceforge project and who answered the
questions) and followed his advice, which is how I came to create the 64 bit
build. I am still experiencing problems.

It's possible that I have missed something, but I have been scouring the
internet for a solution before I posted this. In the meantime, I just need
proof that SOMEONE has gotten Tcl to talk to PG in Win 7 x64 - then maybe I
can work towards a solution.



Well we have moved past my Windows knowledge. Good luck.


--
Adrian Klaver
adrian.kla...@gmail.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Tcl & PG on Win 7 64 bit - is it working for anyone?

2013-02-10 Thread Adrian Klaver

On 02/09/2013 09:39 PM, Carlo Stonebanks wrote:

I am actually in the same folder as the libpgtcl.dll, and that particular
failure would raise a different error in any case:

'couldn't load library libpgtc: this library or a dependent library could
not be found in library path'


I did find this thread:

http://pgtclng.sourceforge.net/thread1.html

I do not see a definitive answer in it, but there are some 
troubleshooting tips.









--
Adrian Klaver
adrian.kla...@gmail.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Tcl & PG on Win 7 64 bit - is it working for anyone?

2013-02-09 Thread Carlo Stonebanks
I have been trying to get the Tcl package for client applications PgTcl to
work for some time now. I have tried the SourceForge pgtclng 2.0 project, I
have recompiled it with MinGW-64, I have installed PG 9.1.8 64 bit and made
sure PATH points to its lib folder so it can find libpq.dll. No matter what
I try, the Tcl command 'load libpgtcl' returns: 'couldn't load library
libpgtcl: invalid argument'

 

The "Pure Tcl interface" pgintcl package works on Win 7 64, but is very slow
to load so I imagine slow to run as well.

 

If anyone is running Tcl client apps to access PostgreSQL on Win 7 64,
I would love to hear about it.

 

Thanks,

 

Carlo



Re: [GENERAL] Tcl & PG on Win 7 64 bit - is it working for anyone?

2013-02-09 Thread Adrian Klaver

On 02/09/2013 09:52 AM, Carlo Stonebanks wrote:

I have been trying to get the Tcl package for client applications PgTcl
to work for some time now. I have tried the SourceForge pgtclng 2.0
project, I have recompiled it with MinGW-64, I have installed PG 9.1.8
64 bit and made sure PATH points to its lib folder so it can find
libpq.dll. No matter what I try, the Tcl command ‘load libpgtcl’
returns: ‘couldn't load library libpgtcl: invalid argument’


It would seem you need to point the PATH at libpgtcl(.dll) also.
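
For example, from an MSYS shell (the directories here are hypothetical;
substitute the real install locations):

    export PATH="/c/pgtcl:/c/Program Files/PostgreSQL/9.1/bin:$PATH"
    echo 'load libpgtcl; puts [info loaded]' | tclsh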



The “Pure Tcl interface” pgintcl package works on Win 7 64, but is very
slow to load so I imagine slow to run as well.

If anyone is running Tcl client apps to access PostgreSQL on Win 7 64,
I would love to hear about it.

Thanks,

Carlo




--
Adrian Klaver
adrian.kla...@gmail.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Tcl & PG on Win 7 64 bit - is it working for anyone?

2013-02-09 Thread Carlo Stonebanks
I am actually in the same folder as the libpgtcl.dll, and that particular
failure would raise a different error in any case:

'couldn't load library libpgtc: this library or a dependent library could
not be found in library path'


-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@gmail.com] 
Sent: February 9, 2013 1:01 PM
To: Carlo Stonebanks
Cc: 'Postgres General'
Subject: Re: [GENERAL] Tcl & PG on Win 7 64 bit - is it working for anyone?

On 02/09/2013 09:52 AM, Carlo Stonebanks wrote:
 I have been trying to get the Tcl package for client applications 
 PgTcl to work for some time now. I have tried the SourceForge pgtclng 
 2.0 project, I have recompiled it with MinGW-64, I have installed PG 
 9.1.8
 64 bit and made sure PATH points to its lib folder so it can find 
 libpq.dll. No matter what I try, the Tcl command 'load libpgtcl'
 returns: 'couldn't load library libpgtcl: invalid argument'

It would seem you need to point the PATH at libpgtcl(.dll) also.


 The "Pure Tcl interface" pgintcl package works on Win 7 64, but is 
 very slow to load so I imagine slow to run as well.

 If anyone is running Tcl client apps to access PostgreSQL on Win 7 64,
 I would love to hear about it.

 Thanks,

 Carlo



--
Adrian Klaver
adrian.kla...@gmail.com



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Customising pg directories

2012-06-28 Thread Ben Carbery
I am building a new server with a separate partition for data and
xlogs. What is the correct way to do this?

Can I create a symlink from /var/lib/pgsql/9.1/data -> /pg_data (and
then a symlink in /pg_data for xlogs -> /pg_xlog)?
Or do I have to modify $PGDATA in the init script?

This is all after installing packages but prior to 'service
postgresql-91 initdb'

cheers

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Customising pg directories

2012-06-28 Thread Raghavendra
On Thu, Jun 28, 2012 at 1:20 PM, Ben Carbery ben.carb...@gmail.com wrote:

 I am building a new server with a separate partition for data and
 xlogs. What is the correct way to do this?

 Can I create a symlink from /var/lib/pgsql/9.1/data -> /pg_data (and
 then a symlink in /pg_data for xlogs -> /pg_xlog)?
 Or do I have to modify $PGDATA in the init script?

 This is all after installing packages but prior to 'service
 postgresql-91 initdb'

 cheers


initdb --pgdata <data directory> --xlogdir <pg_xlog location> is the
command which suits your requirement.

This command will take care of creating the symlinks as well.
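
A fuller sketch, assuming the two partitions are already mounted at
/pg_data and /pg_xlog and owned by postgres (the initdb path varies by
packaging; the subdirectories keep lost+found on the mount points from
tripping initdb's empty-directory check):

    su - postgres -c \
      "/usr/pgsql-9.1/bin/initdb --pgdata=/pg_data/9.1 --xlogdir=/pg_xlog/9.1"
    # if you then use the init script, point its PGDATA at /pg_data/9.1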

---
Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/


  1   2   3   4   5   >