Re: [GENERAL] Replication between 64/32bit systems?

2011-09-20 Thread Scott Marlowe
On Tue, Sep 20, 2011 at 2:49 PM, Chris Ernst  wrote:
> Hi Hannes,
>
> You can't use streaming replication, but you can use slony to replicate
> between different architectures.

But that's not all!  You can also use slony to migrate from one major
version to another!  This is one of the things it was built for.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] COLUMNAR postgreSQL ?

2011-09-20 Thread Ondrej Ivanič
Hi,

2011/9/21 Tomas Vondra :
>> Columnar store is good if:
>> - you are selecting less than 60% of the total row size (our table has
>> 400 cols and a typical query needs 5-10 cols)
>> - aggregates: count(*), avg(), ...
>
> Where did those numbers come from? What columnar database are you using?
> What options were used (e.g. compression)?

Aster's nCluster and Greenplum, with no compression and with maximum
compression (there was no difference between compression levels 5 and 9,
but a huge difference between levels 0 and 9), and partitioned.

>> In some cases a columnar store is able to beat Postgres + a high-IOPS
>> (250k+) SSD card
>
> What do you mean by "in some cases"? If that means DWH/DSS workloads,
> then it's apples to oranges, I guess.

> SSDs are great for OLTP workloads (with a lot of random I/O). With DWH/DSS
> workloads, the performance gain is much smaller (not worth the money in
> some cases).

Yes, our DB is a hybrid: we need an OLAP solution with OLTP performance.
The schema is a very simple star schema, and it is multitenant. The
"random IO" to "seq IO" ratio is 80% : 20%, but most of the queries are
simple aggregates and SELECT queries (drill-downs, dicing, slicing,
summaries, and queries generated by machine-learning algorithms). Users
get anxious if they have to wait more than 30 seconds.

> With this kind of workload, IOPS are not that important; sequential
> read throughput is. And SSDs are not significantly faster at sequential
> I/O - you can usually achieve the same sequential performance with
> spinners for less money.

Yes, you are right:
seq IO: FusionIO is 3-5 times faster than our Hitachi SAN. The SAN is
5-10 times faster than a local 15k SAS drive. Random IO is a completely
different story.

-- 
Ondrej Ivanic
(ondrej.iva...@gmail.com)



Re: [GENERAL] COLUMNAR postgreSQL ?

2011-09-20 Thread Tomas Vondra
On 21 September 2011, 0:13, Ondrej Ivanič wrote:
> Hi,
>
> On 20 September 2011 18:16, Simon Riggs  wrote:
>> It would be useful to get some balanced viewpoints on this. I see you
>> have Alterian experience, so if you are using both it could be
>> valuable info. I've never heard anyone describe the downsides of
>> columnar datastores, presumably there are some?
>
> Inserts are slower. I haven't done a proper benchmark because there is
> no need for thousands of inserts per second in our database.
>
>> My understanding is that columnar will work well for queries like this
>>
>> SELECT count(*)
>> FROM table
>> WHERE col1 AND col2 AND col3
>>
>> but less well when we include the columns in the SELECT clause.
>
> Columnar store is good if:
> - you are selecting less than 60% of the total row size (our table has
> 400 cols and a typical query needs 5-10 cols)
> - aggregates: count(*), avg(), ...

Where did those numbers come from? What columnar database are you using?
What options were used (e.g. compression)?

> In some cases a columnar store is able to beat Postgres + a high-IOPS
> (250k+) SSD card

What do you mean by "in some cases"? If that means DWH/DSS workloads,
then it's apples to oranges, I guess.

SSDs are great for OLTP workloads (with a lot of random I/O). With DWH/DSS
workloads, the performance gain is much smaller (not worth the money in
some cases).

With this kind of workload, IOPS are not that important; sequential
read throughput is. And SSDs are not significantly faster at sequential
I/O - you can usually achieve the same sequential performance with
spinners for less money.

So I wouldn't be surprised if a columnar store on a spinner beats
PostgreSQL on a SSD, when running a DWH/DSS workload. Actually I'd expect
that.

This is not a rant against columnar stores - I find them very interesting.


Tomas




Re: [GENERAL] Column Privileges: NULL instead of permission denied

2011-09-20 Thread Matthew Hawn
> From: Stephen Frost [mailto:sfr...@snowman.net]
> 
> * Matthew Hawn (matth...@donaanacounty.org) wrote:
> > I have a table with privileged data that is restricted using column
> > level permissions.  I would like to have a single query that returns
> > data from the table.  If the user has permission, it should return
> > the data, but return NULL if the user does not have permission.  I do
> > not want to create separate queries in my application for different
> > users.
> 
> I think you'll have to build the query in the application to either
> have the NULL or not have it.  If you try to reference it in the
> query, PG is going to give you that permission denied error and I
> don't think there's an easy way to change that (and I'm not sure that
> we'd want to...).
> 
>   Thanks,
> 
>   Stephen

I definitely agree the default behavior should be to generate a
permission error.  However, to build the query in the application, I
would have to:
  * query the database for all column permissions, and
  * dynamically construct a SQL statement to pull the relevant data.

My application has fairly fine-grained control over columns, so this
could be almost any column of any table referenced. In addition,
dynamically constructing a query is expensive and error-prone, and it
prevents prepared statements and query caching.

For a web application, all this becomes very expensive. 

By default this should be an error, but it would be nice to be able to
explicitly state that a column should return NULL on permission denied.
Something like:

SELECT name, NULLIFDENIED(ssn) FROM people;

Or a special keyword, or a special table function that converts denied
fields to NULL:

SELECT name, ssn FROM NULLIFDENIED(people);
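In the meantime, one possible workaround (a hedged sketch only; the
view, table, and role names here are hypothetical) is a view that
substitutes NULL per user via has_column_privilege(). A view runs with
the privileges of its owner, so the owner can read the restricted
column while each caller only sees it if they hold the column grant:

CREATE VIEW people_masked AS
  SELECT name,
         CASE WHEN has_column_privilege(current_user,
                                        'people', 'ssn', 'SELECT')
              THEN ssn
              ELSE NULL
         END AS ssn
  FROM people;

-- grant access to the view, not the base table:
GRANT SELECT ON people_masked TO app_role;

This keeps a single static query in the application, at the cost of
maintaining a view per masked table.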







Re: [GENERAL] Is libpq version indifferent? I.E. can 8.3.x compiled binaries run without issue against a 9.1 backend, or are clients required to be re-compiled against the 9.1 libpq

2011-09-20 Thread Merlin Moncure
On Tue, Sep 20, 2011 at 7:37 AM, Reid Thompson  wrote:
> I believe that there should be no issue, but am asking to be sure.

Yeah -- as long as you stick with a libpq that speaks the v3 protocol
(7.4+) you should be OK.  All of the stuff that can change (binary wire
format, integer/float datetimes) is abstracted out of libpq itself and
is up to the application to handle.  Note that some features have been
added to libpq over time, but those are checked at link time with your
application.

merlin



Re: [GENERAL] COLUMNAR postgreSQL ?

2011-09-20 Thread Ondrej Ivanič
Hi,

On 20 September 2011 18:16, Simon Riggs  wrote:
> It would be useful to get some balanced viewpoints on this. I see you
> have Alterian experience, so if you are using both it could be
> valuable info. I've never heard anyone describe the downsides of
> columnar datastores, presumably there are some?

Inserts are slower. I haven't done a proper benchmark because there is
no need for thousands of inserts per second in our database.

> My understanding is that columnar will work well for queries like this
>
> SELECT count(*)
> FROM table
> WHERE col1 AND col2 AND col3
>
> but less well when we include the columns in the SELECT clause.

Columnar store is good if:
- you are selecting less than 60% of the total row size (our table has
400 cols and a typical query needs 5-10 cols)
- aggregates: count(*), avg(), ...

In some cases a columnar store is able to beat Postgres + a high-IOPS
(250k+) SSD card

> Would you be able to give some viewpoints and measurements on that?

Check this: 
http://www.greenplum.com/community/forums/showthread.php?499-enable_mergejoin-and-random_page_cost&p=1553#post1553



-- 
Ondrej Ivanic
(ondrej.iva...@gmail.com)



Re: [GENERAL] Replication between 64/32bit systems?

2011-09-20 Thread Chris Ernst
Hi Hannes,

You can't use streaming replication, but you can use slony to replicate
between different architectures.

- Chris

On 09/20/2011 02:26 PM, Hannes Erven wrote:
> Hi folks,
>
>
> I'm planning to set up streaming replication from one master to one
> slave. I've read at
> http://www.postgresql.org/docs/9.1/static/warm-standby.html that the
> "hardware architecture" of both systems must be the same.
>
> Sure enough, what I'd really like to do is replicate from a Windows (or
> Linux) 64bit master to a Linux 32bit slave -- that's what I currently
> have easily available.
>
> I tried just following the instructions and copied the data directory to
> the slaves, but postgres does not start:
>  FATAL:  falsche Prüfsumme in Kontrolldatei
> (FATAL: wrong checksum in control file)
>
>
> So I'd like to ask if there is anything I could do by e.g. changing
> compile-time options at the slave to get things going?
>
>
> Thank you for any advice!,
> best regards
>
>   -hannes
>




[GENERAL] pg-9.1 for fedora 15 available?

2011-09-20 Thread Stuart McGraw
I tried to use yum to install postgresql-9.1 on my Fedora 15 box 
by installing the yum conf file:
  rpm -ivh http://yum.pgrpms.org/reporpms/9.1/pgdg-fedora-9.1-3.noarch.rpm

That was fine but when I then try to install postgresql, I get:
  http://yum.postgresql.org/9.1/fedora/fedora-15-i386/repodata/repomd.xml: 
[Errno 14] HTTP Error 404 - Not Found

And indeed, in http://yum.postgresql.org/9.1/fedora/, the only f15
directory I see is fedora-15-x86_64/.

Am I doing something wrong, or is f15 32-bit not supported, or is 
something borked on the site, or...?



Re: [GENERAL] Replication between 64/32bit systems?

2011-09-20 Thread Guillaume Lelarge
On Tue, 2011-09-20 at 22:37 +0200, Guillaume Lelarge wrote:
> On Tue, 2011-09-20 at 22:26 +0200, Hannes Erven wrote:
> > [...]
> > I'm planning to set up streaming replication from one master to one
> > slave. I've read at
> > http://www.postgresql.org/docs/9.1/static/warm-standby.html that the
> > "hardware architecture" of both systems must be the same.
> > 
> > Sure enough, what I'd really like to do is replicate from a Windows (or
> > Linux) 64bit master to a Linux 32bit slave -- that's what I currently
> > have easily available.
> > 
> > I tried just following the instructions and copied the data directory to
> > the slaves, but postgres does not start:
> >  FATAL:  falsche Prüfsumme in Kontrolldatei
> > (FATAL: wrong checksum in control file)
> > 
> > 
> > So I'd like to ask if there is anything I could do by e.g. changing
> > compile-time options at the slave to get things going?
> > 
> 
> There's nothing you can do. It won't work between 32-bit and 64-bit
> machines.
> 

Just as I hit the Send button, I wondered whether it would be possible
if he installed a 32-bit PostgreSQL on his 64-bit Windows. It might work
on the same OS, but with different OSes there is less chance, I guess.


-- 
Guillaume
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com




Re: [GENERAL] Replication between 64/32bit systems?

2011-09-20 Thread Edson Carlos Ericksson Richter
From the manual, "In any case the hardware architecture must be the same —
shipping from, say, a 32-bit to a 64-bit system will not work."...

I don't believe you can even copy a 64-bit database onto a 32-bit
machine, which makes warm standby impossible.

I tried this in the past and gave up, without even contacting the list;
I was sure I was doing something wrong. What I did was move all my
database servers (I have 7: 2 production, 2 slaves, 2 development,
1 test release) to a 64-bit platform.

Other well-known and equally respected databases pose the same
limitation (unable to replicate from 32-bit to 64-bit or vice versa, or
to other operating systems, due to the "big endian"/"little endian"
issue).


My2c,

Edson.


-----Original Message-----
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On behalf of Hannes Erven
Sent: Tuesday, September 20, 2011 17:26
To: pgsql
Subject: [GENERAL] Replication between 64/32bit systems?

Hi folks,


I'm planning to set up streaming replication from one master to one slave.
I've read at http://www.postgresql.org/docs/9.1/static/warm-standby.html
that the "hardware architecture" of both systems must be the same.

Sure enough, what I'd really like to do is replicate from a Windows (or
Linux) 64bit master to a Linux 32bit slave -- that's what I currently have
easily available.

I tried just following the instructions and copied the data directory to the
slaves, but postgres does not start:
 FATAL:  falsche Prüfsumme in Kontrolldatei
(FATAL: wrong checksum in control file)


So I'd like to ask if there is anything I could do by e.g. changing
compile-time options at the slave to get things going?


Thank you for any advice!,
best regards

-hannes





Re: [GENERAL] Replication between 64/32bit systems?

2011-09-20 Thread Guillaume Lelarge
On Tue, 2011-09-20 at 22:26 +0200, Hannes Erven wrote:
> [...]
> I'm planning to set up streaming replication from one master to one
> slave. I've read at
> http://www.postgresql.org/docs/9.1/static/warm-standby.html that the
> "hardware architecture" of both systems must be the same.
> 
> Sure enough, what I'd really like to do is replicate from a Windows (or
> Linux) 64bit master to a Linux 32bit slave -- that's what I currently
> have easily available.
> 
> I tried just following the instructions and copied the data directory to
> the slaves, but postgres does not start:
>  FATAL:  falsche Prüfsumme in Kontrolldatei
> (FATAL: wrong checksum in control file)
> 
> 
> So I'd like to ask if there is anything I could do by e.g. changing
> compile-time options at the slave to get things going?
> 

There's nothing you can do. It won't work between 32-bit and 64-bit
machines.


-- 
Guillaume
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com




Re: [GENERAL] Replication between 64/32bit systems?

2011-09-20 Thread Alan Hodgson
On September 20, 2011 01:26:06 PM Hannes Erven wrote:
> So I'd like to ask if there is anything I could do by e.g. changing
> compile-time options at the slave to get things going?
> 

No.



[GENERAL] Replication between 64/32bit systems?

2011-09-20 Thread Hannes Erven
Hi folks,


I'm planning to set up streaming replication from one master to one
slave. I've read at
http://www.postgresql.org/docs/9.1/static/warm-standby.html that the
"hardware architecture" of both systems must be the same.

Sure enough, what I'd really like to do is replicate from a Windows (or
Linux) 64bit master to a Linux 32bit slave -- that's what I currently
have easily available.

I tried just following the instructions and copied the data directory to
the slaves, but postgres does not start:
 FATAL:  falsche Prüfsumme in Kontrolldatei
(FATAL: wrong checksum in control file)


So I'd like to ask if there is anything I could do by e.g. changing
compile-time options at the slave to get things going?


Thank you for any advice!,
best regards

-hannes



[GENERAL] Millions of largeobjects the production databases.

2011-09-20 Thread Dmitriy Igrishin
Hey Community,

Just curious, are there any heavily loaded servers with production
databases holding tons (millions) of large objects (pics, movies)?
Theoretically everything should be fine, but it is always interesting
to know how things work in practice.

Thanks!

-- 
// Dmitriy.


Re: [GENERAL] \d+ not showing TOAST table size?

2011-09-20 Thread Jon Nelson
On Tue, Sep 20, 2011 at 2:09 PM, Josh Kupershmidt  wrote:
> On Tue, Sep 20, 2011 at 1:26 PM, Jon Nelson  wrote:
>> I have a table with a fair bit of TOAST data in it.
>> I noticed that \d+ does /not/ include that information (but
>> pg_total_relation_size does).
>
> I assume by "\d+" you meant "\dt+" (\d+ doesn't show sizes at all). On
> version 9.0 and up, \dt+ uses pg_relation_size() internally, which
> actually does include TOAST data as this comment in dbsize.c explains:

I meant "\d+".  I'm using 9.0.3 and "\d+" clearly shows sizes.
Per Tom Lane's comments later in the thread, the behavior of 9.0 (and
prior) matches my recollection. I just need to wait for 9.1.

Thanks!

-- 
Jon



Re: [GENERAL] Dynamic constraint names in ALTER TABLE

2011-09-20 Thread patrick keshishian
On Tue, Sep 20, 2011 at 7:36 AM, Adrian Klaver  wrote:
> On Monday, September 19, 2011 8:09:04 pm patrick keshishian wrote:
>> On Mon, Sep 19, 2011 at 6:08 PM, Adrian Klaver 
> wrote:
>> > On Monday, September 19, 2011 5:10:45 pm patrick keshishian wrote:
>> >> Hi,
>> >>
>> >> Is there any way the .sql scripts could make use of this query to get
>> >> the foreign key name from pg_constraint table, regardless of PG
>> >> version (7.4.x or 9.x)?
>> >
>> > Use the information schema?  As example:
>> > http://www.postgresql.org/docs/7.4/static/infoschema-table-constraints.ht
>> > ml
>> > http://www.postgresql.org/docs/9.0/interactive/infoschema-table-constrai
>> > nts.html
>>
>> I think you missed the intent of my question; unless I've missed the
>> depth of your answer.
>
> My mistake. I misread the question and thought you were looking for a
> way to get the information without using the system catalogs.
>
>>
>> The question wasn't where does one find the name of the constraint. My
>> example demonstrated that I knew how to get that value. The question,
>> however, is how do you get that in an ALTER TABLE statement? A
>> sub-select doesn't seem to work.
>>
>> e.g., ALTER TABLE sales DROP CONSTRAINT (SELECT conname FROM
>> pg_constraint JOIN pg_class ON (conrelid=pg_class.oid) WHERE
>> pg_class.relname='sales' AND conkey[1] = 1 AND contype='f') ;
>>
>> That does not work.
>>
>> I can generate the SQL statements using SELECTs, output (\o) them to a
>> /tmp/really-hacky-way-to-do-this.sql files, then read (\i) them into
>> psql, but as the file name says, this is getting perverse.
>>
>
>
> Just out of curiosity, what do you do if there is more than one
> constraint on a table and you want to apply different changes?

You mean in a situation like this:

foo=# CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT);
foo=# CREATE TABLE product (id INTEGER PRIMARY KEY, description TEXT) ;
foo=# CREATE TABLE sales (seller INTEGER, amount INTEGER, item
INTEGER, FOREIGN KEY (seller) REFERENCES employee (id), FOREIGN KEY
(item) REFERENCES product (id));
foo=# \d sales
 Table "public.sales"
 Column |  Type   | Modifiers
+-+---
 seller | integer |
 amount | integer |
 item   | integer |
Foreign-key constraints:
"sales_item_fkey" FOREIGN KEY (item) REFERENCES product(id)
"sales_seller_fkey" FOREIGN KEY (seller) REFERENCES employee(id)


The conkey field in the pg_constraint table helps you choose the
specific constraint you are referring to.

i.e., when I am looking for the FOREIGN KEY referenced by seller
(column 1) conkey[1] = 1:

foo=# SELECT conname FROM pg_constraint JOIN pg_class ON
(conrelid=pg_class.oid) WHERE pg_class.relname='sales' AND conkey[1] =
1 AND contype='f';
  conname
---
 sales_seller_fkey
(1 row)


FOREIGN KEY referenced by item (column 3) conkey[1] = 3:

foo=# SELECT conname FROM pg_constraint JOIN pg_class ON
(conrelid=pg_class.oid) WHERE pg_class.relname='sales' AND conkey[1] =
3 AND contype='f';
 conname
-
 sales_item_fkey
(1 row)
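For what it's worth, on 9.0 and later the \o temp-file dance can be
avoided with an anonymous DO block (a sketch only, reusing the query
above; 7.4 has no DO, so there you would have to wrap the same EXECUTE
in a throwaway PL/pgSQL function instead):

DO $$
DECLARE
  fk name;
BEGIN
  -- look up the constraint name, then drop it dynamically
  SELECT conname INTO fk
    FROM pg_constraint JOIN pg_class ON (conrelid = pg_class.oid)
   WHERE pg_class.relname = 'sales' AND conkey[1] = 1 AND contype = 'f';
  EXECUTE 'ALTER TABLE sales DROP CONSTRAINT ' || quote_ident(fk);
END;
$$;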


Best,
--patrick



Re: [GENERAL] \d+ not showing TOAST table size?

2011-09-20 Thread Tom Lane
Jon Nelson  writes:
> I have a table with a fair bit of TOAST data in it.
> I noticed that \d+ does /not/ include that information (but
> pg_total_relation_size does).
> Is that intentional?

This is changed as of (IIRC) 9.1.

regards, tom lane



Re: [GENERAL] \d+ not showing TOAST table size?

2011-09-20 Thread Josh Kupershmidt
On Tue, Sep 20, 2011 at 1:26 PM, Jon Nelson  wrote:
> I have a table with a fair bit of TOAST data in it.
> I noticed that \d+ does /not/ include that information (but
> pg_total_relation_size does).

I assume by "\d+" you meant "\dt+" (\d+ doesn't show sizes at all). On
version 9.0 and up, \dt+ uses pg_relation_size() internally, which
actually does include TOAST data as this comment in dbsize.c explains:

 * Calculate total on-disk size of a given table,
 * including FSM and VM, plus TOAST table if any.
 * Indexes other than the TOAST table's index are not included.

The extra space displayed by pg_total_relation_size() is from indexes
on the table.
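The various size functions can be compared directly (a sketch;
'mytable' is a placeholder, and pg_table_size()/pg_indexes_size() exist
as of 9.0):

SELECT pg_relation_size('mytable')       AS main_fork_only,
       pg_table_size('mytable')          AS incl_toast_fsm_vm,
       pg_indexes_size('mytable')        AS indexes_only,
       pg_total_relation_size('mytable') AS everything;

The first column counts just the heap's main fork; the last should
equal pg_table_size + pg_indexes_size.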

> Is that intentional? It seems a bit misleading, insofar as "\d+" feels
> like it is meant to be a rough indication of the table size, but if
> 90% of the data is in TOAST then the *real* data stored is much more
> than displayed.

FYI, the implementation was agreed upon here:
  http://archives.postgresql.org/pgsql-hackers/2011-03/msg01268.php

There were some ideas tossed around in that thread about ways to also
include index size as well, which I do think would be nice to have. It
would be a challenge to format that information nicely, particularly
without messing up the tabular output of different types of relations
(e.g. "\dts+").

Josh



[GENERAL] \d+ not showing TOAST table size?

2011-09-20 Thread Jon Nelson
I have a table with a fair bit of TOAST data in it.
I noticed that \d+ does /not/ include that information (but
pg_total_relation_size does).
Is that intentional? It seems a bit misleading, insofar as "\d+" feels
like it is meant to be a rough indication of the table size, but if
90% of the data is in TOAST then the *real* data stored is much more
than displayed.


-- 
Jon



Re: [GENERAL] upgrade postgres to 8.4.8, centos 5.3

2011-09-20 Thread Devrim GÜNDÜZ
On Tue, 2011-09-20 at 07:49 -0700, MirrorX wrote:
> 
> 
> do you mean something like this? -> yum list | grep *PGDG*rpm
> or shouldn't I search in the yum repos?

What is the output of:

cat /etc/yum.repos.d/pgdg*|grep baseurl

Regards,
-- 
Devrim GÜNDÜZ
Principal Systems Engineer @ EnterpriseDB: http://www.enterprisedb.com
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
Community: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.gunduz.org  Twitter: http://twitter.com/devrimgunduz




Re: [GENERAL] extensions in 9.1

2011-09-20 Thread Tom Lane
"Sebastian P. Luque"  writes:
> Tom Lane  wrote:
>> It sounds like you already have the old "unpackaged" version of that
>> module installed.

> How can I check if this is the case?  I was getting that same error when
> trying to install the adminpack onto the postgres database (as postgres
> user of course).  If I try "CREATE EXTENSION adminpack FROM unpackaged"
> onto the postgres database this way, I get:

> ERROR:  extension "adminpack" has no update path from version "unpackaged" to 
> version "1.0"

> and "CREATE EXTENSION tablefunc FROM unpackaged" onto the template1
> database, I get:

> ERROR:  function normal_rand(integer, double precision, double precision) 
> does not exist

Well, maybe you had better explain the history of this database a bit
better.  It seems like you've got a subset of the tablefunc functions
installed; how did that come to be?  And how did the adminpack functions
get in there?  Checking the archives, the reason we didn't bother with
an "unpackaged" upgrade path for adminpack is that it's always installed
its functions into the pg_catalog schema, meaning pg_dump wouldn't dump
them, so they should not have been there after an upgrade.

regards, tom lane



Re: [GENERAL] extensions in 9.1

2011-09-20 Thread Sebastian P . Luque
On Tue, 20 Sep 2011 11:36:15 -0400,
Tom Lane  wrote:

> Seb  writes:
>> I'm starting to migrate to 9.1 and see that there's a new mechanism
>> to install extensions via the "CREATE EXTENSION" command.  It seems
>> very simple and convenient, but with "CREATE EXTENSION tablefunc" I'm
>> getting the error:

>> ERROR: function "normal_rand" already exists with same argument types

> It sounds like you already have the old "unpackaged" version of that
> module installed.  You need to do a one-time conversion to get from
> that state to having the objects wrapped in an extension.  For the
> standard contrib modules that's done with "CREATE EXTENSION ... FROM
> unpackaged".

How can I check if this is the case?  I was getting that same error when
trying to install the adminpack onto the postgres database (as postgres
user of course).  If I try "CREATE EXTENSION adminpack FROM unpackaged"
onto the postgres database this way, I get:

ERROR:  extension "adminpack" has no update path from version "unpackaged" to 
version "1.0"

and "CREATE EXTENSION tablefunc FROM unpackaged" onto the template1
database, I get:

ERROR:  function normal_rand(integer, double precision, double precision) does 
not exist


-- 
Seb




Re: [GENERAL] extensions in 9.1

2011-09-20 Thread Tom Lane
Seb  writes:
> I'm starting to migrate to 9.1 and see that there's a new mechanism to
> install extensions via the "CREATE EXTENSION" command.  It seems very
> simple and convenient, but with "CREATE EXTENSION tablefunc" I'm getting
> the error:

> ERROR:  function "normal_rand" already exists with same argument types

It sounds like you already have the old "unpackaged" version of that
module installed.  You need to do a one-time conversion to get from that
state to having the objects wrapped in an extension.  For the standard
contrib modules that's done with "CREATE EXTENSION ... FROM unpackaged".
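For a standard contrib module such as tablefunc, the one-time
conversion looks like this (a sketch; it assumes the old loose SQL
objects from the pre-9.1 module are already present in the current
database):

-- wrap the pre-9.1 loose objects into an extension, once:
CREATE EXTENSION tablefunc FROM unpackaged;

-- verify; future upgrades then go through ALTER EXTENSION:
SELECT extname, extversion FROM pg_extension;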

regards, tom lane



Re: [GENERAL] Seeing foreign key lookups in explain output

2011-09-20 Thread Andreas Kretschmer
Marti Raudsepp  wrote:

> On Tue, Sep 20, 2011 at 16:12, Marti Raudsepp  wrote:
> > On Tue, Sep 20, 2011 at 15:35, Vincent de Phily
> >  wrote:
> >> The explain output will tell me it's using the index on t1's id, but it 
> >> tells
> >> me nothing about the seqscan that happens on t2 (because I forgot to add an
> >> index on t2.ref).
> >
> > +1 for a TODO on adding foreign key trigger time to EXPLAIN ANALYZE output.
> 
> Sorry, that was too hasty. We already have that now in 9.0 and 9.1
> (not sure when it was introduced)
> 
> create table a as select generate_series(1,1) i;
> create table b as select generate_series(1,1) i;
> alter table a add primary key (i);
> alter table b add foreign key (i) references a (i) on delete cascade;
> explain analyze delete from a;
> 
>  QUERY PLAN
>  
> ---
>  Delete  (cost=0.00..145.00 rows=1 width=6) (actual
> time=16.308..16.308 rows=0 loops=1)
>->  Seq Scan on a  (cost=0.00..145.00 rows=1 width=6) (actual
> time=0.008..2.208 rows=1 loops=1)
>  Trigger for constraint b_i_fkey: time=6324.652 calls=1
>  Total runtime: 6342.406 ms
> 
> Notice the line "Trigger for constraint b_i_fkey"

Unfortunately, there is no information about the plan for the trigger.
With extra index:

test=*# create index idx_b on b(i);
CREATE INDEX   
Time: 10,645 ms
test=*# explain analyze delete from a;
QUERY PLAN
---
 Delete on a  (cost=0.00..140.00 rows=1 width=6) (actual 
time=18.881..18.881 rows=0 loops=1)
   ->  Seq Scan on a  (cost=0.00..140.00 rows=1 width=6) (actual 
time=0.015..3.800 rows=1 loops=1)
 Trigger for constraint b_i_fkey: time=231.084 calls=1
 Total runtime: 254.033 ms


without index:

test=*# explain analyze delete from a;
QUERY PLAN
---
 Delete on a  (cost=0.00..140.00 rows=1 width=6) (actual 
time=19.090..19.090 rows=0 loops=1)
   ->  Seq Scan on a  (cost=0.00..140.00 rows=1 width=6) (actual 
time=0.014..3.868 rows=1 loops=1)
 Trigger for constraint b_i_fkey: time=9823.762 calls=1
 Total runtime: 9846.789 ms
(4 rows)

The same plan, but not the same execution time. Bad...
(version is 9.1.0)


Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
"If I was god, I would recompile penguin with --enable-fly."   (unknown)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°



Re: [GENERAL] upgrade postgres to 8.4.8, centos 5.3

2011-09-20 Thread Scott Marlowe
If it's in the repo, just do

yum update

and it should get updated.

On Tue, Sep 20, 2011 at 8:49 AM, MirrorX  wrote:
> thx for your answer.
>
> do you mean something like this? -> yum list | grep *PGDG*rpm
> or shouldn't I search in the yum repos?
>
> --
> View this message in context: 
> http://postgresql.1045698.n5.nabble.com/upgrade-postgres-to-8-4-8-centos-5-3-tp4822762p4822823.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>



-- 
To understand recursion, one must first understand recursion.



[GENERAL] extensions in 9.1

2011-09-20 Thread Seb
Hi,

I'm starting to migrate to 9.1 and see that there's a new mechanism to
install extensions via the "CREATE EXTENSION" command.  It seems very
simple and convenient, but with "CREATE EXTENSION tablefunc" I'm getting
the error:

ERROR:  function "normal_rand" already exists with same argument types

and similar errors when installing the adminpack into the postgres
database.  I'm not sure what I'm missing here.  Any pointers welcome.
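A sketch of a likely explanation and fix: this error usually means the database was upgraded or restored from a pre-9.1 cluster where the contrib module's functions were already installed as loose objects. For exactly that migration case, 9.1 provides a form of CREATE EXTENSION that absorbs the existing objects instead of recreating them (check your own situation before running it):

```sql
-- Database carried over from pre-9.1, old contrib objects already present:
CREATE EXTENSION tablefunc FROM unpackaged;

-- Fresh database with no leftover objects, the plain form works:
CREATE EXTENSION tablefunc;
```

The same pattern should apply to adminpack.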

Cheers,

-- 
Seb




Re: [GENERAL] upgrade postgres to 8.4.8, centos 5.3

2011-09-20 Thread MirrorX
thx for your answer.

do u mean something like that? -> yum list | grep *PGDG*rpm
or i shouldn't search in the yum repos?

--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/upgrade-postgres-to-8-4-8-centos-5-3-tp4822762p4822823.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.



Re: [GENERAL] Dynamic constraint names in ALTER TABLE

2011-09-20 Thread Adrian Klaver
On Monday, September 19, 2011 8:09:04 pm patrick keshishian wrote:
> On Mon, Sep 19, 2011 at 6:08 PM, Adrian Klaver  
wrote:
> > On Monday, September 19, 2011 5:10:45 pm patrick keshishian wrote:
> >> Hi,
> >> 
> >> Is there any way the .sql scripts could make use of this query to get
> >> the foreign key name from pg_constraint table, regardless of PG
> >> version (7.4.x or 9.x)?
> > 
> > Use the information schema?  As example:
> > http://www.postgresql.org/docs/7.4/static/infoschema-table-constraints.ht
> > ml
> > http://www.postgresql.org/docs/9.0/interactive/infoschema-table-constrai
> > nts.html
> 
> I think you missed the intent of my question; unless I've missed the
> depth of your answer.

My mistake. I misread the question and thought you were looking for a way to 
get the information without using the system catalogs.

> 
> The question wasn't where does one find the name of the constraint. My
> example demonstrated that I knew how to get that value. The question,
> however, is how do you get that in an ALTER TABLE statement? A
> sub-select doesn't seem to work.
> 
> e.g., ALTER TABLE sales DROP CONSTRAINT (SELECT conname FROM
> pg_constraint JOIN pg_class ON (conrelid=pg_class.oid) WHERE
> pg_class.relname='sales' AND conkey[1] = 1 AND contype='f') ;
> 
> That does not work.
> 
> I can generate the SQL statements using SELECTs, output (\o) them to a
> /tmp/really-hacky-way-to-do-this.sql files, then read (\i) them into
> psql, but as the file name says, this is getting perverse.
> 


Just out of curiosity, what do you do if there is more than one constraint on a 
table and you want to apply different changes?
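For what it's worth, on servers new enough to have DO blocks (9.0+, so not the 7.4 end of the range discussed above), the temp-file round-trip can be avoided with dynamic SQL. A sketch, reusing the pg_constraint lookup from the thread:

```sql
-- ALTER TABLE cannot take a sub-select, but EXECUTE can build the
-- statement from the looked-up name (DO requires 9.0+).
DO $$
DECLARE
    cname text;
BEGIN
    SELECT conname INTO cname
      FROM pg_constraint
      JOIN pg_class ON (conrelid = pg_class.oid)
     WHERE pg_class.relname = 'sales'
       AND conkey[1] = 1
       AND contype = 'f';

    IF cname IS NOT NULL THEN
        EXECUTE 'ALTER TABLE sales DROP CONSTRAINT ' || quote_ident(cname);
    END IF;
END
$$;
```

quote_ident() is used rather than format() so the sketch stays within what 9.0 supports.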


-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] upgrade postgres to 8.4.8, centos 5.3

2011-09-20 Thread Scott Marlowe
On Tue, Sep 20, 2011 at 8:30 AM, MirrorX  wrote:
> hello all,
> i have a centos 5.3 which has postgres 8.4.4 installed from the repos. I
> want to upgrade to 8.4.8 but when i try to install the .bin file of 8.4.8
> then it's a new installation and when i try to run yum check-update nothing
> new is there. any ideas? tnx in advance

You need packages from the same place (the repos), not a .bin file.  Look
for packages that have PGDG in the middle and .rpm on the end.
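To make that concrete, here is the kind of filter meant; the package names below are made-up examples of what `yum list installed` might print on a CentOS box with the PGDG repo, only the grep is the real advice:

```shell
# Sample listing (hypothetical names); on a real server you would pipe
# live output instead: yum list installed | grep -i pgdg
yum_output='postgresql84-server-8.4.8-1PGDG.rhel5.x86_64
postgresql84-8.4.8-1PGDG.rhel5.x86_64
httpd-2.2.3-43.el5.x86_64'

# Keep only the PGDG builds, dropping unrelated packages.
printf '%s\n' "$yum_output" | grep -i pgdg
```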



Re: [GENERAL] Transaction ordering on log-shipping standby

2011-09-20 Thread Simon Riggs
On Tue, Sep 20, 2011 at 11:46 AM, Andrew Rose
 wrote:
> I've got a question about transaction ordering in a log-shipping replication 
> environment.
>
> Here's the setup...
>
> - A pair of PostgreSQL 9 servers in active/standby configuration, using 
> log-shipping
> - A single client, using a single connection
> - The client commits transaction 1
> - The client commits transaction 2
> - The active server fails and the standby is promoted to be the active server
>
> The client re-establishes the connection and is attempting to determine which 
> transactions have made it onto the standby server (and which never made it 
> because of replication delays).
>
> Here's the question...
>
> If the client can confirm that transaction 2 has happened on the standby, 
> does that imply that transaction 1 has also made it onto the standby?
>
> Or to put the question another way, is the ordering of transactions on the 
> active and standby servers guaranteed to be the same?


Yes, the transaction ordering is log serializable because that's the
only way that provably works.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



[GENERAL] upgrade postgres to 8.4.8, centos 5.3

2011-09-20 Thread MirrorX
hello all,
i have a centos 5.3 which has postgres 8.4.4 installed from the repos. I
want to upgrade to 8.4.8 but when i try to install the .bin file of 8.4.8
then it's a new installation and when i try to run yum check-update nothing
new is there. any ideas? tnx in advance

--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/upgrade-postgres-to-8-4-8-centos-5-3-tp4822762p4822762.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.



Re: [GENERAL] Seeing foreign key lookups in explain output

2011-09-20 Thread Vincent de Phily
On Tuesday 20 September 2011 16:32:50 Marti Raudsepp wrote:
> On Tue, Sep 20, 2011 at 16:12, Marti Raudsepp  wrote:
> > +1 for a TODO on adding foreign key trigger time to EXPLAIN ANALYZE
> > output.
> Sorry, that was too hasty. We already have that now in 9.0 and 9.1
> (not sure when it was introduced)
> 
> create table a as select generate_series(1,1) i;
> create table b as select generate_series(1,1) i;
> alter table a add primary key (i);
> alter table b add foreign key (i) references a (i) on delete cascade;
> explain analyze delete from a;
> 
>                              QUERY PLAN
> ---------------------------------------------------------------------
>  Delete  (cost=0.00..145.00 rows=1 width=6) (actual time=16.308..16.308 rows=0 loops=1)
>    ->  Seq Scan on a  (cost=0.00..145.00 rows=1 width=6) (actual time=0.008..2.208 rows=1 loops=1)
>  Trigger for constraint b_i_fkey: time=6324.652 calls=1
>  Total runtime: 6342.406 ms
> 
> Notice the line "Trigger for constraint b_i_fkey"

Ahhh, I never spotted that in the changelogs, and haven't upgraded yet. Great :) 
Yet another reason to upgrade ASAP. Thanks.

-- 
Vincent de Phily



Re: [GENERAL] Column Privileges: NULL instead of permission denied

2011-09-20 Thread Harald Fuchs
In article ,
Matthew Hawn  writes:

> I have a table with privileged data that is restricted using column level
> permissions.  I would like to have a single query that returns data from
> the table.  If the user has permission, it should return the data but
> return NULL if the user does not have permission.  I do not want to
> create separate queries in my application for different users.
>
> Ex:
>
> Table people:  Name, SSN
>
> If I try:
>
> Select name, ssn from people;
>
> I get if the user does not have permission:
>
> **ERROR: permission denied for relation people **
>
> I would like to get:
>
> No Permission:
>  Dave, NULL
>  Bill, NULL
>  Steve, NULL
>
> Permission:
>  Dave, 456
>  Bill, 789
>  Steve, 123

The only thing I can think of is an ugly kludge:

-- Define the following function as a user with SELECT privilege

CREATE FUNCTION doselect(IN currusr text, OUT name text, OUT ssn int)
RETURNS SETOF RECORD AS $$
    SELECT name,
           CASE has_column_privilege($1, 'people', 'ssn', 'SELECT')
               WHEN true THEN ssn
               ELSE NULL
           END AS ssn
    FROM people
$$ LANGUAGE sql SECURITY DEFINER;

CREATE VIEW people_view AS
SELECT * FROM doselect(current_user);

-- The following query will do what you want

SELECT * FROM people_view;




Re: [GENERAL] Transaction ordering on log-shipping standby

2011-09-20 Thread Marti Raudsepp
On Tue, Sep 20, 2011 at 13:46, Andrew Rose  wrote:
> Or to put the question another way, is the ordering of transactions on the 
> active and standby servers guaranteed to be the same?

Yes. The WAL serializes the order of transactions. It is applied to
slaves in the same order that it's written on the master.

Regards,
Marti




Re: [GENERAL] Seeing foreign key lookups in explain output

2011-09-20 Thread Marti Raudsepp
On Tue, Sep 20, 2011 at 16:12, Marti Raudsepp  wrote:
> On Tue, Sep 20, 2011 at 15:35, Vincent de Phily
>  wrote:
>> The explain output will tell me it's using the index on t1's id, but it tells
>> me nothing about the seqscan that happens on t2 (because I forgot to add an
>> index on t2.ref).
>
> +1 for a TODO on adding foreign key trigger time to EXPLAIN ANALYZE output.

Sorry, that was too hasty. We already have that now in 9.0 and 9.1
(not sure when it was introduced)

create table a as select generate_series(1,1) i;
create table b as select generate_series(1,1) i;
alter table a add primary key (i);
alter table b add foreign key (i) references a (i) on delete cascade;
explain analyze delete from a;

                              QUERY PLAN
---------------------------------------------------------------------
 Delete  (cost=0.00..145.00 rows=1 width=6) (actual time=16.308..16.308 rows=0 loops=1)
   ->  Seq Scan on a  (cost=0.00..145.00 rows=1 width=6) (actual time=0.008..2.208 rows=1 loops=1)
 Trigger for constraint b_i_fkey: time=6324.652 calls=1
 Total runtime: 6342.406 ms

Notice the line "Trigger for constraint b_i_fkey"

Regards,
Marti



[GENERAL] Is libpq version indifferent? I.E. can 8.3.x compiled binaries run without issue against a 9.1 backend, or are clients required to be re-compiled against the 9.1 libpq

2011-09-20 Thread Reid Thompson

I believe that there should be no issue, but am asking to be sure.

Thanks,
reid



Re: [GENERAL] Seeing foreign key lookups in explain output

2011-09-20 Thread Marti Raudsepp
On Tue, Sep 20, 2011 at 15:35, Vincent de Phily
 wrote:
> The explain output will tell me it's using the index on t1's id, but it tells
> me nothing about the seqscan that happens on t2 (because I forgot to add an
> index on t2.ref).

+1 for a TODO on adding foreign key trigger time to EXPLAIN ANALYZE output.

Regards,
Marti



[GENERAL] not enough disk space

2011-09-20 Thread Szymon Guz
Hi,
If there is not enough disk space for a database while loading a lot of data
(under normal db load), could the data of other databases/schemas become
corrupted? If so, would running VACUUM FULL and REINDEX be enough to make
sure everything is OK with the data files, or is something else needed?

regards
Szymon


[GENERAL] Seeing foreign key lookups in explain output

2011-09-20 Thread Vincent de Phily
Hi list,

an explain analyze wish:

create table t1(id serial primary key);
create table t2(id serial primary key,
ref integer references t1(id) on delete cascade);
...insert many rows in both tables...
explain delete from t1 where id < 1;
...
The explain output will tell me it's using the index on t1's id, but it tells 
me nothing about the seqscan that happens on t2 (because I forgot to add an 
index on t2.ref).

It isn't the first time I've been bitten by this, and I bet I'm not the only 
one. The explain tells me everything will work fast, but reality is 100 times 
slower. Is there a way I can extend the explain output to show the rest of the 
work done behind the scenes? Fixing that performance issue is easy once you 
see it, but most people will just look at the explain output and erroneously 
conclude "it's as good as it gets".
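For completeness, once the hidden seqscan is spotted, the fix itself is a one-liner on the referencing side (PostgreSQL requires an index on the referenced key, but never creates one on the referencing column); the index name here is just an illustrative choice:

```sql
-- Index the referencing column so the FK trigger's lookup on t2
-- no longer has to seqscan the table on every delete from t1.
CREATE INDEX t2_ref_idx ON t2 (ref);
```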

-- 
Vincent de Phily



[GENERAL] cache oblivious indexes (tokudb, fractal indexes)

2011-09-20 Thread Leonardo Francalanci
Hi,

trying to find out how to store a large number of rows (>1 rows/sec) in a
table that has indexes on "random values" columns, I found:

http://en.wikipedia.org/wiki/TokuDB


Basically, instead of using btrees (which kill insert performance for random
values on large tables), they use a different type of index, which they call
"fractal".

If what they claim is true, insert performance in those cases (as I said,
indexes on columns with highly random data) is much faster (80x faster!!!)

I read some of the papers at:
http://supertech.csail.mit.edu/cacheObliviousBTree.html



I think it's a very interesting approach... instead of relying on disks'
random access times, they use sequential access...

I was wondering:

1) has anyone looked at the papers?
2) I don't understand how they made it concurrent...



Re: [GENERAL] COLUMNAR postgreSQL ?

2011-09-20 Thread Craig Ringer

On 09/20/2011 03:48 PM, Juan Bru wrote:
> Hi,
>
> Is there any plan to develop a PostgreSQL Columnar release?
>
> I'm a researcher in the scope of Health (computer scientist, DBA
> background), used to work very well with PostgreSQL. Recently I'm
> facing analysis on 100M record tables so a columnar database could be
> a better option, but I would like to stay using postgreSQL.

To what extent would your needs be satisfied by a covering index or 
index-oriented table?

That is: Are you doing deep data mining where you have LOTS of different 
columns you're interested in finding patterns in, or do you only have a 
few important columns you need to retrieve most of the time, plus a lot 
of less-used data?


--
Craig Ringer


[GENERAL] Transaction ordering on log-shipping standby

2011-09-20 Thread Andrew Rose
I've got a question about transaction ordering in a log-shipping replication 
environment.

Here's the setup...

- A pair of PostgreSQL 9 servers in active/standby configuration, using 
log-shipping
- A single client, using a single connection
- The client commits transaction 1
- The client commits transaction 2
- The active server fails and the standby is promoted to be the active server

The client re-establishes the connection and is attempting to determine which 
transactions have made it onto the standby server (and which never made it 
because of replication delays).

Here's the question...

If the client can confirm that transaction 2 has happened on the standby, does 
that imply that transaction 1 has also made it onto the standby?

Or to put the question another way, is the ordering of transactions on the 
active and standby servers guaranteed to be the same?

Thanks,

Andrew



Re: [GENERAL] COLUMNAR postgreSQL ?

2011-09-20 Thread Simon Riggs
On Tue, Sep 20, 2011 at 8:48 AM, Juan Bru  wrote:

> Is there any plan to develop a PostgreSQL Columnar release?
>
> I’m a researcher in the scope of Health (computer scientist, DBA
> background), used to work very well with PostgreSQL. Recently I’m facing
> analysis on 100M record tables so a columnar database could be a better
> option, but I would like to stay using postgreSQL.

It would be useful to get some balanced viewpoints on this. I see you
have Alterian experience, so if you are using both it could be
valuable info. I've never heard anyone describe the downsides of
columnar datastores, presumably there are some?

My understanding is that columnar will work well for queries like this

SELECT count(*)
FROM table
WHERE col1 AND col2 AND col3

but less well when we include the columns in the SELECT clause.

Would you be able to give some viewpoints and measurements on that?

Do you think some funding could be available for that?

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



[GENERAL] COLUMNAR postgreSQL ?

2011-09-20 Thread Juan Bru
Hi,

Is there any plan to develop a PostgreSQL Columnar release?

I'm a researcher in the scope of Health (computer scientist, DBA
background), used to work very well with PostgreSQL. Recently I'm facing
analysis on 100M record tables so a columnar database could be a better
option, but I would like to stay using postgreSQL.

Thanks in advance

Juan Bru

http://www.linkedin.com/in/juanbru