Re: [GENERAL] postgresql-8.1.18-2.1 information

2016-10-06 Thread John R Pierce

On 10/6/2016 10:05 PM, Yogesh Sharma wrote:


I need the source RPM for the version below, for RHEL.

postgresql-8.1.18-2.1



8.1 is a very obsolete version; it was EOL circa 2010. 8.1.18 wasn't 
even the last release of 8.1, which made it up to 8.1.23.


the community sources for 8.1.18 are here:
https://www.postgresql.org/ftp/source/v8.1.18/
the PostgreSQL developers group never released anything called 
8.1.18-2.1... any RPM releases from PGDG would have "PGDG" in their filenames.


is this a version that was distributed by Red Hat?  If so, the source would be 
in Red Hat's source archives. BTW, what version of RHEL, and for what CPU 
architecture? I don't think Red Hat has bundled 8.1 since RHEL 
5.x...   I poked around in Red Hat's public archives, and all I can find is...

http://ftp.redhat.com/pub/redhat/linux/enterprise/5Server/en/os/SRPMS/postgresql-8.1.18-2.el5_4.1.src.rpm
which was current when RHEL 5.4 was current, and has long since been superseded 
(the last update of RHEL 5 was 5.11).




--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] postgresql-8.1.18-2.1 information

2016-10-06 Thread Thomas Munro
On Fri, Oct 7, 2016 at 6:19 PM, Adrian Klaver  wrote:
> On 10/06/2016 10:05 PM, Yogesh Sharma wrote:
>>
>> Dear All,
>>
>>
>>
>> Thanks for your support.
>>
>>
>>
>> I need the source RPM for the version below, for RHEL.
>>
>> postgresql-8.1.18-2.1
>>
>>
>>
>> Could you please share a link so that I can download it?
>>
>> I have tried to search but could not find it.
>
>
> Probably because it is 6 years past End-of-Life:
>
> https://www.postgresql.org/support/versioning/
>
> Close as I can come is the source version:
>
> https://www.postgresql.org/ftp/source/v8.1.18/

Ancient Red Hat source RPMs are apparently still available for
archaeology projects, though:

ftp://ftp.redhat.com/pub/redhat/linux/enterprise/5Server/en/os/SRPMS/

(That's "5Server", they have other flavours too.)

-- 
Thomas Munro
http://www.enterprisedb.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] postgresql-8.1.18-2.1 information

2016-10-06 Thread Adrian Klaver

On 10/06/2016 10:05 PM, Yogesh Sharma wrote:

Dear All,



Thanks for your support.



I need the source RPM for the version below, for RHEL.

postgresql-8.1.18-2.1



Could you please share a link so that I can download it?

I have tried to search but could not find it.


Probably because it is 6 years past End-of-Life:

https://www.postgresql.org/support/versioning/

The closest I can come is the source version:

https://www.postgresql.org/ftp/source/v8.1.18/







Regards,

Yogesh




--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] postgresql-8.1.18-2.1 information

2016-10-06 Thread Yogesh Sharma
Dear All,

Thanks for your support.

I need the source RPM for the version below, for RHEL.
postgresql-8.1.18-2.1

Could you please share a link so that I can download it?
I have tried to search but could not find it.

Regards,
Yogesh


Re: [GENERAL] BRIN indexes and ORDER BY

2016-10-06 Thread Merlin Moncure
On Wed, Oct 5, 2016 at 3:27 PM, Stephen Frost  wrote:
> Darren,
>
> * Darren Lafreniere (dlafreni...@onezero.com) wrote:
>> Tom Lane  wrote:
>> > > Gavin Wahl wrote:
>> > >> It seems trivial to accelerate a MAX or MIN query with a BRIN index. You
>> > >> just find the page range with the largest/smallest value, and then only
>> > >> scan that one. Would that be hard to implement? I'm interested in
>> > working
>> > >> on it if someone can give me some pointers.
>> >
>> > I think this proposal is fairly broken anyway.  The page range with the
>> > largest max-value may once have contained the largest live row, but
>> > there's no guarantee that it still does.  It might even be completely
>> > empty.  You could imagine an algorithm like this:
>> >
>> > 1. Find page-range with largest max.  Scan it to identify live row with
>> > largest value.  If *no* live values, find page-range with next largest
>> > max, repeat until no page ranges remain (whereupon return NULL).
>> >
>> > 2. For each remaining page-range whose indexed max exceeds the value
>> > currently in hand, scan that page-range to see if any value exceeds
>> > the one in hand, replacing the value if so.
>> >
>> > This'd probably allow you to omit scanning some of the page-ranges
>> > in the table, but in a lot of cases you'd end up scanning many of them;
>> > and you'd need a lot of working state to remember which ranges you'd
>> > already looked at.  It'd certainly always be a lot more expensive than
>> > answering the same question with a btree index, because in no case do
>> > you get to avoid scanning the entire contents of the index.
> [...]
>> A b-tree index would certainly be faster for ordering. But in scenarios
>> where you have huge datasets that can't afford the space or update time
>> required for b-tree, could such a BRIN-accelerated ordering algorithm at
>> least be faster than ordering with no index?
>
> For at least some of the common BRIN use-cases, where the rows are
> inserted in-order and never/very-rarely modified or deleted, this
> approach would work very well.
>
> Certainly, using this would be much cheaper than a seqscan/top-N sort,
> for small values of 'N', relative to the number of rows in the table,
> in those cases.
>
> In general, I like the idea of supporting this as BRIN indexes strike me
> as very good for very large tables which have highly clumped data in
> them and being able to do a top-N query on those can be very useful at
> times.

Yeah.  If the BRIN average-page-overlap and %-dead-tuple coefficients
are low, it absolutely makes sense to drive a top-N query with BRIN.  It
will never beat a btree, but typically BRIN is used when a btree index is
no good for various reasons.

BRIN indexes are pretty neat; they can provide stupefying amounts of
optimization in many common warehousing workloads.  They even beat
out index-only scans, at a tiny fraction of the storage.  Of course,
you have to work around the limitations... :-)
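A minimal sketch of the warehousing pattern under discussion (the table, column, and index names are invented for this example; note that current PostgreSQL answers the MAX query with a scan, not the BRIN-driven lookup the proposal describes):

```sql
-- Append-only event table: rows arrive in timestamp order, so block
-- ranges correlate tightly with logged_at (the ideal BRIN case).
CREATE TABLE events (
    id        bigserial,
    logged_at timestamptz NOT NULL
);

-- A BRIN index is tiny compared to a btree over the same column.
CREATE INDEX events_logged_at_brin ON events USING brin (logged_at);

-- BRIN already accelerates range predicates like this one:
SELECT count(*) FROM events
 WHERE logged_at >= now() - interval '1 day';

-- The proposal would also accelerate queries like this, which today
-- cannot use the BRIN index:
SELECT max(logged_at) FROM events;
```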

merlin




Re: [GENERAL] Online course for those who want to contribute

2016-10-06 Thread Andrew Borodin
Stating my own interest: I want a better Postgres. I want to teach
those who will probably go on to push patches.
Best regards, Andrey Borodin.

On Thursday, 6 October 2016, Joshua D. Drake wrote:

> On 10/06/2016 10:39 AM, Andrew Borodin wrote:
>
>> Hi everyone!
>>
>> From time to time I teach at Ural Federal University. Currently the
>> university wants me to create an online course. They are going to put it
>> on a platform like edX or something similar.
>>
>> I do not want to do another general programming course, so I made up
>> my mind to do a “postgres-hacker” course. But in keeping with my research
>> topic, the course must be related to access methods; this is one of the
>> university's requirements.
>>
>> I need to compose a list of homework assignments covering the usual
>> skills necessary for a hacker.
>>
>> I’m going to put on the list training on building from source, testing
>> and reviewing patches, and reading the mailing lists.
>>
>> What do you think: which skills are most important for someone who wants
>> to contribute to PostgreSQL? How can they be converted into homework
>> tasks? Which homework tasks do you see as useful despite the access-method
>> focus? Maybe something like tracking and fixing a bug in a specific patch?
>>
>>
> There are two types of hackers within this community:
>
> Hacking internals (similar to Linux Kernel)
> Hacking Generalist (Working around PG's limitations)
>
> The first would be more beneficial to the community, but the second
> would be more beneficial to someone using something like edX.
>
> Sincerely,
>
> jD
>
>
> --
> Command Prompt, Inc.  http://the.postgres.company/
> +1-503-667-4564
> PostgreSQL Centered full stack support, consulting and development.
> Everyone appreciates your honesty, until you are honest with them.
>


Re: [GENERAL] Online course for those who want to contribute

2016-10-06 Thread Joshua D. Drake

On 10/06/2016 10:39 AM, Andrew Borodin wrote:

Hi everyone!

From time to time I teach at Ural Federal University. Currently the
university wants me to create an online course. They are going to put it
on a platform like edX or something similar.

I do not want to do another general programming course, so I made up
my mind to do a “postgres-hacker” course. But in keeping with my research
topic, the course must be related to access methods; this is one of the
university's requirements.

I need to compose a list of homework assignments covering the usual
skills necessary for a hacker.

I’m going to put on the list training on building from source, testing
and reviewing patches, and reading the mailing lists.

What do you think: which skills are most important for someone who wants
to contribute to PostgreSQL? How can they be converted into homework
tasks? Which homework tasks do you see as useful despite the access-method
focus? Maybe something like tracking and fixing a bug in a specific patch?



There are two types of hackers within this community:

Hacking internals (similar to Linux Kernel)
Hacking Generalist (Working around PG's limitations)

The first would be more beneficial to the community, but the second 
would be more beneficial to someone using something like edX.


Sincerely,

jD


--
Command Prompt, Inc.  http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.




Re: [GENERAL] pgadmin III for PostgreSQL 9.6

2016-10-06 Thread Adrian Klaver

On 10/06/2016 10:55 AM, Klaus wrote:

PostgreSQL 9.6 for Windows x64 comes packaged with pgadmin 4.



For me, that implies two major issues:



- It is somewhat sluggish, much slower than the previous pgadmin III

- Even more important: on my notebook with a high-resolution
screen (3840 x 2160), all menus and display items are so tiny that
you virtually can’t read them. System settings are fine, and all other
applications (including pgAdmin III) scale their GUIs accordingly.



Will there be a pgAdmin III release supporting PostgreSQL 9.6 available
until issues like these are resolved?


I would say you will have more success getting your questions answered 
and concerns dealt with by posting to:


https://www.pgadmin.org/support/list.php

Or creating an issue here:

https://redmine.postgresql.org/projects/pgadmin4/issues/new

FYI you will need a Postgres community account to use the issue tracker.





N.B. the latest version of pgAdmin III seems to work, after a couple of
error messages during startup.



Thanks Klaus




--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] pgadmin III for PostgreSQL 9.6

2016-10-06 Thread Klaus
PostgreSQL 9.6 for Windows x64 comes packaged with pgadmin 4. 

 

For me, that implies two major issues: 

 

- It is somewhat sluggish, much slower than the previous pgadmin III

- Even more important: on my notebook with a high-resolution screen
(3840 x 2160), all menus and display items are so tiny that you virtually
can't read them. System settings are fine, and all other applications
(including pgAdmin III) scale their GUIs accordingly. 

 

Will there be a pgAdmin III release supporting PostgreSQL 9.6 available until
issues like these are resolved? 

 

N.B. the latest version of pgAdmin III seems to work, after a couple of error
messages during startup. 

 

Thanks Klaus



[GENERAL] Online course for those who want to contribute

2016-10-06 Thread Andrew Borodin
Hi everyone!

From time to time I teach at Ural Federal University. Currently the
university wants me to create an online course. They are going to put it
on a platform like edX or something similar.

I do not want to do another general programming course, so I made up
my mind to do a “postgres-hacker” course. But in keeping with my research
topic, the course must be related to access methods; this is one of the
university's requirements.

I need to compose a list of homework assignments covering the usual
skills necessary for a hacker.

I’m going to put on the list training on building from source, testing
and reviewing patches, and reading the mailing lists.

What do you think: which skills are most important for someone who wants
to contribute to PostgreSQL? How can they be converted into homework
tasks? Which homework tasks do you see as useful despite the access-method
focus? Maybe something like tracking and fixing a bug in a specific patch?

Regards, Andrey Borodin.




Re: [GENERAL] Transactional-DDL DROP/CREATE TABLE

2016-10-06 Thread Tom Lane
Geoff Winkless  writes:
> On 6 October 2016 at 16:47, Kevin Grittner  wrote:
>> I recommend using a transactional advisory lock to serialize these.

> Thanks Kevin, that does seem like the best (although not particularly
> pleasant) solution.

I'm a bit confused about exactly what the context is here.  AFAICS,
the fragment you quoted should work as you expect, as long as the
table always exists beforehand.  Then, the DROPs serialize the
transactions' access to the table and all is well[1].  On the other hand,
if the table *doesn't* exist beforehand, there is nothing to serialize
on and the behavior Adrian exhibited is what I'd expect.  (It's a bit
unfortunate that the complaint is about a duplicate type name not
a duplicate relation name: that's an artifact of the order in which
the rows are stored into the catalogs, and maybe we should try to
change it, because it's confusing.  But one way or the other you're
going to get a unique-index failure in whichever transaction is slightly
behind.)

But if all your transactions are doing this and only this to the table,
then there should never be an instant where the table doesn't exist.
Is there more DDL going on that you have not shown us?

regards, tom lane

[1] at least, since PG 9.2 or thereabouts.




Re: [GENERAL] Transactional-DDL DROP/CREATE TABLE

2016-10-06 Thread Adrian Klaver

On 10/06/2016 09:09 AM, Geoff Winkless wrote:

On 6 October 2016 at 16:57, Francisco Olarte  wrote:

You are contradicting yourself. First you say after the command it
must not exist. Then you say to do it at commit time. If it is done at
commit time you cannot guarantee it does not exist after the command.


I'm not contradicting myself at all, and frankly I'd rather this
argument didn't degenerate into the level of playground sarcasm that
you seem to be employing.


I do not see sarcasm, I see someone trying to work through what is a 
complex scenario.




_As far as the transaction is concerned_, after the command the table


What command?


should not exist. The rest of the system should not give two hoots
about what happens inside my transaction until after COMMIT. This is
how I can DROP a table, then roll it back and magically that table
still exists; it's how I can insert values into a table and roll back,
and those values aren't in the table; it's how I can delete rows from
a table, roll back and those values "magically" (as you put it) still
exist. So the DROP is done at COMMIT time, as far as everyone else is
concerned, but immediately as far as the transaction is concerned.


And using the branch for testing is not logically bankrupt, atomic
operations with responses are there for a reason, this is why
processors have 'test and set' and 'compare-exchange' and similar.


But that's NOT what this function is for. It's designed to remove a
table without producing an error if it didn't exist. The fact that its
RETURN value is "DROP TABLE", whether it dropped or not, shows this.


What function?

Part of the problem with trying to sort out what you want is working off 
only snippets of code at a time.


So is it possible to show a complete example of what you are doing?




And the notice is not the reason it is not done
at commit time, the reason is the one you said, action must be taken
when you issue the command, not a magic convenient time in the future


I've no idea what this paragraph means.

Geoff





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Transactional-DDL DROP/CREATE TABLE

2016-10-06 Thread Geoff Winkless
On 6 October 2016 at 16:47, Kevin Grittner  wrote:
> I recommend using a transactional advisory lock to serialize these.

Thanks Kevin, that does seem like the best (although not particularly
pleasant) solution.

Geoff




Re: [GENERAL] Transactional-DDL DROP/CREATE TABLE

2016-10-06 Thread Geoff Winkless
On 6 October 2016 at 16:57, Francisco Olarte  wrote:
> You are contradicting yourself. First you say after the command it
> must not exist. Then you say to do it at commit time. If it is done at
> commit time you cannot guarantee it does not exist after the command.

I'm not contradicting myself at all, and frankly I'd rather this
argument didn't degenerate into the level of playground sarcasm that
you seem to be employing.

_As far as the transaction is concerned_, after the command the table
should not exist. The rest of the system should not give two hoots
about what happens inside my transaction until after COMMIT. This is
how I can DROP a table, then roll it back and magically that table
still exists; it's how I can insert values into a table and roll back,
and those values aren't in the table; it's how I can delete rows from
a table, roll back and those values "magically" (as you put it) still
exist. So the DROP is done at COMMIT time, as far as everyone else is
concerned, but immediately as far as the transaction is concerned.

> And using the branch for testing is not logically bankrupt, atomic
> operations with responses are there for a reason, this is why
> processors have 'test and set' and 'compare-exchange' and similar.

But that's NOT what this function is for. It's designed to remove a
table without producing an error if it didn't exist. The fact that its
RETURN value is "DROP TABLE", whether it dropped or not, shows this.

> And the notice is not the reason it is not done
> at commit time, the reason is the one you said, action must be taken
> when you issue the command, not a magic convenient time in the future

I've no idea what this paragraph means.

Geoff




Re: [GENERAL] Transactional-DDL DROP/CREATE TABLE

2016-10-06 Thread Francisco Olarte
Geoff:

On Thu, Oct 6, 2016 at 5:43 PM, Geoff Winkless  wrote:
> Nope. Serializable ignores the DROP, and then freezes on CREATE (and
> then fails when the first transaction COMMITs).

Yep, I tested it too.

> Which is also broken,
> because the transaction should fail if (at COMMIT time) the table
> cannot be CREATEd, but that's no reason to not create a table within a
> temporary namespace and perform actions against that table until the
> COMMIT, at which point the table can either be validated systemwide or
> the transaction rolled back.

Well, that may be a nice addition to the standard, but I doubt it
would fly. If you want that kind of behaviour, you should implement
it app-side; it is not that difficult.


>
>> And drop table if exists means if it exists when the
>> server executes your command, not on the future ( the server cannot
>> know if it will exist then, your own transaction may recreate it or
>> not. Maybe you know your command sequence is not going to depend on
>> intermediate results, the server does not ).
> Then that effectively makes the IF EXISTS useless, because it might in
> fact exist by the time the transaction is committed.

Lots of people find it useful as it is. It is just that normally people
do not try to interleave a conditional drop plus create across concurrent
transactions without an upper-level retry loop, and then expect it to
magically work as they think it should.

And, following that train of thought, an unconditional drop is useless,
because by commit time the table may not exist, and a select is useless,
because at commit time the rows may not exist or may have other values.


> The point of a DROP ... IF EXISTS should surely be that after the
> command, that table should no longer exist, either because it didn't
> exist or because it has been dropped (indeed, the result of "DROP...IF
> EXISTS" is "DROP TABLE").

That is exactly what it does ( unless your transaction aborts in the command ).

> The idea that this can't be done at
> commit-time because people might use the NOTICE response as some sort
> of branch is (IMO) logically bankrupt: you can quite happily test for
> existence without requiring any sort of atomic DROP, if that's your
> intention.

You are contradicting yourself. First you say that after the command it
must not exist. Then you say to do it at commit time. If it is done at
commit time, you cannot guarantee that it does not exist after the command.
And using the branch for testing is not logically bankrupt; atomic
operations with responses are there for a reason. This is why
processors have 'test and set' and 'compare-exchange' and similar.
This one is similar to a test-and-set: you set existence to false and
test whether it existed before. I can easily test and then set, but that
is not the same as TAS. And the notice is not the reason it is not done
at commit time; the reason is the one you said: action must be taken
when you issue the command, not at some magically convenient time in the future.


Francisco Olarte.




Re: [GENERAL] Transactional-DDL DROP/CREATE TABLE

2016-10-06 Thread Kevin Grittner
On Thu, Oct 6, 2016 at 9:31 AM, Adrian Klaver  wrote:

> This is how I can trigger the ERROR:
>
> Session 1:
>
> test=# begin ;
> BEGIN
> test=# drop table if exists ddl_test;
> NOTICE:  table "ddl_test" does not exist, skipping
> DROP TABLE
> test=# create table ddl_test(id int);
> CREATE TABLE
> test=# commit ;
> COMMIT
>
> Session 2 (concurrent to session1):
>
> test=# begin ;
> BEGIN
> test=# drop table if exists ddl_test;
> NOTICE:  table "ddl_test" does not exist, skipping
> DROP TABLE
> test=# create table ddl_test(id int);
> ERROR:  duplicate key value violates unique constraint
> "pg_type_typname_nsp_index"
> DETAIL:  Key (typname, typnamespace)=(ddl_test, 2200) already exists.
> test=# commit ;
> ROLLBACK

I recommend using a transactional advisory lock to serialize these.
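A minimal sketch of that recommendation, applied to the session above (the lock key 42 is arbitrary; every transaction that drops/creates ddl_test must take the same key):

```sql
BEGIN;
-- pg_advisory_xact_lock() blocks until any other holder commits or
-- rolls back, and is released automatically at transaction end.
SELECT pg_advisory_xact_lock(42);
DROP TABLE IF EXISTS ddl_test;
CREATE TABLE ddl_test (id int);
COMMIT;
```

With both sessions wrapped this way, the second session's DROP/CREATE simply waits at the lock acquisition instead of failing with the duplicate-key error.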

-- 
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [GENERAL] Transactional-DDL DROP/CREATE TABLE

2016-10-06 Thread Francisco Olarte
Adrian:

On Thu, Oct 6, 2016 at 4:31 PM, Adrian Klaver  wrote:
> This is how I can trigger the ERROR:

This is how you can trigger the ISSUE; IMO it is correct behaviour.

Anyway, your example lacks some important details:
1.- SHOW your isolation level.
2.- SHOW your command interleaving.

Here is an example session where it works as you intend, with the
table existing before the start, showing those details:

\set PROMPT1 'session1-%`date +%H:%M:%S` [%x]'
session1-17:27:26 []start transaction isolation level serializable;
START TRANSACTION
session1-17:27:35 [*]drop table if exists ddl_test;
DROP TABLE
session1-17:27:44 [*]create table ddl_test(id int);
CREATE TABLE
session1-17:28:03 [*]commit;
COMMIT
session1-17:28:10
-
\set PROMPT1 'session2-%`date +%H:%M:%S` [%x]'
session2-17:27:29 []start transaction isolation level serializable;
START TRANSACTION
session2-17:27:39 [*]drop table if exists ddl_test;
GAP
DROP TABLE
session2-17:28:10 [*]create table ddl_test(id int);
CREATE TABLE
session2-17:28:23 [*]commit;
COMMIT
session2-17:28:28

At the ***GAP*** mark, session 2 was blocked, and it unblocked when I
issued COMMIT in session 1. ( Note that each command's end timestamp is
the one at the start of the next line; except for the one I marked, all
commands completed nearly instantly. )

Note how you can follow the command interleaving and the isolation level.

OTOH, as you point, not having the table shows the issue again:

session1-17:33:56 []start transaction isolation level serializable;
START TRANSACTION
session1-17:33:59 [*]drop table if exists ddl_test;
NOTICE:  table "ddl_test" does not exist, skipping
DROP TABLE
session1-17:34:08 [*]create table ddl_test(id int);
CREATE TABLE
session1-17:34:19 [*]commit;
COMMIT

session2-17:28:28 []start transaction isolation level serializable;
START TRANSACTION
session2-17:34:04 [*]drop table if exists ddl_test;
NOTICE:  table "ddl_test" does not exist, skipping
DROP TABLE
session2-17:34:13 [*]create table ddl_test(id int);
ERROR:  duplicate key value violates unique constraint
"pg_type_typname_nsp_index"
DETAIL:  Key (typname, typnamespace)=(ddl_test, 2200) already exists.
session2-17:34:30 [!]

This time session 2 stopped at the CREATE TABLE and aborted directly
when session 1 committed. Correct, IMO, although unaesthetic behaviour.
I think it is due to the DROP being a no-op when the table did not exist:
commands are not postponed ( the server must show you the notice, or not,
before completing ), so you are effectively issuing two CREATE commands
for the same table.


Your serialized, postponed execution would be a nice thing to have, but I doubt it is necessary.

Francisco Olarte.




Re: [GENERAL] Transactional-DDL DROP/CREATE TABLE

2016-10-06 Thread Geoff Winkless
On 6 October 2016 at 15:04, Francisco Olarte  wrote:
> And anyway, what isolation level are you working at? Because it seems
> you are using a weaker one than serializable; I think serializable
> should give you more or less what you are expecting ( not at commit
> time, but the second drop could probably get stuck until the first
> transaction commits ).

Nope. Serializable ignores the DROP, and then freezes on CREATE (and
then fails when the first transaction COMMITs). Which is also broken,
because the transaction should fail if (at COMMIT time) the table
cannot be CREATEd, but that's no reason to not create a table within a
temporary namespace and perform actions against that table until the
COMMIT, at which point the table can either be validated systemwide or
the transaction rolled back.

> And drop table if exists means if it exists when the
> server executes your command, not on the future ( the server cannot
> know if it will exist then, your own transaction may recreate it or
> not. Maybe you know your command sequence is not going to depend on
> intermediate results, the server does not ).

Then that effectively makes the IF EXISTS useless, because it might in
fact exist by the time the transaction is committed.

> Bear in mind you
> are supposed to be informed of the result of your commands. I mean,
> you issue drop if exists and then you are allowed to issue a different
> command depending on the result of the drop, i.e., you may be willing
> to recreate the table if it existed and not create it if not, so the
> drop must record your intentions and lock the table definition, like a
> delete does with data rows.

The point of a DROP ... IF EXISTS should surely be that after the
command, that table should no longer exist, either because it didn't
exist or because it has been dropped (indeed, the result of "DROP...IF
EXISTS" is "DROP TABLE"). The idea that this can't be done at
commit-time because people might use the NOTICE response as some sort
of branch is (IMO) logically bankrupt: you can quite happily test for
existence without requiring any sort of atomic DROP, if that's your
intention.
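For instance, a sketch of such an explicit existence test (available since PostgreSQL 9.4; to_regclass() returns NULL when the name does not resolve, and "mytable" is the table from earlier in the thread):

```sql
SELECT to_regclass('public.mytable') IS NOT NULL AS table_exists;
```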

Geoff




Re: [GENERAL] Transactional-DDL DROP/CREATE TABLE

2016-10-06 Thread Adrian Klaver

On 10/06/2016 02:21 AM, Geoff Winkless wrote:

Hi

I have code that does (inside a single transaction)

DROP TABLE IF EXISTS mytable; CREATE TABLE mytable 

Occasionally this produces

ERROR: duplicate key value violates unique constraint
"pg_type_typname_nsp_index" DETAIL: Key (typname,
typnamespace)=(mytable, 2200) already exists.

I can get away from this by using CREATE TABLE IF NOT EXISTS in the
same code, but there's the potential that the wrong data will end up
in the table if that happens, and it also seems a little odd.

Would you not expect this transaction to be atomic? ie at commit time,
the transaction should drop any table with the same name that has been
created by another transaction.


This is how I can trigger the ERROR:

Session 1:

test=# begin ;
BEGIN
test=# drop table if exists ddl_test;
NOTICE:  table "ddl_test" does not exist, skipping
DROP TABLE
test=# create table ddl_test(id int);
CREATE TABLE
test=# commit ;
COMMIT

Session 2 (concurrent to session1):

test=# begin ;
BEGIN
test=# drop table if exists ddl_test;
NOTICE:  table "ddl_test" does not exist, skipping
DROP TABLE
test=# create table ddl_test(id int);
ERROR:  duplicate key value violates unique constraint 
"pg_type_typname_nsp_index"

DETAIL:  Key (typname, typnamespace)=(ddl_test, 2200) already exists.
test=# commit ;
ROLLBACK


So not having the table when you start both sessions seems to be the issue.




Geoff





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Transactional-DDL DROP/CREATE TABLE

2016-10-06 Thread Francisco Olarte
Hi Geoff:

On Thu, Oct 6, 2016 at 1:58 PM, Geoff Winkless  wrote:
> But surely Transactional DDL implies that (it should appear that) nothing
> happens until transaction-commit. That means "drop table if exists" should
> drop the table if it exists at commit time, not drop the table if it didn't
> exist when the code was first run.

I'm not sure even transactional DML works that way. Bear in mind you
are supposed to be informed of the result of your commands. I mean,
you issue drop if exists and then you are allowed to issue a different
command depending on the result of the drop, i.e., you may be willing
to recreate the table if it existed and not create it if not, so the
drop must record your intentions and lock the table definition, like a
delete does with data rows.

> If the other transaction hasn't committed, then it should either fail with
> rollback when committed (because it tried to create a table that exists at
> commit time) or drop the new table (because it also has a drop clause).

It depends on the implementation. IIRC, with the serializable isolation
level you are guaranteed a final result coherent with some serial
order of execution of all the completed transactions, but even there
you are not guaranteed that it will find an order of execution for all
of them; some may be aborted. Trying to do what you propose would result
in an extremely deadlock-prone system.

And anyway, what isolation level are you working at? It seems you are
using a weaker one than serializable; I think serializable should give
you more or less what you are expecting (not at commit time, but the
second drop would probably get stuck until the first transaction
commits).

And surely transactional D*L does not imply what you state at all
isolation levels. And DROP TABLE IF EXISTS means "if it exists when the
server executes your command", not in the future: the server cannot
know whether it will exist then; your own transaction may recreate it
or not. Maybe you know your command sequence is not going to depend on
intermediate results, but the server does not.
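For what it's worth, one common way to make the concurrent DROP/CREATE pairs serialize cleanly (a sketch, not something proposed in this thread) is a transaction-scoped advisory lock: the second session blocks on the lock until the first commits, and then sees and drops the committed table. The lock key is an arbitrary application-chosen value; every session touching this table must use the same one:

```sql
BEGIN;
-- pg_advisory_xact_lock is released automatically at COMMIT/ROLLBACK;
-- hashtext() just turns the table name into a lock key.
SELECT pg_advisory_xact_lock(hashtext('ddl_test'));
DROP TABLE IF EXISTS ddl_test;
CREATE TABLE ddl_test(id int);
COMMIT;
```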


Francisco Olarte.




Re: [GENERAL] Transactional-DDL DROP/CREATE TABLE

2016-10-06 Thread Geoff Winkless
On 6 Oct 2016 12:06 p.m., "Francisco Olarte"  wrote:
>
> On Thu, Oct 6, 2016 at 11:21 AM, Geoff Winkless 
wrote:
> > DROP TABLE IF EXISTS mytable; CREATE TABLE mytable 
> >
> > Occasionally this produces
> >
> > ERROR: duplicate key value violates unique constraint
> > "pg_type_typname_nsp_index" DETAIL: Key (typname,
> > typnamespace)=(mytable, 2200) already exists.
> >
> > I can get away from this by using CREATE TABLE IF NOT EXISTS in the
> > same code, but there's the potential that the wrong data will end up
> > in the table if that happens, and it also seems a little odd.
> >
> > Would you not expect this transaction to be atomic? ie at commit time,
> > the transaction should drop any table with the same name that has been
> > created by another transaction.
>
> It seems to be atomic: either it drops and creates, or it does nothing. What
> you want is beyond atomicity. What does the other transaction do?
> What if the other transaction hasn't committed? Or it has created the
> table anew (no drop; the table wasn't there)? What are the isolation
> levels involved?

But surely Transactional DDL implies that (it should appear that) nothing
happens until transaction-commit. That means "drop table if exists" should
drop the table if it exists at commit time, not drop the table if it didn't
exist when the code was first run.

If the other transaction hasn't committed, then it should either fail with
rollback when committed (because it tried to create a table that exists at
commit time) or drop the new table (because it also has a drop clause).

Geoff


Re: [GENERAL] Transactional-DDL DROP/CREATE TABLE

2016-10-06 Thread Francisco Olarte
On Thu, Oct 6, 2016 at 11:21 AM, Geoff Winkless  wrote:
> DROP TABLE IF EXISTS mytable; CREATE TABLE mytable 
>
> Occasionally this produces
>
> ERROR: duplicate key value violates unique constraint
> "pg_type_typname_nsp_index" DETAIL: Key (typname,
> typnamespace)=(mytable, 2200) already exists.
>
> I can get away from this by using CREATE TABLE IF NOT EXISTS in the
> same code, but there's the potential that the wrong data will end up
> in the table if that happens, and it also seems a little odd.
>
> Would you not expect this transaction to be atomic? ie at commit time,
> the transaction should drop any table with the same name that has been
> created by another transaction.

It seems to be atomic: either it drops and creates, or it does nothing. What
you want is beyond atomicity. What does the other transaction do?
What if the other transaction hasn't committed? Or it has created the
table anew (no drop; the table wasn't there)? What are the isolation
levels involved?

If all the transactions operating on the table are doing just what you
show and nothing more, and they are all serializable, I MAY expect
that, but wouldn't put much money on it.

Francisco Olarte.




Re: [GENERAL] ZSON, PostgreSQL extension for compressing JSONB

2016-10-06 Thread Aleksander Alekseev
Hello, Eduardo.

> Why do you use a dictionary compression and not zlib/lz4/bzip/anyother?

Internally, PostgreSQL already has an LZ77-family algorithm: PGLZ. I
didn't try to replace it, only to supplement it. PGLZ compresses every
piece of data (JSONB documents in this case) independently. What I did
was remove the redundant data that exists between documents and that
PGLZ can't compress, since every single document usually uses each key
and similar strings (some sort of string tags in arrays, etc.) only once.
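The cross-document redundancy idea can be sketched in a few lines. This is a toy illustration with invented names, not ZSON's actual encoding:

```python
# Toy sketch of shared-dictionary compression for JSON-like documents:
# a dictionary learned from the corpus maps frequently seen keys to small
# integer codes, so each document stores a code instead of repeating the
# full key string. NOT ZSON's real on-disk format.

def build_dictionary(docs):
    """Assign an integer code to every key seen in the corpus."""
    codes = {}
    for doc in docs:
        for key in doc:
            if key not in codes:
                codes[key] = len(codes)
    return codes

def compress(doc, codes):
    # Replace known keys with their codes; unknown keys pass through as-is.
    return {codes.get(k, k): v for k, v in doc.items()}

def decompress(cdoc, codes):
    names = {v: k for k, v in codes.items()}  # invert code -> key
    return {names.get(k, k): v for k, v in cdoc.items()}

docs = [
    {"customer_name": "alice", "customer_address": "x"},
    {"customer_name": "bob", "customer_address": "y"},
]
codes = build_dictionary(docs)
packed = [compress(d, codes) for d in docs]
assert [decompress(p, codes) for p in packed] == docs
```

The long repeated key strings appear once (in the dictionary) instead of once per document, which is exactly the redundancy a per-document compressor cannot see.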

> Compress/Decompress speed?

In my observations, PGLZ has characteristics similar to GZIP. I didn't
benchmark ZSON encoding/decoding separately from the DBMS, because the
end user is interested only in TPS, which depends on IO, the number of
documents we can fit into memory, and other factors.

> As I understand, postgresql must decompress before use.

Only if you try to read document fields. For deleting a tuple, doing
vacuum, etc., there is no need to decompress the data.

> Some compressing algs (dictionary transforms where a token is word)
> allow search for tokens/words directly on compressed data transforming
> the token/word to search in dictionary entry and searching it in
> compressed data. From it, replace, substring, etc... string
> manipulations algs at word level can be implemented.

Unfortunately, I doubt that the current ZSON implementation can use
these ideas. However, I must agree that it's a very interesting field
of research; I don't think anyone has tried to do something like this
in PostgreSQL yet.

> My passion is compression, do you care if I try other algorithms? For
> that, some dict id numbers (>1024 or >1<<16 or <128 for example) say
> which compression algorithm is used or must change zson_header to store
> that information. Doing that, each document could be compressed with
> the best compressor (size or decompression speed) at idle times or at
> request.

By all means! Naturally, if you find a better encoding I would be happy
to merge the corresponding code into ZSON's repository.

> Thanks for sharing and time.

Thanks for feedback and sharing your thoughts!

-- 
Best regards,
Aleksander Alekseev




[GENERAL] Transactional-DDL DROP/CREATE TABLE

2016-10-06 Thread Geoff Winkless
Hi

I have code that does (inside a single transaction)

DROP TABLE IF EXISTS mytable; CREATE TABLE mytable 

Occasionally this produces

ERROR: duplicate key value violates unique constraint
"pg_type_typname_nsp_index" DETAIL: Key (typname,
typnamespace)=(mytable, 2200) already exists.

I can get away from this by using CREATE TABLE IF NOT EXISTS in the
same code, but there's the potential that the wrong data will end up
in the table if that happens, and it also seems a little odd.

Would you not expect this transaction to be atomic? ie at commit time,
the transaction should drop any table with the same name that has been
created by another transaction.

Geoff




Re: [GENERAL] ZSON, PostgreSQL extension for compressing JSONB

2016-10-06 Thread Simon Riggs
On 5 October 2016 at 16:58, Aleksander Alekseev
 wrote:

> What about evolving schema of JSON/JSONB/XML? For instance,
> adding/removing keys in new versions of the application. UPDATE
> COMPRESSION DICTIONARY?

You can add to a dictionary, but not remove things. I'm not sure
that's a big issue.
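The reason add-only works can be sketched with a toy dictionary: codes that have been handed out are never reassigned, so documents compressed against an older dictionary version remain decodable after new entries are appended. Again an invented illustration, not ZSON's implementation:

```python
# Toy append-only dictionary: new entries get fresh codes, existing
# codes are never reassigned, so old compressed documents stay valid.

class AppendOnlyDict:
    def __init__(self):
        self.codes = {}   # key  -> code
        self.names = []   # code -> key

    def add(self, key):
        if key not in self.codes:          # never reassign an existing code
            self.codes[key] = len(self.names)
            self.names.append(key)
        return self.codes[key]

    def encode(self, doc):
        return {self.add(k): v for k, v in doc.items()}

    def decode(self, cdoc):
        return {self.names[c]: v for c, v in cdoc.items()}

d = AppendOnlyDict()
old = d.encode({"status": "ok"})
d.add("new_field")                         # dictionary grows later
assert d.decode(old) == {"status": "ok"}   # old document still decodes
```

Removing an entry, by contrast, would require proving no stored document still references its code, which is why removal is the hard part.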

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [GENERAL] postgres failed to start from services manager on windows 2008 r2

2016-10-06 Thread PHANIKUMAR G
hi Thomas,

  thanks for your reply. A small correction to my previous reply: we use the
PostgreSQL zip and bundle it with our product's installer; during installation
of our product, this zip is extracted on the target host. The data directory
is under pgsql, along with bin and the other directories.



We created/registered the service with the commands below. And could you
please tell me how to create the postgres service with "Local Network Service"?

pg_ctl.exe register -N "prostgresService" -D "fullpath_to_data_directory" -W

sc description prostgresService "prostgresService Application Server Database Service"
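For the account question: pg_ctl register also accepts -U (and -P), so the service can be registered directly under the built-in Network Service account instead of Local System. A sketch only, reusing the service name and placeholder path from above; verify the exact account spelling on your system, and note the account then needs write access to the data directory:

```bat
rem Register the service under the built-in Network Service account
rem (built-in accounts need no password).
pg_ctl.exe register -N "prostgresService" -U "NT AUTHORITY\NetworkService" -D "fullpath_to_data_directory" -W

rem Grant that account full control of the data directory, e.g.:
icacls "fullpath_to_data_directory" /grant "NETWORK SERVICE:(OI)(CI)F"
```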

On Wed, Oct 5, 2016 at 4:04 PM, PHANIKUMAR G 
wrote:

> hi,
>   thanks for your reply. We use the postgres zip; during install time of our
> product, this zip will be extracted on the target host. We are not creating
> the data directory; when the service is started, the data directory will be
> created by default.
>
> we created/registered  the service with below commands
>
> pg_ctl.exe register -N "prostgresService" -D "fullpath_to_data_directory" -W
>
> sc description prostgresService "prostgresService Application Server Database Service"
>
>
> On Sat, Oct 1, 2016 at 9:36 PM, Thomas Kellerer 
> wrote:
>
>> PHANIKUMAR G wrote on 01.10.2016 at 17:30:
>>
>>> We then tried to start the service with the following changes.
>>>
>>>
>>>  a. we changed the "Log on as" type to "This account"
>>>  b. changed the account name to ".\<account with administrator privileges>"
>>>  c. And entered the password for this account and saved.
>>>
>>> After that we are able to start the postgresql service successfully.
>>>
>>> Found that the pg_log directory is created under data directory with
>>> postgres-Sep.log
>>>
>>>
>>> Why is it failing with the local system account? The local system account
>>> is part of the administrators group. If we specifically provide
>>> credentials to the service as explained above, the service starts.
>>> Please help me understand what is causing this.
>>>
>>
>>
>> Where is the data directory? And how did you create the data directory?
>>
>> Check the privileges on the data directory.
>> This sounds as if the local system account does not have the privilege to
>> write to the data directory.
>>
>> The local system account (or "Administrator") doesn't have the privilege
>> to read and write all files.
>> Those accounts only have the ability to grant themselves these privileges.
>>
>> Also: if I'm not mistaken, Postgres 9.3 creates the service with the
>> "Local Network Service", not with "Local System Account"
>>
>> So how did you create the initial service?
>>
>>
>>
>>
>>
>>
>>
>
>