Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread Jason Dusek
With regards to write amplification, it makes me think about OIDs.
It used to be that every row had an OID, and that OID persisted across row
versions.


https://www.postgresql.org/docs/9.5/static/runtime-config-compatible.html#GUC-DEFAULT-WITH-OIDS

Would reintroducing such a feature address some of Uber's concerns about
multiple indexes? It could, and would do so without the implicit
requirement of a foreign key; but it would also require a fast OID to CTID
mapping.
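
For anyone who never used the old behavior, a minimal sketch (the table and
data here are made up, and this only works on versions that still support
OIDs):

    SET default_with_oids = on;
    CREATE TABLE rides (id bigint, fare numeric);
    INSERT INTO rides VALUES (1, 10.50);
    UPDATE rides SET fare = 12.00 WHERE id = 1;
    -- The oid is carried over to the new row version; the ctid now
    -- points at the new physical location:
    SELECT oid, ctid, id, fare FROM rides;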

On Tue, 26 Jul 2016 at 10:40 Guyren Howe  wrote:

> Honestly, I've never heard of anyone doing that. But it sounds like they
> had good reasons.
>
> https://eng.uber.com/mysql-migration/
>
> Thoughts?
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread James Keener
So, millions is a lot, but it's not difficult to get to a place where
you have thousands of tables.

Imagine a case involving census data and the associated geometries.
https://github.com/censusreporter/census-postgres has 22 surveys, each
with 230+ tables. That's 5000+ tables right there.  Now, the TIGER
tables for all of that are another 50 tables per year, so another 350
tables.

If these were to be partitioned by state, instead of all records for
all states living in a single table, then we're looking at roughly
270,000 tables.

Jim

On Thu, Jul 28, 2016 at 12:48 AM, John R Pierce  wrote:
> On 7/27/2016 9:39 PM, Jeff Janes wrote:
>>
>> That depends on how many objects there are consuming that 1 TB.
>> With millions of small objects, you will have problems.  Not as many
>> in 9.5 as there were in 9.1, but still it does not scale linearly in
>> the number of objects.  If you only have thousands of objects, then as
>> far as I know -k works like a charm.
>
>
> millions of tables?  that's akin to having millions of classes in an
> object-oriented program, seems a bit excessive.
>
>
>
> --
> john r pierce, recycling bits in santa cruz
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread John R Pierce

On 7/27/2016 9:39 PM, Jeff Janes wrote:

That depends on how many objects there are consuming that 1 TB.
With millions of small objects, you will have problems.  Not as many
in 9.5 as there were in 9.1, but still it does not scale linearly in
the number of objects.  If you only have thousands of objects, then as
far as I know -k works like a charm.


millions of tables?  that's akin to having millions of classes in an 
object-oriented program, seems a bit excessive.




--
john r pierce, recycling bits in santa cruz



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread Jeff Janes
On Wed, Jul 27, 2016 at 7:23 PM, Greg Sabino Mullane  wrote:

> Marc wrote:
>> I don't have a 1TB database to try it on, mind you, so your
>> "wait couple of days" might be *with* the --link option?
>
> I think you mean *without*, but yeah, there is no way the --link
> option is going to take that long.

That depends on how many objects there are consuming that 1 TB.
With millions of small objects, you will have problems.  Not as many
in 9.5 as there were in 9.1, but still it does not scale linearly in
the number of objects.  If you only have thousands of objects, then as
far as I know -k works like a charm.

> Hard links are  awesome. We've upgraded
> some really big databases, and --link is really, really fast.
> If you can't use --link (usually because you want to get over
> the checksum hump), we use something like Bucardo to help out.
> No need to ever wait a "couple of days" as OP claims. :)
>
> What really bites is the analyze afterwards. That's the part
> that takes too long (yes, --in-stages helps some). Would love
> to see progress made there.

I'm of two minds about --in-stages.  On the one hand, it is totally
unprincipled.  Only two stages are in principle plausible: the stage
before you open your database for general use, and the stage after you
do so.  (You could say there is a third stage, the one in which you do
EOW/EOM/EOY processing.  So don't run pg_upgrade on the second-to-last
day of the week or month, or towards the end of the second-to-last
week of the year.)  The current implementation of --in-stages uses
four stages, not two, and offers no convenient way to pause between
stages to open your database for general use.  On the other hand,
--in-stages is supposed to be pragmatic, not principled.  It is
supposed to work adequately over a wide variety of scenarios, even if
you don't know ahead of time which scenario applies to you.
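
For readers who haven't run it: pg_upgrade emits a small script that drives
this, but the staged analyze can also be invoked directly; a sketch,
assuming default connection settings:

    # Each stage raises the statistics target, so rough statistics are
    # available quickly and fuller statistics arrive later:
    vacuumdb --all --analyze-in-stages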

Cheers,

Jeff


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread Bruce Momjian
On Thu, Jul 28, 2016 at 02:23:18AM -, Greg Sabino Mullane wrote:
> 
> Marc wrote:
> > I don't have a 1TB database to try it on, mind you, so your
> > "wait couple of days" might be *with* the --link option?
> 
> I think you mean *without*, but yeah, there is no way the --link 
> option is going to take that long. Hard links are  awesome. We've upgraded 
> some really big databases, and --link is really, really fast. 
> If you can't use --link (usually because you want to get over 
> the checksum hump), we use something like Bucardo to help out. 
> No need to ever wait a "couple of days" as OP claims. :)
> 
> What really bites is the analyze afterwards. That's the part 
> that takes too long (yes, --in-stages helps some). Would love 
> to see progress made there.

Agreed.  I will try to prioritize it.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread Greg Sabino Mullane

-BEGIN PGP SIGNED MESSAGE-
Hash: RIPEMD160


Marc wrote:
> I don't have a 1TB database to try it on, mind you, so your
> "wait couple of days" might be *with* the --link option?

I think you mean *without*, but yeah, there is no way the --link 
option is going to take that long. Hard links are  awesome. We've upgraded 
some really big databases, and --link is really, really fast. 
If you can't use --link (usually because you want to get over 
the checksum hump), we use something like Bucardo to help out. 
No need to ever wait a "couple of days" as OP claims. :)

What really bites is the analyze afterwards. That's the part 
that takes too long (yes, --in-stages helps some). Would love 
to see progress made there.

- -- 
Greg Sabino Mullane g...@turnstep.com
End Point Corporation http://www.endpoint.com/
PGP Key: 0x14964AC8 201607272220
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8
-BEGIN PGP SIGNATURE-

iEYEAREDAAYFAleZbGEACgkQvJuQZxSWSsiAngCgmsP0O9rMMmu43RQ6LkQJwMtP
IZcAn25UJWNs03q75iIEOVAabKHaBrWV
=zdDO
-END PGP SIGNATURE-




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Multiple clusters with same tablespace location

2016-07-27 Thread Nate Dudenhoeffer
Tom, thanks for the advice. I brought up a new instance yesterday, with the
intent of trying it, and discovered that Wal-e with the "blind-restore"
option would put everything in the pg_tblspc directory, instead of
symlinking it. For this use case, that worked great.

Nate

On Wed, Jul 20, 2016 at 11:16 AM, Tom Lane  wrote:

> Nate Dudenhoeffer  writes:
> > The issue is that both clusters are using a base_backup and wal restore
> > from the same master database, so when they are restored, the tablespace
> > will already exist. Is there a way to change the tablespace location
> during
> > the recovery process?
>
> You would definitely need each slave to have its own copy of the
> tablespace.  I've not done this myself and would strongly recommend
> testing on non-production instances, but I believe you can make it work
> by adjusting each slave's $PGDATA/pg_tblspc symlinks to point to different
> locations.  When setting up new slave instances, pg_basebackup's
> --tablespace-mapping option would help you with that.  For an existing
> slave instance, you'd need to shut it down while manually moving the
> tablespace directory(s) and re-pointing the symlink(s).
>
> regards, tom lane
>
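
For anyone wanting the concrete shape of that mapping, a sketch with
made-up host and paths:

    pg_basebackup -h master.example.com -U repluser -D /pg/slave1/data \
        --tablespace-mapping=/pg/master/tblspc=/pg/slave1/tblspc \
        -X stream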


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread Bruce Momjian
On Wed, Jul 27, 2016 at 07:02:52PM -0400, Alvaro Herrera wrote:
> Patrick B wrote:
> > >
> > > I think it's safe to say that that has absolutely nothing to do
> > > with the size being 3TB.  The symptoms you report are a little
> > > thin to diagnose the actual cause.
> > 
> > might be... we're using SATA disks... and that's a big problem. But still...
> > the size of the DB is indeed a problem.
> 
> Andrew is correct -- the size of the database is not a limitation for
> pg_upgrade.  Disk tech is not relevant either.  You may run into the
> problem that you don't have enough disk space, but then that is not a
> database or pg_upgrade problem, is it?
> 
> Other things might cause issues, but since you haven't actually reported
> the problem, we don't know what is or whether there is any possible fix.
> 
> Then again, if you want to report a pg_upgrade failure, I suggest you
> open a thread of your own rather than hijack this one.

You need only minimal disk space when using pg_upgrade --link.  I agree
we would like a full bug report so we can find a fix for you.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread Alvaro Herrera
Patrick B wrote:
> >
> > I think it's safe to say that that has absolutely nothing to do
> with the size being 3TB.  The symptoms you report are a little
> > thin to diagnose the actual cause.
> 
> might be... we're using SATA disks... and that's a big problem. But still...
> the size of the DB is indeed a problem.

Andrew is correct -- the size of the database is not a limitation for
pg_upgrade.  Disk tech is not relevant either.  You may run into the
problem that you don't have enough disk space, but then that is not a
database or pg_upgrade problem, is it?

Other things might cause issues, but since you haven't actually reported
the problem, we don't know what is or whether there is any possible fix.

Then again, if you want to report a pg_upgrade failure, I suggest you
open a thread of your own rather than hijack this one.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Locking issue

2016-07-27 Thread Thomas Munro
On Thu, Jul 28, 2016 at 10:18 AM, Josh Berkus  wrote:
> On 07/26/2016 01:16 AM, David Harrison wrote:
>   where sps.sessionid = ses and
> sps.rankid = rank and
>   ...
> pg_try_advisory_lock(seatid)
>   order by s.row_number, s.seat_number_in_row
>   limit 1
>   for update of sps;

Don't you want pg_try_advisory_xact_lock (note "xact") here?
Otherwise you hold the advisory lock for the rest of the session,
unless you explicitly release it later.

> You appear to be trying to implement your own "SKIP LOCKED" (which you
> should maybe use instead).

+1

One difference between WHERE foo = 42 AND
pg_try_advisory_xact_lock(...) ... FOR UPDATE and WHERE foo = 42 ...
FOR UPDATE SKIP LOCKED is that the order of evaluation of the bits of
the WHERE clause linked by AND is probably undefined, so you could in
theory be acquiring advisory locks corresponding to rows that don't
have foo = 42, depending on how the evaluator ends up processing
them.  That could mess things up a bit for concurrent sessions.  SKIP
LOCKED, on the other hand, operates after any WHERE clause filtering.
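
To make the contrast concrete, a sketch (the table and column names here
are made up):

    -- Advisory-lock variant: the lock function may be evaluated before
    -- (or without) the foo = 42 test, locking unrelated ids:
    SELECT id FROM jobs
     WHERE foo = 42 AND pg_try_advisory_xact_lock(id)
     LIMIT 1
       FOR UPDATE;

    -- SKIP LOCKED (9.5+): row locks are only attempted on rows that
    -- have already passed the WHERE clause:
    SELECT id FROM jobs
     WHERE foo = 42
     LIMIT 1
       FOR UPDATE SKIP LOCKED;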

> I'm not sure this works as-is; SELECT FOR UPDATE with LIMIT/ORDER is
> always fairly tricky, and tends to block the whole set, not just the
> LIMITed row.

There are complications with ORDER BY ... FOR UPDATE, because row locking happens later
than ordering and causes you to see newer values that still match the
WHERE clause.  It gives you an isolation level more similar to READ
COMMITTED in non-snapshot based databases, except that the plan
implementing the ORDER BY didn't get the memo, and you'd have to fix
that with an outer query that sorts.  But I don't see why it would
block the whole set: LockRows sits on top and only gets its hands on
tuples emitted by nodes below it, so if there is a LIMIT then how
could it lock anything outside the limited set of rows that are
returned?

-- 
Thomas Munro
http://www.enterprisedb.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread Patrick B
>
> I think it's safe to say that that has absolutely nothing to do
> with the size being 3TB.  The symptoms you report are a little
> thin to diagnose the actual cause.


might be... we're using SATA disks... and that's a big problem. But still...
the size of the DB is indeed a problem.


Re: [GENERAL] Locking issue

2016-07-27 Thread Josh Berkus
On 07/26/2016 01:16 AM, David Harrison wrote:
> Hi Josh,
> 
> Attached is the function and below the query that calls it, below that the 
> result of SELECT version();
> 
> SELECT tl_guest_list('13313880', '174880', null, '151094636600', null, null);
> 
> 
> 
> 
> 
> "PostgreSQL 8.4.9 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 
> 4.4.7 20120313 (Red Hat 4.4.7-3), 64-bit"
> 
> We have a job queue manager (beanstalkd) that we push jobs onto for it to 
> process. Previously there was only one worker, so tl_guest_list would only get 
> called once at a time; after execution it would return the results and the 
> next job would run, firing tl_guest_list again. 
> 
> I’ve now upped the number of workers to 10, so it could be that 10 jobs pushed 
> into the queue are all processed concurrently, firing tl_guest_list up to 10 
> times at the same time. I expect that the update on ste_seatspersessions is 
> locking the table and I expect the function to wait at the select on 
> ste_seatspersessions. However the function is processing the select query and 
> returning null. Seems like an issue with table locking, or with table 
> locking being ignored.


This is interesting:

  select seatid
  into seat
  from ste_seatspersessions sps join
ste_seats s using (seatid) join
ste_usergroupsaccessseatsets uss using (seat_setid)
  where sps.sessionid = ses and
sps.rankid = rank and
  ...
pg_try_advisory_lock(seatid)
  order by s.row_number, s.seat_number_in_row
  limit 1
  for update of sps;

You appear to be trying to implement your own "SKIP LOCKED" (which you
should maybe use instead).

I'm not sure this works as-is; SELECT FOR UPDATE with LIMIT/ORDER is
always fairly tricky, and tends to block the whole set, not just the
LIMITed row.

What I suggest is that you walk this through several concurrent sessions
yourself.  Use explicit transactions so that each concurrent session
will hold onto its locks.
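
A sketch of that kind of walkthrough, using two psql sessions (the session
id value here is made up):

    -- Session 1:
    BEGIN;
    SELECT seatid
      FROM ste_seatspersessions
     WHERE sessionid = 13313880
     LIMIT 1
       FOR UPDATE;
    -- leave the transaction open; the row lock is held until COMMIT

    -- Session 2, concurrently:
    BEGIN;
    SELECT seatid
      FROM ste_seatspersessions
     WHERE sessionid = 13313880
     LIMIT 1
       FOR UPDATE;
    -- observe whether this blocks, and which row it eventually returns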

-- 
--
Josh Berkus
Red Hat OSAS
(any opinions are my own)


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread Kevin Grittner
On Wed, Jul 27, 2016 at 4:34 PM, Patrick B  wrote:
> We can't use pg_upgrade in our 3TB database; it just does not work...
> that's the main reason we're still using 9.2.

I think it's safe to say that that has absolutely nothing to do
with the size being 3TB.  The symptoms you report are a little
thin to diagnose the actual cause.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


pg_upgrade and not working (was Re: [GENERAL] Uber migrated from Postgres to MySQL)

2016-07-27 Thread Andrew Sullivan
I bet there are some people around here who could help you
troubleshoot.  Just sayin'  (I'm not among them.)

A

On Thu, Jul 28, 2016 at 09:34:05AM +1200, Patrick B wrote:
> We can't use pg_upgrade in our 3TB database; it just does not work...
> that's the main reason we're still using 9.2.

-- 
Andrew Sullivan
a...@crankycanuck.ca


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] question on parsing postgres sql queries

2016-07-27 Thread Alvaro Herrera
Kevin Grittner wrote:

> On the other hand, try connecting to a database with
> psql and typing:
> 
> \h create index
> 
> ... (or any other command name).  The help you get there is fished
> out of the docs.

BTW I noticed a few days ago that we don't have a "where BLAH can be one
of" section for the window_definition replaceable term in the help for
SELECT.  We omit these sections for trivial clauses, but I think WINDOW
is elaborate enough that it should have one.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] question on parsing postgres sql queries

2016-07-27 Thread Kevin Grittner
On Tue, Jul 26, 2016 at 4:20 AM, Jerome Wagner  wrote:

> I am doing some research on postgres sql query parsing.

> I was wondering what people think of the conformance with regards to the
> real parser of the documentations on
>  - https://www.postgresql.org/docs/current/static/sql-select.html
>  - https://www.postgresql.org/docs/current/static/sql-copy.html
> ... and more generally sgmls in
> https://github.com/postgres/postgres/tree/master/doc/src/sgml/ref
>
> Would it make sense to use these sgml synopses as some kind of source of
> truth, parse them, and automatically generate a parser for a specific
> language?

It might be interesting to do as an academic exercise or to audit
the accuracy of the synopses, but I don't think it's practical for
generating production-quality parsers -- at least in the short
term.  Besides issues mentioned by others (e.g., parser support for
legacy syntax we don't want to document or encourage), we sometimes
allow things through the parser so that we can code more
user-friendly messages off of the parse tree than a generated
parser would provide.

I also don't remember seeing anyone mention the problems with
forward references and metadata from system catalogs.  These either
need to be handled by a "rewind and try again" approach or (better
IMO) an additional pass or two walking the parse tree to emit a
version where generic "place-holders" are replaced by something
more specific.  See the "parse analysis" and "rewrite" steps in
PostgreSQL for how that is currently handled.  Before working in
the PostgreSQL source I had helped develop a SQL parser in ANTLR,
where the same basic parser generator is used for lexer, parser,
and tree-walker phases (using pretty much the same grammar
specifier for all of them), just taking characters, tokens, or
parse tree nodes as input -- automatic generation of "main" parser
might be feasible in such an environment (possibly with some sort
of annotations or hand-written light initial parsing phase), but I
think the later tree walkers would need to be hand-coded.

> I feel like the conformance level of the documentation is high and that the
> sgml synopses seem to be nearly programmatically sufficient to create
> parsers.
>
> what do you think ?

Nearly.

> Could the parser committers share some light on how the documentation
> process interacts with the parser commits?

There is no automated interaction there -- it depends on human
attention.  On the other hand, try connecting to a database with
psql and typing:

\h create index

... (or any other command name).  The help you get there is fished
out of the docs.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread Patrick B
We can't use pg_upgrade in our 3TB database; it just does not work...
that's the main reason we're still using 9.2.


Re: [GENERAL] Server side backend permanent session memory usage ?

2016-07-27 Thread Tom Lane
"Day, David"  writes:
> We have a server side function written in plpgsql called by the client side 
> application. That function does work that includes
> invoking a server side plpython2u function.
> If I repeat calling this server side logic/function outside our app from a 
> psql invoked connection and I monitor
> the memory growth on the backend side I see top reporting a continual growth 
> trend after each burst of invocations.
> The premise was that the issue of the repeated invocation of the python 
> interpreter and/or the python function's use of a "subprocess" method
> does not clean up correctly, and this accounts for the memory growth and 
> system performance degradation.   
> FreeBSD 10.1, postgres 9.5.3 or postgres 9.3.11, running on VM,  no relevant 
> problems indicated in log files.

It'd be very useful to find out whether you can reproduce the leakage in
9.6beta (any beta release will do).  There are a couple of significant
fixes in plpython that might be relevant --- I'm looking at these in
particular:

https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=8c75ad436
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=1d2fe56e4

We did not back-patch those changes because they seemed rather
invasive/risky compared to the value for average users.

If you still see misbehavior in 9.6, it'd be worth trying to extract
a self-contained example.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Server side backend permanent session memory usage ?

2016-07-27 Thread Day, David
Hi,

I've been knocking heads with this issue for a while without updating this 
thread.

I have implemented a connection-refresh behavior on our pooler/client side
which seems to be a successful work-around for the memory-loss indications
on the backend side.

With that "solution" in my pocket and at the suggestion/hunch of one our 
developers,  I ran this experiment.

We have a server side function written in plpgsql called by the client side
application. That function does work that includes invoking a server side
plpython2u function.

If I repeat calling this server side logic/function outside our app from a
psql invoked connection and I monitor the memory growth on the backend side,
I see top reporting a continual growth trend after each burst of invocations.

The premise was that the repeated invocation of the python interpreter
and/or the python function's use of a "subprocess" method does not clean up
correctly, and this accounts for the memory growth and system performance
degradation.


FreeBSD 10.1, postgres 9.5.3 or postgres 9.3.11, running on VM,  no relevant 
problems indicated in log files.


I could attempt to create a self-contained example for the pg community to
explore, or preferably can attach gdb to the backend for those who would
like to suggest particular details that might confirm or rule out this
theory.
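
Mechanically, attaching looks something like this (the pid is hypothetical,
taken from pg_stat_activity or top):

    $ gdb -p 12345
    (gdb) call MemoryContextStats(TopMemoryContext)
    (gdb) quit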


Thanks


Dave Day




Tom,

Thank you for that very considered answer.
I will put that information to use over the next couple of days and get back to 
the gen-list with my findings.

I hear what you are saying about item 1. I suspect it is at least an indicator
of the degradation, if not the actual cause of it. The only thing I have
historically had to restart to recover normal operation is the pooling agent.
(And the pooler's/client's memory stats are pretty stable with run time.)

There are only about 120 tables in two schemas that could be accessed by these
sessions, and I suspect what they are actually repeatedly accessing is a very
small subset of that, 20-30 tables and perhaps 30-40 functions, which perhaps
makes Item 2 unlikely.

Item 3 - should be doable to get these results - might have some information by 
tomorrow.
 


Regards

Dave Day


-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us] 
Sent: Thursday, July 14, 2016 11:22 AM
To: Day, David
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Server side backend permanent session memory usage ?

"Day, David"  writes:
> There seems to be an unbounded growth of  memory usage by the backend 
> postgres process representing a "permanent" session in our system.

It's hard to evaluate this report with so little information, but there are at 
least three possible explanations:

1. Many people misunderstand "top"'s output and believe they are seeing memory 
bloat when they aren't.  This happens because top only charges pages of shared 
memory to a process after the process has first physically touched those pages. 
 So a PG process's claimed use of shared memory will gradually grow from 
nothing to the whole shared-memory area, as it has occasion to make use of 
different shared buffers, lock table entries, etc.
You can correct for this by subtracting the SHR (shared) column from the 
process's reported size, but people often fail to.  (Note: it's possible that 
FreeBSD's implementation doesn't suffer from this problem, but the issue 
definitely exists on e.g. Linux.)

2. If, over time, the queries issued to the process touch many different tables 
(I'm talking thousands of tables), or execute large numbers of distinct plpgsql 
functions, etc, then you will get bloating of the internal caches that hold 
copies of that catalog data.  PG generally operates on the principle that 
cached is better than not cached, so it doesn't try to limit the size of those 
caches; but in some installations that can cause problems.  If this is your 
situation, then indeed restarting the sessions periodically may be necessary.

3. Or you might have found an actual memory leak.  PG's memory usage 
conventions are such that true leaks that persist across transactions are 
pretty rare ... but I won't say it doesn't happen.

If you've eliminated point 1 and want to try to look into the other theories,
you could do this: attach to a recently-started session with gdb, and execute

    call MemoryContextStats(TopMemoryContext)
    quit

This will cause a memory usage map to get dumped to stderr (hopefully you are
starting the postmaster in such a way that that gets captured to a log file
rather than sent to /dev/null).  Save that.  Wait until you see bloat, reattach
and repeat, compare the memory maps.  Let us know what you see.  If possible,
compare maps taken at points where the session is idle and waiting for input.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread Andrew Sullivan
On Wed, Jul 27, 2016 at 02:33:54PM -0500, Kevin Grittner wrote:
> Until you get to the end of the upgrade and *start the cluster
> under the new version*  you can fall back to the old version.

Yeah, but to be fair a lot of well-funded businesses (note what
started this discussion) are pico-managed by people way up the stack
who want a bite-sized answer.  If you do not have a happy story for,
"What if we're 48 hours into the upgrade and discover some critical
bug corner case that makes us need to roll back?" then you're going to
lose them.  Never mind that such cases literally never happen (if you
have a 48 hour old bug in an Internet system today, you have an
emergency bugfix, not a rollback).

A great deal of practical delivery of technology involves managing
expectations of management who do not understand what they are asking
for and basically want a glib happy answer.  As people delivering such
things, we must find a glib happy answer that does not get us fired if
it turns out to be false.  The poor story Postgres has about
downgrade, _even if it's a stupid problem_, is a problem.  It might
not be worth fixing because it's a stupid problem.  But one has to
face the critique in its own terms.

A

-- 
Andrew Sullivan
a...@crankycanuck.ca


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread Kevin Grittner
On Wed, Jul 27, 2016 at 9:22 AM, Scott Mead  wrote:
> On Wed, Jul 27, 2016 at 3:34 AM, Achilleas Mantzios 
>  wrote:

>> Our last 1TB upgrade from 9.0 -> 9.3 went like a charm in something like
>> seconds. (with the -k option)
>> However, be warned that the planning and testing took one full week.

Agreed -- you should spend a lot more time on planning and testing
than the actual upgrade will take; but that will probably be true
with any product.

> That being said, it doesn't really provide a back-out plan.

Until you get to the end of the upgrade and *start the cluster
under the new version*  you can fall back to the old version.  I
remember a couple times that we saw something during a pg_upgrade
--link run that we weren't expecting, and did exactly that so we
could investigate and try again later.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread Kevin Grittner
On Wed, Jul 27, 2016 at 2:15 AM, Condor  wrote:

> They are right for upgrades.
> It's hard to shut down a 1 TB database and wait a couple of days for
> pg_upgrade to finish the upgrade while the database is offline.

What?  I had a cluster over 3TB and it was offline for only 10
minutes doing a major version upgrade using pg_upgrade's --link
option.  Of course, that was with minimal statistics -- just enough
to keep from getting truly stupid plans.  If it takes *days* to run
pg_upgrade on a 1TB cluster either you have an insane number of
database objects or you are not taking advantage of the available
features.
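
For reference, a bare-bones --link invocation; the paths and versions here
are placeholders:

    pg_upgrade --link \
        --old-bindir=/usr/pgsql-9.2/bin --new-bindir=/usr/pgsql-9.5/bin \
        --old-datadir=/pg/9.2/data --new-datadir=/pg/9.5/data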

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to give complete ownership of a new DB to a new user (non-SU)?

2016-07-27 Thread Tom Lane
Billal Mahmood  writes:
> *Problem Summary:*
> I'm facing changing ownership issue for a DB and all its DB objects (DB,
> schema, tables, sequences etc ownership).

While there's no magic single statement for this, I think you'll find
that REASSIGN OWNED takes most of the drudgery out of it, if you have
the template database set up properly.
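
A sketch, using the role names from the question:

    -- Run inside the newly created database, as a role that is a member
    -- of both the old and new owner roles (or as a superuser):
    REASSIGN OWNED BY sys_admin TO delta_user_1;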

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How to give complete ownership of a new DB to a new user (non-SU)?

2016-07-27 Thread Billal Mahmood
Hi,

*Problem Summary:*
I'm facing a change-of-ownership issue for a DB and all its objects (DB,
schema, tables, sequences, etc.).

*Experience So Far:*
As per my knowledge and experience, every DB is created from a DB template
(defaulting to "template1"). If "WITH OWNER" is not mentioned in the CREATE
DATABASE command, then the ownership of all the template objects (e.g.
schema, tables, etc.) is also inherited by the new DB. But if the "WITH
OWNER" option is given, only the DB's ownership is changed to the new user,
not that of the rest of the objects like schemas and tables, which means
those objects remain owned by the user who owns the template DB's objects.
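
In other words (a sketch, using the names from the explanation below):

    CREATE DATABASE delta_db_1
        WITH TEMPLATE template_db OWNER delta_user_1;
    -- delta_db_1 itself is now owned by delta_user_1, but the schemas,
    -- tables, etc. copied from template_db keep their original owners.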

*Problem Explanation:*
I'm implementing a secure multi tenancy system in a way that each
subscriber/client would have his own corresponding DB and a DB user. At the
time of subscription I would have a pre-installed "template_db" with
ownership of "sys_admin" user (not "SUPERUSER") and when creating a copy of
this template the newly created db e.g. "delta_db_1" should have
"delta_user_1" as owner and not only DB but all its objects like Schema,
tables, sequences etc. should have owner as "delta_user_1".

Please note that my "template0" is owned by the "postgres" user, and so all
the objects of "template_db" were owned by "postgres" initially; so for
"template_db" I changed the ownership of all its objects to "sys_admin"
by hand (manually) but, of course, I can't do that manual work for each DB
being created by a run-time subscription.

Please help me. I would prefer to give ownership of the new DB to the new
user in one simple sql statement at the time of DB creation. Thanks.

-- 
Best Regards,

Billal Mahmood

Pr. Software Engineer

*Email*: billal.mahm...@rolustech.com

*Mobile*: +92.321.4193915 | *Skype*: rolustech.billal.mahmood

-

*Office (PK)*: +92.42.35717906 | *Office (US)*: +1 (310) 492-5564

Rolustech Dynamic IT Solutions

*Disclaimer:*
This e-mail is confidential and may also be legally privileged. If you are
not the addressee you may not copy, forward, disclose or use any part of
it. If you have received this message in error, please delete it and all
copies from your system and notify the sender immediately by return e-mail.
Neither Rolustech nor any of its affiliates shall be held liable for the
e-mail if altered or falsified nor shall they be liable for any damage
caused by any virus that might be transmitted with this e-mail.


[GENERAL] question on parsing postgres sql queries

2016-07-27 Thread Jerome Wagner
Hello,

I am doing some research on postgres sql query parsing.

I have found the https://github.com/lfittl/libpg_query project, which
manages to re-use the native postgres server parser. To use it, you need
to accept an external dependency on a lib compiled out of the postgres
source.

I was wondering what people think of the conformance with regards to the
real parser of the documentations on
 - https://www.postgresql.org/docs/current/static/sql-select.html
 - https://www.postgresql.org/docs/current/static/sql-copy.html
... and more generally sgmls in
https://github.com/postgres/postgres/tree/master/doc/src/sgml/ref

Would it make sense to use these sgml synopses as some kind of source of
truth, parse them, and automatically generate a parser for a specific
language?

This could enable the creation of parsers for different languages using
parser generators based on the synopsis.

I feel like the conformance level of the documentation is high and that the
sgml synopses seem to be nearly programmatically sufficient to create
parsers.

what do you think ?

Could the parser committers share some light on how the documentation
process interacts with the parser commits?

Thanks,
Jerome


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread Andrew Sullivan
On Wed, Jul 27, 2016 at 01:58:25PM -0400, Rakesh Kumar wrote:
> 
> I am surprised PG does not even allow minor version rollback.

It almost never happens that a minor version (N.M.x, x is minor)
requires an upgrade at all.  Change your binaries and you're done.
Catalogue incompatibility historically was the basis for something
becoming a major version upgrade.  (I can recall a couple bugs where
you had to tickle the catalogues, so it's not exactly true that
they're never incompatible, but it's incredibly rare.)
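
That is, a minor-version change is typically nothing more than the
following (package details are distribution-specific):

    pg_ctl -D /pg/data stop -m fast
    # swap in the new minor-release binaries via your package manager
    pg_ctl -D /pg/data start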

Best regards,

A

-- 
Andrew Sullivan
a...@crankycanuck.ca


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread Rakesh Kumar
On Wed, Jul 27, 2016 at 2:07 PM, Andrew Sullivan  wrote:

> It almost never happens that a minor version (N.M.x, x is minor)
> requires an upgrade at all.  Change your binaries and you're done.
> Catalogue incompatibility historically was the basis for something
> becoming a major version upgrade.  (I can recall a couple bugs where
> you had to tickle the catalogues, so it's not exactly true that
> they're never incompatible, but it's incredibly rare.)

Yeah good enough.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread Bruce Momjian
On Wed, Jul 27, 2016 at 01:58:25PM -0400, Rakesh Kumar wrote:
> On Wed, Jul 27, 2016 at 1:54 PM, Marc Fournier
>  wrote:
> 
> > Stupid question here, but do we provide any less than what MySQL does?   I’m
> > reading:
> 
> mysql provides the same rollback functionality as oracle/db2. That is,
> rollback on a minor version upgrade is possible, but not on a major
> version upgrade.
> 
> I am surprised PG does not even allow minor version rollback.

Uh, I thought we did.  Perhaps there were a few that didn't.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread Bruce Momjian
On Wed, Jul 27, 2016 at 10:54:25AM -0700, Marc Fournier wrote:
> http://dev.mysql.com/doc/refman/5.7/en/downgrading.html#downgrade-paths
> 
> ==
> 
>Unless otherwise documented, the following downgrade paths are
> supported:
> 
>   • Downgrading from a release series version to an older release series
> version is supported using all downgrade methods. For example, downgrading
> from 5.7.10 to 5.7.9 is supported. Skipping release series versions is 
> also
> supported. For example, downgrading from 5.7.11 to 5.7.9 is supported.
> 
>   • Downgrading one release level is supported using the logical downgrade
> method. For example, downgrading from 5.7 to 5.6 is supported.
> 
>   • Downgrading more than one release level is supported using the logical
> downgrade method, but only if you downgrade one release level at a time.
> For example, you can downgrade from 5.7 to 5.6, and then to 5.5. 
> 
> ==
> 
> So, downgrading minor releases can be done by just changing the binaries …
> downgrading to an older ‘major release’ requires a dump/reload …
> 
> Unless I’m missing something, whether on PostgreSQL or MySQL, if you want
> to go back a major release, you would need to dump/reload that 1TB
> database …

What they wanted, and I think it was mentioned in the document, was to
upgrade the slaves independently, then the master.  I think MySQL
supports that; Postgres doesn't.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread Rakesh Kumar
On Wed, Jul 27, 2016 at 1:54 PM, Marc Fournier
 wrote:

> Stupid question here, but do we provide any less than what MySQL does?   I’m
> reading:

mysql provides the same rollback functionality as oracle/db2. That is,
rollback on a minor version upgrade is possible, but not on a major version
upgrade.

I am surprised PG does not even allow minor version rollback.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread Marc Fournier

> On Jul 27, 2016, at 09:59, Bruce Momjian  wrote:
> 
> I think long-term we are looking at pg_logical for zero-downtime
> upgrades and _downgrades_, and pg_upgrade for less overhead (I don't
> want to make a second copy of my data) upgrades (but not downgrades).
> 
> I think this is probably the best we are going to be able to do for a
> long time.

Stupid question here, but do we provide any less than what MySQL does?   I’m 
reading:

http://dev.mysql.com/doc/refman/5.7/en/downgrading.html#downgrade-paths 


==
   Unless otherwise documented, the following downgrade paths are supported:

Downgrading from a release series version to an older release series version is 
supported using all downgrade methods. For example, downgrading from 5.7.10 to 
5.7.9 is supported. Skipping release series versions is also supported. For 
example, downgrading from 5.7.11 to 5.7.9 is supported.

Downgrading one release level is supported using the logical downgrade method. 
For example, downgrading from 5.7 to 5.6 is supported.

Downgrading more than one release level is supported using the logical 
downgrade method, but only if you downgrade one release level at a time. For 
example, you can downgrade from 5.7 to 5.6, and then to 5.5. 

==

So, downgrading minor releases can be done by just changing the binaries … 
downgrading to an older ‘major release’ requires a dump/reload … 

Unless I’m missing something, whether on PostgreSQL or MySQL, if you want to go 
back a major release, you would need to dump/reload that 1TB database …


--
Marc G Fournier   
http://www.2ndQuadrant.com 
PostgreSQL Development, 24x7 Support, RemoteDBA, Training & Services



Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread Vik Fearing
On 27/07/16 18:54, Chris Travers wrote:
> Another one I think they obliquely referred to (in the subtle problems
> section) was the fact that if you have longer-running queries on the
> replica with a lot of updates, you can get funny auto-vacuum-induced
> errors (writes from autovacuum on the master can interrupt queries on
> the slave).  BTW if there is interest in what could be done for that,
> something which allows autovacuum to decide how long to wait before
> cleaning up dead tuples would be a great enhancement.

You mean something like hot_standby_feedback?

https://www.postgresql.org/docs/current/static/runtime-config-replication.html#GUC-HOT-STANDBY-FEEDBACK
-- 
Vik Fearing  +33 6 46 75 15 36
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread Bruce Momjian
On Wed, Jul 27, 2016 at 12:59:59PM -0400, Bruce Momjian wrote:
> On Wed, Jul 27, 2016 at 12:47:24PM -0400, Bruce Momjian wrote:
> > On Wed, Jul 27, 2016 at 12:33:27PM -0400, Rakesh Kumar wrote:
> > > On Wed, Jul 27, 2016 at 11:45 AM, Bruce Momjian  wrote:
> > > 
> > > > I agree, but I am not sure how to improve it.  The big complaint I have
> > > > heard is that once you upgrade and open up writes on the upgraded
> > > > server, you can't re-apply those writes to the old server if you need to
> > > > fall back to the old server.  I also don't see how to improve that 
> > > > either.
> > > 
> > > doesn't pg_logical solve this by logically replicating and allowing for
> > > different architectures/versions between the replication nodes?
> > 
> > Yes.  I was saying I don't know how to improve pg_upgrade to address it.
> 
> I think long-term we are looking at pg_logical for zero-downtime
> upgrades and _downgrades_, and pg_upgrade for less overhead (I don't
> want to make a second copy of my data) upgrades (but not downgrades).
> 
> I think this is probably the best we are going to be able to do for a
> long time.

Oh, let me give credit to Simon, who has always seen pg_logical as
providing superior upgrade options where the logical replication setup
isn't a problem.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread Bruce Momjian
On Wed, Jul 27, 2016 at 09:17:58AM +0200, Chris Travers wrote:
> The replication section made me wonder though if they were using the right
> replication solution for the job.  If you don't want an on-disk copy, don't 
> use
> physical replication.  This being said there is one serious issue here that is
> worth mentioning, which is that since autovacuum on the master has no
> knowledge of queries on the slave, it is easy to have longer-running
> queries on a slave that need to see rows removed by autovacuum and
> replication.

Uh, see hot_standby_feedback:

#hot_standby_feedback = off   # send info from standby to prevent
                              # query conflicts

> All of the above being said, there are solutions to all the major problems. 
> But you have to know about them, where to look, and what to do.  And with
> higher scale, one very important aspect is that attention to detail starts to
> matter a whole lot.  I agree that there are some good points raised but I
> wonder what the solutions are.  There is room for some improvement in the
> backend (it would really be nice to instrument and measure toasting/detoasting
> overhead in explain analyze) but for a lot of these  I wonder if that is
> secondary.   PostgreSQL is very well optimized for a certain series of tasks,
> and one can build well optimized solutions well outside that.  At a certain
> point (including a certain scale) therewill be no substitute for a teamof
> people who really know the db backend inside and out who can design around
> limitations and I think that is true for all databases I have worked with.

Watching the video was helpful:

https://vimeo.com/145842299

You can see the failover happened because of various user errors.  That
doesn't excuse our bug, but I am not sure exactly how much they
understood of Postgres behavior.  His talk near the end about the
replication infrastructure being exposed to them was also interesting.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread Bruce Momjian
On Wed, Jul 27, 2016 at 12:47:24PM -0400, Bruce Momjian wrote:
> On Wed, Jul 27, 2016 at 12:33:27PM -0400, Rakesh Kumar wrote:
> > On Wed, Jul 27, 2016 at 11:45 AM, Bruce Momjian  wrote:
> > 
> > > I agree, but I am not sure how to improve it.  The big complaint I have
> > > heard is that once you upgrade and open up writes on the upgraded
> > > server, you can't re-apply those writes to the old server if you need to
> > > fall back to the old server.  I also don't see how to improve that either.
> > 
> > doesn't pg_logical solve this by logically replicating and allowing for
> > different architectures/versions between the replication nodes?
> 
> Yes.  I was saying I don't know how to improve pg_upgrade to address it.

I think long-term we are looking at pg_logical for zero-downtime
upgrades and _downgrades_, and pg_upgrade for less overhead (I don't
want to make a second copy of my data) upgrades (but not downgrades).

I think this is probably the best we are going to be able to do for a
long time.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread Chris Travers
On Wed, Jul 27, 2016 at 4:22 PM, Scott Mead  wrote:

> On Wed, Jul 27, 2016 at 3:34 AM, Achilleas Mantzios <
> ach...@matrix.gatewaynet.com> wrote:
>
>> On 27/07/2016 10:15, Condor wrote:
>>
>>> On 26-07-2016 21:04, Dorian Hoxha wrote:
>>>
 Many comments: https://news.ycombinator.com/item?id=12166585

 https://www.reddit.com/r/programming/comments/4uph84/why_uber_engineering_switched_from_postgres_to/

 On Tue, Jul 26, 2016 at 7:39 PM, Guyren Howe  wrote:

 Honestly, I've never heard of anyone doing that. But it sounds like
> they had good reasons.
>
> https://eng.uber.com/mysql-migration/
>
> Thoughts?
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

>>>
>>> They are right for upgrades.
>>> It's hard to shut down a 1 TB database and wait a couple of days for
>>> pg_upgrade to finish the upgrade while the database is offline.
>>> In some distros, after an upgrade of the PG version you don't have the
>>> old binaries and libraries, so you need to do a full dump and restore,
>>> which takes time and disk space.
>>>
>>
>> Our last 1TB upgrade from 9.0 -> 9.3 went like a charm in something like
>> seconds. (with the -k option)
>> However, be warned that the planning and testing took one full week.
>>
>
> That being said, it doesn't really provide a back-out plan.  The beauty of
> replication is that you can halt the upgrade at any point if need be and
> cut your (hopefully small) losses.
>

Replication, though, does have limits, and one aspect of incremental backups
is that you cannot restore from one major version to the next.  Another
issue I think they obliquely referred to (in the subtle problems section)
was the fact that if you have longer-running queries on the replica with a
lot of updates, you can get funny autovacuum-induced errors (writes from
autovacuum on the master can interrupt queries on the slave).  BTW, if
there is interest in what could be done about that, something which allows
autovacuum to decide how long to wait before cleaning up dead tuples would
be a great enhancement.

I was on a project once where I was told, "we use pg_dump for our upgrades"
for a multi-TB database.  When asked why, the answer made a lot of sense.
Namely if something goes wrong you need to do a restore on the new version
from a logical backup anyway, so you have to take a pg_dump backup before
you start, and you might have to restore anyway.  So the thinking was  that
it was better to keep expectations low than promise low downtime and have a
two-week outage.


> If you use -k, you are all in.  Sure, you could setup a new standby, stop
> traffic, upgrade whichever node you'd like (using -k) and still have the
> other ready in the event of total catastrophe.  More often than not, I see
> DBAs and sysads lead the conversation with "well, postgres can't replicate
> from one version to another, so instead " followed by a fast-glazing of
> management's eyes and a desire to buy a 'commercial database'.
>

This is one area where we need better presentation of what we have and what
it does.

Streaming replication works great for certain things, such as where you
have lots of small queries against the replica, where they don't have to be
absolutely up to date, or where what you are really after is guarantees
that you can keep moving after one of your servers suffers a catastrophic
failure.

Where you want the guarantee that the two systems are identical at the
filesystem level, it is great.  Where that is not what you want, it is a
pretty bad solution.  But then there is Slony, Bucardo, and other logical
replication solutions out there (plus the newer logical replication
approaches in PostgreSQL) which handle the other situations very well (with
a very different sort of added complexity).


>
> All in all, Evan's blog seemed to start out decently technical, but it quickly
> took a turn with half-truths, outdated information and, in some cases,
> downright fud:
>
>  "The bug we ran into only affected certain releases of Postgres 9.2 and
> has been fixed for a long time now. However, we still find it worrisome
> that this class of bug can happen at all. A new version of Postgres could
> be released at any time that has a bug of this nature, and because of the
> way replication works, this issue has the potential to spread into all of
> the databases in a replication hierarchy."
>
>
> ISTM that they needed a tire swing and were using a dump truck.
> Hopefully they vectored somewhere in the middle and got themselves a
> nice sandbox.
>

My first thought was, "If they know the database that well, surely they
could have built something that would work well!"

However, for what they seem to want to do specifically, MySQL might not
actually be a bad choice.  

Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread Bruce Momjian
On Wed, Jul 27, 2016 at 12:51:40PM -0400, Rakesh Kumar wrote:
> On Wed, Jul 27, 2016 at 12:47 PM, Bruce Momjian  wrote:
> 
> > Yes.  I was saying I don't know how to improve pg_upgrade to address it.
> 
> This problem is there even in oracle/db2/sqlserver. None of them allows
> rollback to the lower version unless it is a minor version upgrade. A
> major version upgrade almost definitely involves a change in transaction
> log (WAL) structure, and hence no rollback.

Oh, good to know.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread Rakesh Kumar
On Wed, Jul 27, 2016 at 12:47 PM, Bruce Momjian  wrote:

> Yes.  I was saying I don't know how to improve pg_upgrade to address it.

This problem is there even in oracle/db2/sqlserver. None of them allows
rollback to the lower version unless it is a minor version upgrade. A major
version upgrade almost definitely involves a change in transaction log (WAL)
structure, and hence no rollback.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread Bruce Momjian
On Wed, Jul 27, 2016 at 12:33:27PM -0400, Rakesh Kumar wrote:
> On Wed, Jul 27, 2016 at 11:45 AM, Bruce Momjian  wrote:
> 
> > I agree, but I am not sure how to improve it.  The big complaint I have
> > heard is that once you upgrade and open up writes on the upgraded
> > server, you can't re-apply those writes to the old server if you need to
> > fall back to the old server.  I also don't see how to improve that either.
> 
> doesn't and pg_logical solve this by logically replicating and allowing for
> different architecture/version between the replication nodes?

Yes.  I was saying I don't know how to improve pg_upgrade to address it.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread Geoff Winkless
On 27 July 2016 at 17:11, Andrew Sullivan  wrote:

> Given
> the discussion in the post in question, the decision to use MySQL
> appears to have been well-justified:
>

Well yes, but that's pretty much the point of back-justification, isn't it?

[snip a whole bunch of good points]


> > For what it's worth, from what I've read uber are a company whose very
> > business plan relies on them taking things that they don't deserve while
> > they treat customers and employees with similar levels of arrogance.
>
> Nothin' for nothin', but I don't think it helps Postgres to attack
> others' business plans -- whatever one thinks of them -- as part of an
> argument about why Postgres is the right tool for a given job.
>

Oh, I wasn't using it as an argument about anything (hence "for what it's
worth").

G

On 27 July 2016 at 17:11, Andrew Sullivan  wrote:

> On Wed, Jul 27, 2016 at 04:51:42PM +0100, Geoff Winkless wrote:
> > technical reasons. Most developers will harp on at their boss about how
> > terrible their current database is and how  performs
> > much better. Eventually one of two things happens: either a) those
> > developers end up in a position where their direct boss is in a position to
> > make the change and he or she doesn't understand how much time and money it
> > will actually take to change; or b) commercial considerations dictate the
> > change.
>
> In a different context, someone suggested to me that Postgres
> advocates sounded to him too often like FreeBSD advocates complaining
> about Linux, and I'm afraid there is a certain truth to that.  Given
> the discussion in the post in question, the decision to use MySQL
> appears to have been well-justified:
>
> 1.  They'd decided to use a NoSQL database and ditch relational
> systems, because shards.
>
> 2.  They wanted an MVCC engine behind the above.
>
> 3.  They wanted SQL semantics to this MVCC-enabled filesystem layer.
>
> Sounds just like MySQL+InnoDB to me.  Once you've already decided on
> (1), the rest of it flows pretty naturally and Postgres is probably
> not your choice.  You can dismiss any of 1-3 as commercial or
> political advocacy, but while I happen to think they're a somewhat
> questionable set of goals they're not obviously stupid, and
> competent people of good will could disagree about them.
>
> At the same time, there really are two serious problems with Postgres
> under heavy write loads.  Postgres's focus on readers' speed and
> convenience means you have to take the hit somewhere, so writers take
> it instead.  (The other side of the disk-layout description in the
> blog post is that, under MySQL, secondary index use is more expensive
> for readers than it is in Postgres.  The post acknowledges that, but
> of course most important secondary indexing is useless under sharding
> anyway, since you have to select from shards; so they won't care.)
> I/O storms on Postgres are a major source of pain for large operators,
> and the tools for understanding are sort of primitive because many of
> them depend on underlying OS features and tools.
>
> The second is the upgrade-by-replica-and-fallback-plan problem.  It's
> really an issue. There is a reason that, back in the cloudy past, we
> designed Slony to be able to replicate to and from any supported
> version of Postgres: Afilias needed to be able to upgrade without a
> lot of down time and with the ability to roll back if we had to,
> because that was our contractual obligation.  This has always been a
> large gap, and when it was raised in the past the answer was, "Well,
> Slony can already do that so use it."  It wasn't too satisfying then,
> and it's not much more satisfying now. :)
>
> > better invested in employing one of the commercial PG companies to improve
> > the specific postgres problems they found.
>
> I think the two big problems laid out above are deep architectural
> ones.  I'm not sure these are the sort of improvement you can buy
> without getting the community on board.
>
> > For what it's worth, from what I've read uber are a company whose very
> > business plan relies on them taking things that they don't deserve while
> > they treat customers and employees with similar levels of arrogance.
>
> Nothin' for nothin', but I don't think it helps Postgres to attack
> others' business plans -- whatever one thinks of them -- as part of an
> argument about why Postgres is the right tool for a given job.
>
> Best regards,
>
> A
>
> --
> Andrew Sullivan
> a...@crankycanuck.ca
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread Rakesh Kumar
On Wed, Jul 27, 2016 at 11:45 AM, Bruce Momjian  wrote:

> I agree, but I am not sure how to improve it.  The big complaint I have
> heard is that once you upgrade and open up writes on the upgraded
> server, you can't re-apply those writes to the old server if you need to
> fall back to the old server.  I also don't see how to improve that either.

doesn't and pg_logical solve this by logically replicating and allowing for
different architecture/version between the replication nodes?


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread Andrew Sullivan
On Wed, Jul 27, 2016 at 04:51:42PM +0100, Geoff Winkless wrote:
> technical reasons. Most developers will harp on at their boss about how
> terrible their current database is and how  performs
> much better. Eventually one of two things happens: either a) those
> developers end up in a position where their direct boss is in a position to
> make the change and he or she doesn't understand how much time and money it
> will actually take to change; or b) commercial considerations dictate the
> change.

In a different context, someone suggested to me that Postgres
advocates sounded to him too often like FreeBSD advocates complaining
about Linux, and I'm afraid there is a certain truth to that.  Given
the discussion in the post in question, the decision to use MySQL
appears to have been well-justified:

1.  They'd decided to use a NoSQL database and ditch relational
systems, because shards.

2.  They wanted an MVCC engine behind the above.

3.  They wanted SQL semantics to this MVCC-enabled filesystem layer.

Sounds just like MySQL+InnoDB to me.  Once you've already decided on
(1), the rest of it flows pretty naturally and Postgres is probably
not your choice.  You can dismiss any of 1-3 as commercial or
political advocacy, but while I happen to think they're a somewhat
questionable set of goals they're not obviously stupid, and
competent people of good will could disagree about them.

At the same time, there really are two serious problems with Postgres
under heavy write loads.  Postgres's focus on readers' speed and
convenience means you have to take the hit somewhere, so writers take
it instead.  (The other side of the disk-layout description in the
blog post is that, under MySQL, secondary index use is more expensive
for readers than it is in Postgres.  The post acknowledges that, but
of course most important secondary indexing is useless under sharding
anyway, since you have to select from shards; so they won't care.)
I/O storms on Postgres are a major source of pain for large operators,
and the tools for understanding are sort of primitive because many of
them depend on underlying OS features and tools.

The second is the upgrade-by-replica-and-fallback-plan problem.  It's
really an issue. There is a reason that, back in the cloudy past, we
designed Slony to be able to replicate to and from any supported
version of Postgres: Afilias needed to be able to upgrade without a
lot of down time and with the ability to roll back if we had to,
because that was our contractual obligation.  This has always been a
large gap, and when it was raised in the past the answer was, "Well,
Slony can already do that so use it."  It wasn't too satisfying then,
and it's not much more satisfying now. :)

> better invested in employing one of the commercial PG companies to improve
> the specific postgres problems they found.

I think the two big problems laid out above are deep architectural
ones.  I'm not sure these are the sort of improvement you can buy
without getting the community on board.

> For what it's worth, from what I've read uber are a company whose very
> business plan relies on them taking things that they don't deserve while
> they treat customers and employees with similar levels of arrogance.

Nothin' for nothin', but I don't think it helps Postgres to attack
others' business plans -- whatever one thinks of them -- as part of an
argument about why Postgres is the right tool for a given job.

Best regards,

A

-- 
Andrew Sullivan
a...@crankycanuck.ca


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread Geoff Winkless
On 27 July 2016 at 15:22, Scott Mead  wrote:

>  "The bug we ran into only affected certain releases of Postgres 9.2 and
> has been fixed for a long time now. However, we still find it worrisome
> that this class of bug can happen at all. A new version of Postgres could
> be released at any time that has a bug of this nature, and because of the
> way replication works, this issue has the potential to spread into all of
> the databases in a replication hierarchy."
>
>
> ISTM that they needed a tire swing
> 
> and were using a dump truck.  Hopefully they vectored somewhere in the
> middle and got themselves a nice sandbox.
>

At least his bug got fixed. The last 2 bugs I reported to MySQL resulted in
an initial refusal to accept any problem existed, followed by (once that
particular strategy had run out of steam) the developer simply ignoring the
bug until it was closed automatically by their bug system. As far as I'm
aware those bugs still exist in the most recent version.

Admittedly they weren't data-corruption bugs but they were of the "this
will cause your application to unnecessarily come to a complete halt for a
couple of hours once your dataset grows beyond a certain point" variety.

As others have pointed out, db changes very rarely happen because of
technical reasons. Most developers will harp on at their boss about how
terrible their current database is and how  performs
much better. Eventually one of two things happens: either a) those
developers end up in a position where their direct boss is in a position to
make the change and he or she doesn't understand how much time and money it
will actually take to change; or b) commercial considerations dictate the
change.

The amount of money and time they've wasted making this change (and
whatever anyone tells you these things never come for free) would have been
better invested in employing one of the commercial PG companies to improve
the specific postgres problems they found. The fact that they decided to go
the MySQL route suggests to me that this was a political, not technical,
change, and they're now reverse-justifying.

For what it's worth, from what I've read uber are a company whose very
business plan relies on them taking things that they don't deserve while
they treat customers and employees with similar levels of arrogance.
Frankly I'd rather there were as many levels of separation as possible
between me and them: they and Oracle are welcome to each other, it seems
like a marriage made in heaven.

Geoff


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread Bruce Momjian
On Wed, Jul 27, 2016 at 10:22:27AM -0400, Scott Mead wrote:
> That being said, it doesn't really provide a back-out plan.  The beauty of
> replication is that you can halt the upgrade at any point if need be and cut
> your (hopefully small) losses. If you use -k, you are all in.  Sure, you could
> set up a new standby, stop traffic, upgrade whichever node you'd like (using -k)
> and still have the other ready in the event of total catastrophe.  More often
> than not, I see DBAs and sysads lead the conversation with "well, postgres
> can't replicate from one version to another, so instead " followed by a
> fast-glazing of management's eyes and a desire to buy a 'commercial database'.

I agree, but I am not sure how to improve it.  The big complaint I have
heard is that once you upgrade and open up writes on the upgraded
server, you can't re-apply those writes to the old server if you need to
fall back to the old server.  I also don't see how to improve that either.
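
One hedged sketch of how the write-back could be arranged with the pglogical
extension (node names and DSNs here are purely illustrative, and the exact
function signatures should be checked against the pglogical docs):

-- On the upgraded server: register a node and publish the tables.
SELECT pglogical.create_node(
    node_name := 'new_primary',
    dsn       := 'host=new-host dbname=app');
SELECT pglogical.replication_set_add_all_tables('default', ARRAY['public']);

-- On the old server: subscribe, so writes flow back during the fallback window.
SELECT pglogical.create_node(
    node_name := 'old_primary',
    dsn       := 'host=old-host dbname=app');
SELECT pglogical.create_subscription(
    subscription_name := 'fallback',
    provider_dsn      := 'host=new-host dbname=app');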

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] low perfomances migrating from 9.3 to 9.5

2016-07-27 Thread Melvin Davidson
On Wed, Jul 27, 2016 at 11:01 AM, Adrian Klaver 
wrote:

> On 07/27/2016 07:52 AM, thomas veymont wrote:
>
>>
>> 2016-07-27 14:11 GMT+02:00 Michael Paquier :
>>
>>
>>
>>
>> And do you see changes if you increase min_wal_size? This will
>> increase the number of WAL segments recycled instead of removed at
>> each checkpoint.
>> --
>> Michael
>>
>>
>> I have seen no improvement with the following parameters in 9.5:
>> max_wal_size = 3GB
>> min_wal_size = 512MB
>> #checkpoint_completion_target = 0.5 # checkpoint target duration,
>> 0.0 - 1.0
>> #checkpoint_warning = 30s   # 0 disables
>>
>> while my 9.3 configuration is:
>> checkpoint_segments = 128   # in logfile segments, min 1,
>> 16MB each
>> #checkpoint_timeout = 5min  # range 30s-1h
>> checkpoint_completion_target = 0.9  # checkpoint target duration,
>> 0.0 - 1.0
>> #checkpoint_warning = 30s   # 0 disables
>>
>> I have just run a quick pgbench test to get some objective numbers.
>> Both tests were run on the same machine (ie. production machine), same
>> disk, same logical volume :
>>
>> On 9.5 :
>>
>> $ pgbench -c 4 -j 2 -T 600 test
>> starting vacuum...end.
>> transaction type: TPC-B (sort of)
>> scaling factor: 70
>> query mode: simple
>> number of clients: 4
>> number of threads: 2
>> duration: 600 s
>> number of transactions actually processed: 77318
>> latency average: 31.041 ms
>> tps = 128.859708 (including connections establishing)
>> tps = 128.860447 (excluding connections establishing)
>>
>> On 9.3 :
>>
>> $ pgbench -c 4 -j 2 -T 600 test
>> starting vacuum...end.
>> transaction type: TPC-B (sort of)
>> scaling factor: 70
>> query mode: simple
>> number of clients: 4
>> number of threads: 2
>> duration: 600 s
>> number of transactions actually processed: 1834436
>> latency average: 1.308 ms
>> tps = 3057.387254 (including connections establishing)
>> tps = 3057.398493 (excluding connections establishing)
>>
>> Note that 9.3 is handling other production requests at the same time.
>>
>> Is a checkpoint_segments/WAL problem still to be suspected?
>>
>
> Where did you get the respective versions of Postgres?
>
> Were they installed the same way?
>
> You mentioned the log feed showing obvious performance issues; can we see
> the relevant portions?
>
>
>> cheers
>> Tom
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>




I have to ask, was a vacuumdb -Z  OR psql -U postgres -c ANALYZE ;
done after the migration?

Without accurate stats, performance goes down the drain.
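
Concretely, something like this right after the upgrade (the database name is
a placeholder; pg_upgrade also generates an analyze_new_cluster.sh script for
the same purpose):

$ vacuumdb --all --analyze-in-stages      # coarse stats quickly, then refine
$ psql -U postgres -d mydb -c 'ANALYZE;'  # or a plain ANALYZE per database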
-- 
Melvin Davidson
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] low perfomances migrating from 9.3 to 9.5

2016-07-27 Thread Adrian Klaver

On 07/27/2016 07:52 AM, thomas veymont wrote:


2016-07-27 14:11 GMT+02:00 Michael Paquier :



And do you see changes if you increase min_wal_size? This will
increase the number of WAL segments recycled instead of removed at
each checkpoint.
--
Michael


I have seen no improvement with the following parameters in 9.5:
max_wal_size = 3GB
min_wal_size = 512MB
#checkpoint_completion_target = 0.5 # checkpoint target duration,
0.0 - 1.0
#checkpoint_warning = 30s   # 0 disables

while my 9.3 configuration is:
checkpoint_segments = 128   # in logfile segments, min 1,
16MB each
#checkpoint_timeout = 5min  # range 30s-1h
checkpoint_completion_target = 0.9  # checkpoint target duration,
0.0 - 1.0
#checkpoint_warning = 30s   # 0 disables

I have just run a quick pgbench test to get some objective numbers.
Both tests were run on the same machine (ie. production machine), same
disk, same logical volume :

On 9.5 :

$ pgbench -c 4 -j 2 -T 600 test
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 70
query mode: simple
number of clients: 4
number of threads: 2
duration: 600 s
number of transactions actually processed: 77318
latency average: 31.041 ms
tps = 128.859708 (including connections establishing)
tps = 128.860447 (excluding connections establishing)

On 9.3 :

$ pgbench -c 4 -j 2 -T 600 test
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 70
query mode: simple
number of clients: 4
number of threads: 2
duration: 600 s
number of transactions actually processed: 1834436
latency average: 1.308 ms
tps = 3057.387254 (including connections establishing)
tps = 3057.398493 (excluding connections establishing)

Note that 9.3 is handling other production requests at the same time.

Is a checkpoint_segments/WAL problem still to be suspected?


Where did you get the respective versions of Postgres?

Were they installed the same way?

You mentioned the log feed showing obvious performance issues; can we 
see the relevant portions?




cheers
Tom




--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] GIN Indexes: Extensibility

2016-07-27 Thread Paul Jungwirth

On 07/27/2016 07:44 AM, Vick Khera wrote:

On Wed, Jul 27, 2016 at 3:28 AM, Anton Ananich  wrote:

In my situation this order is invalid. Obviously, year 2016 should go after
2014, like that:


I think you expect JSONB to sort differently than it does. I cannot
imagine what a "natural" ordering of arbitrary JSON objects is.


FWIW, Postgres arrays do sort in the way he's expecting:

paul=# create table t (id integer, v integer[]);
CREATE TABLE
paul=# insert into t values (1, array[2014]), (2, array[2014, 1]), (3, 
array[2016]);

INSERT 0 3
paul=# select * from t order by v;
 id |    v
----+----------
  1 | {2014}
  2 | {2014,1}
  3 | {2016}
(3 rows)

So maybe convert to an array before sorting?
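
A rough sketch of that conversion for the case above (assuming the jsonb
values are flat arrays of integers):

SELECT key
FROM foo
ORDER BY (SELECT array_agg(e::int ORDER BY ord)
          FROM jsonb_array_elements_text(key) WITH ORDINALITY AS t(e, ord));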

Paul



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] low perfomances migrating from 9.3 to 9.5

2016-07-27 Thread thomas veymont
2016-07-27 14:11 GMT+02:00 Michael Paquier :

>
>
> And do you see changes if you increase min_wal_size? This will
> increase the number of WAL segments recycled instead of removed at
> each checkpoint.
> --
> Michael
>

I have seen no improvement with the following parameters in 9.5:
max_wal_size = 3GB
min_wal_size = 512MB
#checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 -
1.0
#checkpoint_warning = 30s   # 0 disables

while my 9.3 configuration is:
checkpoint_segments = 128   # in logfile segments, min 1, 16MB
each
#checkpoint_timeout = 5min  # range 30s-1h
checkpoint_completion_target = 0.9  # checkpoint target duration, 0.0 -
1.0
#checkpoint_warning = 30s   # 0 disables

I have just run a quick pgbench test to get some objective numbers.
Both tests were run on the same machine (ie. production machine), same
disk, same logical volume :

On 9.5 :

$ pgbench -c 4 -j 2 -T 600 test
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 70
query mode: simple
number of clients: 4
number of threads: 2
duration: 600 s
number of transactions actually processed: 77318
latency average: 31.041 ms
tps = 128.859708 (including connections establishing)
tps = 128.860447 (excluding connections establishing)

On 9.3 :

$ pgbench -c 4 -j 2 -T 600 test
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 70
query mode: simple
number of clients: 4
number of threads: 2
duration: 600 s
number of transactions actually processed: 1834436
latency average: 1.308 ms
tps = 3057.387254 (including connections establishing)
tps = 3057.398493 (excluding connections establishing)

Note that 9.3 is handling other production requests at the same time.

Is a checkpoint_segments/WAL problem still to be suspected?
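
(One way to check that directly is the cluster-wide checkpoint counters --
and consider setting log_checkpoints = on. A sketch:)

-- If checkpoints_req dwarfs checkpoints_timed, checkpoints are being forced
-- by WAL volume, i.e. max_wal_size / checkpoint_segments is too low.
SELECT checkpoints_timed, checkpoints_req, buffers_checkpoint
FROM pg_stat_bgwriter;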

cheers
Tom


Re: [GENERAL] GIN Indexes: Extensibility

2016-07-27 Thread Vick Khera
On Wed, Jul 27, 2016 at 3:28 AM, Anton Ananich  wrote:
> In my situation this order is invalid. Obviously, year 2016 should go after
> 2014, like that:

I think you expect JSONB to sort differently than it does. I cannot
imagine what a "natural" ordering of arbitrary JSON objects is.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread Scott Mead
On Wed, Jul 27, 2016 at 3:34 AM, Achilleas Mantzios <
ach...@matrix.gatewaynet.com> wrote:

> On 27/07/2016 10:15, Condor wrote:
>
>> On 26-07-2016 21:04, Dorian Hoxha wrote:
>>
>>> Many comments: https://news.ycombinator.com/item?id=12166585
>>>
>>> https://www.reddit.com/r/programming/comments/4uph84/why_uber_engineering_switched_from_postgres_to/
>>>
>>> On Tue, Jul 26, 2016 at 7:39 PM, Guyren Howe  wrote:
>>>
>>> Honestly, I've never heard of anyone doing that. But it sounds like
>>> they had good reasons.
>>>
>>> https://eng.uber.com/mysql-migration/
>>>
>>> Thoughts?
>>>
>>> --
>>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>>> To make changes to your subscription:
>>> http://www.postgresql.org/mailpref/pgsql-general

>>>
>>
>> They are right about upgrades.
>> It's hard to shut down a 1 TB database and wait a couple of days for
>> pg_upgrade to finish the upgrade, and meanwhile the database is offline.
>> In some distros, after an upgrade of the PG version you don't have the old
>> binary and library, and need to do a full dump and restore, which takes
>> time and disk space.
>>
>
> Our last 1TB upgrade from 9.0 -> 9.3 went like a charm, in something like
> seconds (with the -k option).
> However, be warned that the planning and testing took one full week.
>

That being said, it doesn't really provide a back-out plan.  The beauty of
replication is that you can halt the upgrade at any point if need be and
cut your (hopefully small) losses. If you use -k, you are all in.  Sure,
you could set up a new standby, stop traffic, upgrade whichever node you'd
like (using -k) and still have the other ready in the event of total
catastrophe.  More often than not, I see DBAs and sysads lead the
conversation with "well, postgres can't replicate from one version to
another, so instead " followed by a fast-glazing of management's eyes
and a desire to buy a 'commercial database'.

All in all, Evan's blog seemed to start out decently technical, but it quickly
took a turn into half-truths, outdated information and, in some cases,
downright FUD:

 "The bug we ran into only affected certain releases of Postgres 9.2 and
has been fixed for a long time now. However, we still find it worrisome
that this class of bug can happen at all. A new version of Postgres could
be released at any time that has a bug of this nature, and because of the
way replication works, this issue has the potential to spread into all of
the databases in a replication hierarchy."


ISTM that they needed a tire swing

and were using a dump truck.  Hopefully they vectored somewhere in the
middle and got themselves a nice sandbox.

--Scott


>
>
>>
>> Regards,
>> Hristo S.
>>
>>
>>
>>
>>
>>
>
> --
> Achilleas Mantzios
> IT DEV Lead
> IT DEPT
> Dynacom Tankers Mgmt
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
--
Scott Mead
Sr. Architect
*OpenSCG *
http://openscg.com


Re: [GENERAL] FTS with more than one language in body and with unknown query language?

2016-07-27 Thread Artur Zakirov

On 15.07.2016 21:34, Stefan Keller wrote:

I actually expect that stemming takes place for English and German.
And we will in fact have queries in English and in German as well.
So I think we still have some issues to resolve...?


I have done the following things:
- a patch for PostgreSQL:
  https://github.com/select-artur/postgres/tree/join_tsconfig
  It adds a new option for FTS dictionary mappings (JOIN). I want to
propose this patch to -hackers.

- a dict_translate dictionary based on the dict_xsyn contrib module:
  https://github.com/select-artur/dict_translate

These things were made for multilingual purposes and are interesting for
us. Maybe they will be helpful for you too.


Example:

1 - Create files:

$SHAREDIR/tsearch_data/geo_en.trn:

forest wald forst holz

$SHAREDIR/tsearch_data/geo_de.trn:

wald forest wood
forst forest wood
holz forest wood

2 - Execute queries:

=# CREATE TEXT SEARCH DICTIONARY geo_en (
Template = translate,
DictFile = geo_en,
InputDict = pg_catalog.english_stem);
=# CREATE TEXT SEARCH DICTIONARY geo_de (
Template = translate,
DictFile = geo_de,
InputDict = pg_catalog.german_stem);
=# CREATE TEXT SEARCH CONFIGURATION geo(COPY='simple');
=# ALTER TEXT SEARCH CONFIGURATION geo
ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
word, hword, hword_part
WITH geo_en (JOIN), english_stem (JOIN), geo_de (JOIN), german_stem 
(JOIN);


=# CREATE TABLE geo (id int, body_en text, body_de text);
=# INSERT INTO geo VALUES (1, 'forest', NULL), (2, NULL, 'wald'),
(3, NULL, 'forst');

3 - Sample queries:

=# SELECT * FROM geo WHERE to_tsvector('geo', body_en) @@ 
to_tsquery('geo', 'forests');

 id | body_en | body_de
----+---------+---------
  1 | forest  | (null)
(1 row)

=# SELECT * FROM geo WHERE to_tsvector('geo', body_de) @@ 
to_tsquery('geo', 'forests');

 id | body_en | body_de
----+---------+---------
  2 | (null)  | wald
  3 | (null)  | forst
(2 rows)

=# SELECT * FROM geo WHERE to_tsvector('geo', body_en) @@ 
to_tsquery('geo', 'walde');

 id | body_en | body_de
----+---------+---------
  1 | forest  | (null)
(1 row)

I will be glad for your comments.

--
Artur Zakirov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] low perfomances migrating from 9.3 to 9.5

2016-07-27 Thread Michael Paquier
On Wed, Jul 27, 2016 at 6:22 PM, thomas veymont
 wrote:
> as far as I remember these settings were for faster bulk loading.
>
> in 9.5 we set:
> max_wal_size = 6GB  #
> https://www.postgresql.org/docs/9.5/static/release-9-5.html, max_wal_size =
> (3 * checkpoint_segments) * 16MB
> #min_wal_size = 80MB
>
> though I tried max_wal_size = 3GB (default is 1GB) and did not notice any
> improvement.

And do you see changes if you increase min_wal_size? This will
increase the number of WAL segments recycled instead of removed at
each checkpoint.
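
For example (values purely illustrative, sized roughly against the old
checkpoint_segments = 128 setup):

# postgresql.conf on 9.5
max_wal_size = 6GB   # ~ (3 * 128) * 16MB, per the release-notes formula
min_wal_size = 2GB   # keep more segments around to be recycled, not removed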
-- 
Michael


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] question on parsing postgres sql queries

2016-07-27 Thread Jerome Wagner
> What problem are you trying to solve here?  To wit, not everything that
> can be parsed is documented - usually intentionally.

I am trying to see whether we could use the documentation as a kind of
formal specification of the language, but I understand that the devil is
in the details and that even formal specifications can lead to incompatible
implementations.

It would have been nice if the clean documentation of the project could
be used as a meta-grammar sufficient to generate the grammar, but I
will have to dig further into the Bison grammar files.

The project I mentioned that isolates the parser from PostgreSQL binary as
a re-usable library is probably the closest you can get currently to a
parser matching the real engine.

Otherwise, yes, parsing the synopsis could maybe lead to a sanity check on
the fact that the documentation is in line with the grammar. This could
lead to warnings or help uncover unexpected corner cases not mentioned in
the documentation.

Thanks for your answers
Jerome


On Tue, Jul 26, 2016 at 9:52 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Tue, Jul 26, 2016 at 3:20 PM, Jerome Wagner 
> wrote:
>
>>
>> Would it make sense to use these sgml synopsis as some kind of source of
>> truth, parse them, and automatically generate a parser for a specifc
>> language ?
>>
>
> What problem are you trying to solve here?  To wit, not everything that
> can be parsed is documented - usually intentionally.
>
>
>> Could the parser commiters share some lights on how the documentation
>> process interacts with the parser commits ?
>>
>>
Commits that modify the parser are expected to have manual modifications
to the relevant documentation as well.
>
> David J.
>
>


Re: [GENERAL] low perfomances migrating from 9.3 to 9.5

2016-07-27 Thread thomas veymont
2016-07-27 11:06 GMT+02:00 Michael Paquier :

> On Wed, Jul 27, 2016 at 5:54 PM, thomas veymont
>  wrote:
> > We switched from PostgreSQL server 9.3 to 9.5. From the very beginning
> > there was a noticeable drop in performance (for example: when injecting
> > our SQL dumps into 9.5, COPY and CREATE INDEX were very slow).
> >
> > Our configuration file was the same for 9.3 and 9.5, except for
> > "checkpoint_segments", which has been deprecated.
>
> Which value are you using for max_wal_size and min_wal_size? And which
> value of checkpoint_segments did you use previously? This could
> influence the checkpoint frequency.
> --
> Michael
>

in 9.3 we had :
checkpoint_segments = 128
#checkpoint_timeout = 5min  # range 30s-1h
checkpoint_completion_target = 0.9   # checkpoint target duration, 0.0 - 1.0
#checkpoint_warning = 30s   # 0 disables

as far as I remember these settings were for faster bulk loading.

in 9.5 we set:
max_wal_size = 6GB  #
https://www.postgresql.org/docs/9.5/static/release-9-5.html, max_wal_size =
(3 * checkpoint_segments) * 16MB
#min_wal_size = 80MB

though I tried max_wal_size = 3GB (default is 1GB) and did not notice any
improvement.

cheers
tom


Re: [GENERAL] low perfomances migrating from 9.3 to 9.5

2016-07-27 Thread Michael Paquier
On Wed, Jul 27, 2016 at 5:54 PM, thomas veymont
 wrote:
> We switched from PostgreSQL server 9.3 to 9.5. From the very beginning there
> was a noticeable drop in performance (for example: when injecting our SQL
> dumps into 9.5, COPY and CREATE INDEX were very slow).
>
> Our configuration file was the same for 9.3 and 9.5, except for
> "checkpoint_segments", which has been deprecated.

Which value are you using for max_wal_size and min_wal_size? And which
value of checkpoint_segments did you use previously? This could
influence the checkpoint frequency.
-- 
Michael


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] low perfomances migrating from 9.3 to 9.5

2016-07-27 Thread thomas veymont
hello all,

We switched from PostgreSQL server 9.3 to 9.5. From the very beginning
there was a noticeable drop in performance (for example: when injecting
our SQL dumps into 9.5, COPY and CREATE INDEX were very slow).

Our configuration file was the same for 9.3 and 9.5, except for
"checkpoint_segments", which has been deprecated.

I don't have any objective performance measurements to show yet,
because the problem was so obvious (just by looking at the log feed) that
we quickly switched back to 9.3 (production constraints).

What could be the reason for such a problem?
I wondered whether our system kernel might be getting too old for a recent
PostgreSQL engine.
OS is CentOS 5.5, kernel 2.6.18-194.el5 #1 SMP x86_64.

thanks for suggestions,
Tom


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread Achilleas Mantzios

On 27/07/2016 10:15, Condor wrote:

On 26-07-2016 21:04, Dorian Hoxha wrote:

Many comments: https://news.ycombinator.com/item?id=12166585
https://www.reddit.com/r/programming/comments/4uph84/why_uber_engineering_switched_from_postgres_to/

On Tue, Jul 26, 2016 at 7:39 PM, Guyren Howe  wrote:


Honestly, I've never heard of anyone doing that. But it sounds like
they had good reasons.

https://eng.uber.com/mysql-migration/

Thoughts?

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general



They are right about upgrades.
It's hard to shut down a 1 TB database and wait a couple of days for
pg_upgrade to finish the upgrade, and meanwhile the database is offline.
In some distros, after an upgrade of the PG version you don't have the old
binary and library, and need to do a full dump and restore, which takes
time and disk space.


Our last 1TB upgrade from 9.0 -> 9.3 went like a charm, in something like
seconds (with the -k option).
However, be warned that the planning and testing took one full week.




Regards,
Hristo S.








--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread Marc Fournier

> On Jul 27, 2016, at 00:15, Condor  wrote:
> 
> On 26-07-2016 21:04, Dorian Hoxha wrote:
>> Many comments: https://news.ycombinator.com/item?id=12166585
>> https://www.reddit.com/r/programming/comments/4uph84/why_uber_engineering_switched_from_postgres_to/
>> On Tue, Jul 26, 2016 at 7:39 PM, Guyren Howe  wrote:
>>> Honestly, I've never heard of anyone doing that. But it sounds like
>>> they had good reasons.
>>> https://eng.uber.com/mysql-migration/
>>> Thoughts?
>>> --
>>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>>> To make changes to your subscription:
>>> http://www.postgresql.org/mailpref/pgsql-general
> 
> 
> They are right about upgrades.
> It's hard to shut down a 1 TB database and wait a couple of days for
> pg_upgrade to finish the upgrade, and meanwhile the database is offline.
> In some distros, after an upgrade of the PG version you don't have the old
> binary and library, and need to do a full dump and restore, which takes
> time and disk space.

Not having tried to use pg_upgrade on a 1TB database, this might not apply, but
pg_upgrade has a --link option that should greatly minimize the time required to
upgrade, since it eliminates the duplication of files ... apparently even works
on Windows:

"--link ... use hard links instead of copying files to the new cluster (use
junction points on Windows)"

In fact, the --link option should also mean removing the requirement for 'double
the disk space' to do the upgrade ...

I don't have a 1TB database to try it on, mind you, so your 'wait couple of
days' might be *with* the --link option?
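
For reference, a typical --link invocation looks something like this (paths
are illustrative):

$ pg_upgrade --link \
    -b /usr/pgsql-9.3/bin -B /usr/pgsql-9.5/bin \
    -d /var/lib/pgsql/9.3/data -D /var/lib/pgsql/9.5/data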

--
Marc G Fournier   
http://www.2ndQuadrant.com 
PostgreSQL Development, 24x7 Support, RemoteDBA, Training & Services



[GENERAL] GIN Indexes: Extensibility

2016-07-27 Thread Anton Ananich
Dear All,

Here is what I have:

user=# create table FOO (key jsonb);
CREATE TABLE
user=# insert into FOO(key) values ('[2014]'), ('[2015]'), ('[2016]'),
('[2014, 2]'), ('[2014, 2, 3]'), ('[2014, 3]'), ('[2014,2,4]'),
('[2014, 2,4]'), ('[2014,3,13]'), ('[2014, 2, 15]');
INSERT 0 10
user=# SELECT key FROM FOO order by key;
     key
--------------
 [2014]
 [2015] <==
 [2016] <==
 [2014, 2]
 [2014, 3] <==
 [2014, 2, 3]
 [2014, 2, 4]
 [2014, 2, 4]
 [2014, 2, 15]
 [2014, 3, 13]
(10 rows)


In my situation this order is invalid. Obviously, year 2016 should go after 
2014, like that:

     key
--------------
 [2014]
 [2014, 2]
 [2014, 2, 3]
 [2014, 2, 4]
 [2014, 2, 4]
 [2014, 2, 15]
 [2014, 3] <==
 [2014, 3, 13]
 [2015] <==
 [2016] <==

This is a simplified example; my real application is much more complicated, and 
the sorted arrays could have tens of values, or could even be arrays of arrays. 
For this reason I need to customize the sort function.

I found that there is a way to customize function `int compare(Datum a, Datum 
b)` (proof link: 
https://www.postgresql.org/docs/9.5/static/gin-extensibility.html), but I found 
no example which shows how to use it.

I’d appreciate any information which could help me to achieve the described 
result, even if it is just a link to an existing example.

Regards,
Anthony Ananich




Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread Chris Travers
Just a few points on reading this.

First, the timeline bugs regarding replication (particularly, IIRC, in the
9.1 days).  I remember accidentally corrupting a (fortunately only
demonstration!) database cluster in the process of demonstrating promotion
at least once.  IIRC, the last time I tried to reproduce these problems, they
had been fixed (by 9.3?).

The replication section made me wonder though if they were using the right
replication solution for the job.  If you don't want an on-disk copy, don't
use physical replication.  This being said, there is one serious issue here
that is worth mentioning, which is that since autovacuum on the master has
no knowledge of queries running on the slave, it is easy to have longer-running
queries on a slave that need to see rows already removed by autovacuum
and replication.  This can of course be easily worked around (if your query takes
30 sec to run, every 30 sec open a minute-long transaction on the master,
which means that autovacuum can never clean rows that are older than 30
sec) but such is not a very robust solution and may cause more problems
than it is worth (the real solution, where that is a problem, is moving to a
logical replication system).  As I usually put it, streaming
replication is for cases where you need to guarantee an exact replica of
everything, while logical replication is where you need a copy of data for
use.
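
(For completeness, the less hacky mitigation on a physical standby is usually
hot_standby_feedback, at the cost of some extra bloat on the master; a sketch,
with an illustrative delay value:)

# postgresql.conf on the standby
hot_standby_feedback = on          # standby reports its oldest xmin upstream,
                                   # so vacuum keeps rows its queries still need
max_standby_streaming_delay = 60s  # or: how long replay may wait for a query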

Finally, if I were trying to create something like schemaless, there is one
major limitation of PostgreSQL that is not mentioned here, which is TOAST
overhead.  I have seen people try to do things like this and TOAST overhead
can be a real problem in these cases.  If your data for a row won't easily
fit in significantly less than a page, then every read of that data and
every write can effectively do an implicit nested loop join.  And if you
want to talk about write amplification...  But this is also very well
hidden and not easy to measure unless you know to look for it specifically,
so it is possible that they ran into it and didn't know it, but I don't have
any knowledge of what they did or tried, so I could be totally off base
here.  I would say I have seen more than one project run into this, and
because explain analyze select * does not detoast, the cost never shows up
in the plan.
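
A quick way to at least see how much of a table lives out of line in TOAST
(a sketch; 'mytable' is a placeholder):

SELECT pg_size_pretty(pg_table_size('mytable'))    AS total_incl_toast,
       pg_size_pretty(pg_relation_size('mytable')) AS heap_only;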

All of the above being said, there are solutions to all the major
problems.  But you have to know about them, where to look, and what to do.
And with higher scale, one very important aspect is that attention to
detail starts to matter a whole lot.  I agree that there are some good
points raised but I wonder what the solutions are.  There is room for some
improvement in the backend (it would really be nice to instrument and
measure toasting/detoasting overhead in explain analyze) but for a lot of
these  I wonder if that is secondary.   PostgreSQL is very well optimized
for a certain series of tasks, and one can build well optimized solutions
well outside that.  At a certain point (including a certain scale)
there will be no substitute for a team of people who really know the db
backend inside and out who can design around limitations and I think that
is true for all databases I have worked with.

On Tue, Jul 26, 2016 at 7:39 PM, Guyren Howe  wrote:

> Honestly, I've never heard of anyone doing that. But it sounds like they
> had good reasons.
>
> https://eng.uber.com/mysql-migration/
>
> Thoughts?
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-07-27 Thread Condor

On 26-07-2016 21:04, Dorian Hoxha wrote:

Many comments: https://news.ycombinator.com/item?id=12166585
https://www.reddit.com/r/programming/comments/4uph84/why_uber_engineering_switched_from_postgres_to/

On Tue, Jul 26, 2016 at 7:39 PM, Guyren Howe  wrote:


Honestly, I've never heard of anyone doing that. But it sounds like
they had good reasons.

https://eng.uber.com/mysql-migration/

Thoughts?

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general



They are right for upgrades.
It's a hard to shutdown 1 TB database and wait couple of days pg_upgrade 
to finish upgrade and meanwhile database is offline.
In some distros after upgrade of PG version you don't have old binary 
and library, need to do full dump and restore that take time and disk 
space.



Regards,
Hristo S.





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_dumping extensions having sequences with 9.6beta3

2016-07-27 Thread Michael Paquier
On Wed, Jul 27, 2016 at 8:07 AM, Stephen Frost  wrote:
> That'd be great.  It's definitely on my list of things to look into, but
> I'm extremely busy this week.  I hope to look into it on Friday, would
> be great to see what you find.

Sequences that are directly defined in extensions do not get dumped,
and sequences that are part of a serial column in an extension are
getting dumped. Looking into this problem, getOwnedSeqs() is visibly
making an incorrect assumption: sequences owned by table columns are
dumped unconditionally, but this should not happen for sequences that are
part of extensions. More precisely, dobj->dump is being forced to
DUMP_COMPONENT_ALL, which causes the sequence definition to be dumped.
Oops.

The patch attached fixes the problem for me. I have added as well
tests in test_pg_dump in the shape of sequences defined in an
extension, and sequences that are part of a serial column. This patch
is also able to work in the case where a sequence is created as part
of a serial column, and gets removed after, say with ALTER EXTENSION
DROP SEQUENCE. The behavior for sequences and serial columns that are
not part of extensions is unchanged.

Stephen, it would be good if you could check the correctness of this
patch as you did all this refactoring of pg_dump to support catalog
ACLs. I am sure, by the way, that checking for (owning_tab->dobj.dump &
DUMP_COMPONENT_DEFINITION) != 0 is not good because of, for example, the
case of a serial column created in an extension where the sequence is
dropped from the extension afterwards.
-- 
Michael
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 08c2b0c..0278995 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -6037,6 +6037,8 @@ getOwnedSeqs(Archive *fout, TableInfo tblinfo[], int numTables)
 			continue;			/* not an owned sequence */
 		if (seqinfo->dobj.dump & DUMP_COMPONENT_DEFINITION)
 			continue;			/* no need to search */
+		if (seqinfo->dobj.ext_member)
+			continue;			/* member of an extension */
 		owning_tab = findTableByOid(seqinfo->owning_tab);
 		if (owning_tab && owning_tab->dobj.dump)
 		{
diff --git a/src/test/modules/test_pg_dump/t/001_base.pl b/src/test/modules/test_pg_dump/t/001_base.pl
index fd9c37f..9caee93 100644
--- a/src/test/modules/test_pg_dump/t/001_base.pl
+++ b/src/test/modules/test_pg_dump/t/001_base.pl
@@ -226,7 +226,7 @@ my %tests = (
 	'CREATE TABLE regress_pg_dump_table' => {
 		regexp => qr/^
 			\QCREATE TABLE regress_pg_dump_table (\E
-			\n\s+\Qcol1 integer,\E
+			\n\s+\Qcol1 integer NOT NULL,\E
 			\n\s+\Qcol2 integer\E
 			\n\);$/xm,
 		like   => { binary_upgrade => 1, },
@@ -241,6 +241,48 @@ my %tests = (
 			schema_only=> 1,
 			section_pre_data   => 1,
 			section_post_data  => 1, }, },
+	'CREATE SEQUENCE regress_pg_dump_table_col1_seq' => {
+		regexp => qr/^
+			\QCREATE SEQUENCE regress_pg_dump_table_col1_seq\E
+			\n\s+\QSTART WITH 1\E
+			\n\s+\QINCREMENT BY 1\E
+			\n\s+\QNO MINVALUE\E
+			\n\s+\QNO MAXVALUE\E
+			\n\s+\QCACHE 1;\E
+			$/xm,
+		like   => { binary_upgrade => 1, },
+		unlike => {
+			clean  => 1,
+			clean_if_exists=> 1,
+			createdb   => 1,
+			defaults   => 1,
+			no_privs   => 1,
+			no_owner   => 1,
+			pg_dumpall_globals => 1,
+			schema_only=> 1,
+			section_pre_data   => 1,
+			section_post_data  => 1, }, },
+	'CREATE SEQUENCE regress_pg_dump_seq' => {
+		regexp => qr/^
+			\QCREATE SEQUENCE regress_pg_dump_seq\E
+			\n\s+\QSTART WITH 1\E
+			\n\s+\QINCREMENT BY 1\E
+			\n\s+\QNO MINVALUE\E
+			\n\s+\QNO MAXVALUE\E
+			\n\s+\QCACHE 1;\E
+			$/xm,
+		like   => { binary_upgrade => 1, },
+		unlike => {
+			clean  => 1,
+			clean_if_exists=> 1,
+			createdb   => 1,
+			defaults   => 1,
+			no_privs   => 1,
+			no_owner   => 1,
+			pg_dumpall_globals => 1,
+			schema_only=> 1,
+			section_pre_data   => 1,
+			section_post_data  => 1, }, },
 	'CREATE ACCESS METHOD regress_test_am' => {
 		regexp => qr/^
 			\QCREATE ACCESS METHOD regress_test_am TYPE INDEX HANDLER bthandler;\E
diff --git a/src/test/modules/test_pg_dump/test_pg_dump--1.0.sql b/src/test/modules/test_pg_dump/test_pg_dump--1.0.sql
index 5fe6063..93de2c5 100644
--- a/src/test/modules/test_pg_dump/test_pg_dump--1.0.sql
+++ b/src/test/modules/test_pg_dump/test_pg_dump--1.0.sql
@@ -4,10 +4,12 @@
 \echo Use "CREATE EXTENSION test_pg_dump" to load this file. \quit
 
 CREATE TABLE regress_pg_dump_table (
-	col1 int,
+	col1 serial,
 	col2 int
 );
 
+CREATE SEQUENCE regress_pg_dump_seq;
+
 GRANT SELECT ON regress_pg_dump_table TO regress_dump_test_role;
 GRANT SELECT(col1) ON regress_pg_dump_table TO public;
 

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general