On Thu, Apr 11, 2019 at 03:34:30PM +1200, David Rowley wrote:
> On Thu, 21 Mar 2019 at 00:51, David Rowley
> wrote:
> > Just so I don't forget about this, I've added it to the July 'fest.
> >
> > https://commitfest.postgresql.org/23/2065/
>
> Now that we have 428b260f8, I think the version of
Fujita-san,
Thanks for the review.
On 2019/04/10 17:38, Etsuro Fujita wrote:
> (2019/03/06 18:33), Amit Langote wrote:
>> I noticed a bug with how UPDATE tuple routing initializes ResultRelInfos
>> to use for partition routing targets. Specifically, the bug occurs when
>> UPDATE targets include
On 2019/04/11 14:03, David Rowley wrote:
> On Fri, 5 Apr 2019 at 19:50, Amit Langote
> wrote:
>> While we're on the topic of the relation between constraint exclusion and
>> partition pruning, I'd like to (re-) propose this documentation update
>> patch. The partitioning chapter in ddl.sgml
On Fri, 5 Apr 2019 at 19:50, Amit Langote wrote:
> While we're on the topic of the relation between constraint exclusion and
> partition pruning, I'd like to (re-) propose this documentation update
> patch. The partitioning chapter in ddl.sgml says update/delete of
> partitioned tables uses
> On Apr 10, 2019, at 9:08 PM, Mark Kirkwood
> wrote:
>
>
>> On 11/04/19 4:01 PM, Mark Kirkwood wrote:
>>> On 9/04/19 12:27 PM, Ashwin Agrawal wrote:
>>>
> >> Heikki and I have been hacking recently for a few weeks to implement
>>> in-core columnar storage for PostgreSQL. Here's the design and
On 2019/04/11 13:50, David Rowley wrote:
> On Thu, 11 Apr 2019 at 16:06, Amit Langote
> wrote:
>> I've posted a patch last week on the "speed up partition planning" thread
>> [1] which modifies ddl.sgml to remove the text about UPDATE/DELETE using
>> constraint exclusion under the covers. Do you
On Thu, 11 Apr 2019 at 16:06, Amit Langote
wrote:
>
> On 2019/04/11 12:34, David Rowley wrote:
> > Now that we have 428b260f8, I think the version of this that goes into
> > master should be more like the attached.
>
> Thanks, looks good.
Thanks for looking.
> I've posted a patch last week on
David Rowley writes:
> I was surprised to see nothing mentioned about attempting to roughly
> sort the test order in each parallel group according to their runtime.
I'm confused about what you have in mind here? I'm pretty sure pg_regress
launches all the scripts in a group at the same time, so
On Wed, Apr 10, 2019 at 11:59:18AM -0500, Justin Pryzby wrote:
> I found and included fixes for a few more references:
>
> doc/src/sgml/catalogs.sgml | 2 +-
> doc/src/sgml/ddl.sgml | 3 +--
> doc/src/sgml/information_schema.sgml | 4 ++--
>
On Thu, 11 Apr 2019 at 11:53, Paul Martinez wrote:
>
> I have some questions about the different types of extended statistics
> that were introduced in Postgres 10.
> - Which types of queries are each statistic type supposed to improve?
Multivariate ndistinct stats are aimed to improve distinct
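[Editor's note: a minimal sketch of how such extended statistics are created; the table and column names are hypothetical, not from the thread.]

```sql
-- Hypothetical table with strongly correlated columns.
CREATE TABLE t (a int, b int);
INSERT INTO t SELECT i / 100, i / 100 FROM generate_series(1, 10000) s(i);

-- ndistinct statistics improve group-count estimates for GROUP BY a, b;
-- functional dependencies improve WHERE a = 1 AND b = 1 selectivity.
CREATE STATISTICS t_stats (ndistinct, dependencies) ON a, b FROM t;
ANALYZE t;
```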
On Wed, Apr 10, 2019 at 09:42:47PM +0200, Peter Eisentraut wrote:
> That is a great analysis. Seems like block-level is the preferred way
> forward.
Among the solutions related to incremental backups I have seen from the
community, all tend to prefer block-level backups, per the
filtering which is
On Tue, Apr 09, 2019 at 10:38:19AM +0900, Michael Paquier wrote:
> Sure. With something like the attached? I don't think that there is
> much point to complicate the test code with multiple roles if the
> default is a superuser.
As this topic differs from the original thread, I have started a
Hi all,
Recent commit bfc80683 has added some documentation in pg_rewind about
the fact that it is possible to do the operation with a non-superuser,
assuming that this role has sufficient grant rights to execute the
functions used by pg_rewind.
Peter Eisentraut has suggested to have some tests
On 11/04/19 4:01 PM, Mark Kirkwood wrote:
On 9/04/19 12:27 PM, Ashwin Agrawal wrote:
Heikki and I have been hacking recently for a few weeks to implement
in-core columnar storage for PostgreSQL. Here's the design and initial
implementation of Zedstore, compressed in-core columnar storage
On 9/04/19 12:27 PM, Ashwin Agrawal wrote:
Heikki and I have been hacking recently for a few weeks to implement
in-core columnar storage for PostgreSQL. Here's the design and initial
implementation of Zedstore, compressed in-core columnar storage (table
access method). Attaching the patch and
On Tue, Apr 09, 2019 at 03:50:27PM +0900, Michael Paquier wrote:
> And here is the patch to address this issue. It happens that a bit
> more than the dependency switch was lacking here:
> - At swap time, we need to have the new index definition track
> relispartition from the old index.
> - Again
On Thu, 21 Mar 2019 at 00:51, David Rowley wrote:
> Just so I don't forget about this, I've added it to the July 'fest.
>
> https://commitfest.postgresql.org/23/2065/
Now that we have 428b260f8, I think the version of this that goes into
master should be more like the attached.
I think the
Hi Ram,
I think this documentation helps people who want to understand functions.
>Please find the updated patch. Added to the commitfest as well
I have some comments.
I think some users who would like to learn about custom functions check
src/test/perl/README first.
How about adding comments to the
On Thu, 11 Apr 2019 at 06:52, Bruce Momjian wrote:
>
> OK, let me step back. Why are people resetting the statistics
> regularly? Based on that purpose, does it make sense to clear the
> stats that affect autovacuum?
I can't speak for everyone, but once upon a time when I first started
using
Greetings,
* Robbie Harwood (rharw...@redhat.com) wrote:
> Bruce Momjian writes:
> > On Wed, Apr 3, 2019 at 08:49:25AM +0200, Magnus Hagander wrote:
> >> On Wed, Apr 3, 2019 at 12:22 AM Joe Conway wrote:
> >>
> >> Personally I don't find it as confusing as is either, and I find
> >> hostgss to
On Mon, Apr 8, 2019 at 6:42 PM Noah Misch wrote:
> - AIX animals failed two ways. First, I missed a "use" statement such that
> poll_start() would fail if it needed more than one attempt. Second, I
> assumed $pid would be gone as soon as kill(9, $pid) returned[1].
> [1] POSIX says "sig or
Hi, Hackers
I noticed something strange. Is it harmless?
I didn't detect any problem, but I feel uneasy.
Step:
- There are two standbys that connect to primary.
- Kill primary and promote one standby.
- Restart the other standby, with primary_conninfo reset to connect to the new primary.
I
On Wed, Apr 10, 2019 at 5:49 PM Robert Haas wrote:
> There is one thing that does worry me about the file-per-LSN-range
> approach, and that is memory consumption when trying to consume the
> information. Suppose you have a really high velocity system. I don't
> know exactly what the busiest
>>> On 2019-03-29 20:32, Joe Conway wrote:
>>>> pg_util
>>>
>>> How is that better than just renaming to pg_$oldname?
>>
>> As I already said upthread:
>>
>>> This way, we would be free from the command name conflict problem
>
> Well, whatever we do -- if anything -- we would certainly
On Wed, Apr 10, 2019 at 4:56 PM Peter Geoghegan wrote:
> The original fastpath tests don't seem particularly effective to me,
> even without the oversight I mentioned. I suggest that you remove
> them, since the minimal btree_index.sql fast path test is sufficient.
To be clear: I propose that
On Fri, 29 Mar 2019 at 13:25, Tom Lane wrote:
>
> Maybe if we want to merge these things into one executable,
> it should be a new one. "pg_util createrole bob" ?
>
+1 as I proposed in
https://www.postgresql.org/message-id/bdd1adb1-c26d-ad1f-2f15-cc52056065d4%40timbira.com.br
--
On Wed, Apr 10, 2019 at 4:19 PM Tom Lane wrote:
> > I'll come up with a patch to deal with this situation, by
> > consolidating the old and new tests in some way. I don't think that
> > your work needs to block on that, though.
>
> Should I leave out the part of my patch that creates
Hello,
I have some questions about the different types of extended statistics
that were introduced in Postgres 10.
- Which types of queries are each statistic type supposed to improve?
- When should one type of statistic be used over the other? Should they
both always be used?
We have a
Peter Geoghegan writes:
> On Wed, Apr 10, 2019 at 3:35 PM Tom Lane wrote:
>> * Likewise, I split up indexing.sql by moving the "fastpath" test into
>> a new file index_fastpath.sql.
> I just noticed that the "fastpath" test actually fails to test the
> fastpath optimization -- the coverage we
On Thu, Apr 11, 2019 at 10:54 AM Thomas Munro wrote:
> On Thu, Apr 11, 2019 at 9:43 AM Peter Billen wrote:
> > I kinda expected/hoped that transaction t2 would get aborted by a
> > serialization error, and not an exclude constraint violation. This makes
> > the application session bound to
On Fri, Mar 29, 2019 at 7:06 PM Tsunakawa, Takayuki <
tsunakawa.ta...@jp.fujitsu.com> wrote:
> Hi Hari-san,
>
> I've reviewed all the files. The patch would be OK when the following
> have been fixed, except for the complexity of fe-connect.c (which probably
> cannot be improved.)
>
>
On Wed, Apr 10, 2019 at 3:35 PM Tom Lane wrote:
> I finally got some time to pursue that, and attached is a proposed patch
> that moves some tests around and slightly adjusts some other ones.
> To cut to the chase: on my workstation, this cuts the time for
> "make installcheck-parallel" from 21.9
On Wed, 10 Apr 2019 at 16:33, Alvaro Herrera
wrote:
>
> On 2019-Apr-10, Bruce Momjian wrote:
>
> > On Thu, Apr 11, 2019 at 04:14:11AM +1200, David Rowley wrote:
>
> > > I still think we should start with a warning about pg_stat_reset().
> > > People are surprised by this, and these are
Andres Freund writes:
> On 2019-04-10 18:35:15 -0400, Tom Lane wrote:
>> ... What I did instead was to shove
>> that test case and some related ones into a new plpgsql test file,
>> src/pl/plpgsql/src/sql/plpgsql_trap.sql, so that it's not part of the
>> core regression tests at all. (We've
Hi,
On 2019-04-10 18:35:15 -0400, Tom Lane wrote:
> on my workstation, this cuts the time for "make installcheck-parallel"
> from 21.9 sec to 13.9 sec, or almost 40%. I think that's a worthwhile
> improvement, considering how often all of us run those tests.
Awesome.
> * The plpgsql test ran
On Tue, Nov 27, 2018 at 5:43 AM Andrey Borodin wrote:
> > On 31 Aug 2018, at 2:40, Thomas Munro
> > wrote:
> > [1] https://arxiv.org/pdf/1509.05053.pdf
>
> I've implemented all of the strategies used in that paper.
> On a B-tree page we have line pointers ordered in key order and tuples
On 2019-Apr-10, Alvaro Herrera wrote:
> but the test immediately does this:
>
> alter table at_partitioned alter column b type numeric using b::numeric;
>
> and watch what happens! (1663 is pg_default)
>
> alvherre=# select relname, reltablespace from pg_class where relname like
>
Over at
https://www.postgresql.org/message-id/CA%2BTgmobFVe4J4AA7z9OMUzKnm09Tt%2BsybhxeL_Ddst3q3wqpzQ%40mail.gmail.com
I mentioned parsing the WAL to extract block references so that
incremental backup could efficiently determine which blocks needed to
be copied. Ashwin replied in
Hi all,
I understood that v11 includes predicate locking for gist indexes, as per
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=3ad55863e9392bff73377911ebbf9760027ed405
.
I tried this in combination with an exclude constraint as following:
drop table if exists t;
create table
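[Editor's note: a hypothetical reconstruction of the kind of schema involved, not the poster's exact table.]

```sql
-- Exclusion constraint backed by a GiST index; range types have a
-- built-in GiST opclass, so no extension is needed for this example.
DROP TABLE IF EXISTS t;
CREATE TABLE t (
    period int4range,
    EXCLUDE USING gist (period WITH &&)  -- forbid overlapping ranges
);
-- Under SERIALIZABLE, two concurrent inserts of overlapping ranges may
-- fail with an exclusion violation rather than a serialization error,
-- which is the behavior being discussed.
```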
I just want to be on record that I don't think there is a problem here that
needs to be solved. The choice to put Postgres-related binaries in /usr/bin
or wherever is a distribution/packaging decision. As has been pointed out,
if I download, build, and install Postgres, the binaries by default go
On Wed, 10 Apr 2019 11:55:51 -0700
Andres Freund wrote:
> Hi,
>
> On 2019-04-10 14:38:43 -0400, Robert Haas wrote:
> > On Wed, Apr 10, 2019 at 2:21 PM Jehan-Guillaume de Rorthais
> > wrote:
> > > In my current design, the scan is done backward from end to start and I
> > > keep all the
On Wed, 10 Apr 2019 14:38:43 -0400
Robert Haas wrote:
> On Wed, Apr 10, 2019 at 2:21 PM Jehan-Guillaume de Rorthais
> wrote:
> > In my current design, the scan is done backward from end to start and I
> > keep all the records appearing after the last occurrence of their
> > respective FPI.
>
On 2019-Apr-10, Andres Freund wrote:
> Hi,
>
> On 2019-04-10 09:28:21 -0400, Alvaro Herrera wrote:
> > So I think that apart from David's patch, we should just document all
> > these things carefully.
>
> Yea, I think that's the most important part.
>
> I'm not convinced that we should have
On 10.04.2019 19:51, Robert Haas wrote:
On Wed, Apr 10, 2019 at 10:22 AM Konstantin Knizhnik
wrote:
Some time ago I implemented an alternative version of the ptrack utility
(not the one used in pg_probackup)
which detects updated blocks at the file level. It is very simple and maybe
it can be
On 2019-04-10 15:01, Tatsuo Ishii wrote:
>> On 2019-03-29 20:32, Joe Conway wrote:
>>> pg_util
>>
>> How is that better than just renaming to pg_$oldname?
>
> As I already said upthread:
>
>> This way, we would be free from the command name conflict problem
Well, whatever we do -- if
On 2019-04-10 15:15, Fred .Flintstone wrote:
> The warnings would only be printed if the programs were executed with
> the old file names.
> This in order to inform people relying on the old names that they are
> deprecated and they should move to the new names with the pg_ prefix.
Yeah, that
On 2019-04-10 17:31, Robert Haas wrote:
> I think the way to think about this problem, or at least the way I
> think about this problem, is that we need to decide whether want
> file-level incremental backup, block-level incremental backup, or
> byte-level incremental backup.
That is a great
On 2019-Apr-10, Bruce Momjian wrote:
> On Thu, Apr 11, 2019 at 04:14:11AM +1200, David Rowley wrote:
> > I still think we should start with a warning about pg_stat_reset().
> > People are surprised by this, and these are just the ones who notice:
> >
> >
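[Editor's note: the surprise in question is that pg_stat_reset() also zeroes the per-table counters autovacuum relies on; a minimal illustration, hypothetical rather than from the thread.]

```sql
-- pg_stat_reset() zeroes all statistics counters for the current
-- database, including n_dead_tup and n_mod_since_analyze, which
-- autovacuum uses to decide when to vacuum or analyze a table.
SELECT pg_stat_reset();

-- Afterwards, the counters autovacuum consults read as zero:
SELECT relname, n_dead_tup, n_mod_since_analyze, last_autovacuum
FROM pg_stat_user_tables;
```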
Hi,
On 2019-04-10 14:38:43 -0400, Robert Haas wrote:
> On Wed, Apr 10, 2019 at 2:21 PM Jehan-Guillaume de Rorthais
> wrote:
> > In my current design, the scan is done backward from end to start and I
> > keep all
> > the records appearing after the last occurrence of their respective FPI.
>
>
On Thu, Apr 11, 2019 at 04:14:11AM +1200, David Rowley wrote:
> On Sat, 30 Mar 2019 at 00:59, Robert Haas wrote:
> >
> > On Wed, Mar 27, 2019 at 7:49 PM David Rowley
> > wrote:
> > > Yeah, analyze, not vacuum. It is a bit scary to add new ways for
> > > auto-vacuum to suddenly have a lot of
Hi all,
I was wondering if there exists either a test suite of pathological failure
cases for postgres, or a dataset of failure scenarios. I'm not exactly sure
what such a dataset would look like, possibly a bunch of snapshots of test
databases when undergoing a bunch of different failure
On Wed, Apr 10, 2019 at 2:21 PM Jehan-Guillaume de Rorthais
wrote:
> In my current design, the scan is done backward from end to start and I keep
> all
> the records appearing after the last occurrence of their respective FPI.
Oh, interesting. That seems like it would require pretty major
Hi,
First thank you for your answer!
On Wed, 10 Apr 2019 12:21:03 -0400
Robert Haas wrote:
> On Wed, Apr 10, 2019 at 10:57 AM Jehan-Guillaume de Rorthais
> wrote:
> > My idea would be create a new tool working on archived WAL. No burden
> > server side. Basic concept is:
> >
> > * parse
On Wed, Apr 10, 2019 at 8:32 PM Andres Freund wrote:
> On 2019-04-10 20:14:17 +0300, Alexander Korotkov wrote:
> > Your explanation of existing limitations looks very good and
> > convincing. But I think there is one you didn't mention. We require
> > new table AMs to basically save old
Hello,
Thanks for the feedback.
I have one minor observation: in the case of initDropTables you log
'drop', while in initCreateTables you log 'create table'. We need
to be consistent; "drop tables" and "create tables" are the best
fit here.
Ok.
Attached version does that, plus
Hi,
On 2019-04-10 20:14:17 +0300, Alexander Korotkov wrote:
> Your explanation of existing limitations looks very good and
> convincing. But I think there is one you didn't mention. We require
> new table AMs to basically save old "contract" between heap and
> indexes. We have "all or nothing"
On Fri, Apr 5, 2019 at 11:25 PM Andres Freund wrote:
> I want to thank Haribabu, Alvaro, Alexander, David, Dmitry and all the
> others that collaborated on making tableam happen. It was/is a huge
> project.
Thank you so much for bringing this project to commit! Excellent work!
Your explanation
On Wed, Apr 10, 2019 at 12:56 PM Ashwin Agrawal wrote:
> Not to fork the conversation from incremental backups, but similar approach
> is what we have been thinking for pg_rewind. Currently, pg_rewind requires
> all the WAL logs to be present on source side from point of divergence to
>
On Wed, Apr 10, 2019 at 7:51 AM Andrey Borodin wrote:
> > On 9 Apr 2019, at 20:48, Robert Haas wrote:
> > - This is just a design proposal at this point; there is no code. If
> > this proposal, or some modified version of it, seems likely to be
> > acceptable, I and/or my colleagues might
On Wed, Apr 10, 2019 at 06:32:35PM +0200, Daniel Verite wrote:
> Justin Pryzby wrote:
>
> > Cleanup/remove/update references to OID column...
>
> Just spotted a couple of other references that need updates:
> #1. In catalogs.sgml:
> #2. In ddl.sgml, when describing ctid:
I found and
On Wed, Apr 10, 2019 at 9:21 AM Robert Haas wrote:
> I have a related idea, though. Suppose that, as Peter says upthread,
> you have a replication slot that prevents old WAL from being removed.
> You also have a background worker that is connected to that slot. It
> decodes WAL and produces
On Wed, Apr 10, 2019 at 10:22 AM Konstantin Knizhnik
wrote:
> Some time ago I implemented an alternative version of the ptrack utility
> (not the one used in pg_probackup)
> which detects updated blocks at the file level. It is very simple and maybe
> it can someday be integrated into master.
I don't
Justin Pryzby wrote:
> Cleanup/remove/update references to OID column...
>
> ..in wake of 578b229718e8f.
Just spotted a couple of other references that need updates:
#1. In catalogs.sgml:
attnum
int2
The number of the column. Ordinary columns
On Wed, Apr 10, 2019 at 10:57 AM Jehan-Guillaume de Rorthais
wrote:
> My idea would be create a new tool working on archived WAL. No burden
> server side. Basic concept is:
>
> * parse archives
> * record latest relevant FPW for the incr backup
> * write new WALs with recorded FPW and
On Sat, 30 Mar 2019 at 00:59, Robert Haas wrote:
>
> On Wed, Mar 27, 2019 at 7:49 PM David Rowley
> wrote:
> > Yeah, analyze, not vacuum. It is a bit scary to add new ways for
> > auto-vacuum to suddenly have a lot of work to do. When all workers
> > are busy it can lead to neglect of other
Hi,
On 2019-04-10 09:28:21 -0400, Alvaro Herrera wrote:
> So I think that apart from David's patch, we should just document all
> these things carefully.
Yea, I think that's the most important part.
I'm not convinced that we should have any inheriting behaviour btw - it
seems like there's a lot
Hi,
On 2019-04-10 12:11:21 +0530, tushar wrote:
>
> On 03/13/2019 08:40 PM, tushar wrote:
> > Hi ,
> >
> > I am getting a server crash on standby while executing
> > pg_logical_slot_get_changes function , please refer this scenario
> >
> > Master cluster( ./initdb -D master)
> > set
On Tue, Apr 9, 2019 at 5:28 PM Alvaro Herrera wrote:
> On 2019-Apr-09, Peter Eisentraut wrote:
> > On 2019-04-09 17:48, Robert Haas wrote:
> > > 3. There should be a new tool that knows how to merge a full backup
> > > with any number of incremental backups and produce a complete data
> > >
On 09/04/2019 19:11, Anastasia Lubennikova wrote:
05.04.2019 19:41, Anastasia Lubennikova writes:
In attachment, you can find patch with a test that allows to reproduce
the bug not randomly, but on every run.
Now I'm trying to find a way to fix the issue.
The problem was caused by incorrect
Hi,
On April 10, 2019 8:13:06 AM PDT, Alvaro Herrera
wrote:
>On 2019-Mar-31, Darafei "Komяpa" Praliaskouski wrote:
>
>> Alternative point of "if your database is super large and actively
>written,
>> you may want to set autovacuum_freeze_max_age to even smaller values
>so
>> that autovacuum
On 2019-Mar-31, Darafei "Komяpa" Praliaskouski wrote:
> Alternative point of "if your database is super large and actively written,
> you may want to set autovacuum_freeze_max_age to even smaller values so
> that autovacuum load is more evenly spread over time" may be needed.
I don't think it's
Hi,
On Tue, 9 Apr 2019 11:48:38 -0400
Robert Haas wrote:
> Several companies, including EnterpriseDB, NTT, and Postgres Pro, have
> developed technology that permits a block-level incremental backup to
> be taken from a PostgreSQL server. I believe the idea in all of those
> cases is that
On 09.04.2019 18:48, Robert Haas wrote:
1. There should be a way to tell pg_basebackup to request from the
server only those blocks where LSN >= threshold_value.
Some time ago I implemented an alternative version of the ptrack utility
(not the one used in pg_probackup)
which detects updated
On Sat, Apr 6, 2019 at 9:56 AM Darafei "Komяpa" Praliaskouski
wrote:
>> Invoking autovacuum on a table based on inserts, not only deletes
>> and updates, seems like a good idea to me. But in this case, I think that we
>> can not only freeze tuples but also update visibility map even when
>> setting
Re: Fred .Flintstone 2019-04-10
> Does anyone oppose the proposal?
I don't think part #3 has been discussed, and I'd oppose printing
these warnings.
Christoph
Does anyone oppose the proposal?
How can we determine consensus?
Is there any voting process?
Is there any developer more versed in C than I am who
can write this patch?
On Wed, Apr 10, 2019 at 2:52 PM Christoph Berg wrote:
>
> Re: Fred .Flintstone 2019-04-10
>
> > It seems we
> On 2019-03-29 20:32, Joe Conway wrote:
>> pg_util
>
> How is that better than just renaming to pg_$oldname?
As I already said upthread:
> This way, we would be free from the command name conflict problem
> and, in addition, we could do:
>
> pgsql --help
>
> which will print subcommand names
Hi Fabien,
I have one minor observation: in the case of initDropTables you log
'drop', while in initCreateTables you log 'create table'. We need
to be consistent; "drop tables" and "create tables" are the best
fit here. Otherwise, the patch is good.
On Wed, Apr 10, 2019 at 2:18 PM Ibrar
Re: Fred .Flintstone 2019-04-10
> It seems we do have a clear path forward on how to accomplish this and
> implement this change.
>
> 1. Rename executables to carry the pg_ prefix.
> 2. Create symlinks from the old names to the new names.
> 3. Modify the executables to read argv[0] and print a
It seems we do have a clear path forward on how to accomplish this and
implement this change.
1. Rename executables to carry the pg_ prefix.
2. Create symlinks from the old names to the new names.
3. Modify the executables to read argv[0] and print a warning if the
executable is called from the
Hi!
> On 9 Apr 2019, at 18:20, Zhichao Liu wrote:
>
> Dear PostgreSQL community,
>
> I am a GSoC 2019 applicant and am working on 'WAL-G safety features'. I have
> finished an initial draft of my proposal and I would appreciate your comments
> and advice on my proposal. I know it is
Hi!
> On 9 Apr 2019, at 20:48, Robert Haas wrote:
>
> Thoughts?
Thanks for this long and thoughtful post!
At Yandex, we have been using incremental backups for some years now. Initially, we
used a patched pgbarman, then we implemented this functionality in WAL-G. And
there are many things to be
On Wed, Apr 10, 2019 at 05:03:21PM +0900, Amit Langote wrote:
> The problem lies in all branches that have partitioning, so it should be
> listed under Older Bugs, right? You may have noticed that I posted
> patches for all branches down to 10.
I have noticed. The message from Tom upthread
On Wed, Apr 10, 2019 at 12:55 PM Heikki Linnakangas wrote:
>
> On 10/04/2019 09:29, Amit Kapila wrote:
> > On Tue, Apr 9, 2019 at 5:57 AM Ashwin Agrawal wrote:
> >> Row store
> >> -
> >>
> >> The tuples are stored one after another, sorted by TID. For each
> >> tuple, we store its 48-bit
On 2019-04-09 13:58, Christoph Berg wrote:
> I'm not entirely sure what happened here, but I think this made
> pg_restore verbose by default (and there is no --quiet option).
That was by accident. Fixed.
--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7
The following review has been posted through the commitfest application:
make installcheck-world: tested, passed
Implements feature: tested, passed
Spec compliant: tested, passed
Documentation: not tested
Please ignore the last email.
Patch works perfectly and the
The following review has been posted through the commitfest application:
make installcheck-world: tested, failed
Implements feature: tested, failed
Spec compliant: tested, failed
Documentation: not tested
Patch works perfectly and the code is well-written. I have one
On Wed, Apr 10, 2019 at 3:32 PM Amit Kapila wrote:
> On Mon, Apr 8, 2019 at 8:51 AM Amit Kapila
> wrote:
> >
> > On Mon, Apr 8, 2019 at 7:54 AM Jamison, Kirk
> wrote:
> > > So I am marking this thread as “Ready for Committer”.
> > >
> >
> > Thanks, Hari and Jamison for verification. The
On Wed, Apr 10, 2019 at 09:39:29AM +0900, Kyotaro HORIGUCHI wrote:
At Tue, 9 Apr 2019 17:03:33 +0200, Tomas Vondra wrote
in <20190409150333.5iashyjxm5jmraml@development>
Unfortunately, now that we're past code freeze it's clear this is a PG12
matter now :-(
I personally consider this to be
(2019/03/06 18:33), Amit Langote wrote:
I noticed a bug with how UPDATE tuple routing initializes ResultRelInfos
to use for partition routing targets. Specifically, the bug occurs when
UPDATE targets include a foreign partition that is locally modified (as
opposed to being modified directly on
At Wed, 10 Apr 2019 14:55:48 +0900, Amit Langote
wrote in
> On 2019/04/10 12:53, Kyotaro HORIGUCHI wrote:
> > At Wed, 10 Apr 2019 11:17:53 +0900, Amit Langote
> > wrote:
> >> Yeah, I think we should move the "if (partconstr)" block to the "if
> >> (is_orclause(clause))" block as I originally
Hi,
On 2019/04/10 15:42, Michael Paquier wrote:
> On Mon, Apr 08, 2019 at 10:40:41AM -0400, Robert Haas wrote:
>> On Mon, Apr 8, 2019 at 9:59 AM Tom Lane wrote:
>>> Amit Langote writes:
>>> Yeah, it's an open issue IMO. I think we've been focusing on getting
>>> as many feature patches done as
On 10/04/2019 10:38, Konstantin Knizhnik wrote:
I am also a little confused about UNDO records and MVCC support in
Zedstore. A columnar store is mostly needed for analytics on
read-only or append-only data. One of the disadvantages of Postgres is
its rather large per-record space overhead
On 10.04.2019 10:25, Heikki Linnakangas wrote:
On 10/04/2019 09:29, Amit Kapila wrote:
On Tue, Apr 9, 2019 at 5:57 AM Ashwin Agrawal
wrote:
Row store
-
The tuples are stored one after another, sorted by TID. For each
tuple, we store its 48-bit TID, an undo record pointer, and the
On 10/04/2019 09:29, Amit Kapila wrote:
On Tue, Apr 9, 2019 at 5:57 AM Ashwin Agrawal wrote:
Row store
-
The tuples are stored one after another, sorted by TID. For each
tuple, we store its 48-bit TID, an undo record pointer, and the actual
tuple data uncompressed.
Storing undo
On Tue, Apr 9, 2019 at 11:26 PM Bruce Momjian wrote:
>
> On Wed, Mar 20, 2019 at 03:19:58PM -0700, legrand legrand wrote:
> > > The rest of thread raise quite a lot of concerns about the semantics,
> > > the cost and the correctness of this patch. After 5 minutes checking,
> > > it wouldn't
On Mon, Apr 08, 2019 at 10:40:41AM -0400, Robert Haas wrote:
> On Mon, Apr 8, 2019 at 9:59 AM Tom Lane wrote:
>> Amit Langote writes:
>> Yeah, it's an open issue IMO. I think we've been focusing on getting
>> as many feature patches done as we could during the CF, but now it's
>> time to start
On 03/13/2019 08:40 PM, tushar wrote:
Hi ,
I am getting a server crash on standby while executing
pg_logical_slot_get_changes function , please refer this scenario
Master cluster( ./initdb -D master)
set wal_level='hot_standby' in master/postgresql.conf file
start the server , connect to
On Tue, Apr 9, 2019 at 5:57 AM Ashwin Agrawal wrote:
>
> Heikki and I have been hacking recently for a few weeks to implement
> in-core columnar storage for PostgreSQL. Here's the design and initial
> implementation of Zedstore, compressed in-core columnar storage (table
> access method). Attaching
Thanks for reviewing!
I've updated the patch according to your comments.
Best regards,
Peifeng Qiu
On Sun, Apr 7, 2019 at 2:31 PM Noah Misch wrote:
> On Sat, Mar 30, 2019 at 03:42:39PM +0900, Peifeng Qiu wrote:
> > When I watched the whole build process with a task manager, I discovered
> >