On 6 June 2018 at 01:09, Andres Freund wrote:
> On 2018-06-06 01:06:39 +1200, David Rowley wrote:
>> My concern is that only accounting memory for the group and not the
>> state is only solving half the problem. It might be fine for
>> aggregates that don't stray far from their aggtransspace, but
On 06/05/2018 02:49 PM, Andres Freund wrote:
Hi,
On 2018-06-05 10:05:35 +0200, Tomas Vondra wrote:
My concern is more about what happens when the input tuple ordering is
inherently incompatible with the eviction strategy, greatly increasing the
amount of data written to disk during evictions.
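The concern above (input ordering that fights the eviction strategy) can be illustrated with a toy simulation. This is purely hypothetical code, not from any patch: a bounded hash table of per-group transition states that spills an existing group to disk whenever a new key arrives while the table is full.

```python
# Illustrative only -- not PostgreSQL code. Simulates a hash aggregate
# that can keep at most max_groups transition states in memory and
# evicts (spills) one group when a new key arrives while full.

def hash_agg(tuples, max_groups=2):
    states = {}   # in-memory: group key -> running count
    spilled = 0   # partial states written to disk
    for key in tuples:
        if key not in states and len(states) >= max_groups:
            # FIFO-style eviction: drop the oldest group, spill its state.
            evicted = next(iter(states))
            del states[evicted]
            spilled += 1
        states[key] = states.get(key, 0) + 1
    return spilled

# Grouped input: each key's tuples arrive together -> one spill total.
friendly = [0] * 100 + [1] * 100 + [2] * 100
# Round-robin input: with this eviction policy nearly every tuple
# forces a spill, multiplying the data written to disk.
hostile = [0, 1, 2] * 100

print(hash_agg(friendly), hash_agg(hostile))  # prints: 1 298
```

The point is not the (deliberately naive) eviction policy, but that the same 300 tuples produce 1 versus 298 spilled partial states depending only on input order.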
On 2018-06-04 23:32:18 -0400, Tom Lane wrote:
> Michael Paquier writes:
> > On Mon, Jun 04, 2018 at 07:16:33PM -0400, Peter Eisentraut wrote:
> >> There were some discussions about renaming the existing 2018-09 entry
> >> versus inserting a new one at -07 and requiring patches to be moved back
>
Hi,
On 2018-06-06 01:06:39 +1200, David Rowley wrote:
> On 6 June 2018 at 00:57, Andres Freund wrote:
> > I think it's ok to only handle this gracefully if serialization is
> > supported.
> >
> > But I think my proposal to continue use a hashtable for the already
> > known groups, and sorting
Greetings,
* Ashutosh Bapat (ashutosh.ba...@enterprisedb.com) wrote:
> On Tue, Jun 5, 2018 at 9:02 AM, Tom Lane wrote:
> > Michael Paquier writes:
> >> On Mon, Jun 04, 2018 at 07:16:33PM -0400, Peter Eisentraut wrote:
> >>> There were some discussions about renaming the existing 2018-09 entry
>
On 6 June 2018 at 00:57, Andres Freund wrote:
> On 2018-06-06 00:53:42 +1200, David Rowley wrote:
>> On 6 June 2018 at 00:45, Andres Freund wrote:
>> > On 2018-06-05 09:35:13 +0200, Tomas Vondra wrote:
>> >> I wonder if an aggregate might use a custom context
>> >> internally (I don't recall
On 2018-06-05 13:09:08 +0300, Alexander Korotkov wrote:
> On Tue, Jun 5, 2018 at 12:48 PM Konstantin Knizhnik
> wrote:
> > Workload is combination of inserts and selects.
> > Looks like shared locks obtained by select cause starvation of inserts,
> > trying to get exclusive relation extension
Hi,
On 2018-06-05 06:32:31 +0200, Pavel Stehule wrote:
> ./configure --with-libxml --enable-tap-tests --enable-debug --with-perl
> CFLAGS="-ggdb -Og -g3 -fno-omit-frame-pointer"
>
> [pavel@nemesis postgresql]$ gcc --version
> gcc (GCC) 8.1.1 20180502 (Red Hat 8.1.1-1)
>
> I executed simple
On 2018-06-06 00:53:42 +1200, David Rowley wrote:
> On 6 June 2018 at 00:45, Andres Freund wrote:
> > On 2018-06-05 09:35:13 +0200, Tomas Vondra wrote:
> >> I wonder if an aggregate might use a custom context
> >> internally (I don't recall anything like that). The accounting capability
> >>
On 6 June 2018 at 00:45, Andres Freund wrote:
> On 2018-06-05 09:35:13 +0200, Tomas Vondra wrote:
>> I wonder if an aggregate might use a custom context
>> internally (I don't recall anything like that). The accounting capability
>> seems potentially useful for other places, and those might not
Hi,
On 2018-06-05 10:05:35 +0200, Tomas Vondra wrote:
> My concern is more about what happens when the input tuple ordering is
> inherently incompatible with the eviction strategy, greatly increasing the
> amount of data written to disk during evictions.
>
> Say for example that we can fit 1000
On Tue, Jun 5, 2018 at 9:02 AM, Tom Lane wrote:
> Michael Paquier writes:
>> On Mon, Jun 04, 2018 at 07:16:33PM -0400, Peter Eisentraut wrote:
>>> There were some discussions about renaming the existing 2018-09 entry
>>> versus inserting a new one at -07 and requiring patches to be moved back
Ah, I think this is the missing, essential component:
CREATE INDEX ON t(right(i::text,1)) WHERE i::text LIKE '%1';
Finally, I reproduced it with the attached script.
INSERT 0 99 <- first insertion
ERROR: cache lookup failed for relation 1032219
ALTER TABLE
ERROR: cache lookup failed for
On 05.06.2018 13:29, Masahiko Sawada wrote:
On Tue, Jun 5, 2018 at 6:47 PM, Konstantin Knizhnik
wrote:
On 05.06.2018 07:22, Masahiko Sawada wrote:
On Mon, Jun 4, 2018 at 10:47 PM, Konstantin Knizhnik
wrote:
On 26.04.2018 09:10, Masahiko Sawada wrote:
On Thu, Apr 26, 2018 at 3:30 AM,
Hi,
When a SUBSCRIPTION is altered, the currently running
table-synchronization workers that are no longer needed for the
altered subscription are terminated. This is done by the function
AtEOXact_ApplyLauncher() inside CommitTransaction(). So during each
ALTER-SUBSCRIPTION command, the
On 05/06/18 06:28, Michael Paquier wrote:
> On Mon, Jun 04, 2018 at 11:51:35AM +0200, Petr Jelinek wrote:
>> On 01/06/18 21:13, Michael Paquier wrote:
>>> -startlsn = MyReplicationSlot->data.confirmed_flush;
>>> +if (OidIsValid(MyReplicationSlot->data.database))
>>> +startlsn
Hi Dmitry,
Thanks for creating the patch. I looked at it and have some comments.
On 2018/06/04 22:30, Dmitry Dolgov wrote:
>> On 3 June 2018 at 19:11, Tom Lane wrote:
>> Dmitry Dolgov <9erthali...@gmail.com> writes:
>>> Just to clarify for myself, for evaluating any stable function here would
On Tue, Jun 5, 2018 at 6:47 PM, Konstantin Knizhnik
wrote:
>
>
> On 05.06.2018 07:22, Masahiko Sawada wrote:
>>
>> On Mon, Jun 4, 2018 at 10:47 PM, Konstantin Knizhnik
>> wrote:
>>>
>>>
>>> On 26.04.2018 09:10, Masahiko Sawada wrote:
On Thu, Apr 26, 2018 at 3:30 AM, Robert Haas
On Sat, May 26, 2018 at 12:25 AM, Robert Haas wrote:
> On Fri, May 18, 2018 at 11:21 AM, Masahiko Sawada
> wrote:
>> Regarding to API design, should we use 2PC for a distributed
>> transaction if both two or more 2PC-capable foreign servers and
>> 2PC-non-capable foreign server are involved
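One answer sometimes given to the question above is the "last participant" trick: if at most one participant cannot PREPARE, prepare everyone else first, then commit the non-2PC participant, then commit the prepared ones. The sketch below is purely hypothetical (all names invented, not the API from the patch):

```python
# Hypothetical sketch of mixing 2PC-capable and 2PC-non-capable foreign
# servers in one distributed commit. Not an actual FDW API.

class Server:
    def __init__(self, name, supports_2pc):
        self.name = name
        self.supports_2pc = supports_2pc
        self.state = "active"

    def prepare(self):
        assert self.supports_2pc
        self.state = "prepared"      # PREPARE TRANSACTION succeeded

    def commit(self):
        self.state = "committed"

def commit_distributed(servers):
    two_pc = [s for s in servers if s.supports_2pc]
    one_pc = [s for s in servers if not s.supports_2pc]
    if len(one_pc) > 1:
        # With two or more non-2PC participants there is no safe ordering.
        raise ValueError("at most one non-2PC participant is tolerable")
    # Phase 1: prepare every 2PC-capable participant.
    for s in two_pc:
        s.prepare()
    # Commit the single non-2PC participant; if this fails, all prepared
    # transactions can still be rolled back.
    for s in one_pc:
        s.commit()
    # Phase 2: commit the prepared participants.
    for s in two_pc:
        s.commit()

servers = [Server("fdw1", True), Server("fdw2", True), Server("other", False)]
commit_distributed(servers)
print([s.state for s in servers])
```

Even with this ordering, a crash between the one-phase commit and phase 2 leaves in-doubt prepared transactions, which is presumably why the thread asks whether mixing capability classes should be allowed at all.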
Hello.
At Mon, 04 Jun 2018 20:58:28 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI
wrote in
<20180604.205828.208262556.horiguchi.kyot...@lab.ntt.co.jp>
> It fails on some join-pushdown cases since it doesn't add tid
> columns to join tlist. I suppose that build_tlist_to_deparse
> needs
On Tue, Jun 5, 2018 at 12:48 PM Konstantin Knizhnik
wrote:
> Workload is combination of inserts and selects.
> Looks like shared locks obtained by select cause starvation of inserts,
> trying to get exclusive relation extension lock.
> The problem is fixed by fair lwlock patch, implemented by
On 04.06.2018 21:42, Andres Freund wrote:
Hi,
On 2018-06-04 16:47:29 +0300, Konstantin Knizhnik wrote:
We at PostgresPro were faced with the extension lock contention problem at two
more customers and tried to use this patch (v13) to address the issue.
Unfortunately replacing the heavy lock with
On 05.06.2018 07:22, Masahiko Sawada wrote:
On Mon, Jun 4, 2018 at 10:47 PM, Konstantin Knizhnik
wrote:
On 26.04.2018 09:10, Masahiko Sawada wrote:
On Thu, Apr 26, 2018 at 3:30 AM, Robert Haas
wrote:
On Tue, Apr 10, 2018 at 9:08 PM, Masahiko Sawada
wrote:
Never mind. There was a lot
On 2018/06/05 16:41, Ashutosh Bapat wrote:
> On Tue, Jun 5, 2018 at 1:07 PM, Amit Langote
> wrote:
>> On 2018/06/05 1:25, Alvaro Herrera wrote:
>>> In the meantime, my inclination is to fix the documentation to point out
>>> that AFTER triggers are allowed but BEFORE triggers are not.
>>
>>
On 06/05/2018 07:46 AM, Jeff Davis wrote:
On Tue, 2018-06-05 at 07:04 +0200, Tomas Vondra wrote:
I expect the eviction strategy to be the primary design challenge of
this patch. The other bits will be mostly determined by this one
piece.
Not sure I agree that this is the primary challenge.
On 2018/06/05 1:25, Alvaro Herrera wrote:
> In the meantime, my inclination is to fix the documentation to point out
> that AFTER triggers are allowed but BEFORE triggers are not.
Wasn't that already fixed by bcded2609ade6?
We don't say anything about AFTER triggers per se, but the following
On Tue, Jun 5, 2018 at 1:07 PM, Amit Langote
wrote:
> On 2018/06/05 1:25, Alvaro Herrera wrote:
>> In the meantime, my inclination is to fix the documentation to point out
>> that AFTER triggers are allowed but BEFORE triggers are not.
>
> Wasn't that already fixed by bcded2609ade6?
>
> We don't
On 5 June 2018 at 17:04, Tomas Vondra wrote:
> On 06/05/2018 04:56 AM, David Rowley wrote:
>> Isn't there still a problem determining when the memory exhaustion
>> actually happens though? As far as I know, we still have little
>> knowledge of how much memory each aggregate state occupies.
>>
>>
On 06/05/2018 09:22 AM, David Rowley wrote:
On 5 June 2018 at 17:04, Tomas Vondra wrote:
On 06/05/2018 04:56 AM, David Rowley wrote:
Isn't there still a problem determining when the memory exhaustion
actually happens though? As far as I know, we still have little
knowledge of how much memory
On Tue, Jun 05, 2018 at 06:04:21PM +1200, Thomas Munro wrote:
> On Thu, May 17, 2018 at 3:54 AM, Konstantin Knizhnik
> Speaking of configuration, are you planning to support multiple
> compression libraries at the same time? It looks like the current
> patch implicitly requires client and server
On Mon, Jun 04, 2018 at 11:32:18PM -0400, Tom Lane wrote:
> +1 for just renaming 2018-09 to 2018-07, if we can do that. We'll end
> up postponing some entries back to -09, but that seems like less churn
> than the other way.
Okay. If we tend toward this direction, I propose to do this switch in
On Sat, Jun 02, 2018 at 01:08:56PM -0400, Heikki Linnakangas wrote:
> On 28/05/18 15:08, Michael Paquier wrote:
>> On Mon, May 28, 2018 at 12:26:37PM +0300, Heikki Linnakangas wrote:
>> > Sounds good.
>>
>> Okay. Done this way as attached. If the backend forces anything else
>> than SCRAM then
On Thu, May 17, 2018 at 3:54 AM, Konstantin Knizhnik
wrote:
> Concerning the specification of the compression level: I have made many experiments
> with different data sets and both zlib/zstd, and in both cases using a
> compression level higher than the default doesn't cause any noticeable increase
> of