On Wed, Apr 17, 2019 at 2:04 AM Andres Freund wrote:
>
> Hi,
>
> I'm somewhat unhappy in how much the no-fsm-for-small-rels exposed
> complexity that looks like it should be purely in freespacemap.c to
> callers.
>
>
> extern Size GetRecordedFreeSpace(Relation rel, BlockNumber heapBlk);
>
>From: Ideriha, Takeshi [mailto:ideriha.take...@jp.fujitsu.com]
>Sent: Wednesday, December 5, 2018 2:42 PM
>Subject: RE: Copy data to DSA area
Hi
It's been a long while since we discussed this topic.
Let me recap first and I'll give some thoughts.
It seems the things we have consensus on are:
- Want to
David Rowley writes:
> On Wed, 17 Apr 2019 at 15:54, Tom Lane wrote:
>> What I'm more worried about is whether this breaks any internal behavior
>> of explain.c, as the comment David quoted upthread seems to think.
>> If we need to have a tlist to reference, can we make that code look
>> to the
On 2019/04/17 12:58, David Rowley wrote:
> On Wed, 17 Apr 2019 at 15:54, Tom Lane wrote:
>>
>> Amit Langote writes:
>>> On 2019/04/17 11:29, David Rowley wrote:
Where do you think the output list for EXPLAIN VERBOSE should put the
output column list in this case? On the Append node, or
Bruce Momjian writes:
> I have found that log_planner_stats only outputs stats until the generic
> plan is chosen. For example, if you run the following commands:
Uh, well, the planner doesn't get run after that point ...
regards, tom lane
On Wed, 17 Apr 2019 at 15:54, Tom Lane wrote:
>
> Amit Langote writes:
> > On 2019/04/17 11:29, David Rowley wrote:
> >> Where do you think the output list for EXPLAIN VERBOSE should put the
> >> output column list in this case? On the Append node, or just not show
> >> them?
>
> > Maybe, not
Amit Langote writes:
> On 2019/04/17 11:29, David Rowley wrote:
>> Where do you think the output list for EXPLAIN VERBOSE should put the
>> output column list in this case? On the Append node, or just not show
>> them?
> Maybe, not show them?
Yeah, I think that seems like a reasonable idea. If
I have found that log_planner_stats only outputs stats until the generic
plan is chosen. For example, if you run the following commands:
SET client_min_messages = 'log';
SET log_planner_stats = TRUE;
PREPARE e AS SELECT relkind FROM pg_class WHERE relname = $1
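The cutoff Bruce describes matches the plan-cache heuristic in plancache.c: the first five executions of a prepared statement get freshly planned custom plans, after which a generic plan is kept if it is not costlier than the average custom plan, and from then on the planner simply isn't invoked, so log_planner_stats has nothing more to report. A minimal sketch of that decision rule (the constant 5 is real; everything else is an illustrative simplification, not PostgreSQL's code):

```python
# Sketch of choose_custom_plan()'s core decision (simplified; the real
# function also weighs planning cost and plan_cache_mode).
def uses_custom_plan(num_custom_plans: int,
                     avg_custom_cost: float,
                     generic_cost: float) -> bool:
    if num_custom_plans < 5:
        return True                        # keep replanning at first
    return generic_cost > avg_custom_cost  # otherwise stick with generic

assert uses_custom_plan(2, 100.0, 100.0) is True    # still warming up
assert uses_custom_plan(5, 100.0, 100.0) is False   # generic plan chosen
```

Once the second assertion's case is reached, the planner stops running for that statement, which is exactly why the stats output stops.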
On 2019/04/17 11:29, David Rowley wrote:
> On Wed, 17 Apr 2019 at 13:13, Amit Langote
> wrote:
>> When you see this:
>>
>> explain select * from t1 where dt = current_date + 400;
>> QUERY PLAN
>>
>> Append
On Wed, 17 Apr 2019 at 13:13, Amit Langote
wrote:
> When you see this:
>
> explain select * from t1 where dt = current_date + 400;
> QUERY PLAN
>
> Append (cost=0.00..198.42 rows=44 width=8)
> Subplans
Hi,
On 2019/04/16 21:09, David Rowley wrote:
> On Tue, 16 Apr 2019 at 23:55, Yuzuko Hosoya
> wrote:
>> postgres=# explain analyze select * from t1 where dt = current_date + 400;
>> QUERY PLAN
>>
On Mon, Apr 15, 2019 at 7:57 PM wrote:
> I forgot to mention that this is happening in a docker container.
Huh, so there may be some Linux container configuration that can
fail here with EPERM, even though that error does not appear in
the man page and doesn't make much intuitive sense.
On Tue, Apr 16, 2019 at 08:03:22PM +0900, Masahiko Sawada wrote:
> Agreed. There is also some code which raises an ERROR after closing a
> transient file, but I think it's a good idea not to include it, for
> safety. It looks to me that the patch you proposed cleans up as many
> places as we can.
On Tue, Apr 16, 2019 at 08:50:31AM +0200, Peter Eisentraut wrote:
> Looks good to me.
Thanks, committed. If there are additional discussions on various
points of the feature, let's move to a new thread please. This one
has been already extensively used ;)
--
Michael
Hi,
On 2019-04-16 17:05:36 -0700, Andres Freund wrote:
> On 2019-04-16 18:59:37 -0400, Robert Haas wrote:
> > On Tue, Apr 16, 2019 at 6:45 PM Tom Lane wrote:
> > > Do we need to think harder about establishing rules for multiplexed
> > > use of the process latch? I'm imagining some rule like
Hi,
On 2019-04-16 18:59:37 -0400, Robert Haas wrote:
> On Tue, Apr 16, 2019 at 6:45 PM Tom Lane wrote:
> > Do we need to think harder about establishing rules for multiplexed
> > use of the process latch? I'm imagining some rule like "if you are
> > not the outermost event loop of a process,
On Tue, Apr 16, 2019 at 6:45 PM Tom Lane wrote:
> Do we need to think harder about establishing rules for multiplexed
> use of the process latch? I'm imagining some rule like "if you are
> not the outermost event loop of a process, you do not get to
> summarily clear MyLatch. Make sure to leave
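The rule Tom sketches can be illustrated with a toy latch (the names here are illustrative, not PostgreSQL's latch API): an inner wait loop that consumes the shared process latch must leave it set when it returns, so the outer event loop still wakes up and re-checks its own wait conditions.

```python
import threading

# Toy stand-in for a process latch; illustrative only, not the real API.
class Latch:
    def __init__(self) -> None:
        self._event = threading.Event()
    def set(self) -> None:
        self._event.set()
    def reset(self) -> None:
        self._event.clear()
    def is_set(self) -> bool:
        return self._event.is_set()

def wait_for_condition(latch: Latch, condition_met) -> None:
    """A non-outermost wait loop multiplexing the shared latch."""
    while not condition_met():
        latch._event.wait()   # sleep until someone sets the latch
        latch.reset()         # consume the wakeup while polling
    # The proposed rule: before returning to the outer loop, leave the
    # latch set, since the wakeup we consumed may have been meant for it.
    latch.set()

calls = {"n": 0}
def condition() -> bool:
    calls["n"] += 1
    return calls["n"] >= 2

latch = Latch()
latch.set()                   # a wakeup pending for the outer loop
wait_for_condition(latch, condition)
assert latch.is_set()         # the outer loop's wakeup is not lost
```

If the final `latch.set()` were omitted, the inner loop would have silently swallowed the outer loop's wakeup, which is the failure mode the rule guards against.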
On Sun, Apr 14, 2019 at 3:29 PM Tom Lane wrote:
> What I get for test cases like [1] is
>
> single-partition SELECT, hash partitioning:
>
> N     tps, HEAD       tps, patch
> 2     11426.243754    11448.615193
> 8     11254.833267    11374.278861
> 32    11288.329114    11371.942425
> 128

Michael Paquier writes:
> The buildfarm has reported two similar failures when shutting down a
> node:
> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=piculet&dt=2019-03-23%2022%3A28%3A59
> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dragonet&dt=2019-04-16%2006%3A14%3A01
> In
On Tue, Apr 16, 2019 at 5:44 PM Stephen Frost wrote:
> > > I love the general idea of having additional facilities in core to
> > > support block-level incremental backups. I've long been unhappy that
> > > any such approach ends up being limited to a subset of the files which
> > > need to be
Greetings,
* Bruce Momjian (br...@momjian.us) wrote:
> On Mon, Apr 15, 2019 at 09:01:11AM -0400, Stephen Frost wrote:
> > * Robert Haas (robertmh...@gmail.com) wrote:
> > > Several companies, including EnterpriseDB, NTT, and Postgres Pro, have
> > > developed technology that permits a block-level
I wrote:
> I'm thinking that we really need to upgrade vacuum's reporting totals
> so that it accounts in some more-honest way for pre-existing dead
> line pointers. The patch as it stands has made the reporting even more
> confusing, rather than less so.
Here's a couple of ideas about that:
1.
Andres Freund writes:
> On 2019-04-16 14:31:25 -0400, Tom Lane wrote:
>> This can only work at all if an inaccurate map is very fail-soft,
>> which I'm not convinced it is
> I think it really needs to be fail-soft independent of the no-fsm
> patch. Because the fsm is not WAL logged etc,
Hi,
On 2019-04-16 14:31:25 -0400, Tom Lane wrote:
> Andres Freund writes:
> > I'm kinda thinking that this is the wrong architecture.
>
> The bits of that patch that I've looked at seemed like a mess
> to me too. AFAICT, it's trying to use a single global "map"
> for all relations (strike 1)
On Tue, Apr 16, 2019 at 12:00 PM Peter Geoghegan wrote:
> Can you be more specific? What was the cause of the corruption? I'm
> always very interested in hearing about cases that amcheck could have
> detected, but didn't.
FWIW, v4 indexes in Postgres 12 will support the new "rootdescend"
On Mon, Apr 15, 2019 at 7:30 PM Alexander Korotkov
wrote:
> Currently amcheck supports lossy checking for missing parent
> downlinks. It collects a bitmap of downlink hashes and uses it to check
> the subsequent tree level. We've experienced some large corrupted indexes
> which pass this check due
Andres Freund writes:
> I'm kinda thinking that this is the wrong architecture.
The bits of that patch that I've looked at seemed like a mess
to me too. AFAICT, it's trying to use a single global "map"
for all relations (strike 1) without any clear tracking of
which relation the map currently
Amit Langote writes:
>> I get that we want to get rid of the keep_* kludge in the long term, but
>> is it wrong to think, for example, that having keep_partdesc today allows
>> us today to keep the pointer to rd_partdesc as long as we're holding the
>> relation open or refcnt on the whole
Hi,
I'm somewhat unhappy in how much the no-fsm-for-small-rels exposed
complexity that looks like it should be purely in freespacemap.c to
callers.
extern Size GetRecordedFreeSpace(Relation rel, BlockNumber heapBlk);
-extern BlockNumber GetPageWithFreeSpace(Relation rel, Size spaceNeeded);
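For context, the patch Andres is unhappy with skips creating an FSM for heap relations of at most HEAP_FSM_CREATION_THRESHOLD (4) blocks and instead probes the table's few pages directly. A rough sketch of that dispatch, with the threshold taken from the patch and everything else an illustrative simplification of the logic that leaked out to callers:

```python
HEAP_FSM_CREATION_THRESHOLD = 4   # blocks; from the patch under discussion

def blocks_to_try(nblocks: int) -> list[int]:
    """Which blocks to probe for free space when no FSM exists.

    Small relations skip the FSM entirely and check their own pages,
    newest first; larger relations are expected to have a real FSM.
    (Illustrative simplification, not the committed logic.)
    """
    if nblocks > HEAP_FSM_CREATION_THRESHOLD:
        return []                 # consult the real free space map instead
    return list(range(nblocks - 1, -1, -1))

assert blocks_to_try(3) == [2, 1, 0]
assert blocks_to_try(100) == []
```

The complaint above is precisely that this small-relation branch was visible to callers of freespacemap.c rather than being hidden behind its API.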
Amit Langote writes:
> On 2019/04/15 2:38, Tom Lane wrote:
>> To my mind there are only two trustworthy solutions to the problem of
>> wanting time-extended usage of a relcache subsidiary data structure: one
>> is to copy it, and the other is to reference-count it. I think that going
>> over to
Dear Nikolay,
Many thanks for your efforts!
On Sat, Apr 6, 2019 at 2:29 PM Nikolay Shaplov wrote:
> In a message on Sunday, February 24, 2019, at 14:31:55 MSK, Dmitry
> Belyavsky wrote:
>
> Hi! Am back here again.
>
> I've been thinking about this patch a while... Come to some
So after thinking about this a bit more ...
ISTM that what we have here is a race condition (ie, tuple changed state
since heap_page_prune), and that ideally we want the code to resolve it
as if no race had happened. That is, either of these behaviors would
be acceptable:
1. Delete the tuple,
On Mon, Apr 15, 2019 at 10:45:51PM -0700, Ashwin Agrawal wrote:
On Mon, Apr 15, 2019 at 12:50 PM Peter Geoghegan wrote:
On Mon, Apr 15, 2019 at 9:16 AM Ashwin Agrawal
wrote:
> Would like to know more specifics on this Peter. We may be having
different context on hybrid row/column design.
On Tue, Apr 16, 2019 at 10:48 AM Jamison, Kirk wrote:
>
> Hello Fujii-san,
>
> On April 18, 2018, Fujii Masao wrote:
>
> > On Fri, Mar 30, 2018 at 12:18 PM, Tsunakawa, Takayuki
> > wrote:
> >> Furthermore, TRUNCATE has a similar and worse issue. While DROP TABLE
> >> scans the shared buffers
Hi,
On 2019-04-16 12:01:36 -0400, Tom Lane wrote:
> (BTW, I don't understand why that code will throw "found xmin %u from
> before relfrozenxid %u" if HeapTupleHeaderXminFrozen is true? Shouldn't
> the whole if-branch at lines 6113ff be skipped if xmin_frozen?)
I *think* that just looks odd,
Robert Haas writes:
> On Mon, Apr 15, 2019 at 9:07 PM Tom Lane wrote:
>> If we're failing to remove it, and it's below the desired freeze
>> horizon, then we'd darn well better freeze it instead, no?
> I don't know that that's safe. IIRC, the freeze code doesn't cope
> nicely with being given
Hi,
On 2019-04-16 11:38:01 -0400, Tom Lane wrote:
> Alvaro Herrera writes:
> > On 2019-Apr-16, Robert Haas wrote:
> >> On Mon, Apr 15, 2019 at 9:07 PM Tom Lane wrote:
> >>> If we're failing to remove it, and it's below the desired freeze
> >>> horizon, then we'd darn well better freeze it
Hi,
On 2019-04-16 10:54:34 -0400, Alvaro Herrera wrote:
> On 2019-Apr-16, Robert Haas wrote:
> > On Mon, Apr 15, 2019 at 9:07 PM Tom Lane wrote:
> > > > I'm not sure that's correct. If you do that, it'll end up in the
> > > > non-tupgone case, which might try to freeze a tuple that should've
>
On Mon, Apr 15, 2019 at 3:32 PM Julien Rouhaud wrote:
>
> Sorry for late reply,
>
> On Sun, Apr 14, 2019 at 7:12 PM Magnus Hagander wrote:
> >
> > On Sat, Apr 13, 2019 at 8:46 PM Robert Treat wrote:
> >>
> >> On Fri, Apr 12, 2019 at 8:18 AM Magnus Hagander
> >> wrote:
> >> ISTM the argument
Alvaro Herrera writes:
> On 2019-Apr-16, Robert Haas wrote:
>> On Mon, Apr 15, 2019 at 9:07 PM Tom Lane wrote:
>>> If we're failing to remove it, and it's below the desired freeze
>>> horizon, then we'd darn well better freeze it instead, no?
>> I don't know that that's safe. IIRC, the freeze
On Tue, Apr 16, 2019 at 11:26 PM Robert Haas wrote:
>
> On Mon, Apr 15, 2019 at 9:07 PM Tom Lane wrote:
> > > I'm not sure that's correct. If you do that, it'll end up in the
> > > non-tupgone case, which might try to freeze a tuple that should've
> > > been removed. Or am I confused?
> >
> >
Michael Paquier writes:
> Aren't extra ORDER BY clauses the usual response to tuple ordering? I
> really think that we should be more aggressive with that.
I'm not excited about that. The traditional argument against it
is that if we start testing ORDER BY queries exclusively (and it
would
On 2019-Apr-16, Robert Haas wrote:
> On Mon, Apr 15, 2019 at 9:07 PM Tom Lane wrote:
> > > I'm not sure that's correct. If you do that, it'll end up in the
> > > non-tupgone case, which might try to freeze a tuple that should've
> > > been removed. Or am I confused?
> >
> > If we're failing to
Michael Paquier writes:
> In short, I tend to think that the attached is an acceptable cleanup.
> Thoughts?
WFM.
regards, tom lane
On Mon, Apr 15, 2019 at 9:07 PM Tom Lane wrote:
> > I'm not sure that's correct. If you do that, it'll end up in the
> > non-tupgone case, which might try to freeze a tuple that should've
> > been removed. Or am I confused?
>
> If we're failing to remove it, and it's below the desired freeze
>
Magnus Hagander writes:
> On Tue, Apr 16, 2019 at 8:55 AM Peter Eisentraut <
> peter.eisentr...@2ndquadrant.com> wrote:
>> On 2019-04-16 08:47, Magnus Hagander wrote:
>>> Unless we want to go all the way and have said bot actualy close the CF
>>> entry. But the question is, do we?
>> I don't
> On Apr 9, 2019, at 22:30, Tom Lane wrote:
>
> The proposal is kind of cute, but I'll bet it's a net loss for
> small copy lengths --- likely we'd want some cutoff below which
> we do it with the dumb byte-at-a-time loop.
True.
I've made a simple extension to compare decompression time on
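Tom's cutoff point makes sense in light of how LZ-style decompression copies matches: when a back-reference's offset is shorter than its length, the copy overlaps its own output and must effectively proceed byte by byte, which is part of why the dumb loop can win for short copies. A toy illustration of that overlap (not pglz's actual code):

```python
# Expanding an LZ back-reference. When offset < length, the copy overlaps
# its own output, so each byte must be written before later iterations
# read it; a single bulk copy of the source range would read bytes that
# have not been produced yet.
def expand_backref(out: bytearray, offset: int, length: int) -> None:
    src = len(out) - offset
    for i in range(length):          # byte-at-a-time: safe for overlap
        out.append(out[src + i])

buf = bytearray(b"ab")
expand_backref(buf, offset=2, length=6)   # repeats "ab" three more times
assert bytes(buf) == b"abababab"
```

A wide-copy fast path has to detect this overlapping case and fall back, which is exactly the kind of branch that can cost more than it saves on small copy lengths.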
On Mon, 15 Apr 2019 at 15:26, Alvaro Herrera wrote:
>
> On 2019-Apr-15, David Rowley wrote:
>
> > To be honest, if I'd done a better job of thinking through the
> > implications of this tablespace inheritance in ca4103025d, then I'd
> > probably have not bothered submitting a patch for it. We
On Tue, 16 Apr 2019 at 23:55, Yuzuko Hosoya wrote:
> postgres=# explain analyze select * from t1 where dt = current_date + 400;
> QUERY PLAN
> ---
> Append
Hi all,
I found a runtime pruning test case which may be a problem as follows:
create table t1 (id int, dt date) partition by range(dt);
create table t1_1 partition of t1 for values from ('2019-01-01') to
('2019-04-01');
create table t1_2 partition of t1 for values from ('2019-04-01') to
On Fri, 5 Apr 2019 at 17:31, Pavan Deolasee wrote:
> IMV it makes sense to simply cap the lower limit of toast_tuple_target to the
> compile time default and update docs to reflect that. Otherwise, we need to
> deal with the possibility of dynamically creating the toast table if the
> relation
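Pavan's proposal amounts to clamping the storage parameter from below. A minimal sketch (the 2032-byte figure is the conventional TOAST_TUPLE_TARGET for 8 kB pages; treat the exact number as an assumption):

```python
# Sketch of the proposed cap: never let toast_tuple_target drop below
# the compile-time default, avoiding the need to create a toast table
# dynamically for relations that were built without one.
TOAST_TUPLE_TARGET_DEFAULT = 2032   # assumed value for 8 kB pages

def effective_toast_tuple_target(requested: int) -> int:
    """Clamp a user-supplied toast_tuple_target to the compile-time floor."""
    return max(requested, TOAST_TUPLE_TARGET_DEFAULT)

assert effective_toast_tuple_target(128) == 2032    # clamped up
assert effective_toast_tuple_target(4096) == 4096   # larger values pass
```

Values above the default still take effect; only the problematic lower range is closed off, matching the "cap the lower limit" wording above.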
On Tue, Apr 16, 2019 at 2:45 PM Michael Paquier wrote:
>
> On Fri, Apr 12, 2019 at 10:06:41PM +0900, Masahiko Sawada wrote:
> > But I think that's not right, I've checked the code. If the startup
> > process failed in that function it raises a FATAL and recovery fails,
> > and if checkpointer
On Fri, Apr 12, 2019 at 11:05 PM Tom Lane wrote:
>
> Masahiko Sawada writes:
> > There are something like the following code in many places in PostgreSQL
> > code.
> > ...
> > Since we eventually call
> > pgstat_report_wait_end() in AbortTransaction(). I think that we don't
> > need to call
On Tue, Apr 16, 2019 at 4:47 AM Tom Lane wrote:
>
> Robert Haas writes:
> > On Mon, Apr 15, 2019 at 1:13 PM Tom Lane wrote:
> >> I have a very strong feeling that this patch was not fully baked.
>
> > I think you're right, but I don't understand the comment in the
> > preceding paragraph. How
On Tue, Apr 16, 2019 at 4:47 AM Eric Hanson wrote:
> We would probably be wise to learn from what has gone (so I hear) terribly
> wrong with the Node / NPM packaging system (and I'm sure many before it),
> namely versioning. What happens when two extensions require different
> versions of the
On Tue, Apr 16, 2019 at 4:24 AM Eric Hanson wrote:
>
>
> On Tue, Apr 16, 2019 at 12:47 AM Noah Misch wrote:
>
>> On Mon, Mar 18, 2019 at 09:38:19PM -0500, Eric Hanson wrote:
>> > I have heard talk of a way to write extensions so that they dynamically
>> > reference the schema of their
On Tue, Apr 16, 2019 at 12:47 AM Noah Misch wrote:
> On Mon, Mar 18, 2019 at 09:38:19PM -0500, Eric Hanson wrote:
> > I have heard talk of a way to write extensions so that they dynamically
> > reference the schema of their dependencies, but sure don't know how that
> > would work if it's
Hi, hackers.
I'm trying to build 64-bit windows binaries with kerberos support.
I downloaded latest kerberos source package from here:
https://kerberos.org/dist/index.html
I followed the instructions in src\windows\README, and executed the
following script in 64-bit Visual Studio Command
Hi Horiguchi-san,
Thank you for your reviewing.
I updated patch. Please see my attached patch.
> +/* protocol message name */
> +static char *command_text_b[] = {
>
> Couldn't the name be more descriptive? The comment just above doesn't seem
> consistent with the variable. The tables are very
On Tue, Apr 16, 2019 at 8:55 AM Peter Eisentraut <
peter.eisentr...@2ndquadrant.com> wrote:
> On 2019-04-16 08:47, Magnus Hagander wrote:
> > Unless we want to go all the way and have said bot actualy close the CF
> > entry. But the question is, do we?
>
> I don't think so. There are too many
Hi all,
This is a continuation of the following thread, but I prefer spawning
a new thread for clarity:
https://www.postgresql.org/message-id/20190416064512.gj2...@paquier.xyz
The buildfarm has reported two similar failures when shutting down a
node:
On 2019-04-16 06:36, Michael Paquier wrote:
> +$node->append_conf('pg_hba.conf',
> + qq{hostgssenc all all $hostaddr/32 gss map=mymap});
> +$node->restart;
> A reload should be enough but not race-condition free, which is why a
> set of restarts is done in this test right? (I have noticed that
On Sat, 13 Apr 2019 at 00:57, Andres Freund wrote:
>
> Hi,
>
> On 2019-04-12 23:34:02 +0530, Amit Khandekar wrote:
> > I tried to see if I can quickly understand what's going on.
> >
> > Here, master wal_level is hot_standby, not logical, though slave
> > wal_level is logical.
>
> Oh, that's well
On 2019-04-16 08:47, Magnus Hagander wrote:
> Unless we want to go all the way and have said bot actualy close the CF
> entry. But the question is, do we?
I don't think so. There are too many special cases that would make this
unreliable, like one commit fest thread consisting of multiple
On 2019-04-16 08:19, Michael Paquier wrote:
> On Fri, Apr 12, 2019 at 12:11:12PM +0100, Dagfinn Ilmari Mannsåker wrote:
>> I don't have any comments on the code (but the test looks sensible, it's
>> the same trick I used to discover the issue in the first place).
>
> After thinking some more on
On Sat, Apr 13, 2019 at 10:28 PM Tom Lane wrote:
> Tomas Vondra writes:
> > On Thu, Apr 11, 2019 at 02:55:10PM +0500, Ibrar Ahmed wrote:
> >> On Thu, Apr 11, 2019 at 2:44 PM Erikjan Rijkers wrote:
> Is it possible to have the commit message, or at least the git hash, in
> the commitfest? It will
On Sun, Mar 24, 2019 at 09:47:58PM +0900, Michael Paquier wrote:
> The failure is a bit weird, as I would expect all those three actions
> to be sequential. piculet is the only failure happening on the
> buildfarm and it uses --disable-atomics, so I am wondering if that is
> related and if
On Fri, Apr 12, 2019 at 12:11:12PM +0100, Dagfinn Ilmari Mannsåker wrote:
> I don't have any comments on the code (but the test looks sensible, it's
> the same trick I used to discover the issue in the first place).
After thinking some more on it, this behavior looks rather sensible to
me. Are
On Mon, Apr 15, 2019 at 11:06:18AM -0400, Tom Lane wrote:
> Hmm. The second, duplicate assignment is surely pointless, but I think
> that setting the ctx as the private_data is a good idea. It hardly seems
> out of the question that it might be needed in future.
Agreed that we should keep the