From: Koichi Suzuki
> I'm writing an extension running on background workers and found
> get_database_name() causes SEGV and found internally resource owner was
> set to NULL. Could anybody let me know how it happens and how I can use
> this function? Argument to get_database_name() looks
From: Andres Freund
> We waste a lot of space due to all these small contexts. Even leaving
> aside the overhead of the context and its blocks - not insignificant -
> they are mostly between ~1/2 a ~1/4 empty.
>
>
> But what if we had a new type of memory context that did not itself
> manage
From: Koichi Suzuki
> I'm not using this. Is this the must to use get_database_name()?
I don't think pg_background is a must, but the system catalog access by
get_database_name() requires a database connection and a transaction. See
src/test/modules/worker_spi/worker_spi.c for an example
Hello,
The attached trivial patch fixes the initialization of the fake unlogged LSN.
Currently, BootstrapXLOG() in initdb sets the initial fake unlogged LSN to
FirstNormalUnloggedLSN (=1000), but recovery and pg_resetwal set it to 1.
The patch modifies the latter two cases to match
From: Dilip Kumar
> I have noticed that in StartupXlog also we reset it with 1, you might
> want to fix that as well?
>
> StartupXLOG
> {
> ...
> /*
> * Initialize unlogged LSN. On a clean shutdown, it's restored from the
> * control file. On recovery, all unlogged relations are blown away, so
>
From: Simon Riggs
> From xlogdefs.h added by 9155580:
> /*
>  * First LSN to use for "fake" LSNs.
>  *
>  * Values smaller than this can be used for special per-AM purposes.
>  */
> #define FirstNormalUnloggedLSN ((XLogRecPtr) 1000)
Yeah, I had seen it,
From: Greg Nancarrow
> With the permission of the original patch author, Haribabu Kommi, I’ve
> rationalized the existing 8 patches into 3 patches, merging patches
> 1-5 and 6-7, and tidying up some documentation and code comments. I
> also rebased them to the latest PG12 source code (as of
From: Tatsuo Ishii
> > The following IVM wiki page returns an error. Does anybody know what's
> > wrong?
> >
> > https://wiki.postgresql.org/wiki/Incremental_View_Maintenance
>
> I don't have any problem with the page. Maybe temporary error?
Yeah, I can see it now. I could see it on the
From: legrand legrand
> For each insert into a base table there are 3 statements:
> - ANALYZE pg_temp_3.pg_temp_81976
> - WITH updt AS ( UPDATE public.mv1 AS mv SET __ivm_count__ = ...
> - DROP TABLE pg_temp_3.pg_temp_81976
Does it also include CREATE TEMPORARY TABLE, because there's DROP?
I
Hello,
I'm starting to take a closer look at this feature. I've just finished reading
the discussion, excluding other referenced materials.
The following IVM wiki page returns an error. Does anybody know what's wrong?
https://wiki.postgresql.org/wiki/Incremental_View_Maintenance
From: Michael Paquier
> Adding extra custom logging information, or plug that information
> elsewhere than Postgres. I have use cases for that when it comes to
> store external telemetry data or audit things for events happening
> specifically in Postgres.
>
> Well, hook authors can do a lot of
From: Craig Ringer
> Currently my options are "dump all shmem including shared_buffers" or
> "dump no shmem". But I usually want "dump all shmem except
> shared_buffers". It's tolerable to just dump s_b on a test system with
> a small s_b, but if enabling coredumps to track down some
>
From: Tomas Vondra
> You're right the users can't really take advantage of this - my primary
> motivation was providing a feedback for devs, benchmarking etc. That
> might have been done with DEBUG messages or something, but this seems
> more convenient.
Understood. I'm in favor of adding
From: Andres Freund
> While I think there's a number of improvements[2] we could make to the
> ringbuffer logic, I think we should also just allow to make them
> configurable. I think that'll allow a decent number of systems perform
> better (especially on slightly bigger systems than the current
From: Tomas Vondra
> That's not really what I meant - let me explain. When I said DROP TABLE
> should do everything as usual, that includes catalog changes. I.e. after
> the commit there would not be any remaining entries in system catalogs
> or anything like that.
>
> The only thing we'd do
From: Tom Lane
> Up to now, we've sort of looked the other way with respect to failures
> of file unlinks post-commit, reasoning that the worst that will happen
> is disk space leakage from no-longer-referenced files that we failed to
> unlink. (Which is bad, certainly, but not catastrophic;
From: Jose Luis Tallon
> Musing some other date-related things I stumbled upon the thought
> that naming the upcoming release PostgreSQL 20 might be preferable to
> the current/expected "PostgreSQL 13".
+1
Users can easily tell how old or new the release they are using is.
Regards
From: Andres Freund
> Perhaps this has already been discussed (I only briefly looked): I'd
> strongly advise against having any new infrastructure depend on
> pgcrypto. Its code quality imo is well below our standards and contains
> serious red flags like very outdated copies of cryptography
From: Chapman Flack
> I read the question as perhaps concerning the other direction, whether
> there might be an open source foreign data wrapper installable in Oracle
> for talking to PostgreSQL (which might, I suppose, also have a name like
> "postgres_fdw", which helps explain the number of
From: Tomas Vondra
> One of the stats I occasionally wanted to know are stats for the SLRU
> stats (we have a couple of those - clog, subtrans, ...). So here is a WIP
> version of a patch adding that.
How can users take advantage of this information? I think we also need the
ability to set the
From: Tatsuo Ishii
> First of all, we do not think that current approach is the final
> one. Instead we want to implement IVM feature one by one: i.e. we
> start with "immediate update" approach, because it's simple and easier
> to implement. Then we will add "deferred update" mode later on.
I
From: Yugo Nagata
> On Mon, 23 Dec 2019 08:08:53 +
> "tsunakawa.ta...@fujitsu.com" wrote:
> > How about unlogged tables ? I thought the point of using a temp table is to
> avoid WAL overhead.
>
> Hmm... this might be another option. However, if we use
From: Alvaro Herrera
> I'm not sure I understand why we end up with "prefer-read" in addition
> to "prefer-standby" (and similar seeming redundancy between "primary"
> and "read-write"). Do we really need more than one way to identify
> hosts' roles? It seems 0001 adds the "prefer-read" modes
From: Yugo Nagata
> 1. Create a temporary table only once at the first view maintenance in
> this session. This is possible if we store names or oid of temporary
> tables used for each materialized view in memory. However, users may
> access to these temptables whenever during the session.
>
>
From: Tatsuo Ishii
> the target data? In the discussion, someone referred to master data with low
> update frequency, because the proposed IVM implementation adds triggers on
> source tables, which limits the applicability to update-heavy tables.
>
> But if you want to always get up-to-date data you
From: Tatsuo Ishii
> AFAIK benefit of ON STATEMENT is the transaction can see the result of
> update to the base tables. With ON COMMIT, the transaction does not
> see the result until the transaction commits.
>
> Well, I can see use cases of IVM in both DWH and OLTP.
>
> For example, a user
From: Alvaro Herrera
> So, we can know whether server is primary/standby by checking
> in_recovery, as opposed to knowing whether read-write which is done by
> checking transaction_read_only. So we can keep read-write as a synonym
> for "primary", and check in_recovery when used in servers that
From: David Fetter
> > But I see two problems with the proposed approach: (1) initdb
> > doesn't support setting up SSL, so the only thing you can achieve
> > here is to reject all TCP/IP connections, until you have set up SSL.
>
> I don't believe any special setup is needed to require TLS for
From: Konstantin Knizhnik
> Recently I have asked once again by one of our customers about login trigger
> in
> postgres. People are migrating to Postgres from Oracle and looking for
> Postgres
> analog of this Oracle feature.
> This topic is not new:
> I attached my prototype implementation
From: Fujii Masao
> > I changed the view name from pg_stat_walwrites to pg_stat_walwriter.
> > I think it is better to match naming scheme with other views like
> > pg_stat_bgwriter, which is for bgwriter statistics but it has the
> > statistics related to backend.
>
> I prefer the view name
From: Kyotaro Horiguchi
> If we can skip the table-copy when ALTER TABLE SET LOGGED on
> wal_level=minimal, is your objective achieved?
I expect so, if we can skip the table copy during ALTER TABLE SET
LOGGED/UNLOGGED. On the other hand, both approaches have different pros and
cons. It's nice
From: Masahiko Sawada
> On Tue, 8 Sep 2020 at 13:00, tsunakawa.ta...@fujitsu.com
> wrote:
> > 2. 2PC processing is queued and serialized in one background worker. That
> > severely subdues transaction throughput. Each backend should perform
> > 2PC.
>
> Not sure it'
From: Peter Smith
> On Thu, Sep 10, 2020 at 7:16 PM tsunakawa.ta...@fujitsu.com
> wrote:
> > ALTER TABLE takes long time proportional to the amount of existing data,
> > while wal_level = none doesn't.
>
> Right, but if wal_level=none is considered OK for that table with
> ex
From: Fujii Masao
> Originally start(), commit() and rollback() are supported as FDW interfaces.
> As far as I and Sawada-san discussed this upthread, to support MySQL,
> another type of start() would be necessary to issue "XA START id" command.
> end() might be also necessary to issue
From: Amit Kapila
> if (RelFileNodeEquals(bufHdr->tag.rnode, rnode.node) &&
> + bufHdr->tag.forkNum == forkNum[j] &&
> + bufHdr->tag.blockNum >= firstDelBlock[j])
>
> Here, I think you need to use 'i' not 'j' for forkNum and
> firstDelBlock as those are arrays w.r.t forks. That might fix
From: tsunakawa.ta...@fujitsu.com
> (1)
> + INIT_BUFFERTAG(newTag,
> rnode.node, forkNum[j], firstDelBlock[j]);
>
> And you need to use i here, too.
I remember the books "Code Complete" and/or "Readable Code" suggest to
From: Amit Kapila
> I intend to say that the global-visibility work can impact this in a
> major way and we have analyzed that to some extent during a discussion
> on the other thread. So, I think without having a complete
> design/solution that addresses both the 2PC and global-visibility, it
>
From: Tom Lane
> This is straying a bit from the stated topic of this thread, but ...
> I did some further looking around to see whether there were any
> unsafe signal handlers besides SIGQUIT ones. The situation is not
> too awful, but I did find several issues not already mentioned
> in this
Alexey-san, Sawada-san,
cc: Fujii-san,
From: Fujii Masao
> But if we
> implement 2PC as the improvement on FDW independently from PostgreSQL
> sharding, I think that it's necessary to support other FDW. And this is our
> direction, isn't it?
I understand it the same way as Fujii-san. 2PC FDW is
Hi Andrey san,
From: Andrey V. Lepikhov
> > From: tsunakawa.ta...@fujitsu.com
> >> While Clock-SI seems to be considered the best promising for global
> >>> > Could you take a look at this patent? I'm afraid this is the Clock-SI
> >>> > for MVCC.
From: Fujii Masao
> But I'm concerned about that it's really hard to say there is no patent risk
> around that. I'm not sure who can judge there is no patent risk,
> in the community. Maybe no one? Anyway, I was thinking that Google Spanner,
> YugabyteDB, etc use the global transaction approach
From: Peter Smith
> Earlier, Osumi-san was rejecting the idea of using ALTER TABLE tbl SET
> UNLOGGED on basis that it is too time consuming for large data to
> switch the table modes [1].
> Doesn't wal_level=none essentially just behave as if every table was
> UNLOGGED; not just the ones we are
The code doesn't seem to be working correctly.
(1)
+ for (block_num = 0; block_num <= nblocks; block_num++)
should be
+ for (block_num = firstDelBlock[fork_num]; block_num < nblocks; block_num++)
because:
* You only want to
From: Masahiko Sawada
> The resolver process has two functionalities: resolving foreign
> transactions automatically when the user issues COMMIT (the case you
> described in the second paragraph), and resolving foreign transaction
> when the corresponding backend no longer exist or when the
From: Masahiko Sawada
> > If so, can't we stipulate that the FDW implementor should ensure that the
> commit function always returns control to the caller?
>
> How can the FDW implementor ensure that? Since even palloc could call
> ereport(ERROR) I guess it's hard to require that to all FDW
>
From: Fujii Masao
> On 2020/09/09 14:15, Etsuro Fujita wrote:
> > Attached is a patch to standardize Japanese names as given-name-first
> > in the v13 contributors list as before.
>
> Using given-name-first order is our consensus? I was thinking we have not
> reached that yet and our "vague"
From: Masahiko Sawada
> You proposed the first idea
> to avoid such a situation that FDW implementor can write the code
> while trying to reduce the possibility of errors happening as much as
> possible, for example by using palloc_extended(MCXT_ALLOC_NO_OOM) and
> hash_search(HASH_ENTER_NULL) but
From: Jamison, Kirk/ジャミソン カーク
> > (6)
> > + bufHdr->tag.blockNum >= firstDelBlock[j])
> > + InvalidateBuffer(bufHdr); /* releases spinlock */
> >
> > The right side of >= should be cur_block.
>
> Fixed.
>= should
From: Masahiko Sawada
> What about temporary network failures? I think there are users who
> don't want to give up resolving foreign transactions failed due to a
> temporary network failure. Or even they might want to wait for
> transaction completion until they send a cancel request. If we want
From: Tomas Vondra
> I'm not sure when I'll have time to work on this again, so if you are
> interested and willing to work on it, please go ahead. I'll gladly do
> reviews and help you with it.
Thank you very much.
> I think transferring data to other databases is fine - interoperability
> is
From: Kyotaro Horiguchi
> I don't understand why we hate ERRORs from fdw-2pc-commit routine so
> much. I think remote-commits should be performed before local commit
> passes the point-of-no-return and the v26-0002 actually places
> AtEOXact_FdwXact() before the critical section.
I don't hate
From: Jamison, Kirk/ジャミソン カーク
> 2. Non-recovery Performance
> However, I still can't seem to find the cause of why the non-recovery
> performance does not change when compared to master. (1 min 15 s for the
> given test case below)
...
> 5. Measure VACUUM timing
> \timing
> VACUUM;
Oops, why are
From: Masahiko Sawada
> I was thinking to have a GUC timeout parameter like statement_timeout.
> The backend waits for the setting value when resolving foreign
> transactions.
Me too.
> But this idea seems different. FDW can set its timeout
> via a transaction timeout API, is that right?
I'm
From: Jamison, Kirk/ジャミソン カーク
(1)
> Alright. I also removed nTotalBlocks in v24-0003 patch.
>
> for (i = 0; i < nforks; i++)
> {
>     if (nForkBlocks[i] != InvalidBlockNumber &&
>         nBlocksToInvalidate < BUF_DROP_FULL_SCAN_THRESHOLD)
>     {
>         Optimization loop
>     }
>
From: Masahiko Sawada
> > Unfortunately, I'm afraid we can do nothing about it. If the DBMS's client
> > library doesn't support cancellation (e.g. doesn't respond to Ctrl+C or
> > provide a function that cancels processing in progress), then the Postgres
> > user just finds that he can't cancel
Hello Andrey-san,
Thank you for challenging an interesting feature. Below are my review comments.
(1)
- /* for use by copy.c when performing multi-inserts */
+ /*
+  * The following fields are currently only relevant to copy.c.
+  *
+  * True if okay to use
From: Masahiko Sawada
> On Mon, 19 Oct 2020 at 14:39, tsunakawa.ta...@fujitsu.com
> wrote:
> > I have to say that's nitpicking. I believe almost nobody does, or cares
> > about,
> canceling commits,
>
> Really? I don’t think so. I think it’s terrible that the qu
From: Kyotaro Horiguchi
> At Tue, 20 Oct 2020 15:53:29 +0900, Masahiko Sawada
> wrote in
> > I think it doesn't matter whether in FDW framework or not. The user
> > normally doesn't care which backend processes connecting to foreign
> > servers. They will attempt to cancel the query like always
From: Jamison, Kirk/ジャミソン カーク
> However, I still can't seem to find the cause of why the non-recovery
> performance does not change when compared to master. (1 min 15 s for the
> given test case below)
Can you check and/or try the following?
1. Isn't the vacuum cost delay
From: Kyotaro Horiguchi
> It seems to respond to a statement-cancel signal immediately while
> waiting for a coming byte. However, seems to wait forever while
> waiting a space in send-buffer. (Is that mean the session will be
> stuck if it sends a large chunk of bytes while the network is
Hello Tomas san,
Thank you for picking up this. I'm interested in this topic, too. (As an
aside, we'd like to submit a bulk insert patch for ECPG in the near future.)
As others referred, Andrey-san's fast COPY to foreign partitions is also
promising. But I think your bulk INSERT is a
From: Jamison, Kirk/ジャミソン カーク
> With the latest patches attached, and removing the recovery check in
> smgrnblocks, I tested the performance of vacuum.
> (3 trial runs, 3.5 GB db populated with 1000 tables)
>
> Execution Time (seconds)
> | s_b | master | patched | %reg |
>
Hi Kirk san,
(1)
+ * This returns InvalidBlockNumber when smgr_cached_nblocks is not
+ * available and when not in the recovery path.
+ /*
+  * We cannot believe the result from smgr_nblocks is always accurate
+  * because lseek of buggy Linux kernels doesn't account for a
Sorry to be late to respond. (My PC is behaving strangely after upgrading
Win10 2004)
From: Masahiko Sawada
> After more thoughts on Tsunakawa-san’s idea it seems to need the
> following conditions:
>
> * At least postgres_fdw is viable to implement these APIs while
> guaranteeing not to
Hi Andrey-san,
Thanks for the revision. The patch looks good except for the following two
items.
(18)
+ if (target_resultRelInfo->ri_FdwRoutine != NULL)
+ {
+ if (target_resultRelInfo->ri_usesMultiInsert)
+ {
+
From: Ashutosh Bapat
> Using pg_cancel_backend() and pg_terminate_backend() a DBA can cancel
> running query from any backend or terminate a backend. For either to
> work the backend needs to be interruptible. IIRC, Robert had made an
> effort to make postgres_fdw interruptible few years back.
From: Kyotaro Horiguchi
> I don't think the inability to cancel all sessions at once can be a
> reason not to allow operators to cancel a stuck session.
Yeah, I didn't mean to discount the ability to cancel queries. I just want to
confirm how the user can use the cancellation in
Hello,
While a sophisticated feature like Oracle's UNRECOVERABLE data loading is
worth considering (SQL Server's BCP load utility also has such a feature,
though only for an empty table), how about an easier approach like MySQL's?
I expect this won't complicate the Postgres code much.
The
From: Masahiro Ikeda
> It's important to provide the metrics for tuning the size of WAL buffers.
> For now, we lack statistics on how often processes wait to write WAL
> because the WAL buffer is full.
>
> If those situations occur often, the WAL buffer is too small for the
> workload.
>
From: Masahiro Ikeda
> If my understanding is correct, we have to measure the performance
> impact first.
> Do you know if HariBabu is still trying to solve it? If not, I will try to
> modify the patches to apply to HEAD.
No, he's not doing it anymore. It'd be great if you could resume it. However,
I
From: Fujii Masao
> I agree to expose the number of WAL write caused by full of WAL buffers.
> It's helpful when tuning wal_buffers size. Haribabu separated that number
> into two fields in his patch; one is the number of WAL write by backend,
> and another is by background processes and workers.
From: Fujii Masao
> Just idea; it may be worth exposing the number of when new WAL file is
> created and zero-filled. This initialization may have impact on
> the performance of write-heavy workload generating lots of WAL. If this
> number is reported high, to reduce the number of this
From: Amit Kapila
> So you want your users to shutdown and restart the server before Copy
> because that would be required if you want to change the wal_level.
Yes. They seem to be fine with it, as far as I heard from a person who is
involved in the system design.
> However, even if we do
From: Amit Kapila
> Sure, but on a daily basis, one requires only incremental WAL to
> complete the backup but in this case, it would require the entire
> database back up unless we have some form of block-level incremental
> backup method.
Regarding the backup time, I think users can shorten
Hello,
We'd like to propose a feature to disable WAL to speed up data loading. This
was inspired by a feature added in the latest MySQL. I hope you won't fear
this feature...
BACKGROUND
This branches off from [1] as mentioned therein. Briefly
Hi Andrey san, all,
From: tsunakawa.ta...@fujitsu.com
> And please wait. As below, the patent holder just says that Clock-SI is not
> based on the patent and an independent development. He doesn't say
> Clock-SI does not overlap with the patent or implementing Clock-SI does not
&
From: Amit Kapila
> I agree with the above two points.
Thank you. I'm relieved to know I didn't misunderstand.
> > * Then, add a new function, say, smgrnblocks_cached() that simply returns
> the cached block count, and DropRelFileNodeBuffers() uses it instead of
> smgrnblocks().
> >
>
> I am
From: Ashutosh Bapat
> The way I am looking at is to put the parallelism in the resolution
> worker and not in the FDW. If we use multiple resolution workers, they
> can fire commit/abort on multiple foreign servers at a time.
From a single session's view, yes. However, the requests from
From: Vladimir Sitnikov
> Just in case, I'm PgJDBC committer.
Thank you very much for your great efforts for the wonderful PgJDBC. I saw you
active.
# I'd be happy if you send emails in text format so that the reply looks nice.
Your email seems to be in HTML.
> and apparently, Andrew is
From: Masahiko Sawada
> To avoid misunderstanding, I didn't mean to disregard the performance.
> I mean especially for the transaction management feature it's
> essential to work fine even in failure cases. So I hope we have a
> safe, robust, and probably simple design for the first version that
From: Masahiko Sawada
> Yes, but it still seems hard to me that we require for all FDW
> implementations to commit/rollback prepared transactions without the
> possibility of ERROR.
Of course we can't eliminate the possibility of error, because remote servers
require network communication.
Hi Andrey-san, all,
From: Andrey V. Lepikhov
> On 7/27/20 11:22 AM, tsunakawa.ta...@fujitsu.com wrote:
> > Could you take a look at this patent? I'm afraid this is the Clock-SI for
> > MVCC.
> Microsoft holds this until 2031. I couldn't find this with the k
From: MauMau
> I intentionally have put little conclusion on our specification and
> design. I'd like you to look at recent distributed databases, and
> then think about and discuss what we want to aim for together. I feel
> it's better to separate a thread per topic or group of topics.
From: Masahiko Sawada
> So with your idea, I think we require FDW developers to not call
> ereport(ERROR) as much as possible. If they need to use a function
> including palloc, lappend etc that could call ereport(ERROR), they
> need to use PG_TRY() and PG_CATCH() and return the control along
In v15:
(1)
+ for (cur_blk = firstDelBlock[j]; cur_blk < nblocks; cur_blk++)
The right side of "cur_blk <" should not be nblocks, because nblocks is no
longer the number of blocks in the relation fork.
(2)
+ BlockNumber nblocks;
+
From: Amit Kapila
> No, during recovery also we need to be careful. We need to ensure that
> we use cached value during recovery and cached value is always
> up-to-date. We can't rely on lseek and I have provided some scenario
> up thread [1] where such behavior can cause problem and then see the
From: Jamison, Kirk/ジャミソン カーク
> [Results]
> Recovery/Failover performance (in seconds). 3 trial runs.
>
> | shared_buffers | master | patch  | %reg  |
> |----------------|--------|--------|-------|
> | 128MB          | 32.406 | 33.785 | 4.08% |
> | 1GB            | 36.188 | 32.747 |
From: Masahiko Sawada
> I don't think it's always possible to avoid raising errors in advance.
> Considering how postgres_fdw can implement your idea, I think
> postgres_fdw would need PG_TRY() and PG_CATCH() for its connection
> management. It has a connection cache in the local memory using
From: Ashutosh Bapat
> parallelism here has both pros and cons. If one of the servers errors
> out while preparing for a transaction, there is no point in preparing
> the transaction on other servers. In parallel execution we will
> prepare on multiple servers before realising that one of them
From: Amit Kapila
> > > > Don't
> > > > we need to guarantee the cache to be valid while recovery?
> > > >
> > >
> > > One possibility could be that we somehow detect that the value we
> > > are using is cached one and if so then only do this optimization.
> >
> > I basically like this direction.
From: Andrey Lepikhov
> Thank you for this work!
> As I can see, main development difficulties placed in other areas: CSN,
> resolver,
> global deadlocks, 2PC commit... I'm not lawyer too. But if we get remarks from
> the patent holders, we can rewrite our Clock-SI implementation.
Yeah, I
I looked at v14.
(1)
+ /* Get the total number of blocks for the supplied relation's fork */
+ for (j = 0; j < nforks; j++)
+ {
+     BlockNumber block = smgrnblocks(smgr_reln, forkNum[j]);
+     nblocks
From: Jamison, Kirk/ジャミソン カーク
> I revised the patch based from my understanding of Horiguchi-san's comment,
> but I could be wrong.
> Quoting:
>
> "
> + /* Get the number of blocks for the supplied relation's fork */
> + nblocks = smgrnblocks(smgr_reln,
From: Amit Kapila
> The idea is that we can't use this optimization if the value is not
> cached because we can't rely on lseek behavior. See all the discussion
> between Horiguchi-San and me in the thread above. So, how would you
> ensure that if we don't use Kirk-San's proposal?
Hmm, buggy
From: Jamison, Kirk/ジャミソン カーク
> Recovery performance measurement results below.
> But it seems there are overhead even with large shared buffers.
>
> | s_b   | master | patched | %reg  |
> |-------|--------|---------|-------|
> | 128MB | 36.052 | 39.451  | 8.62% |
> | 1GB   | 21.731 | 21.73   |
From: Kyotaro Horiguchi
> In more detail, if smgrcachednblocks() returned InvalidBlockNumber for
> any of the forks, we should give up the optimization at all since we
> need to run a full scan anyway. On the other hand, if any of the
> forks is smaller than the threshold, we still can use the
From: Amit Kapila
> I have one idea for performance testing. We can even test this for
> non-recovery paths by removing the recovery-related check like only
> use it when there are cached blocks. You can do this if testing via
> recovery path is difficult because at the end performance should be
From: Kyotaro Horiguchi
> Another reason that I mildly want to object to subdivided functions is
> I was annoyed that a stats view makes many individual calls to
> functions that internally share the same statistics entry. That
> behavior required me to provide an entry-caching feature to my
>
From: Kyotaro Horiguchi
> I thought that the advantage of this optimization is that we don't
> need to visit all buffers? If we need to run a full-scan for any
> reason, there's no point in looking-up already-visited buffers
> again. That's just wasteful cycles. Am I missing something?
>
> I
From: Jamison, Kirk/ジャミソン カーク
> For non-recovery path, did you mean by any chance
> measuring the cache hit rate for varying shared_buffers?
No. You can test the speed of DropRelFileNodeBuffers() during normal
operation, i.e. by running TRUNCATE in psql, instead of performing recovery.
To