ner = NULL;
+ res_releasing = true;
+ }
+
MemoryStatsDsaArea =
dsa_create(memCxtArea->lw_lock.tranche);
+ if (res_releasing)
+ CurrentResourceOwner = current_owner;
Kindly let me know your views.
Thank you,
Rahila Syed
Hi,
Please find attached a patch with some comments and documentation changes.
Additionally, added a missing '\0' termination to the "Remaining Totals" string.
I think this became necessary after we replaced dsa_allocate0()
with dsa_allocate() in the latest version.
Thank you,
Ra
ning either !USE_INJECTION_POINTS or that there are no
> points attached if the build uses USE_INJECTION_POINTS.
>
The changes LGTM.
Should the execution privileges on the function be restricted to a role
like pg_monitor?
Thank you,
Rahila Syed
ntext);
Thank you,
Rahila Syed
e code readability, IMO.
Thank you,
Rahila Syed
jection points are attached and
instances where the build does not support injection points.
Thank you,
Rahila Syed
unctions were zero.
Maybe something can be done to keep the resowner assignments under both these
functions
in sync.
Thank you,
Rahila Syed
either. It just seems to make things
> more
> complicated and more expensive.
>
OK, I see that this could be expensive if a process is periodically being
queried for
statistics. However, in scenarios where a process is queried only once for
memory statistics, keeping the area mapped would consume memory resources,
correct?
Thank you,
Rahila Syed
uite* work, because
> memCtxState[idx].total_stats is
> > > only set *after* we would have failed.
> >
> > Keeping a running total in .total_stats should make the leak window
> smaller.
>
> Why not just initialize .total_stats *before* calling any fallible code?
> Afaict it's zero-allocated, so the free function should have no problem
> dealing with the entries that haven't yet been populated.
>
>
Fixed accordingly.
PFA a v28 which passes all local and github CI tests.
Thank you,
Rahila Syed
v28-0001-Add-function-to-get-memory-context-stats-for-process.patch
Description: Binary data
eanup
> if we are already attached to the dsm segment?
>
I am not expecting to hit this case, since we are always detaching from the
dsa.
This could be an assert, but since it is cleanup code, I thought returning
would be
a harmless step.
Thank you,
Rahila Syed
pgbench results with the following custom script also show good
performance.
```
SELECT * FROM pg_get_process_memory_contexts(
    (SELECT pid FROM pg_stat_activity
     ORDER BY random() LIMIT 1),
    false, 5);
```
Thank you,
Rahila Syed
the comment and code are clearer now.
PFA the patches after merging the review patches.
Thank you,
Rahila Syed
v9-0001-Improve-acounting-for-memory-used-by-shared-hash-tab.patch
Description: Binary data
v9-0002-Improve-accounting-for-PredXactList-RWConflictPool-a.patch
Description: Binary
the reproducer
script shared
by David. I also ran pgbench to test creation and expansion of some of the
shared hash tables.
Thank you,
Rahila Syed
v10-0001-Improve-accounting-for-memory-used-by-shared-hash-ta.patch
Description: Binary data
alling the function, I have fixed it accordingly in the attached 0001
patch.
Now, there's no need to pass `nelem_alloc` as a parameter. Instead, I've
passed this information as a boolean variable, initial_elems. If it is
false,
no elements are pre-allocated.
Please find attached the v7-serie
lloc named variable is used
in two different cases. In the above case, nelem_alloc refers to the one
returned by the choose_nelem_alloc function.
The other nelem_alloc determines the number of elements in each partition
for a partitioned hash table. This is not what is being referred to in the
above
comment.
The bit "For more explanation see comments within this function" is not
> great, if only because there are not many comments within the function,
> so there's no "more explanation". But if there's something important, it
> should be in the main comment, preferably.
>
>
I will improve the comment in the next version.
Thank you,
Rahila Syed
alculate nbuckets and
nsegs,
hence the probability of mismatch is low. I am open to adding some
asserts to verify this.
Do you have any suggestions in mind?
Please find attached updated patches after merging all your review comments
except
a few discussed above.
Thank you,
Rahila Syed
v5-0003-Add-cacheline-padding-between-heavily-accessed-array.patch
Description: Binary data
v5-0001-Account-for-initial-shared-memory-allocated-by-hash_.patch
Description: Binary data
v5-0002-Replace-ShmemAlloc-calls-by-ShmemInitStruct.patch
Description: Binary data
calling hash_create.
The hash_create function already generates a memory context containing the
hash table,
enabling easy memory deallocation by simply deleting the context via
hash_destroy.
Therefore, the patch relies on hash_destroy for memory management instead
of manual freeing.
2. Optimized
Hi,
Please find the attached updated and rebased patch.
I have added a test in the test_dsa module that uses a function
to create a dsa area. This function is called after
resowner->releasing is set to true, using an injection point.
Thank you,
Rahila Syed
v2-0001-Prevent-the-error
int() on num_partitions before running the function.
Additionally, I am not adding any new code to the compute_buckets_and_segs
function. I am simply moving part of the init_htab() code into a separate
function
for reuse.
Please find attached the updated and rebased patches.
Thank you,
Rahila Syed
v4-0002-Replace-ShmemAlloc-calls-by-ShmemInitStruct.patch
Description: Binary data
v4-0001-Account-for-initial-shared-memory-allocated-by-hash_.patch
Description: Binary data
d from a
PG_ENSURE_ERROR_CLEANUP block. This block operates under the assumption
that
the before_shmem_exit callback registered at the beginning of the block
will be the last one
in the registered callback list at the end of the block, which would not be
the case if I register
before_shmem_exit ca
tic structures need to be created even
if memory
context statistics are never queried.
Instead, a dsa is created for the feature when statistics are
first queried.
We are not preallocating shared memory for this feature, except for small
structures
to store the dsa_handle and dsa_pointers for each backend.
Thank you,
Rahila Syed
maintain consistency with the output of
pg_backend_memory_contexts.
Thank you,
Rahila Syed
On Tue, Mar 4, 2025 at 12:30 PM Rahila Syed wrote:
> Hi Daniel,
>
> Thanks for the rebase, a few mostly superficial comments from a first
>> read-through.
>>
> Thank you for y
d non-shared hash
tables,
making the code fix look cleaner. I hope this aligns with your suggestions.
Please find attached updated and rebased versions of both patches.
Kindly let me know your views.
Thank you,
Rahila Syed
v3-0001-Account-for-initial-shared-memory-allocated-by-hash_.patch
Description: Binary data
v3-0002-Replace-ShmemAlloc-calls-by-ShmemInitStruct.patch
Description: Binary data
on id states", TotalProcs *
> sizeof(*ProcGlobal->subxidStates), &found);
> > MemSet(ProcGlobal->subxidStates, 0, TotalProcs *
> sizeof(*ProcGlobal->subxidStates));
> > - ProcGlobal->statusFlags = (uint8 *) ShmemAlloc(TotalProcs *
> sizeof(*Proc
> + PROCSIG_GET_MEMORY_CONTEXT, /* ask backend to log the memory contexts
> */
> This comment should be different from the LOG_MEMORY_xx one.
>
> Fixed.
+#define MEM_CONTEXT_SHMEM_STATS_SIZE 30
> +#define MAX_TYPE_STRING_LENGTH 64
> These are unused, from an earl
allocation by consolidating
initial shared
memory allocations for the hash table. For example, the allocated size for
the LOCK hash table in hash_create decreased from 801664 bytes to 799616
bytes. Please find the attached
patches, which I will add to the March Commitfest.
Thank you,
Rahila Syed
0001-Account-for
tions
per context. The integer counters are still allocated at once for all
contexts, but
the size of an allocated chunk will not exceed approximately 128 bytes *
total_num_of_contexts.
The average total number of contexts is in the hundreds.
PFA the updated and rebased patches.
Thank you,
Rahila Syed
Hi,
Please find attached the updated patches after some cleanup and test
fixes.
Thank you,
Rahila Syed
On Tue, Feb 18, 2025 at 6:35 PM Rahila Syed wrote:
> Hi,
>
>>
>> Thanks for updating the patch!
>>
>> The below comments would be a bit too detailed at this s
me DSA.
Please find attached updated and rebased patches.
Thank you,
Rahila Syed
v13-0001-Preparatory-changes-for-reporting-memory-context-sta.patch
Description: Binary data
v13-0002-Function-to-report-memory-context-statistics.patch
Description: Binary data
Hi,
> >
> > Just idea; as an another option, how about blocking new requests to
> > the target process (e.g., causing them to fail with an error or
> > returning NULL with a warning) if a previous request is still
> pending?
> > Users can simply retry the request if it fails. IMO
Hi,
On Sat, Jan 25, 2025 at 3:50 AM Tomas Vondra wrote:
>
>
> On 1/24/25 14:47, Rahila Syed wrote:
> >
> > Hi,
> >
> >
> > Just idea; as an another option, how about blocking new requests to
> > the target process (e.g., causing them to fail
code, which is
under the
track_wal_io_timing check, to the existing check before this added chunk?
This way, all code related to track_wal_io_timing will be grouped together,
closer to where the "end" variable is computed.
Thank you,
Rahila Syed
On Tue, Jan 21, 2025 at 12:50 PM B
ve itself and they just need to retry. Therefore, issuing a
warning
or displaying previously updated statistics might be a better alternative
to throwing
an error.
Thank you,
Rahila Syed
enging, as
it
depends on the server's load.
Kindly let me know your preference. I have attached a patch which
implements the
2nd approach for testing, the 3rd approach being implemented in the v10
patch.
Thank you,
Rahila Syed
v11-0001-Function-to-report-memory-context-stats-of-any-backe.patch
Description: Binary data
n you share any errors that you see in logs when postgres crashes?
Thank you,
Rahila Syed
Hi,
If a DSM is created or attached from an interrupt handler while a
transaction is being
rolled back, it may result in the following error.
"ResourceOwnerEnlarge called after release started"
This was found during the testing of Enhancing Memory Context Reporting
feature
by Fujii Masao [1].
I p
_stats - 1 is reserved for the summary statistics.
Q./* XXX I don't understand why we need to check get_summary here? */
A. The get_summary check is there to ensure that the context_id is inserted
into the
hash table if get_summary is true. If get_summary is true, the loop will
break after the first iteration,
so the entire main list of contexts won't be traversed and hence
context_ids won't be inserted.
Hence it is handled separately inside a check for get_summary.
Q. /* XXX What if the memstats_dsa_pointer is not valid? Is it even
possible?
* If it is, we have garbage in memctx_info. Maybe it should be an
Assert()? */
A . Agreed. Changed it to an assert.
Q./*
* XXX isn't 2 x 1kB for every context a bit too much? Maybe better
to
* make it variable-length?
*/
A. I don't know how to do this for a variable in shared memory; won't that
mean
allocating from the heap, and thus the pointer would become invalid in
another
process?
Thank you,
Rahila Syed
v10-0001-Function-to-report-memory-context-stats-of-any-backe.patch
Description: Binary data
inite_recurse();
>LOG: terminating any other active server processes
>
> I have not been able to reproduce this issue. Could you please clarify
which process you ran
pg_get_process_memory_contexts() on, with the interval of 0.1? Was it a
backend process
created by make installcheck-world, o
t;
> Ok. I added an explanation of this column in the documentation.
> > I have added this information as a column named "num_agg_contexts",
> > which indicates
> > the number of contexts whose statistics have been aggregated/added for
> > a particular output.
>
able limits. It's something executed
> every now and then - no one is going to complain it takes 10ms extra,
> measure tps with this function, etc.
>
> 17-26% seems surprisingly high, but even 256kB is too much, IMHO. I'd
> just get rid of this optimization until someone
rocesses))
> > would be created and pinned for subsequent reporting. This size does
> > not seem excessively high, even for approx 100 backends and
> > auxiliary processes.
> >
>
> That seems like a pretty substantial amount of memory reserved for each
> connection. IM
Hi Tomas,
Thank you for the review.
>
>
> 1) I read through the thread, and in general I agree with the reasoning
> for removing the file part - it seems perfectly fine to just dump as
> much as we can fit into a buffer, and then summarize the rest. But do we
> need to invent a "new" limit here?
Thank you,
Rahila Syed
)
would be inaccurate and I am not sure whether a change to rename the
existing function would be welcome.
Please find an updated patch which fixes an issue seen in CI runs.
Thank you,
Rahila Syed
v5-Function-to-report-memory-context-stats-of-a-process.patch
Description: Binary data
ill time out.
When statistics of a local backend are requested, this function returns the
following
WARNING and exits, since this case can be handled by an existing function
which
doesn't require a DSA.
WARNING: cannot return statistics for local backend
HINT: Use pg_get_backend_memory_contexts instead
a test, if we finalize the
approach
of spill-to-file.
Please find attached a rebased and updated patch with a basic test
and some fixes. Kindly let me know your thoughts.
Thank you,
Rahila Syed
v3-0001-Function-to-report-memory-context-stats-of-any-backe.patch
Description: Binary data
come.
When the get_summary argument is set to true, the function provides
statistics for memory contexts up to level 2—that is, the
top memory context and all its children.
Please find attached a rebased patch that includes these changes.
I will work on adding a test for the function and some co
nd not the index. I will perform the test again. However,
I would like to know your opinion on whether this looks like
a valid test.
Thank you,
Rahila Syed
On Thu, Oct 24, 2024 at 4:45 PM Andrey M. Borodin
wrote:
>
>
> > On 24 Oct 2024, at 10:15, Andrey M. Borodin
> wrote:
>
Hi Torikoshia,
Thank you for reviewing the patch!
On Wed, Oct 23, 2024 at 9:28 AM torikoshia
wrote:
> On 2024-10-22 03:24, Rahila Syed wrote:
> > Hi,
> >
> > PostgreSQL provides following capabilities for reporting memory
> > contexts statistics.
> > 1. pg_
Hi Michael,
Thank you for the review.
On Tue, Oct 22, 2024 at 12:18 PM Michael Paquier
wrote:
> On Mon, Oct 21, 2024 at 11:54:21PM +0530, Rahila Syed wrote:
> > On the other hand, [2] provides the statistics for all backends but logs
> > them in a file, which may not be conve
Thank you,
Rahila Syed
0001-Function-to-report-memory-context-stats-of-any-backe.patch
Description: Binary data
and the other returning it
after attempting DELETE.
Thank you,
Rahila Syed
On Fri, Mar 15, 2024 at 7:57 PM Aleksander Alekseev <
aleksan...@timescale.com> wrote:
> Hi,
>
> > it took me a while to figure out why the doc build fails.
> >
> > [...]
> >
> Vigneshwaran C
>
> New PostgreSQL Major Contributors:
>
> Julien Rouhaud
> Stacey Haysler
> Steve Singer
>
> Congratulations to all the new contributors!
Thank you,
Rahila Syed
Hi,
On Fri, Nov 4, 2022 at 2:39 PM Simon Riggs
wrote:
> Hi Rahila,
>
> Thanks for your review.
>
> On Fri, 4 Nov 2022 at 07:37, Rahila Syed wrote:
>
> >> I would like to bring up a few points that I came across while looking
> into the vacuum code.
> >>
Hi Simon,
On Fri, Nov 4, 2022 at 10:15 AM Rahila Syed wrote:
> Hi Simon,
>
> On Thu, Nov 3, 2022 at 3:53 PM Simon Riggs
> wrote:
>
>> On Tue, 1 Nov 2022 at 23:56, Simon Riggs
>> wrote:
>>
>> > > I haven't checked the rest of the patch, but +
ewhat old as compared to current
lazy vacuum which
acquires a new snapshot just before scanning the table.
So, while I understand the need for the feature, I am wondering if there
should be some mention
of the above caveats in the documentation, with the recommendation that
VACUUM should, in general, be run outside
a transaction.
Thank you,
Rahila Syed
ecursive query to iterate
> through all
> 901 +* the parents of the partition and retrieve the record for
> the parent
> 902 +* that exists in pg_publication_rel.
> 903 +*/
The above comment in fetch_remote_table_info() can be changed as the
recursive query
is no longer used.
Thank you,
Rahila Syed
el_sync_entry().
4. Missing documentation
5. Latest comments (last two messages) by Peter Smith.
Thank you,
Rahila Syed
Hi,
On Mon, Sep 6, 2021 at 8:53 AM Amit Kapila wrote:
> On Sat, Sep 4, 2021 at 8:11 PM Alvaro Herrera
> wrote:
> >
> > On 2021-Sep-04, Amit Kapila wrote:
> >
> > > On Thu, Sep 2, 2021 at 2:19 PM Alvaro Herrera
> wrote:
> > > >
> > > >
hey are
> publishing columns that they don't want to publish. I think as a user I
> would rather get an error in that case:
ERROR: invalid column list in published set
> DETAIL: The set of published commands does not include all the replica
> identity columns.
Added this.
Also added some more tests. Please find attached a rebased and updated
patch.
Thank you,
Rahila Syed
v4-0001-Add-column-filtering-to-logical-replication.patch
Description: Binary data
.c
> @@ -354,7 +354,6 @@ logicalrep_rel_open(LogicalRepRelId remoteid,
> LOCKMODE lockmode)
>
> attnum = logicalrep_rel_att_by_name(remoterel,
> NameStr(attr->attname));
> -
> entry->attrmap->attnums[i] = attnum;
>
> There are quite a few places in the patch that contains spurious line
> additions or removals.
>
>
Thank you for your comments, I will fix these.
Thank you,
Rahila Syed
nasty surprises of security-
> leaking nature.
Ok, Thank you for your opinion. I agree that giving an explicit error in
this case will be safer.
I will include this, in case there are no counter views.
Thank you for your review comments. Please find attached the rebased and
updated patch.
Thank you,
Rahila Syed
v3-0001-Add-column-filtering-to-logical-replication.patch
Description: Binary data
uch columns or maybe they
> have dealt with it in some other way unless they are unaware of this
> problem.
>
>
The column comparison for row filtering happens before the unchanged toast
columns are filtered. Unchanged toast columns are filtered just before
writing the tuple
to output stream. I think this is the case both for pglogical and the
proposed patch.
So, I can't see why the not logging of unchanged toast columns would be a
problem
for row filtering. Am I missing something?
Thank you,
Rahila Syed
tches oids and simply
doing schemaoid = linitial_oid(search_path) should be enough.
2. In the same function, should there be an if-else block instead
of a switch case, as
there are only two cases?
Thank you,
Rahila Syed
> OpenTableLists. Apparently there's some confusion - the code expects the
> list to contain PublicationTable nodes, and tries to extract the
> RangeVar from the elements. But the list actually contains RangeVar, so
> this crashes and burns. See the attached backtrace.
>
>
Th
quot; line in OpenTableList.
>
> Removed.
> I got warnings from "git am" about trailing whitespace being added by
> the patch in two places.
>
> Should be fixed now.
Thank you,
Rahila Syed
v1-0001-Add-column-filtering-to-logical-replication.patch
Description: Binary data
PTION.
About having such a functionality, I don't immediately see any issue with
it as long
as we make sure replica identity columns are always present on both
instances.
However, we need to carefully consider situations in which a server
subscribes
to multiple
publications, each publishing a diff
checks are underway. I will post an updated patch with those
changes soon.
Kindly let me know your opinion.
Thank you,
Rahila Syed
0001-Add-column-filtering-to-logical-replication.patch
Description: Binary data
"test_sub2", table
> "pgbench_branches" has started
>
> ... this message. The code that reports this error is from the COPY
> command.
> Row filter modifications have no control over it. It seems somehow your
> subscriber close the replication connection causing this issue. Can you
> reproduce it consistently? If so, please share your steps.
>
> Please ignore the report.
Thank you,
Rahila Syed
g lost.
I didn't investigate it more, but it looks like we should maintain the
existing behaviour when table synchronization fails
due to duplicate data.
Thank you,
Rahila Syed
;t be
applied, as row
does not exist on the subscriber. It would be good if ALTER SUBSCRIPTION
REFRESH PUBLICATION
would help fetch such existing rows from publishers that match the qual
now(either because the row changed
or the qual changed)
Thank you,
Rahila Syed
On Tue, Mar 9, 2021 at 8:35 PM R
stead of defining a new
struct?
Thank you,
Rahila Syed
nation in this document. Thus, it is worth
explaining
the impact on referencing tables here, as the document already describes the
behaviour of
UPDATE on a partitioned table.
Thank you.
Rahila Syed
ion update is missing from the patches.
>
Thank you,
Rahila Syed
st
+ of schemas in the publication with the specified one. The ADD
There is a typo above s/SET TABLE/SET SCHEMA
Thank you,
Rahila Syed
oes not
exist. I think this is counterintuitive; it should throw a warning and
continue adding the rest.
> Drop some schema from the publication:
> ALTER PUBLICATION production_quarterly_publication DROP SCHEMA
> production_july;
>
> Same for drop schema, if one of these schemas does n
Hi David,
The feature seems useful to me. The code will need to be refactored due to
changes in commit : b05fe7b442
Please see the following comments.
1. Is there a specific reason behind having a new relstate for truncate?
The current state flow is
INIT->DATATSYNC->SYNCWAIT->CATCHUP->SYNCDONE->RE
postgres=# CREATE TABLE tbl_test_5 (i int) PARTITION BY LIST((tbl_test_5))
CONFIGURATION (values in
('(1)'::tbl_test_5), ('(3)'::tbl_test_5) default partition tbl_default_5);
ERROR: relation "tbl_test_5_1" already exists
Thank you,
Rahila Syed
>
4. Typo in default_part_name
+VALUES IN ( class="parameter">partition_bound_expr [, ...] ), [(
> partition_bound_expr [, ...]
> )] [, ...] [DEFAULT PARTITION class="parameter">defailt_part_name]
> +MODULUS numeric_literal
Thank you,
Rahila Syed
am not sure. I think it is a reasonable change. It is even
indicated in the
comment above index_set_state_flags() that it can be made transactional.
At the same time, probably we can just go ahead with the current
inconsistent update of the relisreplident and indisvalid flags. Can't see what
will break with
Hi,
I couldn't test the patch as it does not apply cleanly on master.
Please find below some review comments:
1. Would it be better to throw a warning at the time of dropping the
REPLICA IDENTITY
index that it would also drop the REPLICA IDENTITY of the parent table?
2. CCI is used after cal
for synchronous replication of empty txns with and without
the patch remains similar.
Having said that, these are initial findings and I understand better
performance tests are required to measure
reduction in consumption of network bandwidth and impact on synchronous
replication and replication lag.
Thank you,
Rahila Syed
Hi Amit,
Can you please rebase the patches as they don't apply on the latest master?
Thank you,
Rahila Syed
On Thu, 26 Dec 2019 at 16:36, Amit Khandekar wrote:
> On Tue, 24 Dec 2019 at 14:02, Amit Khandekar
> wrote:
> >
> > On Thu, 19 Dec 2019 at 01:02, Rahila Syed
Also, there aren't any errors in logs indicating the cause.
--
Rahila Syed
Performance Engineer
2ndQuadrant
http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
ove:
# SELECT catalog_xmin::varchar::int >
# FROM pg_catalog.pg_replication_slots
# WHERE slot_name = 'master_physical';
#
# expecting this output:
# t
# last actual query output:
#
# with stderr:
# ERROR: syntax error at or near "FROM"
# LINE 3: FROM pg_catalog.pg_replicatio
ite of that,
>if any required rows get removed on standby, the slot gets dropped.
IIUC, you mean `if any required rows get removed on *the master* the slot
gets
dropped`, right?
Thank you,
--
Rahila Syed
Performance Engineer
2ndQuadrant
http://www.2ndQuadrant.com
Hi,
On Mon, 1 Apr 2019 at 21:40, Alvaro Herrera
wrote:
> Hi Rahila, thanks for reviewing.
>
> On 2019-Mar-25, Rahila Syed wrote:
>
> > Please see few comments below:
> >
> > 1. Makecheck fails currently as view definition of expected rules.out
> does
> >
hscan->rs_nblocks - startblock +
+ hscan->rs_cblock;
+
+ return blocks_done;
I think the parallel scan equivalent, bpscan->phs_nblocks, along with
hscan->rs_nblocks, should be used, similar to the startblock computation above.
Thank you,
Rahila Syed
On Fri, 29 Mar 2019 at 23:46, Alvaro Herrer
on that some phases overlap, and some are mutually exclusive and
hence
may be skipped, etc., reporting `phase number versus total phases` does
provide
valuable information.
We are able to give the user a whole picture in addition to reporting progress
within phases.
Thank you,
--
Rahila Syed
Perfo
Filter: 333
Planning Time: 0.353 ms
Execution Time: 3793.572 ms
(8 rows)
postgres=# commit;
COMMIT
postgres=# select xact_commit from pg_stat_database where datname =
'postgres';
 xact_commit
-------------
         161
(1 row)
--
Rahila Syed
Performance Engineer
2ndQuadrant
http://www.2
e scan(2/5) ?
Although I think it has been rectified in the latest patch, as I now get the
'table scan' phase in the output when I do CREATE INDEX on a table with 100
records.
Thank you,
--
Rahila Syed
Performance Engineer
2ndQuadrant
http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
e for final btree sort & load phase we have not cleared
the blocks_done entry from previous phases. I think this can be confusing
as the blocks_done does not correspond to the tuples_done in the current
phase.
--
Rahila Syed
Performance Engineer
2ndQuadrant
http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
tuples of heap
relation.
/*
* If it's for an exclusion constraint, make a second pass over the
heap
* to verify that the constraint is satisfied. We must not do this
until
* the index is fully valid. (Broken HOT chains shouldn't matter,
though;
* see comments for IndexCheckExclusion.)
*/
if (indexInfo->ii_ExclusionOps != NULL)
IndexCheckExclusion(heapRelation, indexRelation, indexInfo);
Thank you,
Rahila Syed