> It seems related to this thread? :
> https://www.postgresql.org/message-id/flat/5037A9C5.4030701%40optionshouse.com#5037a9c5.4030...@optionshouse.com
>
> And this wiki page : https://wiki.postgresql.org/wiki/Loose_indexscan
Yep. Now I can see 2 use cases for this feature:
1. DISTINCT queries.
examples?
Thanks!
Regards,
Dmitriy Sarafannikov
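As a toy illustration of the loose ("skip") index scan idea behind fast DISTINCT: instead of walking every index entry, the scan jumps past all duplicates of the current value with a btree descent. The sketch below uses a sorted array as a stand-in for the index; the function names are mine, not PostgreSQL's.

```c
#include <stddef.h>

/* Return the index of the first element strictly greater than key,
 * searching arr[lo..n): a stand-in for re-descending the btree to
 * the next distinct value instead of walking every entry. */
static size_t
upper_bound(const int *arr, size_t lo, size_t n, int key)
{
	while (lo < n)
	{
		size_t	mid = lo + (n - lo) / 2;

		if (arr[mid] <= key)
			lo = mid + 1;
		else
			n = mid;
	}
	return lo;
}

/* Collect the distinct values of a sorted array into out[], returning
 * the count.  Only O(k log n) probes for k distinct values -- the
 * payoff a loose index scan gives DISTINCT queries on a big index. */
static size_t
distinct_skip_scan(const int *arr, size_t n, int *out)
{
	size_t	count = 0;
	size_t	pos = 0;

	while (pos < n)
	{
		out[count++] = arr[pos];
		pos = upper_bound(arr, pos + 1, n, arr[pos]);	/* skip duplicates */
	}
	return count;
}
```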
The following review has been posted through the commitfest application:
make installcheck-world: tested, passed
Implements feature: tested, passed
Spec compliant: tested, passed
Documentation: tested, passed
This is a simple and intuitive patch. Code looks pretty clear a
The following review has been posted through the commitfest application:
make installcheck-world: not tested
Implements feature: not tested
Spec compliant: not tested
Documentation: not tested
Hi Andrew! Thanks for the patch, but patch 0001-allow-uncompressed-Gist-2.pat
> Why didn't rsync make the copies on master and replica the same?
Because rsync was running with the --size-only flag.
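For context, here is a toy model of rsync's per-file "needs transfer?" decision (not rsync's actual code). With --size-only, files whose sizes match are skipped even when their contents differ; data files modified in place keep the same size, so the changed pages are never copied.

```c
#include <stdbool.h>
#include <stdint.h>

/* Minimal file metadata for the decision. */
typedef struct
{
	int64_t		size;
	int64_t		mtime;
} FileMeta;

/* Toy model of rsync's quick check.  With size_only, equal sizes mean
 * "skip", even if contents differ -- exactly why pages modified in
 * place on the master were missed on the replica. */
static bool
needs_transfer(FileMeta src, FileMeta dst, bool size_only)
{
	if (size_only)
		return src.size != dst.size;
	/* default quick check: size and modification time */
	return src.size != dst.size || src.mtime != dst.mtime;
}
```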
> I haven't looked in detail, but it sounds slightly risky proposition
> to manipulate the tuples by writing C functions of the form you have
> in your code. I would have preferred som
I copy-pasted and simplified code from vacuum functions with an SQL interface (see attachment).
Can you take a look at them? Do you think it is safe to use them for fixing corrupted pages,
or is there a better way not to lose data?
Regards,
Dmitriy Sarafannikov
freeze_tuple.c
Description: Binary data
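For readers without the attachment, here is a sketch of what "freezing" a tuple means on disk in 9.4 and later: set the HEAP_XMIN_FROZEN infomask bits, so the insert is treated as visible to everyone forever and xmin no longer needs clog (pre-9.4 releases instead overwrote xmin with FrozenTransactionId). The constants below match my reading of transam.h/htup_details.h; verify against the real headers before relying on them, and this is in no way the attached patch itself.

```c
#include <stdint.h>

typedef uint32_t TransactionId;

/* Stand-ins for values from PostgreSQL's headers (assumed). */
#define FrozenTransactionId	((TransactionId) 2)		/* used pre-9.4 */
#define HEAP_XMIN_COMMITTED	0x0100
#define HEAP_XMIN_INVALID	0x0200
#define HEAP_XMIN_FROZEN	(HEAP_XMIN_COMMITTED | HEAP_XMIN_INVALID)

/* Toy tuple header: just the fields freezing touches. */
typedef struct
{
	TransactionId	xmin;
	uint16_t		infomask;
} ToyTupleHeader;

/* Since 9.4, freezing sets the hint bits and preserves the raw xmin
 * for forensics; the combination HEAP_XMIN_FROZEN marks the tuple's
 * insert as committed and visible to all. */
static void
freeze_tuple_sketch(ToyTupleHeader *tup)
{
	tup->infomask |= HEAP_XMIN_FROZEN;
}
```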
   xmin    | ?column?
-----------+----------
 516651778 |
(1 row)
It seems like the replica did not replay the corresponding WAL records.
Any thoughts?
Regards,
Dmitriy Sarafannikov
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
.3, wal_log_hints=off, full_page_writes=on, fsync=on,
checksums disabled.
We don't think it is a hardware-related problem, because these databases
started on 9.4 and survived two upgrades with pg_upgrade, and no
hardware-related problems were detected.
The problem appears not only in
> OK, I agree. Patch is attached.
I added the patch to the CF.
> + else \
> + (snapshotdata).xmin = \
> + TransactionIdLimitedForOldSnapshots(RecentGlobalDataXmin, \
> + relation); \
>
> I think we don't need to use TransactionIdLimitedForOldSnapshots() as
> that is required to override xmin for table vacuum/pruning purposes.
>
>> Maybe we need
>> to use Ge
I think we can use RecentGlobalDataXmin for non-catalog relations and
RecentGlobalXmin for catalog relations (probably a check similar to
what we have in heap_page_prune_opt).
I took the check from heap_page_prune_opt (maybe this check should be a
separate function?). But it requires to initialize sn
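A sketch of that choice (GlobalXminForRelation is a hypothetical helper name, and the constants are made-up stand-ins for the real globals maintained by the snapshot machinery):

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t TransactionId;

/* Stand-ins for PostgreSQL's globals (assumed values for illustration):
 * RecentGlobalXmin is the conservative horizon that also covers
 * catalog access; RecentGlobalDataXmin may be newer because it
 * considers only user relations. */
static TransactionId RecentGlobalXmin = 90;
static TransactionId RecentGlobalDataXmin = 100;

/* Hypothetical helper mirroring the check in heap_page_prune_opt:
 * catalog relations must use the more conservative horizon. */
static TransactionId
GlobalXminForRelation(bool is_catalog_relation)
{
	return is_catalog_relation ? RecentGlobalXmin : RecentGlobalDataXmin;
}
```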
Amit, thanks for comments!
> 1.
> +#define InitNonVacuumableSnapshot(snapshotdata) \
> + do { \
> + (snapshotdata).satisfies = HeapTupleSatisfiesNonVacuumable; \
> + (snapshotdata).xmin = RecentGlobalDataXmin; \
> + } while(0)
> +
>
> Can you explain and add comments why you think RecentGlobalDa
> Maybe we need another type of snapshot that would accept any
> non-vacuumable tuple. I really don't want SnapshotAny semantics here,
> but a tuple that was live more recently than the xmin horizon seems
> like it's acceptable enough. HeapTupleSatisfiesVacuum already
> implements the right beha
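A minimal sketch of what "non-vacuumable" would mean under that proposal, leaning on the HeapTupleSatisfiesVacuum verdicts the quote mentions (simplified stand-in types, not the real PostgreSQL headers):

```c
#include <stdbool.h>

/* Simplified stand-ins for HeapTupleSatisfiesVacuum's verdicts. */
typedef enum
{
	HEAPTUPLE_DEAD,					/* vacuumable: dead to everyone */
	HEAPTUPLE_LIVE,
	HEAPTUPLE_RECENTLY_DEAD,		/* dead, but after the xmin horizon */
	HEAPTUPLE_INSERT_IN_PROGRESS,
	HEAPTUPLE_DELETE_IN_PROGRESS
} HTSV_Result;

/* A tuple is "non-vacuumable" iff vacuum would not remove it: anything
 * but HEAPTUPLE_DEAD passes.  That is weaker than MVCC visibility
 * (recently-dead tuples still qualify) but far stricter than
 * SnapshotAny, matching the semantics asked for above. */
static bool
satisfies_non_vacuumable(HTSV_Result vacuum_verdict)
{
	return vacuum_verdict != HEAPTUPLE_DEAD;
}
```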
> If that is the case, then how would using SnapshotAny solve this
> problem. We get the value from index first and then check its
> visibility in heap, so if time is spent in _bt_checkkeys, why would
> using a different kind of Snapshot solve the problem?
1st scanning on the index with Snapshot
> What I'm thinking of is the regular indexscan that's done internally
> by get_actual_variable_range, not whatever ends up getting chosen as
> the plan for the user query. I had supposed that that would kill
> dead index entries as it went, but maybe that's not happening for
> some reason.
Rea
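A toy model of the mechanism referred to above: btree index scans set the LP_DEAD hint on entries whose heap tuples turn out to be dead to everyone, so later scans skip them without a heap fetch (simplified types, not PostgreSQL code).

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy index leaf entry: a key plus the LP_DEAD hint bit an index scan
 * may set once the heap says the tuple is dead to everyone. */
typedef struct
{
	int		key;
	bool	heap_tuple_dead;	/* what a heap visibility check reports */
	bool	lp_dead;			/* hint set by a previous scan */
} IndexEntry;

/* Scan all entries, skipping hinted-dead ones.  Entries found dead in
 * the heap get LP_DEAD set, so the next scan never fetches them again.
 * Returns the number of heap fetches performed. */
static int
scan_and_kill(IndexEntry *entries, size_t n)
{
	int		heap_fetches = 0;

	for (size_t i = 0; i < n; i++)
	{
		if (entries[i].lp_dead)
			continue;			/* skip without touching the heap */
		heap_fetches++;
		if (entries[i].heap_tuple_dead)
			entries[i].lp_dead = true;	/* kill the entry for next time */
	}
	return heap_fetches;
}
```

If this killing is not happening for some reason, every scan keeps paying the heap-fetch cost for the dead entries, which is the slowdown under discussion.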
Buffers: shared hit=138
Planning time: 6.139 ms
Execution time: 0.482 ms
(16 rows)
Time: 7.722 ms

Initially, the function used the active snapshot from GetActiveSnapshot(). But in
fccebe421d0c410e6378fb281419442c84759213 this behavior was "weakened" to
SnapshotDirty (I suppose for a similar reason). W
rtments ON INSERT WHEN (new.id_user > 1000) EXECUTE PROCEDURE departments_event_handler(); -- just like a trigger
Regards,
Dmitriy Sarafannikov
f stat -ddd -a sleep 10 or something during both runs? I
>suspect that the context switch ratios will be quite different.
Perf shows that in the 9.5 case context switches occur about two times less often.
Perf output is attached.
Regards,
Dmitriy Sarafannikov
perf.stat
Description: Binary data
>>The results above are not really fair, pgbouncer.ini was a bit different on
>>Ubuntu host (application_name_add_host was disabled). Here are the right
>>results with exactly the same configuration:
>>
>> OS      PostgreSQL version   TPS     Avg. latency
>> RHEL 6  9.4                  44898   1.425 ms
>> RHEL 6  9.5                  26199   2.443
signature:
void InitializeSessionUserId(const char *rolename)
and it is impossible to pass a role Oid to this function.
Therefore, the patch is relevant only to the master and 9.5 branches.
Regards,
Dmitriy Sarafannikov
Hi all,
I have found an incorrect error message in the InitializeSessionUserId function
if you try to connect to a database by role Oid (for example via
BackgroundWorkerInitializeConnectionByOid).
If the role has no permission to log in, you will see an error message like this:
FATAL: role "(null)" is not permitted to log in
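The bogus "(null)" comes from formatting a NULL role name with %s. A sketch of one possible fix pattern: fall back to reporting the numeric Oid when no name is available. (format_login_error is a hypothetical helper for illustration, not the actual patch.)

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

typedef uint32_t Oid;

/* When the connection was made by Oid, rolename may be NULL, and
 * passing it to %s yields role "(null)" (or undefined behavior).
 * Branch on NULL and report the Oid instead. */
static int
format_login_error(char *buf, size_t buflen, const char *rolename, Oid roleid)
{
	if (rolename != NULL)
		return snprintf(buf, buflen,
						"role \"%s\" is not permitted to log in", rolename);
	return snprintf(buf, buflen,
					"role with OID %u is not permitted to log in", roleid);
}
```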