On 2024-Mar-01, James Pang wrote:
> one question:
> we need to increase all SLRU buffers together, MULTIXACT, XACT,
> Subtrans, COMMIT TS, for example get all of them doubled based on
> the existing size?
No need.
> or only increase Subtrans, or Subtrans and multixact?
Just increase
Hi Chema,
On 2024-Feb-26, Chema wrote:
> Dear pgsqlers,
>
> I'm trying to optimize simple queries on two tables (tenders & items) with
> a couple million records. Besides the resulting records, the app also
> displays the count of total results. Doing count() takes as much time as
> the other
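A common way around the slow exact count is to use the planner's own estimate instead. A minimal sketch, assuming the table is named `tenders` (the name is taken from the question; `reltuples` is only as fresh as the last VACUUM/ANALYZE, so treat it as approximate):

```sql
-- Cheap approximate row count from planner statistics instead of COUNT(*):
SELECT reltuples::bigint AS estimated_rows
FROM pg_class
WHERE relname = 'tenders';
```

For filtered result sets, the row estimate shown by EXPLAIN for the same query can serve the same purpose without executing it.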
On 2024-Feb-02, James Pang (chaolpan) wrote:
> Possible to increase Subtrans SLRU buffer size?
Not at present -- you need to recompile after changing
NUM_SUBTRANS_BUFFERS in src/include/access/subtrans.h,
NUM_MULTIXACTOFFSET_BUFFERS and NUM_MULTIXACTMEMBER_BUFFERS in
On 2023-Sep-14, bruno da silva wrote:
> This problem is more acute when the FK Table stores a small number of rows
> like types or codes.
Right, because the likelihood of multiple transactions creating
new references to the same row is higher.
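To illustrate the mechanism: each foreign-key check takes a `FOR KEY SHARE` lock on the referenced row, and as soon as two transactions hold that lock on the same row concurrently, a multixact is created to record both lockers. A sketch with hypothetical tables `orders` and `codes`:

```sql
-- Session 1:
BEGIN;
INSERT INTO orders (code_id) VALUES (1);  -- FK check locks codes row 1 FOR KEY SHARE

-- Session 2, before session 1 commits:
BEGIN;
INSERT INTO orders (code_id) VALUES (1);  -- second locker => a multixact is created
```

With only a handful of popular parent rows, almost every concurrent insert collides on the same rows, which is why small code/type tables make the problem worse.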
> I think in those cases an enum type should be
On 2023-Sep-13, bruno da silva wrote:
> I just had an outage on postgres 14 due to multixact members limit exceeded.
Sadly, that's not as uncommon as we would like.
> So the documentation says "There is a separate storage area which holds the
> list of members in each multixact, which also uses
I came here to talk about partitionwise join, but then noticed you have
already thought of that:
On 2023-Jun-18, nicolas paris wrote:
> Note that both plans come from the same hash-partitioned table with
> 100 parts, with a unique index on list_id + hash_key. For strategy
> 2.1, I turned
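For reference, partitionwise join must be enabled explicitly and only applies when both sides are partitioned the same way on the join key. A minimal sketch (table names `t1`/`t2` are hypothetical stand-ins for two identically hash-partitioned tables):

```sql
-- Off by default because planning cost grows with the number of partitions:
SET enable_partitionwise_join = on;

EXPLAIN (COSTS OFF)
SELECT *
FROM t1
JOIN t2 USING (list_id, hash_key);
-- With matching partitioning, the plan joins partition pairs individually
-- instead of appending all partitions first and joining the two big inputs.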
On 2022-Nov-29, Mladen Gogala wrote:
> Hmmm, I think I will run pgbench with and without JIT on and see the
> difference.
I doubt you'll notice anything, because the pgbench queries will be far
below the JIT cost, so nothing will get JIT compiled at all. Or are you
planning on using a custom
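The relevant knobs can be inspected directly; JIT only engages when a query's total estimated cost exceeds `jit_above_cost`, which simple pgbench statements never reach. A quick check (the `SET` is for experimentation only, not an assumption about the poster's setup):

```sql
SHOW jit;                 -- whether JIT is enabled at all
SHOW jit_above_cost;      -- default 100000; cheap queries stay below this
SET jit_above_cost = 0;   -- force JIT on every query, for testing only
```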
On 2022-Nov-28, Mladen Gogala wrote:
> As for JIT, I've recently asked that question myself. I was told that
> PostgreSQL with LLVM enabled performs approximately 25% better than without
> it.
Hmm, actually, normally you're better off turning JIT off, because it's
very common to diagnose cases
On 2022-Nov-28, Mladen Gogala wrote:
> You'll probably be glad to learn that we have hints now.
What hints are you talking about? As I understand, we still don't have
Oracle-style query hints.
--
Álvaro Herrera, Breisgau, Deutschland — https://www.EnterpriseDB.com/
On 2022-Apr-14, Benjamin Tingle wrote:
> It doesn't help if I partition temp_data by textfield beforehand either
> (using the same scheme as the target table). It still opts to concatenate
> all of temp_data, hash it, then perform a sequential scan against the
> target partitions.
Does it still
On 2021-Jul-22, l...@laurent-hasson.com wrote:
> Yes, agreed Peter... The "lower priority" issue was mentioned, but not
> in terms of the applicability of the fix overall. Personally, I would
> prefer going the size_t route vs int/long/int64 in C/C++. Of course,
> as a user, I'd love a patch on
On 2021-Jul-08, Justin Pryzby wrote:
> If I'm not wrong, this is the same thing you asked 2 weeks ago.
>
> If so, why not continue the conversation on the same thread, and why not
> reference the old thread ?
>
> I went to the effort to find the old conversation.
>
Hello
On 2021-Apr-13, Andres Freund wrote:
> > The concerns that had come to my mind were more along the lines
> > of things like pg_dump requiring a larger footprint in the shared
> > lock table. We could alleviate that by increasing the default
> > value of max_locks_per_transaction, perhaps.
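For context, the shared lock table has roughly `max_locks_per_transaction * (max_connections + max_prepared_transactions)` slots, and raising the setting requires a restart. A sketch of how one would inspect and change it (the value 128 is illustrative, not a recommendation):

```sql
SHOW max_locks_per_transaction;                    -- default is 64
ALTER SYSTEM SET max_locks_per_transaction = 128;  -- takes effect after restart
```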
On 2021-Apr-13, Andres Freund wrote:
> > Sounds like I should file this as a requested improvement?
>
> The ability to lock a toast table? Yea, it might be worth doing that. I
> seem to recall this being discussed not too long ago...
Yep, commit 59ab4ac32460 reverted by eeda7f633809. There
On 2020-Sep-14, David Rowley wrote:
> On Tue, 8 Sep 2020 at 06:05, Raj wrote:
> >
> > > This would not exactly look like a bug, because the message says "to
> > > be locked", so at least it's not allowing two workers to lock the same
> > > tuple. But it seems that the skip-locked mode should not
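The skip-locked mode under discussion is the usual building block for work queues: each worker claims only rows no one else has locked. A minimal sketch with a hypothetical `jobs` table:

```sql
BEGIN;
SELECT id
FROM jobs
WHERE status = 'pending'
ORDER BY id
LIMIT 10
FOR UPDATE SKIP LOCKED;  -- rows locked by other workers are silently skipped
-- process the claimed rows, mark them done, then COMMIT
```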
On 2020-Jun-20, Tom Lane wrote:
> I wrote:
> > ... oh, now I see: apparently, your filter condition is such that *no*
> > rows of the objectcustomfieldvalues table get past the filter:
> >
> > -> Index Scan using objectcustomfieldvalues3 on
> > objectcustomfieldvalues
On 2020-Feb-16, Lars Aksel Opsahl wrote:
> On a server with 32 cores and 250 GB memory, with CentOS 7 and kernel
> 4.4.214-1.el7.elrepo.x86_64, I try to run 30 parallel threads using
> dblink (https://github.com/larsop/postgres_execute_parallel). I have
> tried to disconnect and reconnect in
On 2020-Jan-13, Shira Bezalel wrote:
> Hi All,
>
> I'm testing an upgrade from Postgres 9.6.16 to 12.1 and seeing a
> significant performance gain in one specific query. This is really great,
> but I'm just looking to understand why.
pg12 reads half the number of buffers. I bet it's because of
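Comparisons like this are easiest to make concrete with buffer counts in the plan itself. A sketch (the query is a placeholder for the one being tested, not the poster's actual statement):

```sql
-- Run on both versions and compare the Buffers: lines under each plan node:
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM some_table WHERE some_column = 42;
-- "Buffers: shared hit=... read=..." shows pages served from shared_buffers
-- versus pages read in from the OS, per node.
```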
On 2019-Nov-14, Craig James wrote:
> I'm completely baffled by this problem: I'm doing a delete that joins three
> modest-sized tables, and it gets completely stuck: 100% CPU use forever.
Do you have any FKs there? If any delete is cascading, and you don't
have an index on the other side, it'd
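PostgreSQL creates no index on the referencing side of a foreign key automatically, so each cascaded delete sequential-scans the child table. A sketch of the usual fix, with hypothetical names `child_table`/`parent_id`:

```sql
-- CONCURRENTLY avoids blocking writes while the index builds:
CREATE INDEX CONCURRENTLY ON child_table (parent_id);
```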
On 2019-Jun-26, Justin Pryzby wrote:
> > Also, Should pg_buffercache perhaps be run at the beginning and end of the
> > week, to see if there is a significant difference?
>
> Yes; buffercache can be pretty volatile, so I'd save it numerous times each at
> beginning and end of week.
Be careful
On 2019-Jun-26, Hugh Ranalli wrote:
> From my research in preparing for the upgrade, I understood transparent
> huge pages were a good thing, and should be enabled. Is this not correct?
It is not.
> Wouldn't the plan be the same at both
> the start of the week (when the problematic table is
On 2019-Apr-15, Tom Lane wrote:
> It's barely conceivable that in your particular query, there's something
> acting to break that which doesn't manifest typically; but I think it's
> much more likely that you simply haven't found the culprit allocation.
> It's quite feasible that many many
On 2019-Apr-15, Gunther wrote:
> #0 AllocSetAlloc (context=0x1168230, size=385) at aset.c:715
> #1 0x0084e6cd in palloc (size=385) at mcxt.c:938
> #2 0x0061019c in ExecHashJoinGetSavedTuple
> (file=file@entry=0x8bbc528, hashvalue=hashvalue@entry=0x7fff2e4ca76c,
>
On 2019-Feb-14, Mariel Cherkassky wrote:
> I meant the analyze; if analyze runs very often on the original table,
> aren't there disadvantages to it?
It'll waste time and resources pointlessly. Don't do it -- it won't do
any good.
On 2019-Feb-13, Mariel Cherkassky wrote:
> Hey,
> I have a very big toasted table in my db (9.2.5).
Six years of bugfixes missing there ... you need to think about an
update.
> Autovacuum doesn't gather
> statistics on it because the analyze scale/threshold settings are at their
> defaults, and as a result
On 2019-Jan-30, Mariel Cherkassky wrote:
> It seems that the version of the db is 9.6.10 :
>
> psql -U db -d db -c "select version()";
> Password for user db:
> version
> ---
> PostgreSQL
On 2019-Jan-29, Shreeyansh Dba wrote:
> The virtualxid lock is special. It's an exclusive lock on the transaction's
> own virtual transaction ID that every transaction always holds. No other
> transaction can ever acquire it while the transaction is running.
> The purpose of this is to allow one
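The lock described above is directly visible in `pg_locks`; blocking on another session's virtualxid is the standard way to wait for that session to finish (`CREATE INDEX CONCURRENTLY` uses this, for instance). A quick look:

```sql
-- Each backend holds one granted virtualxid lock on its own virtual XID;
-- ungranted rows are sessions waiting for some other transaction to end.
SELECT pid, virtualxid, granted
FROM pg_locks
WHERE locktype = 'virtualxid';
```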
On 2019-Jan-17, Mariel Cherkassky wrote:
> But you said that the threshold that is chosen for the toasted table is
> identical to the original table's threshold, right?
You can configure them identical, or different. Up to you.
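Concretely, the main table and its toast table take separate storage options; the `toast.` prefix targets the toast relation. A sketch with a hypothetical table name and illustrative values:

```sql
-- Main relation and its toast table tuned independently:
ALTER TABLE mytable SET (autovacuum_vacuum_scale_factor = 0.05);
ALTER TABLE mytable SET (toast.autovacuum_vacuum_scale_factor = 0.05);
```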
> Is that a normal behavior that the original table has 1000 records
On 2019-Jan-17, Mariel Cherkassky wrote:
> I did it for the original table. But I see in the logs that the autovacuum
> on the toasted table isn't synced with the autovacuum of the original
> table. Therefore I thought it was worth setting it also for the toasted
> table. Can you explain why in
On 2018-Dec-17, Tom Lane wrote:
> Queries like yours are kinda sorta counterexamples to that, but pretty
> much all the ones I've seen seem like crude hacks (and this one is not
> an exception). Writing a bunch of code to support them feels like
> solving the wrong problem. Admittedly, it's not
On 2018-Dec-06, Amit Langote wrote:
Hi
> [ Parallel SeqScan on precio_126 to precio_998 ]
>
> > -> Parallel Seq Scan on precio_999 p_874
> > (cost=0.00..27.50 rows=1 width=16)
> >Filter: ((fecha >= '1990-05-06
> > 00:00:00'::timestamp
On 2018-Sep-25, Justin Pryzby wrote:
> I asked a few weeks ago [0] but didn't get a response on -docs, so I'm
> resending here for wider review/discussion.
I support the idea of adding a link to "Performance Optimization".
That's not a protected page, so you should be able to do it.
> [0]
>
On 2018-Aug-16, Fred Habash wrote:
> One of our database API's is run concurrently by near 40 sessions. We see
> all of them waiting back and forth on this wait state.
What version are you running?
> Why is it called Subtrans Control Lock?
It controls access to the pg_subtrans structure, which
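Subtransactions, whose parent-XID mappings live in pg_subtrans, are most commonly created by savepoints (and by PL/pgSQL `BEGIN ... EXCEPTION` blocks). A minimal sketch with a hypothetical table `t`:

```sql
BEGIN;
SAVEPOINT s1;              -- opens a subtransaction with its own XID
INSERT INTO t VALUES (1);
RELEASE SAVEPOINT s1;
COMMIT;
-- Heavy savepoint use across many sessions drives pg_subtrans lookups,
-- which is what shows up as contention on this lock.
```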
Please show the output of these queries in the relevant databases:
select name, setting, source, sourcefile, sourceline from pg_settings where
name like '%vacuum%';
select oid::regclass, reloptions from pg_class where reloptions is not null;
Rodrigo Rosenfeld Rosas wrote:
> explain analyze delete from field_values where transaction_id=226;
> QUERY PLAN
> ---
> Delete on