Jim Nasby writes:
> On 7/5/16 7:14 AM, Robert Klemme wrote:
>> I was wondering whether there are any plans to include the plan of the
>> FK check in EXPLAIN output. Or is there a different way to get to see
>> all the plans of triggers as well as of the main SQL?
> Unfortunately, this isn't the ...
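For what it's worth, EXPLAIN ANALYZE on the DML statement does at least report the time spent in each FK trigger, even though it doesn't show the trigger's plan. A quick sketch against a made-up parent/child pair (all names here are hypothetical):

  CREATE TABLE parent (id int PRIMARY KEY);
  CREATE TABLE child  (id int PRIMARY KEY,
                       parent_id int REFERENCES parent (id));
  INSERT INTO parent VALUES (1);

  EXPLAIN (ANALYZE, BUFFERS) INSERT INTO child VALUES (1, 1);
  -- the output ends with something like:
  --   Trigger for constraint child_parent_id_fkey: time=0.102 calls=1
  -- so the cost of the FK check is visible, but its plan is not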
Right, buffers are not rows, but it's still 8 times fewer...
The table I'm reading from is already aggregated on a daily basis (so
there is no way to aggregate it any further).
Would extending the page size to, say, 128K improve performance?
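Regarding the 128K idea: the page size isn't a runtime setting; it is the compile-time BLCKSZ, and as far as I know it tops out at 32kB, so 128K would mean patching the source and rebuilding. You can check what the server was built with from SQL:

  SHOW block_size;                          -- 8192 on a stock build
  SELECT current_setting('block_size');     -- same value, as a function call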
On 07/19/16 07:41, Jim Nasby wrote:
On 7/19/16 9:28 AM, trafdev wrote:
The difference is that you're fetching/grouping 8 times fewer rows than I am: ...
On 6/29/16 1:01 PM, devel.brai...@xoxy.net wrote:
During my test run, there was one offending query invocation, a simple
SELECT * FROM job WHERE field = $1
Of course the actual query, as generated by Hibernate, spelled out the full
list of fields, but that is what it boils down to - no joins etc. ...
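If it helps to look at this outside Hibernate, the same parameterized shape can be reproduced with a prepared statement, so the plan you inspect matches what the driver actually sends (the parameter type and value below are just placeholders):

  PREPARE job_by_field (text) AS
      SELECT * FROM job WHERE field = $1;

  EXPLAIN (ANALYZE, BUFFERS) EXECUTE job_by_field('some value');

Note that after repeated executions the planner may switch the prepared statement to a generic plan, which can behave differently from the plan you get when the literal value is written inline.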
On 7/5/16 7:14 AM, Robert Klemme wrote:
I was wondering whether there are any plans to include the plan of the
FK check in EXPLAIN output. Or is there a different way to get to see
all the plans of triggers as well as of the main SQL?
When researching I found this thread from 2011 and the output ...
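One way I know of to get at the plans themselves (not just the trigger timings) is auto_explain with nested-statement logging, since the FK checks are run as ordinary queries inside the RI triggers. Roughly, as a superuser (the table names here are hypothetical):

  LOAD 'auto_explain';
  SET auto_explain.log_min_duration = 0;
  SET auto_explain.log_analyze = on;
  SET auto_explain.log_nested_statements = on;

  DELETE FROM parent WHERE id = 1;
  -- the server log should now also contain the plan of the internal
  -- "SELECT 1 FROM child WHERE parent_id = $1 FOR KEY SHARE ..." query
  -- issued by the FK trigger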
On 7/19/16 9:28 AM, trafdev wrote:
The difference is that you're fetching/grouping 8 times fewer rows than I am:
Huh? The explain output certainly doesn't show that.
Why not?
My output:
Buffers: shared hit=1486949
Torsten's output:
Buffers: shared hit=155711
This is the number of rows fetched for further processing (when all the data is in memory).
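Just to be precise about the two numbers: "Buffers: shared hit" counts 8kB pages touched, not rows, while the row counts are the rows= figures on the plan nodes, so the two aren't directly comparable. To translate pages into rows for a given table, something along these lines gives a rough rows-per-page figure (the table name is a placeholder):

  SELECT reltuples::bigint AS approx_rows,
         relpages          AS pages,
         reltuples / NULLIF(relpages, 0) AS rows_per_page
  FROM pg_class
  WHERE relname = 'your_table';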
On 7/5/16 4:03 PM, Kouber Saparev wrote:
Could such memory consumption be related to a GET DIAGNOSTICS plpgsql
block? The delete itself is inside a stored procedure, and I then return
the number of deleted rows from the function:
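(The function body itself didn't make it into this excerpt; the pattern being described, with made-up function, table, and column names, is roughly:)

  CREATE OR REPLACE FUNCTION purge_old_rows() RETURNS bigint AS $$
  DECLARE
      deleted bigint;
  BEGIN
      DELETE FROM big_table
      WHERE created_at < now() - interval '90 days';

      GET DIAGNOSTICS deleted = ROW_COUNT;
      RETURN deleted;
  END;
  $$ LANGUAGE plpgsql;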
Looking at the code, no, GET DIAG won't change anything; ...
On 7/9/16 12:26 PM, trafdev wrote:
So does that mean Postgres is not capable of scanning/aggregating fewer than
10 million rows and delivering the result in less than 2 seconds?
That's going to depend entirely on your hardware, and how big the rows
are. At some point you're simply going to run out of memory bandwidth.
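If you want a rough upper bound for your own hardware, independent of the real schema, a synthetic run of about the same size shows how fast a single backend can scan and aggregate once the data is cached (the table and column names are made up):

  CREATE TABLE bench AS
      SELECT g AS id, (g % 1000) AS grp, random() AS val
      FROM generate_series(1, 10000000) AS g;

  EXPLAIN (ANALYZE, BUFFERS)
  SELECT grp, count(*), sum(val)
  FROM bench
  GROUP BY grp;

Real rows are usually wider than this, so expect the real table to be proportionally slower; if even the synthetic case can't finish in 2 seconds, the limit is the hardware (or a single core), not the query.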