On 11.12.2011 02:27, Daniel Cristian Cruz wrote:
> I would start with 5 seconds.
>
> Reading the manual again, I saw that enabling analyze makes it analyze all
> queries, even the ones that weren't slower than 5 seconds. And I understood
> that there is no way to disable it for slower queries, since there is no
On Sat, Dec 10, 2011 at 8:32 PM, Craig Ringer wrote:
> On 12/11/2011 09:27 AM, Jon Nelson wrote:
>>
>> The first method involved writing a C program to parse a file, parse
>> the lines and output newly-formatted lines in a format that
>> postgresql's COPY function can use.
>> End-to-end, this take
Start a transaction before the first insert and commit it after the last one,
and it will be much better, but I believe the COPY code path is optimized
to perform better than any set of queries can, even in a single transaction.
Sent from my iPhone
On Dec 10, 2011, at 5:27 PM, Jon Nelson wrote:
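The batching advice above can be sketched as follows (the table and columns are hypothetical, just to show the shape of a single-transaction load):

```sql
-- One transaction around many INSERTs avoids a per-row commit
-- (and its fsync) for every statement.
BEGIN;
INSERT INTO measurements (ts, sensor, value) VALUES ('2011-12-10 17:27', 'a', 1.0);
INSERT INTO measurements (ts, sensor, value) VALUES ('2011-12-10 17:28', 'b', 2.5);
-- ... thousands more rows ...
COMMIT;
```

Even so, as noted above, COPY should still beat this: it goes through a bulk-load path rather than planning and executing one statement per row.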
On 12/11/2011 09:27 AM, Jon Nelson wrote:
The first method involved writing a C program to parse a file, parse
the lines and output newly-formatted lines in a format that
postgresql's COPY function can use.
End-to-end, this takes 15 seconds for about 250MB (read 250MB, parse,
output new data to n
I was experimenting with a few different methods of taking a line of
text, parsing it into a set of fields, and then getting that info
into a table.
The first method involved writing a C program to parse a file, parse
the lines and output newly-formatted lines in a format that
postgresql's COPY f
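The reformatting step described above can be sketched in Python rather than C (the function name is hypothetical, and the escaping covers only the common cases of COPY's default text format: tab-separated fields, `\N` for NULL):

```python
# Sketch: render one parsed row in PostgreSQL COPY text format,
# suitable for feeding to COPY ... FROM STDIN.

def to_copy_line(fields):
    """Join fields with tabs, using \\N as the NULL marker and
    escaping backslash, tab, and newline per COPY's text-format rules."""
    out = []
    for f in fields:
        if f is None:
            out.append(r"\N")  # COPY's NULL marker
        else:
            out.append(str(f)
                       .replace("\\", "\\\\")   # escape backslashes first
                       .replace("\t", "\\t")
                       .replace("\n", "\\n"))
    return "\t".join(out) + "\n"

print(to_copy_line(["abc", None, 42]), end="")
```

The same idea applies in any language; the point is that the output needs no quoting logic beyond these escapes, which is part of why COPY's text format is cheap to generate.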
2011/12/10 Tomas Vondra
> On 10.12.2011 23:40, Daniel Cristian Cruz wrote:
> > At work we have a 24-core server, with a load average around 2.5.
>
> A single query is processed by a single CPU, so even if the system is
> not busy a single query may hit a CPU bottleneck. The real issue is the
> ins
On 10.12.2011 23:40, Daniel Cristian Cruz wrote:
> At work we have a 24-core server, with a load average around 2.5.
A single query is processed by a single CPU, so even if the system is
not busy a single query may hit a CPU bottleneck. The real issue is the
instrumentation overhead - timing etc. O
At work we have a 24-core server, with a load average around 2.5.
I don't know yet whether a system that uses some idle CPU to minimize the load
from a bad query identified early would be better or worse.
Indeed, I don't know if my boss would let me test this in production either,
but it could be good to know ho
There's an auto_explain contrib module that does exactly what you're asking
for. Anyway, explain analyze is quite expensive - think twice before
enabling that on a production server where you already have performance
issues.
Tomas
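A minimal postgresql.conf sketch for auto_explain (the threshold value is illustrative; note that with log_analyze on, every query is instrumented even though only the slow ones get logged - which is exactly the overhead warned about above):

```
# postgresql.conf - load auto_explain and log plans of slow queries
shared_preload_libraries = 'auto_explain'
auto_explain.log_min_duration = '5s'   # log plans for queries over 5 seconds
auto_explain.log_analyze = on          # include EXPLAIN ANALYZE-style timings
```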
On 10.12.2011 17:52, Daniel Cristian Cruz wrote:
> Hi all,
>
> I'm try
Daniel Cristian Cruz wrote:
> Hi all,
>
> I'm trying to figure out some common slow queries running on the server, by
> analyzing the slow queries log.
>
> I found debug_print_parse, debug_print_rewritten, debug_print_plan, which are
> too verbose and log all queries.
>
> I was thinking
Hi all,
I'm trying to figure out some common slow queries running on the server, by
analyzing the slow queries log.
I found debug_print_parse, debug_print_rewritten, debug_print_plan, which
are too verbose and log all queries.
I was thinking of something like a simple explain analyze just
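For the slow-query log itself (as opposed to plans), the stock server setting is log_min_duration_statement; a sketch, with an illustrative threshold:

```
# postgresql.conf - write any statement slower than 5s to the server log,
# with its duration; -1 disables, 0 logs everything
log_min_duration_statement = '5s'
```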