On Thu, Sep 2, 2010 at 1:17 PM, Tom Lane wrote:
> A J writes:
>> Do you think changing log_destination to syslog may make a difference?
>
> It's worth trying alternatives anyway. It is odd that you are seeing
> such a slowdown when using the collector --- many people push very high
> log volumes through the collector without problems.
PostgreSQL 8.4
CentOS 5.5
I have WCE=0 (write cache disabled) on the drive that holds the data
directory and all its subdirectories (including pg_log).
Maybe I should try mounting pg_log on a different drive and enable the
write cache on that one.
From: Tom Lane
To: A J
A J writes:
> Do you think changing log_destination to syslog may make a difference?
It's worth trying alternatives anyway. It is odd that you are seeing
such a slowdown when using the collector --- many people push very high
log volumes through the collector without problems. What PG version is
this?
On Thu, Sep 2, 2010 at 1:02 PM, A J wrote:
> Do you think changing log_destination to syslog may make a difference?
> (Kevin mentioned even this timing is not totally immune from network
> effects, but if it is possible to measure, it should be very close to the
> query time.)
At least try putting it on a different drive.
A J wrote:
> With this approach, I will be assuming that the query time does
> not change due to client location, which though reasonable, is
> still an assumption.
As I explained in an earlier post, the query can block on the server
due to network bandwidth or latency. So the "wall time" for the query
is not purely server processing time.
Sorry, forgot to mention that:
logging_collector = on # Enable capturing of stderr and csvlog
In fact I was thinking of going the other way: switch it off and somehow
display the stderr (or syslog) directly on the console (rather than
writing to a file) to see if that helps.
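A minimal sketch of that experiment, assuming the server is started manually from a terminal (the data-directory path is illustrative):

```
# postgresql.conf -- leave stderr uncaptured
logging_collector = off
log_destination = 'stderr'
```

Started in the foreground with `postgres -D /path/to/data`, the server's stderr then goes straight to the terminal instead of to a file under pg_log.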
A J writes:
> The performance as seen from the clients dropped substantially after
> turning on the extra logging. The numbers were real, but the performance
> dropped significantly.
> All the log related settings in postgresql.conf are below:
Hmm, what about logging_collector? (Or it might
With this approach, I will be assuming that the query time does not change
due to client location, which though reasonable, is still an assumption. If
I could have tested without making this assumption (or any), it would have
been better. But it looks like there is no choice, as getting to the query
time any other way does not seem possible.
Scott Marlowe wrote:
> On Thu, Sep 2, 2010 at 11:34 AM, A J wrote:
>> The problem I am trying to solve is:
>> measure accurately both the database server time + network time
>> when several clients connect to the database from different
>> geographic locations. All the clients hit the database simultaneously.
The performance as seen from the clients dropped substantially after turning
on the extra logging. The numbers were real, but the performance dropped
significantly.
All the log related settings in postgresql.conf are below:
log_destination = 'stderr' # Valid values are combinations of stderr,
 # csvlog, syslog and eventlog, depending on platform
A J writes:
> On second try, by trying to log to log_directory/log_filename by
> setting log_min_duration_statement = 0, something weird seems to be
> happening. The durations in the file are very, very high and cannot be
> true.
You're not being very clear here. Did the logged durations not
correspond to the elapsed times you actually observed?
On Thu, Sep 2, 2010 at 11:34 AM, A J wrote:
> Kevin,
> The problem I am trying to solve is:
> measure accurately both the database server time + network time when several
> clients connect to the database from different geographic locations.
> All the clients hit the database simultaneously, each with a long script of
> insert/update/select queries.
Kevin,
The problem I am trying to solve is:
measure accurately both the database server time + network time when several
clients connect to the database from different geographic locations.
All the clients hit the database simultaneously, each with a long script of
insert/update/select queries.
I d
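One way to get a server-side timing that excludes the client's network path is statement duration logging; a sketch of the relevant postgresql.conf settings:

```
# postgresql.conf -- log the server-measured duration of every statement
log_min_duration_statement = 0   # 0 = log all statements with their duration
log_line_prefix = '%m [%p] '     # timestamp and backend PID, to tell clients apart
```

The durations logged this way are measured on the server, so comparing them with the wall time seen at each client gives a rough split between query time and network time (with the caveat raised earlier in the thread that a query can still block on a slow client).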
WBL,
Unfortunately, with the way it was originally designed, there is no column
that our existing queries and views use as a predicate to take advantage
of constraint exclusion. That said, it might imply that these tables
are not good candidates for partitioning in the first place. The only
main purpos
A J writes:
> The problem I am now facing in using log_min_duration_statement is that
> all the clients have to write to a single log file in the pg_log
> directory. So they have to wait for the other writes to happen before
> completing their write. This seems to be the reason why the measured
> durations are so high.
A J wrote:
> I am conducting the test with several concurrent clients.
I didn't see a question in your latest email. Do you now understand
why the network affects timings? Do you have any other questions?
Is there some particular problem you're trying to solve, for which
you don't yet have an answer?
I am conducting the test with several concurrent clients.
The problem I am now facing in using log_min_duration_statement is that all
the clients have to write to a single log file in the pg_log directory. So
they have to wait for the other writes to happen before completing their
write. This seems to be the reason why the measured durations are so high.
Silvio Brandani writes:
> Tom Lane wrote:
>> Is it really the *exact* same query both ways, or are you doing
>> something like parameterizing the query in the application?
> It is exactly the same; the query text is from the postgres log.
> I just tried it in a test environment and we get the same error.
Tom Lane wrote:
> Silvio Brandani writes:
>> Still problems of Out of Memory:
>> the query is the following; if I run it from psql it works fine,
>> but from the application I get an error:
> Is it really the *exact* same query both ways, or are you doing
> something like parameterizing the query in the application?
Silvio Brandani writes:
>> Still problems of Out of Memory:
>> the query is the following; if I run it from psql it works fine,
>> but from the application I get an error:
Is it really the *exact* same query both ways, or are you doing
something like parameterizing the query in the application?
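One way to test Tom's question in psql is to compare the plan of the literal query with a parameterized equivalent via PREPARE. The table name `oe_ctnr` and the WHERE clause below are placeholders; only the column names come from the query in the thread:

```sql
-- Plan for the literal query, as logged
EXPLAIN SELECT max(oec.ctnr_nr) FROM oe_ctnr oec WHERE oec.file_ref = 'ABC1234';

-- Plan for a parameterized form, as the application might send it
PREPARE q(text) AS
  SELECT max(oec.ctnr_nr) FROM oe_ctnr oec WHERE oec.file_ref = $1;
EXPLAIN EXECUTE q('ABC1234');
```

A generic plan chosen for the prepared form can differ enough from the literal query's plan to change memory consumption.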
Silvio Brandani wrote:
> Still problems of Out of Memory:
> the query is the following; if I run it from psql it works fine,
> but from the application I get an error:
> SELECT MAX(oec.ctnr_nr)::char(13) AS Ctnr_nr, MAX(oec.file_ref)::char(7)
>   AS File_Ref, MAX(oec.move_type)::char(5)
>   AS Ctnr_ty
Still problems of Out of Memory:
the query is the following; if I run it from psql it works fine,
but from the application I get an error:
SELECT MAX(oec.ctnr_nr)::char(13) AS Ctnr_nr, MAX(oec.file_ref)::char(7)
  AS File_Ref, MAX(oec.move_type)::char(5)
  AS Ctnr_type, MAX(oec.ct_feet)::char(3)
> Note: I just learned from this change that, with the 3x2 additional
> tables from the 2 new partitioned tables, the query of 8 joined tables
> becomes one with 14 joined tables.
It seems then that you did not partition your tables in such a way
that your query only needs to scan one of them?
That is where partitioning's benefit comes from.
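A shape that does let constraint exclusion work might look like this (illustrative table and column names, not the poster's schema):

```sql
-- Parent plus range partitions carrying CHECK constraints on the partition key
CREATE TABLE measurement (logdate date NOT NULL, value int);
CREATE TABLE measurement_2010m08
  (CHECK (logdate >= '2010-08-01' AND logdate < '2010-09-01'))
  INHERITS (measurement);
CREATE TABLE measurement_2010m09
  (CHECK (logdate >= '2010-09-01' AND logdate < '2010-10-01'))
  INHERITS (measurement);

SET constraint_exclusion = partition;
-- Because the WHERE clause constrains the partition key, the planner can
-- skip every child whose CHECK constraint contradicts it:
SELECT * FROM measurement WHERE logdate = '2010-09-02';
```

If the existing queries never filter on such a key, every query must scan (and join) every child table, which is exactly the 8-to-14-table blowup described in the quoted note.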