Tatsuo Ishii <[EMAIL PROTECTED]> writes:
> I'm a little bit worried about cranking up PG_SYSLOG_LIMIT in the back
> branches. Cranking it up will definitely change the syslog message text
> style and might confuse syslog-handling scripts (I have no evidence that
> such scripts exist though). So I suggest
> Jeff <[EMAIL PROTECTED]> writes:
> > On Jul 8, 2008, at 8:24 AM, Achilleas Mantzios wrote:
> >> File sizes of about 3M result in actual logging output of ~ 10Mb.
> >> In this case, the INSERT *needs* 20 minutes to return. This is
>> because the logging through syslog seems to severely slow the system.
On Tue, 8 Jul 2008, Tom Lane wrote:
Jeff <[EMAIL PROTECTED]> writes:
On Jul 8, 2008, at 8:24 AM, Achilleas Mantzios wrote:
File sizes of about 3M result in actual logging output of ~ 10Mb.
In this case, the INSERT *needs* 20 minutes to return. This is
because the logging through syslog seems to severely slow the system.
Jeff <[EMAIL PROTECTED]> writes:
> On Jul 8, 2008, at 8:24 AM, Achilleas Mantzios wrote:
>> File sizes of about 3M result in actual logging output of ~ 10Mb.
>> In this case, the INSERT *needs* 20 minutes to return. This is
>> because the logging through syslog seems to severely slow the system.
On Jul 8, 2008, at 8:24 AM, Achilleas Mantzios wrote:
File sizes of about 3M result in actual logging output of ~ 10Mb.
In this case, the INSERT *needs* 20 minutes to return. This is
because the logging through syslog seems to severely slow the system.
If instead, I use stderr, even with logging_collector=on, the same statement
needs 15 seconds to return.
In response to "Radhika S" <[EMAIL PROTECTED]>:
>
> when I issued the vacuum cmd, I received this message:
>
> echo "VACUUM --full -d ARSys" | psql -d dbname
>
> WARNING: relation "public.tradetbl" contains more than
> "max_fsm_pages" pages with useful free space
> HINT: Consider compacting this relation or increasing the
> configuration parameter "max_fsm_pages".
Scott Carey wrote:
Well, what does a revolution like this require of Postgres? That is the
question.
[...]
#1 Per-Tablespace optimizer tuning parameters.
... automatically measured?
Cheers,
Jeremy
Hi,
when I issued the vacuum cmd, I received this message:
echo "VACUUM --full -d ARSys" | psql -d dbname
WARNING: relation "public.tradetbl" contains more than
"max_fsm_pages" pages with useful free space
HINT: Consider compacting this relation or increasing the
configuration parameter "max_fsm_pages".
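The WARNING and HINT above arrive as ordinary server notices on the client
connection, so they can be captured in code as well as read in psql. Below is
a minimal libpq sketch along those lines; the connection string (dbname=ARSys)
and the table name (public.tradetbl) are only placeholders taken from the
quoted report, and VACUUM FULL is used as the SQL equivalent of what the shell
one-liner appears to intend.

    /*
     * Minimal libpq sketch: run VACUUM and print any notices the server
     * sends, such as the "max_fsm_pages" warning quoted above.  The
     * connection string and table name are placeholders, not a known-good
     * setup.  Build with something like:
     *   cc vac_notice.c -I$(pg_config --includedir) -L$(pg_config --libdir) -lpq
     */
    #include <stdio.h>
    #include <libpq-fe.h>

    /* Called for every NOTICE/WARNING delivered on this connection. */
    static void
    print_notice(void *arg, const char *message)
    {
        (void) arg;
        fputs(message, stderr);
    }

    int
    main(void)
    {
        PGconn   *conn = PQconnectdb("dbname=ARSys");   /* placeholder DSN */
        PGresult *res;

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }

        PQsetNoticeProcessor(conn, print_notice, NULL);

        /* VACUUM FULL <table> is the SQL form; --full and -d are vacuumdb flags. */
        res = PQexec(conn, "VACUUM FULL VERBOSE public.tradetbl");
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "VACUUM failed: %s", PQerrorMessage(conn));

        PQclear(res);
        PQfinish(conn);
        return 0;
    }

The hint itself points at the free space map: on these releases max_fsm_pages
can only be raised by editing postgresql.conf and restarting the server, while
VACUUM FULL (or recreating the table) shrinks the relation instead.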
Achilleas Mantzios <[EMAIL PROTECTED]> writes:
> On Tuesday 08 July 2008 17:35:16 Tom Lane wrote:
>> Hmm. There's a function in elog.c that breaks log messages into chunks
>> for syslog. I don't think anyone's ever looked hard at its performance
>> --- maybe there's an O(N^2) b
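To put numbers on the chunking concern above: a 10 MB statement split into
chunks of roughly 128 bytes means on the order of 80,000 syslog() calls, and
if each pass re-measures the remaining text the loop does quadratic total
work. The sketch below is not PostgreSQL's elog.c code; CHUNK_LIMIT is only a
stand-in for PG_SYSLOG_LIMIT, and it simply shows the linear, measure-once
version of such a loop.

    /*
     * Minimal sketch, NOT PostgreSQL's elog.c: split one large message into
     * fixed-size chunks and hand each chunk to syslog().  CHUNK_LIMIT is a
     * stand-in for PG_SYSLOG_LIMIT.  Measuring the length once and walking
     * an offset keeps the loop O(N); calling strlen() on the shrinking
     * remainder every iteration would make it O(N^2) for a multi-megabyte
     * message.
     */
    #include <stdlib.h>
    #include <string.h>
    #include <syslog.h>

    #define CHUNK_LIMIT 128              /* stand-in for PG_SYSLOG_LIMIT */

    static void
    log_in_chunks(const char *msg)
    {
        size_t len = strlen(msg);        /* measured once, not per chunk */
        size_t off = 0;

        while (off < len)
        {
            char   chunk[CHUNK_LIMIT + 1];
            size_t n = len - off;

            if (n > CHUNK_LIMIT)
                n = CHUNK_LIMIT;
            memcpy(chunk, msg + off, n);
            chunk[n] = '\0';
            syslog(LOG_INFO, "%s", chunk);   /* one syslog() call per chunk */
            off += n;
        }
    }

    int
    main(void)
    {
        /* ~10 MB payload, comparable to the logged bytea statements above. */
        size_t  big = 10 * 1024 * 1024;
        char   *msg = malloc(big + 1);

        if (msg == NULL)
            return 1;
        memset(msg, 'x', big);
        msg[big] = '\0';

        /* Emits roughly 80,000 lines; point it at a throwaway facility. */
        openlog("chunk_demo", LOG_PID, LOG_LOCAL0);
        log_in_chunks(msg);
        closelog();
        free(msg);
        return 0;
    }

The interesting figure is only the number of syslog() calls per logged
statement, which is what a larger chunk limit reduces.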
Well, what does a revolution like this require of Postgres? That is the
question.
I have looked at the I/O drive, and it could increase our DB throughput
significantly over a RAID array.
Ideally, I would put a few key tables and the WAL, etc. I'd also want all
the sort or hash overflow from wo
On Tuesday 08 July 2008 17:35:16 Tom Lane wrote:
> Achilleas Mantzios <[EMAIL PROTECTED]> writes:
> > In this case, the INSERT *needs* 20 minutes to return. This is because the
> > logging through syslog seems to severely slow the system.
> > If instead, I use stderr, even with logging_collector=on, the same statement
> > needs 15 seconds to return.
Achilleas Mantzios <[EMAIL PROTECTED]> writes:
> In this case, the INSERT *needs* 20 minutes to return. This is because the
> logging through syslog seems to severely slow the system.
> If instead, I use stderr, even with logging_collector=on, the same statement
> needs 15 seconds to return.
Hmm
Hi, I have experienced really bad performance on both FreeBSD and Linux with
syslog when logging statements involving bytea of size ~ 10 Mb.
Consider this scenario:
[EMAIL PROTECTED] \d marinerpapers_atts
Table "public.marinerpapers_atts"
Column|
Hi,
Jonah H. Harris wrote:
I'm not sure how those cards work, but my guess is that the CPU will
go 100% busy (with a near-zero I/O wait) on any sizable workload. In
this case, the current pgbench configuration being used is quite small
and probably won't resemble this.
I'm not sure how they w