Alan Stange <[EMAIL PROTECTED]> writes:
> I do a truss on the process and see the output below looping over and
> over. Note the constant opening and closing of the file 42064889.3.
> Why the open/close cycle as opposed to caching the file descriptor
> somewhere?
This is probably a "blind writ
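To put a rough number on what the question above is getting at, a sketch like the one below (in C; the file name, 8K page size, and loop count are arbitrary choices, not anything taken from the truss output) times the same series of writes done with an open/write/close per page versus a descriptor that is opened once and kept around:

/* Sketch only: the same series of page writes done with an
 * open/write/close per page versus a single cached descriptor.
 * The file name, page size, and loop count are arbitrary. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

#define PAGES   1000
#define PAGE_SZ 8192

static double elapsed(struct timeval a, struct timeval b)
{
    return (b.tv_sec - a.tv_sec) + (b.tv_usec - a.tv_usec) / 1e6;
}

int main(void)
{
    char page[PAGE_SZ];
    struct timeval t0, t1;
    int i, fd;

    memset(page, 0, sizeof(page));

    /* open/write/close on every page, the pattern shown by truss */
    gettimeofday(&t0, NULL);
    for (i = 0; i < PAGES; i++)
    {
        fd = open("testfile", O_WRONLY | O_CREAT | O_APPEND, 0600);
        write(fd, page, PAGE_SZ);
        close(fd);
    }
    gettimeofday(&t1, NULL);
    printf("open/write/close per page: %.3f s\n", elapsed(t0, t1));

    /* the same writes through a descriptor opened once and kept */
    fd = open("testfile", O_WRONLY | O_CREAT | O_APPEND, 0600);
    gettimeofday(&t0, NULL);
    for (i = 0; i < PAGES; i++)
        write(fd, page, PAGE_SZ);
    gettimeofday(&t1, NULL);
    close(fd);
    printf("cached descriptor:         %.3f s\n", elapsed(t0, t1));
    return 0;
}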
Hello all,
I have a question/observation about vacuum performance. I'm running
Solaris 9, pg 7.4.1.
The process in question is doing a vacuum:
bash-2.05$ /usr/ucb/ps auxww | grep 4885
fiasco    4885 19.1  3.7 605896 592920 ?  O 19:29:44 91:38 postgres:
fiasco fiasco [local] VACUUM
I do a
Tom Lane wrote:
> > It really just shows whether the fsync after the close has similar
> > timing to the one before the close. That was the best way I could
> > think of to test it.
>
> Sure, but where's the "separate process" part? What this seems to test
> is whether a single process can sync its
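The test under discussion could look roughly like the sketch below: write the same amount of data twice, once syncing through the descriptor that did the writing, once syncing through a descriptor opened only after the writer closed the file without syncing. This is only an illustration of the idea, not the actual test program; the file name and sizes are made up.

/* Sketch: compare the cost of an fsync issued through the descriptor
 * that wrote the data against an fsync issued through a descriptor
 * opened only after the writer closed the file without syncing. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

static double timed_fsync(int fd)
{
    struct timeval t0, t1;

    gettimeofday(&t0, NULL);
    fsync(fd);
    gettimeofday(&t1, NULL);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
}

static int dirty_file(const char *path)
{
    char buf[8192];
    int i, fd;

    memset(buf, 'x', sizeof(buf));
    fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0600);
    for (i = 0; i < 1024; i++)          /* about 8 MB of dirty data */
        write(fd, buf, sizeof(buf));
    return fd;                          /* not yet synced */
}

int main(void)
{
    int fd;

    /* Run 1: fsync through the descriptor that did the writing. */
    fd = dirty_file("syncfile");
    printf("fsync on the writing fd:  %.4f s\n", timed_fsync(fd));
    close(fd);

    /* Run 2: write, close without syncing, then fsync through a freshly
     * opened descriptor.  (A second process opening the file would be
     * the stricter version of the same test.)  Similar timing here means
     * the later fsync is flushing the earlier writes. */
    close(dirty_file("syncfile"));
    fd = open("syncfile", O_WRONLY, 0600);
    printf("fsync after close/reopen: %.4f s\n", timed_fsync(fd));
    close(fd);
    return 0;
}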
I also tried this (printf1 on IRC suggested "folding" the joins):
SELECT
w8.wid,
w8.variant,
w8.num_variants,
sum_text(w8.unicode) as unicodes,
sum_text(w8.pinyin) as pinyins
FROM
(words as w8 JOIN
(words as w7 JOIN
(words as w6 JOIN
(words as w5 JOIN
(words as w4 JOIN
(words as w3 JOIN
(words as
Tom Lane wrote:
> Bruce Momjian <[EMAIL PROTECTED]> writes:
> > Well, I wrote the program to allow testing. I don't see a complex test
> > as being that much better than a simple one. We don't need accurate
> > numbers. We just need to know if fsync or O_SYNC is faster.
>
> Faster than what? The
Bruce Momjian <[EMAIL PROTECTED]> writes:
> Well, I wrote the program to allow testing. I don't see a complex test
> as being that much better than a simple one. We don't need accurate
> numbers. We just need to know if fsync or O_SYNC is faster.
Faster than what? The thing everyone is trying to
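Stripped down, the simple comparison being argued about looks like this: push the same stream of 8K writes to disk either with an explicit fsync() after each write or through a descriptor opened with O_SYNC, and time both. This is only a sketch of that comparison, not the program in src/tools/fsync; the block size, write count, and file name are arbitrary.

/* Sketch of the comparison: the same stream of 8K writes flushed either
 * with an explicit fsync() after each write or through a descriptor
 * opened with O_SYNC.  Constants and file name are arbitrary. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

#define WRITES 200
#define BLOCK  8192

static double now(void)
{
    struct timeval t;

    gettimeofday(&t, NULL);
    return t.tv_sec + t.tv_usec / 1e6;
}

int main(void)
{
    char buf[BLOCK];
    double t0;
    int i, fd;

    memset(buf, 0, sizeof(buf));

    /* write() followed by an explicit fsync() each time */
    fd = open("tfile", O_WRONLY | O_CREAT | O_TRUNC, 0600);
    t0 = now();
    for (i = 0; i < WRITES; i++)
    {
        write(fd, buf, BLOCK);
        fsync(fd);
    }
    printf("write + fsync: %.3f s\n", now() - t0);
    close(fd);

    /* the same writes through a descriptor opened with O_SYNC */
    fd = open("tfile", O_WRONLY | O_CREAT | O_TRUNC | O_SYNC, 0600);
    t0 = now();
    for (i = 0; i < WRITES; i++)
        write(fd, buf, BLOCK);
    printf("write O_SYNC:  %.3f s\n", now() - t0);
    close(fd);
    return 0;
}

What this measures is mostly whether folding the flush into the write (O_SYNC) beats a separate write plus fsync round trip; both force data and metadata out, which is why the thread also looks at O_DSYNC and fdatasync.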
Josh Berkus <[EMAIL PROTECTED]> writes:
> 1) This is an OSS project. Why not just recruit a bunch of people on
> PERFORMANCE and GENERAL to test the 4 different synch methods using real
> databases? No test like reality, I say
I agree --- that is likely to yield *far* more useful result
Josh Berkus wrote:
> Tom, Bruce,
>
> > My previous point about checking different fsync spacings corresponds to
> > different assumptions about average transaction size. I think a useful
> > tool for determining wal_sync_method has got to be able to reflect that
> > range of possibilities.
>
> Q
Tom, Bruce,
> My previous point about checking different fsync spacings corresponds to
> different assumptions about average transaction size. I think a useful
> tool for determining wal_sync_method has got to be able to reflect that
> range of possibilities.
Questions:
1) This is an OSS project
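Tom's point about fsync spacings, quoted above, can be folded into such a test by varying how many writes happen between fsync() calls, as a crude stand-in for different average transaction sizes. A sketch (the spacing values, block size, and file name are arbitrary illustration choices):

/* Sketch: vary how many 8K writes happen between fsync() calls, as a
 * crude stand-in for different average transaction sizes. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

static double now(void)
{
    struct timeval t;

    gettimeofday(&t, NULL);
    return t.tv_sec + t.tv_usec / 1e6;
}

int main(void)
{
    int spacings[] = {1, 2, 4, 8, 16};
    char buf[8192];
    int s, i;

    memset(buf, 0, sizeof(buf));

    for (s = 0; s < 5; s++)
    {
        int per_sync = spacings[s];
        int fd = open("tfile", O_WRONLY | O_CREAT | O_TRUNC, 0600);
        double t0 = now();

        for (i = 0; i < 256; i++)
        {
            write(fd, buf, sizeof(buf));
            if ((i + 1) % per_sync == 0)
                fsync(fd);
        }
        printf("%2d writes per fsync: %.3f s\n", per_sync, now() - t0);
        close(fd);
    }
    return 0;
}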
Kurt Roeckx wrote:
> Here are my results on Linux 2.6.1 using cvs version 1.7.
>
> Those times with > 20 seconds, you really hear the disk go crazy.
>
> And I have the feeling something must be wrong. Those results
> are reproducible.
>
Wow, your O_SYNC times are great. Where can I buy some?
Kurt Roeckx wrote:
> On Thu, Mar 18, 2004 at 02:22:10PM -0500, Bruce Momjian wrote:
> >
> > OK, what better test do you suggest? Right now, there has been no
> > testing of these.
>
> I suggest you start by at least preallocating a 16 MB file
> and doing the tests on that, to at least be somewh
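Kurt's preallocation suggestion is easy to bolt onto any of these timings: fill the file out to its full size (16 MB matches a WAL segment) and sync it once before starting the clock, so the timed writes overwrite existing blocks instead of extending the file. A sketch, with an arbitrary file name and a hypothetical preallocate() helper:

/* Sketch: preallocate a 16 MB file (one WAL segment's worth) and sync it
 * once before any timing starts, so the timed writes overwrite existing
 * blocks instead of extending the file.  Helper and file names are
 * made up for the illustration. */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

#define SEGSIZE (16 * 1024 * 1024)
#define BLOCK   8192

static int preallocate(const char *path)
{
    char buf[BLOCK];
    int i, fd;

    memset(buf, 0, sizeof(buf));
    fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd < 0)
        return -1;
    for (i = 0; i < SEGSIZE / BLOCK; i++)
        write(fd, buf, BLOCK);
    fsync(fd);                  /* get the file extension itself on disk */
    lseek(fd, 0, SEEK_SET);     /* the timed test then rewrites from the start */
    return fd;
}

int main(void)
{
    int fd = preallocate("tfile");

    /* ... a timing loop would go here, staying within the first 16 MB ... */
    if (fd >= 0)
        close(fd);
    return 0;
}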
Bruce Momjian <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> It's what tested out as the best bet. I think we were using pgbench
>> as the test platform, which as you know I have doubts about, but at
>> least it is testing one actual write/sync pattern Postgres can generate.
> I assume pgbench
Tom Lane wrote:
> Bruce Momjian <[EMAIL PROTECTED]> writes:
> > Tom Lane wrote:
> >> As I recall, that was based on testing on some different platforms.
>
> > But why prefer O_DSYNC over fdatasync if you don't prefer O_SYNC over
> > fsync?
>
> It's what tested out as the best bet. I think we wer
Bruce Momjian <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> As I recall, that was based on testing on some different platforms.
> But why prefer O_DSYNC over fdatasync if you don't prefer O_SYNC over
> fsync?
It's what tested out as the best bet. I think we were using pgbench
as the test plat
Kurt Roeckx wrote:
> On Thu, Mar 18, 2004 at 01:50:32PM -0500, Bruce Momjian wrote:
> > > I'm not sure I believe these numbers at all... my experience is that
> > > getting trustworthy disk I/O numbers is *not* easy.
> >
> > These numbers were reproducible on all the platforms I tested.
>
> It's
Tom Lane wrote:
> Bruce Momjian <[EMAIL PROTECTED]> writes:
> > I have been poking around with our fsync default options to see if I can
> > improve them. One issue is that we never default to O_SYNC, but default
> > to O_DSYNC if it exists, which seems strange.
>
> As I recall, that was based on
In porting an application from v7.2 to v7.3, I noticed that a join on a varchar
column and a text column was ignoring indices that were helpful in v7.2. When I
explicitly cast the text to a varchar (or set ENABLE_SEQSCAN TO false), the index
is scanned and it works as efficiently as in v7.2.
On Thu, 18 Mar 2004, Saleh, Amgad H (Amgad) wrote:
>
> Stephan:
>
> In each table we're storing the max. string length.
>
> For example:
>
> for TEST_1, we're storing 'abcdefghjk' and 'lmnop'
> for TEST_2, we're storing 'abcdefghjk' and 'lmnopqrstu'
> for TEST_3, we're storing 'abcdefghjk' and
I have been poking around with our fsync default options to see if I can
improve them. One issue is that we never default to O_SYNC, but default
to O_DSYNC if it exists, which seems strange.
What I did was to beef up my test program and get it into CVS for folks
to run. What I found was that di
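The "default to O_DSYNC if it exists" behavior is a compile-time cascade; in spirit it looks like the sketch below. This is a simplification, not the actual xlog.c logic, and HAVE_FDATASYNC here stands in for whatever configure-time test the real build uses.

/* Sketch of the compile-time cascade being described: prefer O_DSYNC
 * when the platform defines it, otherwise fdatasync, otherwise plain
 * fsync, and never O_SYNC by default.  Macro names are illustrative. */
#include <fcntl.h>
#include <stdio.h>

#if defined(O_DSYNC)
#define DEFAULT_SYNC_METHOD "open_datasync"   /* open the WAL file with O_DSYNC */
#elif defined(HAVE_FDATASYNC)
#define DEFAULT_SYNC_METHOD "fdatasync"       /* write(), then fdatasync() */
#else
#define DEFAULT_SYNC_METHOD "fsync"           /* write(), then fsync() */
#endif

int main(void)
{
    printf("default wal_sync_method: %s\n", DEFAULT_SYNC_METHOD);
    return 0;
}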
I have updated my program with your suggested changes and put in
src/tools/fsync. Please see how you like it.
---
Zeugswetter Andreas SB SD wrote:
>
> > Running the attached test program shows on BSD/OS 4.3:
> >
> > w
On Thu, 18 Mar 2004, Saleh, Amgad H (Amgad) wrote:
> Stephan / Stephen
>
> We know about the overhead and do understand the math you've provided.
> This is not the question we're asking. We've just provided the table definitions as
> examples.
>
> The real question was, even with the 52 & 56 (assu