Hi Satar list,
To add to the issues and concerns that Jonathan has
already so eloquently outlined, let me add a key
factor that needs to be considered.
I/O tuning fundamentals require us to ensure that the
filesystem block size = db_block_size. The default
filesystem block size in Veritas is 1K, and it is more
than likely that almost every Veritas filesystem out
there was in fact created with a 1K block size.
This is true even
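The fan-out when the two sizes don't match can be sketched with a little arithmetic (a hypothetical illustration only; the function name is made up, and real I/O behavior also depends on alignment and caching):

```python
# Hypothetical illustration: how many filesystem blocks one database
# block read spans when the filesystem block size is smaller than
# db_block_size.

def fs_blocks_per_db_block(db_block_size: int, fs_block_size: int) -> int:
    """Number of filesystem blocks covering one database block."""
    return -(-db_block_size // fs_block_size)  # ceiling division

# Veritas default 1K filesystem block under an 8K Oracle block:
print(fs_blocks_per_db_block(8192, 1024))  # -> 8
# Matched sizes:
print(fs_blocks_per_db_block(8192, 8192))  # -> 1
```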
Hi George,
I wanted to make sure that the information I was
giving you was as accurate and current as possible.
This prompted me to have one of my guys check it
out in the Veritas documentation before I sent out
the note. The documentation for version 3.3 clearly
states that the default
I disagree with the 2k for OLTP as well, for reasons similar to those
Jonathan mentioned, as well as a few of the obvious ones. Most OLTP systems
are not PERFECTLY tuned to do only index scans either. And indexes are much
more efficient at the larger block sizes as well
Do not criticize someone until you have walked a
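The "indexes like bigger blocks" point can be made concrete with a back-of-the-envelope B-tree height estimate (all numbers here, entry size and key count, are assumptions for illustration, not figures from the thread):

```python
import math

def btree_height(n_keys: int, block_size: int, entry_size: int = 16) -> int:
    """Approximate B-tree height: log of key count, base entries-per-block."""
    fanout = block_size // entry_size  # index entries per block (assumed)
    return max(1, math.ceil(math.log(n_keys, fanout)))

# Same 10M-row index, two block sizes:
print(btree_height(10_000_000, 2048))  # -> 4 levels
print(btree_height(10_000_000, 8192))  # -> 3 levels
```

One fewer level means one fewer block visit per index lookup, which is where the efficiency claim comes from.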
Christopher,
maybe it is not black and white, though.
A bigger block size means more latch contention on cache buffers chains, for
example. That's why one may play around with MINIMIZE RECORDS_PER_BLOCK or an
artificially high PCTFREE. Both waste disk space and _memory_. Many of
larger block
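The waste argument can be put in rough numbers. A sketch, assuming a 100-byte block header and 100-byte rows (both made-up figures): the PCTFREE needed to cap a block at a given row count grows with the block size, and the reserved bytes sit in the buffer cache too.

```python
def pctfree_to_cap_rows(block_size: int, row_size: int, max_rows: int,
                        header: int = 100) -> int:
    """PCTFREE (percent) needed so at most max_rows rows fit in a block."""
    free_bytes = block_size - header - max_rows * row_size
    return max(0, round(100 * free_bytes / block_size))

# Cap blocks at 10 rows of ~100 bytes each:
print(pctfree_to_cap_rows(2048, 100, 10))  # -> 46 (~948 bytes idle/block)
print(pctfree_to_cap_rows(8192, 100, 10))  # -> 87 (~7092 bytes idle/block)
```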
But you can adjust the cache buffers chains latches to combat that.
I understand it isn't black and white, but the statement "2k for OLTP" is a
black-and-white comment.
There is no simple answer for anything, in my opinion.
But there are many reasons why, on file systems, people claim to just use 8K
for
Hi Christopher,
Like I said, Oracle experts can argue this issue until
they are blue in the face, kinda like the
Certification debate. Without any information on the
data or application, I suggest a 2k block size.
Everyone is entitled to their own opinion, and I hope
the author of the original
Is this the thread where Thomas says something about:
I've done the same (recommended 2k blocks). It is true. I am serious. 2k is
appropriate in some cases. Some reasoning:
NOTE -- 'in some cases'
NOTE -- 'some reasoning'
and my follow-up post contains:
Me too -
Some more
Hi,
According to the Veritas manual, they claim that a db_block_size of 2K, when
using Quick I/O files on an OLTP system, gives the best performance in their
benchmarks.
I know full well that benchmarks != real world. Has anyone had any experience
using Quick I/O? If so, what are your
If your application allows it, and if the Application
will not change in the future, then use a 2k block
size for OLTP database.
If you are not sure about the application's needs, then
stick with 4k to be safe.
Regards,
Satar
--- Brian Haas [EMAIL PROTECTED] wrote:
Hi,
According to the Veritas
That's a fairly sweeping statement to make without
any justification - after all, at 2K:
The block header is a much larger percentage
of the block size - so you lose space.
The probability of wasting space from the PCTFREE
setting increases - so you lose space.
The
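The header-percentage point is easy to quantify. A sketch, assuming a fixed ~100-byte per-block header (the real Oracle overhead varies with block contents):

```python
HEADER = 100  # assumed fixed per-block overhead, in bytes

def header_pct(block_size: int) -> float:
    """Block header as a percentage of the whole block."""
    return round(100 * HEADER / block_size, 1)

for bs in (2048, 4096, 8192):
    print(bs, f"{header_pct(bs)}%")  # 2048 -> 4.9%, 8192 -> 1.2%
```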
Hi Jonathan,
Sweeping statement...maybe. It all depends on your
application. That's why I put an emphasis on his/her
application (meaning both physical structure and data)
requirements. As a GENERAL rule of thumb, I
(personally) suggest (if possible) 2k for OLTP
databases. It's like if you ask