Hello,
The next question then is whether anything in your postgres configuration
is preventing it from getting useful performance from the OS. What settings
have you changed in postgresql.conf?
The only options not commented out are the following (it's not even
tweaked for buffer sizes and such,
Please help me to set up optimal values in the postgresql.conf file for
PostgreSQL 8.2.3
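Since the question keeps coming up in this thread, here is a rough starting point of the kind often suggested for a dedicated 8.2-era box with a few GB of RAM. These are illustrative values only, not tuned advice; the right numbers depend entirely on the machine and workload:

```
# postgresql.conf -- illustrative starting values, not tuned advice
shared_buffers = 512MB          # often ~10-25% of RAM on a dedicated server
work_mem = 16MB                 # per sort/hash, per backend -- raise with care
maintenance_work_mem = 256MB    # helps VACUUM and index builds
effective_cache_size = 2GB      # planner hint: estimated OS cache, not an allocation
checkpoint_segments = 16        # more WAL segments between checkpoints
```

Note that 8.2 accepts unit suffixes like MB in postgresql.conf, so the values above can be written this way directly.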
Can you please advise us which of your DBs and which
configuration we should take for a project that has the following
parameters:
1. DB size: 25-30 GB
2. number of tables: 100 - 150
3.
[EMAIL PROTECTED] wrote:
8*73GB SCSI 15k ...(dell poweredge 2900)...
24*320GB SATA II 7.2k ...(generic vendor)...
RAID 10. Our main requirement is highest TPS (focused on a lot of INSERTs).
Question: will 8*15k SCSI drives outperform 24*7K SATA II drives?
It's worth asking the vendors in
Hi Josh,
Josh Berkus wrote:
Arnau,
Is there anything similar in PostgreSQL? The idea behind this is: how
can I have tables in PostgreSQL that I query very often, something like
every few seconds, and get results very fast without overloading the
postmaster.
If you're only
This may be a silly question, but: won't 3 times as many disk drives
mean a 3 times higher probability of disk failure? Also, rumor has it
that SATA drives are more prone to fail than SCSI drives. More
failures will, in turn, result in higher administration costs.
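The intuition here can be sanity-checked with a quick calculation. The numbers below are purely illustrative assumptions (a 3% annual per-drive failure rate and independent failures -- not vendor data), run as a one-off query:

```sql
-- P(at least one of N drives fails in a year) = 1 - (1 - p)^N,
-- assuming independent failures at annual per-drive rate p = 0.03
SELECT 1 - power(1 - 0.03, 8)  AS p_any_failure_8_drives,   -- ~0.22
       1 - power(1 - 0.03, 24) AS p_any_failure_24_drives;  -- ~0.52
```

So with small per-drive rates the risk scales close to linearly, though not exactly 3x (about 2.4x here), because probabilities compound rather than add. RAID redundancy changes what a single failure costs, but not how often you replace drives.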
Thanks
Peter
On 4/4/07, [EMAIL
On 2007-04-04 Arnau wrote:
Josh Berkus wrote:
Is there anything similar in PostgreSQL? The idea behind this is: how
can I have tables in PostgreSQL that I query very often, something
like every few seconds, and get results very fast
without overloading the postmaster.
If
Hi Ansgar,
On 2007-04-04 Arnau wrote:
Josh Berkus wrote:
Is there anything similar in PostgreSQL? The idea behind this is: how
can I have tables in PostgreSQL that I query very often, something
like every few seconds, and get results very fast
without overloading the
On 4-Apr-07, at 2:01 AM, Peter Schuller wrote:
Hello,
The next question then is whether anything in your postgres
configuration is preventing it from getting useful performance from
the OS. What settings have you changed in postgresql.conf?
The only options not commented out are the
On 2007-04-04, Peter Schuller [EMAIL PROTECTED] wrote:
The next question then is whether anything in your postgres configuration
is preventing it from getting useful performance from the OS. What settings
have you changed in postgresql.conf?
The only options not commented out are the following
Hi Thor,
Thor-Michael Støre wrote:
On 2007-04-04 Arnau wrote:
Josh Berkus wrote:
Arnau,
Is there anything similar in PostgreSQL? The idea behind this
is: how can I have tables in PostgreSQL that I query very often,
something like every few seconds, and get
results very fast
* Peter Kovacs [EMAIL PROTECTED] [070404 14:40]:
This may be a silly question, but: won't 3 times as many disk drives
mean a 3 times higher probability of disk failure? Also, rumor has it
that SATA drives are more prone to fail than SCSI drives. More
failures will, in turn, result in more
Probably another helpful solution would be to implement:
ALTER TABLE LOGGING OFF/ON;
just to disable/enable WAL. First, it would help in all cases of
intensive data loading, where increasing WAL activity slows down
other sessions. Second, it would give you a way to implement
MEMORY-like tables in RAM
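There is no such LOGGING switch in 8.2, but temporary tables already skip WAL, which covers part of this use case for session-local scratch data. A sketch (the table and column names are made up for illustration):

```sql
-- Temp tables are not WAL-logged (and vanish at session end),
-- so bulk loads into them avoid the WAL overhead described above.
CREATE TEMPORARY TABLE scratch_events (
    id      integer,
    payload text
);

INSERT INTO scratch_events
SELECT g, 'row ' || g
FROM generate_series(1, 100000) AS g;
```

The limitation, of course, is that a temp table is visible only to its own session, so it does not help for shared, frequently-queried tables.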
Andreas Kostyrka wrote:
* Peter Kovacs [EMAIL PROTECTED] [070404 14:40]:
This may be a silly question, but: won't 3 times as many disk drives
mean a 3 times higher probability of disk failure? Also, rumor has it
that SATA drives are more prone to fail than SCSI drives. More
failures
But if an individual disk fails in a disk array, sooner rather than
later you would want to purchase a suitable new disk, walk/drive to
the location of the disk array, replace the broken disk in the array,
and activate the new disk. Is this correct?
Thanks
Peter
On 4/4/07, Alvaro Herrera [EMAIL
Peter Kovacs wrote:
But if an individual disk fails in a disk array, sooner rather than
later you would want to purchase a suitable new disk, walk/drive to
the location of the disk array, replace the broken disk in the array,
and activate the new disk. Is this correct?
Ideally you would have a
On 4-Apr-07, at 8:46 AM, Andreas Kostyrka wrote:
* Peter Kovacs [EMAIL PROTECTED] [070404 14:40]:
This may be a silly question, but: won't 3 times as many disk drives
mean a 3 times higher probability of disk failure? Also, rumor has it
that SATA drives are more prone to fail than SCSI
* Alvaro Herrera [EMAIL PROTECTED] [070404 15:42]:
Peter Kovacs wrote:
But if an individual disk fails in a disk array, sooner rather than
later you would want to purchase a suitable new disk, walk/drive to
the location of the disk array, replace the broken disk in the array,
and activate the
Hello All,
I've been searching the archives for something similar, without success..
We have an application supposed to sign documents and store them
somewhere. The file sizes may vary from KB to MB. Developers are
arguing about the reasons to store files directly on the operating
system file
On Apr 3, 2007, at 6:54 PM, Geoff Tolley wrote:
I don't think the density difference will be quite as high as you
seem to think: most 320GB SATA drives are going to be 3-4 platters,
the most that a 73GB SCSI is going to have is 2, and more likely 1,
which would make the SCSIs more like 50%
On 04.04.2007, at 08:03, Alexandre Vasconcelos wrote:
We have an application supposed to sign documents and store them
somewhere. The file sizes may vary from KB to MB. Developers are
arguing about the reasons to store files directly on the operating
system file system or in the database, as large
Good point. On another note, I am wondering why nobody's brought up the
command-queuing perf benefits (yet). Is this because SATA vs SCSI are at
par here? I'm finding conflicting information on this -- some calling
SATA's NCQ mostly crap, others stating the
SATAII has similar features.
Joshua D. Drake wrote:
Good point. On another note, I am wondering why nobody's brought up
the command-queuing perf benefits (yet). Is this because SATA vs SCSI
are at par here? I'm finding conflicting information on this -- some
calling SATA's NCQ mostly
SATAII has similar features.
SATAII achieves some of its performance through brute force, for
example a 16MB write cache on each drive.
Sure, but for any serious usage one either wants to disable that
cache (and rely on tagged command queuing, or whatever that is called in the SATAII
Why? Assuming we have a BBU, why would you turn
* Joshua D. Drake [EMAIL PROTECTED] [070404 17:40]:
Good point. On another note, I am wondering why nobody's brought up the
command-queuing perf benefits (yet). Is this because SATA vs SCSI are at
par here? I'm finding conflicting information on this --
SATAII has similar features.
difference. OTOH, the SCSI discs were way less reliable than the SATA
discs, that might have been bad luck.
Probably bad luck. I find that SCSI is very reliable, but I don't find
it any more reliable than SATA. That is assuming correct ventilation etc...
Sincerely,
Joshua D. Drake
On Wed, 4 Apr 2007, Peter Kovacs wrote:
But if an individual disk fails in a disk array, sooner rather than
later you would want to purchase a suitable new disk, walk/drive to
the location of the disk array, replace the broken disk in the array,
and activate the new disk. Is this correct?
correct,
Joshua D. Drake wrote:
SATAII achieves some of its performance through brute force, for
example a 16MB write cache on each drive.
Sure, but for any serious usage one either wants to disable that
cache (and rely on tagged command queuing, or whatever that is called in the SATAII
Why? Assuming we have a
I had a 'scratch' database for testing, which I deleted, and then the disk
went out. No problem, no precious data. But now I can't drop the tablespace,
or the user who had it as their default tablespace.
I thought about removing the tablespace from pg_tablespace, but it seems wrong
to be
Craig A. James [EMAIL PROTECTED] writes:
I had a 'scratch' database for testing, which I deleted, and then the disk
went out. No problem, no precious data. But now I can't drop the tablespace,
or the user who had it as their default tablespace.
I thought about removing the tablespace from
On Wed, Apr 04, 2007 at 08:50:44AM -0700, Joshua D. Drake wrote:
difference. OTOH, the SCSI discs were way less reliable than the SATA
discs, that might have been bad luck.
Probably bad luck. I find that SCSI is very reliable, but I don't find
it any more reliable than SATA. That is assuming
[EMAIL PROTECTED] wrote:
Good point. On another note, I am wondering why nobody's brought up
the command-queuing perf benefits (yet). Is this because SATA vs SCSI
are at par here? I'm finding conflicting information on this -- some
calling SATA's NCQ mostly crap, others stating the
Hello,
I'd always do benchmarks with a realistic value of shared_buffers (i.e.
much higher than that).
Another thought that comes to mind is that the bitmap index scan does
depend on the size of work_mem.
Try increasing your shared_buffers to a reasonable working value (say
10%-15% of
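For reference, the running values are easy to check from psql, and work_mem can also be raised for just one session rather than globally -- a sketch, not tuned advice (the 32MB figure is an arbitrary example):

```sql
SHOW shared_buffers;    -- what the server is actually running with
SHOW work_mem;

-- Session-local override for a single heavy sort or bitmap scan,
-- without editing postgresql.conf or restarting:
SET work_mem = '32MB';
```

This makes it cheap to test whether work_mem is really the bottleneck for the bitmap index scan before committing to a global change.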
[EMAIL PROTECTED] wrote:
for that matter, with 20ish 320GB drives, how large would a partition be
that only used the outer physical track of each drive? (almost certainly
multiple logical tracks) if you took the time to set this up you could
eliminate seeking entirely (at the cost of not using
[EMAIL PROTECTED] wrote:
Perhaps a basic question - but why does the interface matter? :-)
The interface itself matters not so much these days as the drives that
happen to use it. Most manufacturers make both SATA and SCSI lines, are
keen to keep the market segmented, and don't want to
Sure, but for any serious usage one either wants to disable that
cache (and rely on tagged command queuing, or whatever that is called in the SATAII
world) or rely on the OS/RAID controller implementing some sort of
FUA/write barrier feature (which Linux, for example, only does in pretty
recent kernels)
Does
On 4-4-2007 0:13 [EMAIL PROTECTED] wrote:
We need to upgrade a postgres server. I'm not tied to these specific
alternatives, but I'm curious to get feedback on their general qualities.
SCSI
dual Xeon 5120, 8GB ECC
8*73GB SCSI 15k drives (PERC 5/i)
(dell poweredge 2900)
This is a SAS
On Apr 4, 2007, at 12:09 PM, Arjen van der Meijden wrote:
If you don't care about such things, it may actually be possible to
build a similar set-up as your SATA-system with 12 or 16 15k rpm
SAS disks or 10k WD Raptor disks. For the sata-solution you can
also consider a 24-port Areca
On 4-4-2007 21:17 [EMAIL PROTECTED] wrote:
FWIW, I've had horrible experiences with Areca drivers on Linux. I've
found them to be unreliable when used with dual AMD64 processors and
4+ GB of RAM. I've tried kernels 2.6.16 up to 2.6.19... intermittent
yet inevitable ext3 corruptions. 3ware cards, on
[EMAIL PROTECTED] wrote:
On Wed, Apr 04, 2007 at 08:50:44AM -0700, Joshua D. Drake wrote:
difference. OTOH, the SCSI discs were way less reliable than the SATA
discs, that might have been bad luck.
Probably bad luck. I find that SCSI is very reliable, but I don't find
it any more
Bruce Momjian wrote:
[EMAIL PROTECTED] wrote:
On Wed, Apr 04, 2007 at 08:50:44AM -0700, Joshua D. Drake wrote:
difference. OTOH, the SCSI discs were way less reliable than the SATA
discs, that might have been bad luck.
Probably bad luck. I find that SCSI is very reliable, but I don't find
it
Joshua D. Drake wrote:
Bruce Momjian wrote:
[EMAIL PROTECTED] wrote:
On Wed, Apr 04, 2007 at 08:50:44AM -0700, Joshua D. Drake wrote:
difference. OTOH, the SCSI discs were way less reliable than the SATA
discs, that might have been bad luck.
Probably bad luck. I find that SCSI is very
Problem is :), you can purchase SATA Enterprise Drives.
I would have thought that was a good thing!!! ;-)
Carlos
In a perhaps fitting compromise, I have decided to go with a hybrid
solution:
8*73GB 15k SAS drives hooked up to an Adaptec 4800SAS
PLUS
6*150GB SATA II drives hooked up to the mobo (for now)
All wrapped in a 16-bay 3U server. My reasoning is that the extra SATA
drives are practically free compared