[EMAIL PROTECTED] wrote:
> 8*73GB SCSI 15k ...(dell poweredge 2900)...
> 24*320GB SATA II 7.2k ...(generic vendor)...
>
> raid10. Our main requirement is highest TPS (focused on a lot of INSERTS).
> Question: will 8*15k SCSI drives outperform 24*7K SATA II drives?
It's worth asking the vendor
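For a rough sense of scale (not a substitute for benchmarking), the raw random-write capacity of the two arrays can be sketched with back-of-envelope per-drive IOPS figures. The 180 and 80 IOPS numbers below are generic rules of thumb for 15k rpm and 7.2k rpm drives, not measurements of these particular models:

```python
# Back-of-envelope random-write IOPS for the two RAID10 candidates.
# Per-drive figures are rules of thumb: ~180 IOPS for a 15k rpm drive,
# ~80 IOPS for a 7.2k rpm SATA drive.

def raid10_write_iops(drives: int, iops_per_drive: float) -> float:
    """RAID10 mirrors every write, so usable write IOPS is roughly
    (drives * per-drive IOPS) / 2."""
    return drives * iops_per_drive / 2

scsi = raid10_write_iops(8, 180)   # 8 x 15k SCSI
sata = raid10_write_iops(24, 80)   # 24 x 7.2k SATA II

print(f"SCSI array: ~{scsi:.0f} random write IOPS")
print(f"SATA array: ~{sata:.0f} random write IOPS")
```

By this crude estimate the larger SATA array comes out ahead on aggregate spindle count alone, which is why controller quality, caching, and actual benchmarks matter more than the interface label.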
Hi Josh,
Josh Berkus wrote:
Arnau,
Is there anything similar in PostgreSQL? The idea behind this is how I
can do in PostgreSQL to have tables where I can query on them very often
something like every few seconds and get results very fast without
overloading the postmaster.
If you're only que
This may be a silly question but: will not 3 times as many disk drives
mean 3 times higher probability for disk failure? Also rumor has it
that SATA drives are more prone to fail than SCSI drives. More
failures will result, in turn, in more administration costs.
Thanks
Peter
On 4/4/07, [EMAIL P
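Peter's intuition can be checked with a quick calculation. Assuming independent failures and an illustrative 3% annualized failure rate (AFR) per drive (an assumed figure, not a measured one), the chance of at least one failure per year does grow with drive count, though slightly slower than linearly:

```python
# Probability that at least one drive in an array fails within a year,
# assuming independent failures and an assumed 3% AFR per drive.

def p_any_failure(drives: int, afr: float) -> float:
    # Complement of "every drive survives the year"
    return 1 - (1 - afr) ** drives

for n in (8, 24):
    print(f"{n} drives: {p_any_failure(n, 0.03):.1%} chance of >=1 failure/year")
```

The expected number of failures per year does scale exactly 3x (from 0.24 to 0.72 drives here), even though the probability of at least one failure scales somewhat less.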
On 2007-04-04 Arnau wrote:
> Josh Berkus wrote:
>>> Is there anything similar in PostgreSQL? The idea behind this is how
>>> I can do in PostgreSQL to have tables where I can query on them very
>>> often something like every few seconds and get results very fast
>>> without overloading the postmaster
Hi Ansgar,
On 2007-04-04 Arnau wrote:
Josh Berkus wrote:
Is there anything similar in PostgreSQL? The idea behind this is how
I can do in PostgreSQL to have tables where I can query on them very
often something like every few seconds and get results very fast
without overloading the postmaster
On 4-Apr-07, at 2:01 AM, Peter Schuller wrote:
Hello,
The next question then is whether anything in your postgres
configuration
is preventing it getting useful performance from the OS. What
settings
have you changed in postgresql.conf?
The only options not commented out are the following
On 2007-04-04, Peter Schuller <[EMAIL PROTECTED]> wrote:
>> The next question then is whether anything in your postgres configuration
>> is preventing it getting useful performance from the OS. What settings
>> have you changed in postgresql.conf?
>
> The only options not commented out are the following
Hi Thor,
Thor-Michael Støre wrote:
On 2007-04-04 Arnau wrote:
Josh Berkus wrote:
Arnau,
Is there anything similar in PostgreSQL? The idea behind this
is how I can do in PostgreSQL to have tables where I can query
on them very often something like every few seconds and get
results very fast without overloading the postmaster.
* Peter Kovacs <[EMAIL PROTECTED]> [070404 14:40]:
> This may be a silly question but: will not 3 times as many disk drives
> mean 3 times higher probability for disk failure? Also rumor has it
> that SATA drives are more prone to fail than SCSI drives. More
> failures will result, in turn, in more administration costs.
Probably another helpful solution would be to implement:
ALTER TABLE LOGGING OFF/ON;
just to disable/enable WAL?
First, it would help in all cases of intensive data load, where
increasing WAL activity slows down other sessions.
Then you would have a way to implement MEMORY-like tables on a RAM disk
Andreas Kostyrka escribió:
> * Peter Kovacs <[EMAIL PROTECTED]> [070404 14:40]:
> > This may be a silly question but: will not 3 times as many disk drives
> > mean 3 times higher probability for disk failure? Also rumor has it
> > that SATA drives are more prone to fail than SCSI drives. More
> >
But if an individual disk fails in a disk array, sooner than later you
would want to purchase a new fitting disk, walk/drive to the location
of the disk array, replace the broken disk in the array and activate
the new disk. Is this correct?
Thanks
Peter
On 4/4/07, Alvaro Herrera <[EMAIL PROTECTE
Peter Kovacs escribió:
> But if an individual disk fails in a disk array, sooner than later you
> would want to purchase a new fitting disk, walk/drive to the location
> of the disk array, replace the broken disk in the array and activate
> the new disk. Is this correct?
Ideally you would have a s
On 4-Apr-07, at 8:46 AM, Andreas Kostyrka wrote:
* Peter Kovacs <[EMAIL PROTECTED]> [070404 14:40]:
This may be a silly question but: will not 3 times as many disk
drives
mean 3 times higher probability for disk failure? Also rumor has it
that SATA drives are more prone to fail than SCSI drives
* Alvaro Herrera <[EMAIL PROTECTED]> [070404 15:42]:
> Peter Kovacs escribió:
> > But if an individual disk fails in a disk array, sooner than later you
> > would want to purchase a new fitting disk, walk/drive to the location
> > of the disk array, replace the broken disk in the array and activate
Hello All,
I've been searching the archives for something similar, without success...
We have an application that is supposed to sign documents and store
them somewhere. The file sizes may vary from KB to MB. Developers are
arguing about the reasons to store files directly on the operating
system file system
On Apr 3, 2007, at 6:54 PM, Geoff Tolley wrote:
I don't think the density difference will be quite as high as you
seem to think: most 320GB SATA drives are going to be 3-4 platters,
the most that a 73GB SCSI is going to have is 2, and more likely 1,
which would make the SCSIs more like 50%
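The per-platter arithmetic behind this argument can be sketched as follows; the platter counts are the poster's estimates, not datasheet values:

```python
# Per-platter capacity ranges implied by the platter-count estimates
# above (3-4 platters for the 320GB SATA, 1-2 for the 73GB SCSI).
# These are the poster's guesses, not datasheet figures.

sata_range = (320 / 4, 320 / 3)   # 320GB SATA on 4 or 3 platters
scsi_range = (73 / 2, 73 / 1)     # 73GB SCSI on 2 or 1 platters

print(f"SATA: {sata_range[0]:.0f}-{sata_range[1]:.0f} GB/platter")
print(f"SCSI: {scsi_range[0]:.1f}-{scsi_range[1]:.0f} GB/platter")
```

With one platter in the SCSI drive the per-platter capacities overlap much more than the raw 320GB-vs-73GB comparison suggests, which is the point being made.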
On 04.04.2007, at 08:03, Alexandre Vasconcelos wrote:
We have an application that is supposed to sign documents and store
them somewhere. The file sizes may vary from KB to MB. Developers are
arguing about the reasons to store files directly on the operating
system file system or on the database, as large o
At 07:16 AM 4/4/2007, Peter Kovacs wrote:
This may be a silly question but: will not 3 times as many disk drives
mean 3 times higher probability for disk failure?
Yes, all other factors being equal 3x more HDs (24 vs 8) means ~3x
the chance of any specific HD failing.
OTOH, either of these n
> Good point. On another note, I am wondering why nobody's brought up the
> command-queuing perf benefits (yet). Is this because sata vs scsi are at
SATAII has similar features.
> par here? I'm finding conflicting information on this -- some calling
> sata's ncq mostly crap, others stating the re
Joshua D. Drake wrote:
> > Good point. On another note, I am wondering why nobody's brought up
> > the command-queuing perf benefits (yet). Is this because sata vs scsi
> > are at
> SATAII has similar features.
> > par here? I'm finding conflicting information on this -- some calling
> > sata's ncq mostly cr
> > SATAII brute forces itself through some of its performance, for
> > example 16MB write cache on each drive.
> sure but for any serious usage one either wants to disable that
> cache (and rely on tagged command queuing or how that is called in SATAII
Why? Assuming we have a BBU, why would you turn o
* Joshua D. Drake <[EMAIL PROTECTED]> [070404 17:40]:
>
> >Good point. On another note, I am wondering why nobody's brought up the
> >command-queuing perf benefits (yet). Is this because sata vs scsi are at
>
> SATAII has similar features.
>
> >par here? I'm finding conflicting information on
difference. OTOH, the SCSI discs were way less reliable than the SATA
discs, that might have been bad luck.
Probably bad luck. I find that SCSI is very reliable, but I don't find
it any more reliable than SATA. That is assuming correct ventilation etc...
Sincerely,
Joshua D. Drake
Andr
On Wed, 4 Apr 2007, Peter Kovacs wrote:
But if an individual disk fails in a disk array, sooner than later you
would want to purchase a new fitting disk, walk/drive to the location
of the disk array, replace the broken disk in the array and activate
the new disk. Is this correct?
correct, but
Joshua D. Drake wrote:
> > > SATAII brute forces itself through some of its performance, for
> > > example 16MB write cache on each drive.
> > sure but for any serious usage one either wants to disable that
> > cache (and rely on tagged command queuing or how that is called in SATAII
> Why? Assuming we have a B
I had a 'scratch' database for testing, which I deleted, and then disk went
out. No problem, no precious data. But now I can't drop the tablespace, or
the user who had that as the default tablespace.
I thought about removing the tablespace from pg_tablespaces, but it seems wrong
to be monkey
"Craig A. James" <[EMAIL PROTECTED]> writes:
> I had a 'scratch' database for testing, which I deleted, and then disk went
> out. No problem, no precious data. But now I can't drop the tablespace, or
> the user who had that as the default tablespace.
> I thought about removing the tablespace fr
On Wed, Apr 04, 2007 at 08:50:44AM -0700, Joshua D. Drake wrote:
> >difference. OTOH, the SCSI discs were way less reliable than the SATA
> >discs, that might have been bad luck.
> Probably bad luck. I find that SCSI is very reliable, but I don't find
> it any more reliable than SATA. That is assu
[EMAIL PROTECTED] wrote:
Good point. On another note, I am wondering why nobody's brought up
the command-queuing perf benefits (yet). Is this because sata vs scsi
are at par here? I'm finding conflicting information on this -- some
calling sata's ncq mostly crap, others stating the real-wor
Hello,
> I'd always do benchmarks with a realistic value of shared_buffers (i.e.
> much higher than that).
>
> Another thought that comes to mind is that the bitmap index scan does
> depend on the size of work_mem.
>
> Try increasing your shared_buffers to a reasonable working value (say
> 10%-1
[EMAIL PROTECTED] wrote:
for that matter, with 20ish 320G drives, how large would a partition be
that only used the outer physical track of each drive? (almost certainly
multiple logical tracks) if you took the time to set this up you could
eliminate seeking entirely (at the cost of not using yo
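As a rough illustration of this short-stroking idea (the 10% outer fraction is an arbitrary assumption, and RAID mirroring overhead is ignored):

```python
# Capacity of a short-stroked partition built from only the outermost
# zone of each drive. The 10% figure is an arbitrary illustration;
# real zone layouts vary by drive model.

drives = 20
capacity_gb = 320
outer_fraction = 0.10   # keep only the outer 10% of each drive's LBA range

partition_gb = drives * capacity_gb * outer_fraction
print(f"total short-stroked capacity: ~{partition_gb:.0f} GB")
```

Even giving up 90% of each drive, 20 spindles still leave a few hundred GB of very low-seek storage, which is why the trade-off can be attractive for WAL or index volumes.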
[EMAIL PROTECTED] wrote:
Perhaps a basic question - but why does the interface matter? :-)
The interface itself matters not so much these days as the drives that
happen to use it. Most manufacturers make both SATA and SCSI lines, are
keen to keep the market segmented, and don't want to canni
>sure but for any serious usage one either wants to disable that
>cache(and rely on tagged command queuing or how that is called in SATAII
>world) or rely on the OS/raidcontroller implementing some sort of
>FUA/write barrier feature(which linux for example only does in pretty
>recent kernels)
Does
On 4-4-2007 0:13 [EMAIL PROTECTED] wrote:
We need to upgrade a postgres server. I'm not tied to these specific
alternatives, but I'm curious to get feedback on their general qualities.
SCSI
dual xeon 5120, 8GB ECC
8*73GB SCSI 15k drives (PERC 5/i)
(dell poweredge 2900)
This is a SAS set
On Apr 4, 2007, at 12:09 PM, Arjen van der Meijden wrote:
If you don't care about such things, it may actually be possible to
build a similar set-up as your SATA-system with 12 or 16 15k rpm
SAS disks or 10k WD Raptor disks. For the sata-solution you can
also consider a 24-port Areca card.
On 4-4-2007 21:17 [EMAIL PROTECTED] wrote:
fwiw, I've had horrible experiences with Areca drivers on linux. I've
found them to be unreliable when used with dual AMD64 processors and 4+ GB
of ram. I've tried kernels 2.6.16 up to 2.6.19... intermittent yet
inevitable ext3 corruptions. 3ware cards, on th
[EMAIL PROTECTED] wrote:
> On Wed, Apr 04, 2007 at 08:50:44AM -0700, Joshua D. Drake wrote:
> > >difference. OTOH, the SCSI discs were way less reliable than the SATA
> > >discs, that might have been bad luck.
> > Probably bad luck. I find that SCSI is very reliable, but I don't find
> > it any mo
Bruce Momjian wrote:
[EMAIL PROTECTED] wrote:
On Wed, Apr 04, 2007 at 08:50:44AM -0700, Joshua D. Drake wrote:
difference. OTOH, the SCSI discs were way less reliable than the SATA
discs, that might have been bad luck.
Probably bad luck. I find that SCSI is very reliable, but I don't find
it a
Joshua D. Drake wrote:
> Bruce Momjian wrote:
> > [EMAIL PROTECTED] wrote:
> >> On Wed, Apr 04, 2007 at 08:50:44AM -0700, Joshua D. Drake wrote:
> difference. OTOH, the SCSI discs were way less reliable than the SATA
> discs, that might have been bad luck.
> >>> Probably bad luck. I find
> Problem is :), you can purchase SATA Enterprise Drives.
I would have thought that was a good thing!!! ;-)
Carlos
In a perhaps fitting compromise, I have decided to go with a hybrid
solution:
8*73GB 15k SAS drives hooked up to Adaptec 4800SAS
PLUS
6*150GB SATA II drives hooked up to mobo (for now)
All wrapped in a 16bay 3U server. My reasoning is that the extra SATA
drives are practically free compared t
>Right --- the point is not the interface, but whether the drive is built
>for reliability or to hit a low price point.
Personally I take the marketing mumblings about the enterprise drives
with a pinch of salt. The low-price drives HAVE TO be reliable too,
because a non-negligible failure rate wi
"James Mansion" <[EMAIL PROTECTED]> writes:
>> Right --- the point is not the interface, but whether the drive is built
>> for reliability or to hit a low price point.
> Personally I take the marketing mumblings about the enterprise drives
> with a pinch of salt. The low-price drives HAVE TO be re