On Tue, 2004-03-02 at 00:49, [EMAIL PROTECTED] wrote:
> What is the most performant way to select, for example, the first 99
> rows of a table and insert them into another table?
>
> At the moment I do this:
>
> for userrecord in select * from table where account_id = a_account_id
> and counte
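A single set-based statement usually beats a row-by-row PL/pgSQL loop here. A minimal sketch, reusing the column names from the fragment above (target_table and some_ordering_column are hypothetical placeholders):

```sql
-- One set-based INSERT ... SELECT instead of a per-row loop.
-- Assumes a hypothetical target_table with a compatible column list,
-- and some ordering column to make "first 99" well defined.
INSERT INTO target_table
SELECT *
FROM   source_table
WHERE  account_id = a_account_id
ORDER  BY some_ordering_column
LIMIT  99;
```

Without the ORDER BY, "first 99" is whatever 99 rows the planner happens to return first.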
Anjan Dave wrote:
We have a Quad-Intel XEON 2.0GHz (1MB cache), 12GB memory, running RH9,
PG 7.4.0. There's an internal U320, 10K RPM RAID-10 setup on 4 drives.
We are expecting a pretty high load: a few thousand 'concurrent'
users executing select, insert, and update statements.
The qui
On Tue, 2004-03-02 at 17:42, William Yu wrote:
> Anjan Dave wrote:
> > We have a Quad-Intel XEON 2.0GHz (1MB cache), 12GB memory, running RH9,
> > PG 7.4.0. There's an internal U320, 10K RPM RAID-10 setup on 4 drives.
> >
> > We are expecting a pretty high load, a few thousands of 'concurrent'
"By lots I mean dozen(s) in a raid 10 array with a good controller."
I believe that for RAID-10 I will need an even number of drives. Currently,
the size of the database is about 13GB, and it is not expected to grow
exponentially with thousands of concurrent users, so total space is not
of paramount impor
> "JA" == John Allgood <[EMAIL PROTECTED]> writes:
JA> I am planning on separating the OS, Data, WAL on to separate drives
JA> which will be mirrored. I am looking for input on setting kernel parameters,
JA> and Postgres server runtime parameters and other settings relating to
I did a bun
> "CW" == Christopher Weimann <[EMAIL PROTECTED]> writes:
>> which brings me back to my question why not make Freebsd use more of its
>> memory for disk caching and then tell postgres about it.
>>
CW> Because you can't. It already uses ALL RAM that isn't in use for
CW> something else.
No,
Anjan,
> Question - Are 73GB drives supposed to give better performance because
> of higher number of platters?
Not for your situation, no. Your issue is random seek times for multiple
simultaneous seek requests and batched checkpoint updates. Things that help
with this are:
More spindles
B
On Tue, 2 Mar 2004, Anjan Dave wrote:
> "By lots I mean dozen(s) in a raid 10 array with a good controller."
>
> I believe, for RAID-10, I will need an even number of drives.
Correct.
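Right: RAID-10 stripes across mirrored pairs, so only half the raw capacity is usable. A quick sanity check (drive counts and sizes here are just illustrative):

```python
def raid10_usable_gb(n_drives: int, drive_size_gb: int) -> int:
    """Usable capacity of a RAID-10 array: striping over mirrored pairs."""
    if n_drives % 2 != 0:
        raise ValueError("RAID-10 requires an even number of drives")
    # Each mirrored pair contributes the capacity of a single drive.
    return (n_drives // 2) * drive_size_gb

# 4 x 73GB drives in RAID-10 -> 146GB usable, plenty for a 13GB database
print(raid10_usable_gb(4, 73))
```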
> Currently,
> the size of the database is about 13GB, and is not expected to grow
> exponentially with thousands
On Mar 2, 2004, at 4:27 PM, Josh Berkus wrote:
Vivek,
I did a bunch of testing with different RAID levels on a 14 disk
array. I finally settled on this: RAID5 across 14 disks for the
data, the OS (including syslog directory) and WAL on a RAID1 pair on
the other channel of the same controller (I
>Tom Lane
> "Simon Riggs" <[EMAIL PROTECTED]> writes:
> > - why checkpoint interval of 300 secs causes them to happen every 10
> > mins in quieter periods; is that an occasional update occurring?
>
> There is code in there to suppress a checkpoint if no WAL-loggable
> activity has happened since
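For reference, the checkpoint spacing being discussed is driven by two settings; a minimal postgresql.conf sketch (7.4-era parameter names, values illustrative only):

```
# postgresql.conf (PostgreSQL 7.4-era settings; values are illustrative)
checkpoint_timeout  = 300   # force a checkpoint at most this many seconds apart
checkpoint_segments = 3     # ...or sooner, after this many 16MB WAL segments fill
```

Whichever threshold is hit first triggers the checkpoint, which is why a quiet system can still checkpoint less often than the timeout alone would suggest.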
>Josh Berkus wrote
> >Simon Riggs wrote
> > Please set WAL_DEBUG to 1 so we can see a bit more info: thanks.
>
> I'm pretty sure that WAL_DEBUG requires a compile-time option.
In my naiveté, I just set and used it. I discovered it in the code, then
set it to take advantage of it.
I'm surprised, but you
That was part of my original question - whether it makes sense to go for
a mid-range SunFire machine (64bit HW, 64bit OS), which is scalable to
high amounts of memory, and shouldn't have any issues addressing it all.
I've had that kind of setup once temporarily on a V480 (quad UltraSparc,
16GB RAM)
Here's what I recorded today from iostat (linux, iostat -x -k, sda3 is
the pg slice, logs included) during peak time on the RAID-10 array -
What I see is mostly writes, and sometimes quite a bit of writing,
during which the average wait times shoot up.
Device: rrqm/s wrqm/s r/s w/s rsec/s
On Tue, 2 Mar 2004, Anjan Dave wrote:
> That was part of my original question - whether it makes sense to go for
> a mid-range SunFire machine (64bit HW, 64bit OS), which is scalable to
> high amounts of memory, and shouldn't have any issues addressing it all.
> I've had that kind of setup once te
> For speed, the X86 32- and 64-bit architectures seem to be noticeably
> faster than Sparc. However, running Linux or BSD on Sparc makes them
> pretty fast too, but you lose the fault-tolerant support for things like
> hot-swappable CPUs or memory.
Agreed. You can get a Quad Opteron with 16GB
On Tue, Mar 02, 2004 at 02:16:24PM -0700, scott.marlowe wrote:
> It's a common misconception that faster-RPM drives are a lot faster,
> when, in fact, their only speed advantage is slightly faster seeks.
> The areal density of faster-spinning hard drives tends to be
> somewhat less than the slower sp
Simon Riggs wrote:
Josh Berkus wrote
Simon Riggs wrote
Please set WAL_DEBUG to 1 so we can see a bit more info: thanks.
I'm pretty sure that WAL_DEBUG requires a compile-time option.
I'm surprised, but you are right, the manual does SAY this requires a
compile-time option; it is unfortunately not
On Mar 2, 2004, at 5:36 PM, scott.marlowe wrote:
Some folks on the list have experience with Postgresql on Solaris, and
they generally say they use Solaris not for performance reasons but for
reliability reasons. I.e., the bigger Sun hardware is fault tolerant.
Solaris isn't nearly as bad for P
On Tue, 2004-03-02 at 18:24, Anjan Dave wrote:
> Can you describe the vendors/components of a "cheap SAN setup?"
heh.. Excellent point.
My point was that you could get away with a smaller setup (fewer
disks) if it doesn't have to deal with reads and the writes are not
time-dependent; then you will