Guy Rouillier wrote:
Scott Marlowe wrote:
I assume you're talking about solid state drives? They have their
uses, but for most use cases, having plenty of RAM in your server will
be a better way to spend your money. For certain high throughput,
relatively small databases (i.e. transactional
Scott Marlowe wrote:
I'm getting more and more motivated to rewrite the vacuum docs. I
think a rewrite from the ground up might be best... I keep seeing
people doing vacuum full on this list and I'm thinking it's as much
because of the way the docs represent vacuum full as anything. Is
that
Bryan Murphy wrote:
Our database server connects to the san via iSCSI over Gig/E using
jumbo frames. File system is XFS (noatime).
...
Throughput, however, kinda sucks. I just can't get the kind of
throughput to it I was hoping to get. When our memory cache is blown,
the database can
runic wrote:
Hello Group,
I'm new to the PostgreSQL business, so please forgive a newbie
question. I have a table with about 1,250,000 records. When I execute
a select count(*) on the table (with pgAdmin III) it takes about 40
secs.
I think that takes much too long. Can you please give me
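On the PostgreSQL versions of this era, count(*) always scans the whole table, so a slow exact count on a million-plus rows is expected. When an approximate figure is enough, the planner's own statistics can answer instantly. A minimal sketch, assuming a database `mydb` and a table named `mytable` (both placeholders):

```shell
# Approximate row count from the planner's statistics instead of a full
# table scan. "mydb" and "mytable" are placeholders for your own names.
# The estimate is maintained by VACUUM / ANALYZE, so run ANALYZE first
# if the table has churned a lot since the last one.
psql -d mydb -c "SELECT reltuples::bigint AS estimated_rows
                 FROM pg_class
                 WHERE relname = 'mytable';"
```

The estimate can drift between ANALYZE runs, but for a "roughly how big is this table" question it avoids the 40-second scan entirely.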
[EMAIL PROTECTED] wrote:
I need some help on recommendations to solve a perf problem.
I've got a table with ~121 million records in it. Select count on it
currently takes ~45 minutes, and an update to the table to set a value
on one of the columns I finally killed after it ran 17 hours and
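One common mitigation for a whole-table update of this size is to batch it by key range, so each transaction stays short and VACUUM can reclaim the dead tuples between rounds rather than letting the table bloat to twice its size. A hedged sketch, assuming an integer primary key; `mydb`, `bigtable`, `id`, and `flag` are all placeholder names:

```shell
# Hypothetical batched update; all object names are placeholders.
# Updating 121M rows in one statement leaves a dead tuple per row.
# Doing it in 1M-row slices keeps each transaction small and lets
# VACUUM reclaim space between batches.
for start in $(seq 0 1000000 121000000); do
  end=$((start + 1000000))
  psql -d mydb -c "UPDATE bigtable SET flag = true
                   WHERE id >= $start AND id < $end;"
  psql -d mydb -c "VACUUM bigtable;"
done
```

Each slice also commits independently, so a failure partway through loses only the current batch instead of 17 hours of work.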
Craig A. James wrote:
One of our biggest single problems is this very thing. It's not a
Postgres problem specifically, but more embedded in the idea of a
relational database: there are no "job status", "rough estimate of
results", or "give me part of the answer" features that are critical to
Is there any experience with Postgresql and really huge tables? I'm
talking about terabytes (plural) here in a single table. Obviously the
table will be partitioned, and probably spread among several different
file systems. Any other tricks I should know about?
We have a problem of that
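In the 8.x era, partitioning a table like this means inheritance plus CHECK constraints, with `constraint_exclusion` turned on so the planner skips partitions that can't match. A minimal sketch under those assumptions; all table, column, and path names below are hypothetical:

```shell
# Sketch of 8.x-style partitioning by inheritance; every name here is a
# placeholder. Each child table carries a CHECK constraint so that, with
# constraint_exclusion = on, the planner prunes irrelevant partitions.
psql -d mydb <<'SQL'
CREATE TABLE measurements (logdate date NOT NULL, payload text);
CREATE TABLE measurements_2006
    (CHECK (logdate >= '2006-01-01' AND logdate < '2007-01-01'))
    INHERITS (measurements);
CREATE TABLE measurements_2007
    (CHECK (logdate >= '2007-01-01' AND logdate < '2008-01-01'))
    INHERITS (measurements);
-- Partitions can live on different file systems via tablespaces:
-- CREATE TABLESPACE fastdisk LOCATION '/mnt/array2/pgdata';
-- ALTER TABLE measurements_2007 SET TABLESPACE fastdisk;
SET constraint_exclusion = on;
SQL
```

Inserts then need to be routed to the right child (by the application, or by a rule/trigger on the parent), which is the main operational cost of this scheme.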
Brian Wipf wrote:
All tests are with bonnie++ 1.03a
Thanks for posting these tests. Now I have actual numbers to beat our
storage server provider about the head and shoulders with. Also, I
found them interesting in and of themselves.
These numbers are close enough to bus-saturation
Luke Lonergan wrote:
Brian,
On 12/6/06 8:02 AM, Brian Hurt [EMAIL PROTECTED] wrote:
These numbers are close enough to bus-saturation rates
PCI-X is 1 GB/s+ and the memory architecture is 20 GB/s+, though each CPU is
likely to obtain only 2-3 GB/s.
We routinely achieve 1GB/s I/O rate
Luke Lonergan wrote:
Brian,
On 12/6/06 8:40 AM, Brian Hurt [EMAIL PROTECTED] wrote:
But actually looking things up, I see that PCI-Express has a theoretical 8
Gbit/sec, or about 800 MByte/sec. It's PCI-X that's 533 MByte/sec. So there's
still some headroom available there.
See here
Ron Mayer wrote:
Before asking them to remove it, are we sure priority inversion
is really a problem?
I thought this paper: http://www.cs.cmu.edu/~bianca/icde04.pdf
did a pretty good job at studying priority inversion on RDBMSs,
including PostgreSQL, on various workloads (TPC-W and TPC-C) and
Mark Lewis wrote:
On Wed, 2006-11-29 at 08:25 -0500, Brian Hurt wrote:
...
I have the same question. I've done some embedded real-time
programming, so my innate reaction to priority inversions is that
they're evil. But, especially given priority inheritance, is there any
situation where
Ron Mayer wrote:
Brian Hurt wrote:
Mark Lewis wrote:
On Wed, 2006-11-29 at 08:25 -0500, Brian Hurt wrote:
I have the same question. I've done some embedded real-time
programming, so my innate reaction to priority inversions is that
they're evil. But, especially given
I'm having a spot of a problem with our storage device vendor. Read
performance (as measured by both bonnie++ and hdparm -t) is abysmal
(~14 MByte/sec), and we're trying to get them to fix it. Unfortunately,
they're using the fact that bonnie++ is an open source benchmark to
weasel out of doing
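When a vendor disputes an open-source benchmark, a plain dd sequential read is hard to argue with, since it is just the kernel streaming a file. A minimal sketch; `TESTDIR` is a placeholder that should point at the device's mount point, and in real use the file must be larger than RAM (or the page cache dropped) so reads actually hit the disk:

```shell
# Trivially reproducible sequential-read check using only dd.
# TESTDIR is a placeholder: point it at the SAN/array mount, and size
# the file bigger than RAM so the read isn't served from the page cache.
TESTDIR=/tmp
dd if=/dev/zero of=$TESTDIR/ddtest bs=1M count=64 conv=fsync 2>&1
dd if=$TESTDIR/ddtest of=/dev/null bs=1M 2>&1   # dd reports the rate
rm $TESTDIR/ddtest
```

If dd shows the same ~14 MByte/sec, the "it's just the benchmark" argument collapses.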
Carlo Stonebanks wrote:
You may try to figure out what's the process doing (the backend
obviously, not the frontend (Tcl) process) by attaching to it with
strace.
It's so sad when us poor Windows guys get helpful hints from people who
assume that we're smart enough to run
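For the Linux readers following along, the suggested strace approach amounts to finding the busy backend's PID in pg_stat_activity and attaching to it; the closest Windows analogue would be a tool like Sysinternals Process Monitor. A sketch, assuming a database named `mydb` and the 8.x-era column names (`procpid`, `current_query`):

```shell
# Attach strace to the active backend (Linux only). "mydb" is a
# placeholder; procpid/current_query are the 8.x pg_stat_activity
# column names. -p attaches to a running process, -c summarizes
# syscall counts and time; press Ctrl-C to detach and print the summary.
PID=$(psql -d mydb -Atc "SELECT procpid FROM pg_stat_activity
                         WHERE current_query <> '<IDLE>' LIMIT 1")
strace -c -p "$PID"
```

A backend stuck in a tight loop of lseek/read calls versus one blocked in a single semop tells very different stories about where the time is going.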
I haven't weighed in on this because 1) I'm not a postgresql developer,
and am firmly of the opinion that they who are doing the work get to
decide how the work gets done (especially when you aren't paying them
for the work), and 2) I don't have any experience as a developer with
hints, and
I'm experiencing a problem with our postgres database. Queries that
normally take seconds suddenly start taking hours, if they complete at
all.
This isn't a vacuuming or analyzing problem- I've been on this list long
enough that they were my first response, and besides it doesn't happen
For long involved reasons I'm hanging out late at work today, and rather
than doing real, productive work, I thought I'd run some benchmarks
against our development PostgreSQL database server. My conclusions are
at the end.
The purpose of the benchmarking was to find out how fast Postgres
Tim Allen wrote:
We have a customer who are having performance problems. They have a
large (36G+) postgres 8.1.3 database installed on an 8-way opteron
with 8G RAM, attached to an EMC SAN via fibre-channel (I don't have
details of the EMC SAN model, or the type of fibre-channel card at the