On 20-10-2006 22:33 Ben Suffolk wrote:
How about the Fujitsu Siemens Sun Clones? I have not really looked at
them but have heard the odd good thing about them.
Fujitsu doesn't build Sun clones! That really is insulting to them ;-)
They do offer SPARC hardware, but that's a bit higher up the
Mike wrote:
Hello friends,
I am responsible for maintaining a high volume website using postgresql
8.1.4. Given the amount of reads and writes, I vacuum full the server a
few times a week around 1-2 AM, shutting down the site for a few
minutes. The next morning, around 10-11 AM, the server
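For context, a plain (non-FULL) VACUUM in 8.1 reclaims dead row versions without the exclusive lock that VACUUM FULL takes, so it can run while the site stays up. A minimal sketch (the table name is illustrative):

```sql
-- database-wide: reclaim dead rows and refresh planner statistics,
-- without locking out readers or writers
VACUUM ANALYZE;

-- or target only the most heavily updated tables (illustrative name)
VACUUM ANALYZE page_hits;
```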
Hi, Worky,
Worky Workerson wrote:
I am currently getting
between 10K and 15K inserts/second.
I ran a trivial little insert test into a table with a single integer
column and came close to 250K inserts/second using psql's \copy, so I'm
thinking that my code could be optimized a bit more, but wanted
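For reference, a single-column \copy test of the kind described above looks roughly like this from within psql (table and file names are illustrative):

```sql
-- minimal test table with one integer column
CREATE TABLE ints (n integer);

-- client-side bulk load: psql streams numbers.txt (one value per line)
-- to the server via the COPY protocol, avoiding per-row INSERT overhead
\copy ints FROM 'numbers.txt'
```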
The thing I would ask is would you not be better
with SAS drives?
Given the comments on Dell and the issues highlighted, I have been
looking at HP and the Smart Array P600 controller with 512MB BBWC.
Although I am looking to stick with the 8 internal disks, rather than
use
Hi, Peter,
Péter Kovács wrote:
Sorry for the amateurish question, but what are heap tuples?
Also, my understanding is that the following statement applies only to
composite indexes: PostgreSQL can't use the values stored in the index
to check the join condition. I assume that PostgreSQL
Markus,
On 10/23/06 2:27 AM, Markus Schaber [EMAIL PROTECTED] wrote:
On a table with no indices, triggers, or constraints, we managed to
COPY about 7-8 megabytes/second with psql over our 100 MBit network;
since 100 MBit is only about 12.5 MB/s of raw bandwidth, the network
was the bottleneck here.
We routinely get 10-12MB/s on I/O hardware
Hi, Alvaro,
Alvaro Herrera wrote:
Additionally, in most UPDATE cases, the new row version will fit into
the same page as the old version. In this case, the index does not have
to be changed, which is an additional speed improvement.
Actually, when the UPDATE puts a new row version in the
I am most interested in loading two tables, one with about 21 (small)
VARCHARs where each record is about 200 bytes, and another with 7
INTEGERs, 3 TIMESTAMPs, and 1 BYTEA where each record is about 350
bytes.
indexes/keys? More memory for sorting during index creation can have
a dramatic
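The usual bulk-load recipe behind that advice, sketched with illustrative names: load into a table without indexes, give index creation generous sort memory, then build the indexes afterwards. (On 8.1, maintenance_work_mem is set in kB.)

```sql
CREATE TABLE events (id integer, ts timestamp, payload bytea);

-- load first, index later: building an index over the full table is much
-- cheaper than maintaining it row by row during the load
COPY events FROM '/tmp/events.dat';

-- 524288 kB = 512 MB of sort memory for the index build
SET maintenance_work_mem = 524288;

CREATE INDEX events_ts_idx ON events (ts);
ANALYZE events;
```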
Markus Schaber [EMAIL PROTECTED] writes:
Alvaro Herrera wrote:
Actually, when the UPDATE puts a new row version in the same heap page,
the index must be updated anyway.
AFAICS only when the index covers (directly or via a function) a column
that's actually changed.
Changing columns the index
-logic raid-controller) and found it to be a very nice machine.
But again, they also offer (the same?) Broadcom networking on board.
Just like Dell and HP. And it is an LSI Logic SAS controller on board, so
if FreeBSD has trouble with either of those, it's hard to find anything
suitable at all in
On Oct 21, 2006, at 11:43 AM, John Philips wrote:
Can you guys see any glaring bottlenecks in my layout?
Any other suggestions to offer (throw in more
controllers, different RAID layout, etc.)? Our budget
limit is $50k.
If I had $50k budget, I'd be buying the SunFire X4500 and running
On Oct 20, 2006, at 10:58 AM, Dave Cramer wrote:
My advice is to find another supplier. Check the archives for Dell.
Not necessarily bad to go with Dell. There are *some* of their
controllers that are wicked fast in some configurations. However,
finding which ones are fast is very
FUD!!!
They don't randomly change the controllers under the same name. If you
order a PERC4e/Si controller you will get the same controller every
time.
Actually, Vivek, this isn't true. Yes, the hardware will likely be the
same, but the firmware rev will likely be different, and I have seen
On Oct 23, 2006, at 4:59 PM, Joshua D. Drake wrote:
If I had $50k budget, I'd be buying the SunFire X4500 and running
Solaris + ZFS on it. However, you're limited to 2 dual core
Opterons,
it seems.
The HP 585 will give you quad dual core :)
but can you sling the bits to and from the
Vivek Khera wrote:
On Oct 23, 2006, at 4:59 PM, Joshua D. Drake wrote:
If I had $50k budget, I'd be buying the SunFire X4500 and running
Solaris + ZFS on it. However, you're limited to 2 dual core Opterons,
it seems.
The HP 585 will give you quad dual core :)
but can you sling the
Vivek Khera wrote:
On Oct 23, 2006, at 5:08 PM, Joshua D. Drake wrote:
They don't randomly change the controllers under the same name. If you
order a PERC4e/Si controller you will get the same controller every
time.
Actually, Vivek, this isn't true. Yes, the hardware will likely be the
On Mon, Oct 23, 2006 at 09:45:59AM +0100, Richard Huxton wrote:
Mike wrote:
Hello friends,
I am responsible for maintaining a high volume website using postgresql
8.1.4. Given the amount of reads and writes, I vacuum full the server a
few times a week around 1, 2 AM shutting down the site
On Sat, Oct 21, 2006 at 12:12:59AM +0930, Shane Ambler wrote:
Generally more disks at slower speed - 2 x 10K disks in RAID 0 are faster
than 1 x 15K disk. More disks also allow more options.
Not at writing they're not (unless you're using RAID0... ugh).
--
Jim Nasby
On Mon, Oct 23, 2006 at 11:10:19AM -0400, Worky Workerson wrote:
I am most interested in loading two tables, one with about 21 (small)
VARCHARs where each record is about 200 bytes, and another with 7
INTEGERs, 3 TIMESTAMPs, and 1 BYTEA where each record is about 350
bytes.
On Sat, Oct 21, 2006 at 08:43:05AM -0700, John Philips wrote:
I heard some say that the transaction log should be on
its own array; others say it doesn't hurt to have it
on the same array as the OS. Is it really worthwhile
to put it on its own array?
It all depends on the controller and
Ultimately, you might be best off using triggers instead of rules for the
partitioning, since then you could use COPY. Or go to raw INSERT commands
that are wrapped in a transaction.
My experience is that triggers are quite a bit faster than rules in any
kind of partitioning that involves more
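A minimal sketch of the trigger-based routing being suggested (8.1-era style; names and date ranges are illustrative). The key point is that, unlike rules, triggers also fire for rows arriving via COPY into the parent table:

```sql
-- parent table plus one child per month; the CHECK constraint lets the
-- planner skip irrelevant children when constraint_exclusion is on
CREATE TABLE measurements (ts timestamp NOT NULL, val numeric);

CREATE TABLE measurements_2006_10 (
    CHECK (ts >= '2006-10-01' AND ts < '2006-11-01')
) INHERITS (measurements);

-- route each incoming row to the matching child table
CREATE FUNCTION measurements_route() RETURNS trigger AS $$
BEGIN
    IF NEW.ts >= '2006-10-01' AND NEW.ts < '2006-11-01' THEN
        INSERT INTO measurements_2006_10 VALUES (NEW.ts, NEW.val);
    ELSE
        RAISE EXCEPTION 'no partition for timestamp %', NEW.ts;
    END IF;
    RETURN NULL;  -- the row has already been stored in the child
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER measurements_insert
    BEFORE INSERT ON measurements
    FOR EACH ROW EXECUTE PROCEDURE measurements_route();
```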
Hello there;
I've got an application that has to copy an existing database to a new
database on the same machine.
I used to do this with a pg_dump command piped to psql to perform the
copy; however, the database is 18 GB on disk and this takes a LONG
time to do.
So I read up, found
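For a same-cluster copy, the common shortcut is to use the source database as a template, which copies the files directly instead of replaying 18 GB of SQL; it requires that nobody is connected to the source while it runs (names are illustrative):

```sql
-- file-level clone of olddb into newdb; far faster than pg_dump | psql
CREATE DATABASE newdb TEMPLATE olddb;
```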
Hello,
I have a query with several join operations, applying the same filter
condition to each of the involved tables. This condition is a complex
predicate over an indexed timestamp field, depending on some
parameters.
To avoid duplicating code, I wrote the filter into a plpgsql function,
but the
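One likely culprit with that approach: a plpgsql function is opaque to the planner, so the indexed timestamp field cannot be matched against its index through the function call. A simple function written in language SQL and marked STABLE can often be inlined into the calling query, leaving the comparisons visible to the planner. A sketch with illustrative names:

```sql
-- SQL (not plpgsql) so the planner may inline the body into the query;
-- STABLE because the result depends only on its arguments within a scan
CREATE FUNCTION in_window(t timestamp, base timestamp, len interval)
RETURNS boolean AS $$
    SELECT $1 >= $2 AND $1 < $2 + $3;
$$ LANGUAGE sql STABLE;

-- after inlining, these become plain range comparisons that can use
-- indexes on t1.ts and t2.ts
SELECT *
FROM t1 JOIN t2 USING (id)
WHERE in_window(t1.ts, '2006-10-01', '1 day')
  AND in_window(t2.ts, '2006-10-01', '1 day');
```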
Jim C. Nasby wrote:
http://stats.distributed.net used to use a perl script to do some
transformations before loading data into the database. IIRC, when we
switched to using C we saw a 100x improvement in speed, so I suspect that
if you want performance, Perl isn't the way to go. I think you can
Markus,
Thank you for your kind explanation.
Peter
Markus Schaber wrote:
Hi, Peter,
Péter Kovács wrote:
Sorry for the amateurish question, but what are heap tuples?
Also, my understanding is that the following statement applies only for
composite indexes: PostgreSQL can't use the values