On 10 May 2005, at 15:41, John A Meinel wrote:
Alex Stapleton wrote:
What is the status of Postgres support for any sort of multi-machine scaling? What are you meant to do once you've upgraded your box and tuned the conf files as much as you can, but your query load is just too high
Sent: Tuesday, May 10, 2005 7:41 AM
To: Alex Stapleton
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Partitioning / Clustering
On 11 May 2005, at 08:57, David Roussel wrote:
For an interesting look at scalability, clustering, caching, etc. for a large site, have a look at how LiveJournal did it.
http://www.danga.com/words/2004_lisa/lisa04.pdf
I have implemented similar systems in the past; it's a pretty good technique,
On 11 May 2005, at 23:35, PFC wrote:
However, memcached (and for us, pg_memcached) is an excellent way to improve horizontal scalability by taking disposable data (like session information) out of the database and putting it in protected RAM.
So, what is the advantage of such a system
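As a concrete sketch of the pattern PFC describes: session state lives in memcached, keyed by session ID, and the database never sees it. The function names below assume a pgmemcache-style extension exposing `memcache_set`/`memcache_get`; check the signatures against your installed version, and the key/value shown are made up.

```sql
-- Store disposable session state in memcached instead of a table.
-- memcache_set(key, value) / memcache_get(key) are assumed to come
-- from the pg_memcached / pgmemcache extension.
SELECT memcache_set('session:4f2a', 'user_id=42;cart_items=3');

-- On each request, read the session back. A miss (NULL) just means
-- the user re-authenticates -- nothing durable is lost.
SELECT memcache_get('session:4f2a');
```

The payoff is that session churn never generates dead tuples in the database, so VACUUM and WAL traffic shrink accordingly.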
On 12 May 2005, at 15:08, Alex Turner wrote:
Having local sessions is unnecessary, and here is my logic:
Generally most people have less than 100Mb of bandwidth to the internet. If you make the assertion that you are transferring equal or less session data between your session server (let's say an
On 12 May 2005, at 18:33, Josh Berkus wrote:
People,
In general I think your point is valid. Just remember that it probably also matters how you count page views, because technically images are a separate page (and this thread did discuss serving up images). So if there are 20 graphics on a
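Josh's point about counting is easy to quantify: if each HTML page references 20 images, one "page view" is really 21 HTTP requests. With made-up traffic numbers:

```sql
-- 1,000 page views, each pulling 1 HTML document + 20 inline graphics
SELECT 1000 * (1 + 20) AS http_requests;  -- 21000
```

So a "1,000 page views per second" figure can describe a server doing twenty times that many requests, depending on who is counting.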
Is using a ramdisk in situations like this entirely ill-advised then?
When data integrity isn't a huge issue and you really need good write
performance it seems like it wouldn't hurt too much. Unless I am
missing something?
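For completeness: on Linux the usual way to experiment with a RAM disk is a tmpfs mount (the path and size below are made up). Anything placed on it vanishes on reboot or crash, so it is only suitable for genuinely disposable data.

```
# /etc/fstab -- 2 GB RAM-backed filesystem (illustrative size/path)
tmpfs  /mnt/pgram  tmpfs  size=2g  0  0
```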
On 20 May 2005, at 02:45, Christopher Kings-Lynne wrote:
I'm
I am interested in optimising write performance as well; the machine I am testing on is maxing out around 450 UPDATEs a second, which is quite quick, I suppose. I haven't tried turning fsync off yet. The table has...a lot of indices as well. They are mostly pretty simple partial indexes
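A partial index of the kind Alex mentions covers only the rows matching its WHERE clause, so it stays small and is cheap to maintain on writes that fall outside the predicate. Table and column names here are invented for illustration:

```sql
-- Index only the rows most queries actually touch;
-- rows with other statuses never enter the index.
CREATE INDEX orders_open_idx ON orders (created)
    WHERE status = 'open';
```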
We have two indexes, like so:

l1_historical=# \d N_intra_time_idx
Index "N_intra_time_idx"
 Column |            Type
--------+-----------------------------
 time   | timestamp without time zone
btree

l1_historical=# \d N_intra_pkey
Index "N_intra_pkey"
 Column | Type
Oh, we are running 7.4.2 btw. And our random_page_cost = 1
On 13 Jun 2005, at 15:47, John A Meinel wrote:
Alex Stapleton wrote:
Oh, we are running 7.4.2 btw. And our random_page_cost = 1
Which is only correct if your entire db fits into memory. Also, try
updating to a later 7.4 version if at all possible.
I am aware of this, I didn't
Hi, I'm trying to optimise our autovacuum configuration so that it vacuums/analyzes some of our larger tables better. It has been set to the default settings for quite some time. We never delete anything (well, not often, and not much) from the tables, so I am not so worried about the
On 20 Jun 2005, at 15:59, Jacques Caron wrote:
Hi,
At 16:44 20/06/2005, Alex Stapleton wrote:
We never delete
anything (well not often, and not much) from the tables, so I am not
so worried about the VACUUM status
DELETEs are not the only reason you might need to VACUUM. UPDATEs
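The mechanics behind Jacques' point: every UPDATE in PostgreSQL writes a new row version and leaves the old one behind as a dead tuple, so an update-heavy table needs VACUUM even if nothing is ever deleted. With a made-up table:

```sql
-- Each UPDATE leaves one dead row version behind.
UPDATE counters SET hits = hits + 1 WHERE id = 1;
UPDATE counters SET hits = hits + 1 WHERE id = 1;

-- Plain VACUUM reclaims the dead versions for reuse
-- without taking an exclusive lock.
VACUUM counters;
```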
On 20 Jun 2005, at 18:46, Josh Berkus wrote:
Alex,
Hi, i'm trying to optimise our autovacuum configuration so that it
vacuums / analyzes some of our larger tables better. It has been set
to the default settings for quite some time. We never delete
anything (well not often, and not much)
On 21 Jun 2005, at 18:13, Josh Berkus wrote:
Alex,
Downtime is something I'd rather avoid if possible. Do you think we will need to run VACUUM FULL occasionally? I'd rather not lock tables up unless I can't avoid it. We can probably squeeze an automated vacuum tied to our data inserters
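If you are still on 7.4 (as mentioned earlier in the thread), autovacuum is the contrib pg_autovacuum daemon, tuned with command-line switches rather than postgresql.conf settings. Something along these lines; the flag values are purely illustrative and the exact switches should be checked against your contrib version's README:

```
# Vacuum a table when dead tuples exceed base + scale * reltuples;
# analyze on a separate (lower) threshold. Illustrative numbers only.
pg_autovacuum -D -v 1000 -V 0.5 -a 500 -A 0.25 -s 300
```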
On 8 Jul 2005, at 20:21, Merlin Moncure wrote:
Stuart,
I'm putting together a road map on how our systems can scale as our load increases. As part of this, I need to look into setting up some fast read-only mirrors of our database. We should have more than enough RAM to fit
On 2 Sep 2005, at 10:42, Richard Huxton wrote:
Ricardo Humphreys wrote:
Hi all.
In a cluster, is there any way to use the main memory of the other nodes instead of the swap? If I have a query with many sub-queries and a lot of data, I can easily fill all the memory in a node. The
On 28 Sep 2005, at 15:32, Arnau wrote:
Hi all,
I have been googling a bit, searching for info about a way to monitor PostgreSQL (CPU, memory, number of processes, ...) and I haven't found anything relevant. I'm using Munin to monitor other parameters of my servers and I'd like to include
On 16 Nov 2005, at 12:51, William Yu wrote:
Alex Turner wrote:
Not at random access in RAID 10 they aren't, and anyone with their head screwed on right is using RAID 10. The 9500S will still beat the Areca cards at the RAID 10 database access pattern.
The max 256MB onboard for 3ware cards
On 1 Dec 2005, at 16:03, Tom Lane wrote:
Michael Riess [EMAIL PROTECTED] writes:
(We NEED that many tables, please don't recommend to reduce them)
No, you don't. Add an additional key column to fold together different tables of the same structure. This will be much more efficient than
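Tom's suggestion, concretely: instead of one table per data source, keep a single table with a discriminator column and put that column first in the index. The names below are invented for illustration:

```sql
-- One table replaces N structurally identical ones.
CREATE TABLE readings (
    source_id integer   NOT NULL,  -- which "table" this row came from
    time      timestamp NOT NULL,
    value     double precision
);

-- Leading on source_id makes per-source queries about as cheap
-- as querying a dedicated table would be.
CREATE INDEX readings_src_time_idx ON readings (source_id, time);
```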
I hope this isn't too far off topic for this list. Postgres is the main application that I'm looking to accommodate. Anything else I can do with whatever solution we find is just gravy...
You've given me a lot to go on... Now I'm going to have to do some research as to real-world RAID
We got a quote for one of these (entirely for comedy value, of course) and it was in the region of £1,500,000, give or take a few thousand.
On 16 Mar 2006, at 18:33, Jim Nasby wrote:
PostgreSQL tuned to the max and still too slow? Database too big to
fit into memory? Here's the solution!
On 12 Jun 2006, at 00:21, Joshua D. Drake wrote:
Mario Splivalo wrote:
On Sat, 2006-06-03 at 11:43 +0200, Steinar H. Gunderson wrote:
On Sat, Jun 03, 2006 at 10:31:03AM +0100, [EMAIL PROTECTED] wrote:
I do have 2 identical beasts (4G - biproc Xeon 3.2 - 2 Gig NIC)
One beast will be apache,
On 3 Oct 2006, at 16:04, Merlin Moncure wrote:
On 10/3/06, Carlo Stonebanks [EMAIL PROTECTED] wrote:
Some very helpful people had asked that I post the troublesome code that was generated by my import program.
I installed a SQL log feature in my import program. I have posted samples of the
On 23 Oct 2006, at 22:59, Jim C. Nasby wrote:
http://stats.distributed.net used to use a perl script to do some transformations before loading data into the database. IIRC, when we switched to using C we saw a 100x improvement in speed, so I suspect that if you want performance perl isn't the