-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Hi,
I'm having the same problem I reported here a few weeks ago. The database is
using too many resources again.
I do a VACUUM FULL each day, but it seems it is not working. I am preparing
an upgrade to PostgreSQL 8.2.4 (I am currently on 8.1.3, and tests
Luke Lonergan [EMAIL PROTECTED] writes:
Right now the pattern for index scan goes like this:
- Find qualifying TID in index
- Seek to TID location in relfile
- Acquire tuple from relfile, return
...
If we implement AIO and allow for multiple pending I/Os used to prefetch
groups of
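The two access patterns under discussion can be sketched in a toy model (this is pure illustration, not PostgreSQL internals; the thread pool stands in for AIO with multiple pending I/Os, and `index`/`relfile` are simple dicts):

```python
# Toy model of the quoted index-scan pattern: find qualifying TIDs
# in the index, then fetch each tuple from the relation file.
# The "prefetch" variant hands a whole batch of TIDs over at once,
# so several reads can be in flight simultaneously.
from concurrent.futures import ThreadPoolExecutor

def index_scan_blocking(index, relfile, key):
    # One synchronous seek-and-read per qualifying TID.
    return [relfile[tid] for tid in index[key]]

def index_scan_prefetch(index, relfile, key, workers=4):
    # Batch the TIDs into a pool: multiple "actuators" at work.
    tids = index[key]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(relfile.__getitem__, tids))
```

Both return the same tuples; the difference is only in how many I/Os can be outstanding at once.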
Last time I had this problem I solved it by stopping the website, restarting
the database, vacuuming it, and starting the website again. But I guess this
is going to happen again.
I would like to detect and solve the problem. Any ideas on how to detect it?
Do you have very long transactions? Maybe some client that is
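A long-open transaction pins dead row versions so VACUUM cannot reclaim them. A minimal sketch of the check being suggested (the `(pid, start)` pairs would come from `pg_stat_activity`; the function name and one-hour threshold are illustrative assumptions):

```python
from datetime import datetime, timedelta

def long_transactions(activity, now, max_age=timedelta(hours=1)):
    # activity: iterable of (pid, transaction_start) pairs.
    # Flag any transaction that has been open longer than max_age.
    return [pid for pid, start in activity if now - start > max_age]
```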
Greg,
I think this seems pretty impractical for regular
(non-bitmap) index probes though. You might be able to do it
sometimes but not very effectively and you won't know when it
would be useful.
Maybe so, though I think it's reasonable to get multiple actuators going
even if the seeks
Hello,
Now that both 4x4 systems are out, it's time for us to decide which one would
be better for our PostgreSQL and Oracle. Especially for Oracle we really need
such a server to squeeze everything from the Oracle licenses. Both databases
handle an OLTP type of load.
Since we plan to buy 4U
Mindaugas,
The Anandtech results appear to me to support a 2.5 GHz Barcelona
performing better than the available Intel CPUs overall.
If you can wait for the 2.5 GHz AMD parts to come out, they'd be a
better bet IMO especially considering 4 sockets. In fact, have you seen
quad QC Intel
Luke Lonergan wrote:
For plans that qualify with the above conditions, the executor will issue
blocking calls to lseek(), which will translate to a single disk actuator
moving to the needed location in seek_time, approximately 8ms.
I doubt it's actually the lseeks, but the reads/writes after
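Putting round numbers on the quoted 8 ms figure (assumed values, not measurements): one blocking seek per page caps a single actuator at roughly 125 pages per second, against thousands per second for a sequential read.

```python
# Back-of-the-envelope throughput for one disk actuator.
SEEK_TIME_S = 0.008          # ~8 ms per random seek, as quoted above
PAGE_BYTES = 8192            # PostgreSQL default block size
SEQ_BANDWIDTH_BPS = 80e6     # assumed 80 MB/s sequential transfer rate

random_pages_per_s = 1 / SEEK_TIME_S                # ~125 pages/s
seq_pages_per_s = SEQ_BANDWIDTH_BPS / PAGE_BYTES    # ~9766 pages/s
```

The two-orders-of-magnitude gap is what makes prefetching across multiple actuators attractive.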
Gregory Stark wrote (in part):
The extra spindles speed up sequential i/o too so the ratio between sequential
and random with prefetch would still be about 4.0. But the ratio between
sequential and random without prefetch would be even higher.
I never figured out how extra spindles help
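A small model of the spindle argument (assumed numbers, not a benchmark): with prefetch every spindle can serve the random stream, so the sequential/random ratio stays flat as spindles are added; without prefetch only one actuator at a time works on it, and the ratio grows.

```python
def seq_random_ratio(spindles, prefetch,
                     seek_s=0.008, seq_pages_per_spindle=1000):
    # Sequential bandwidth scales with spindle count.
    seq = spindles * seq_pages_per_spindle
    # With prefetch all spindles serve random reads; without, one.
    busy = spindles if prefetch else 1
    rand = busy / seek_s
    return seq / rand
```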
Thank you very much for your ideas. I've tried to change the protocol
from C to B and I got an increase in the number of TPS: 64.555763.
Now I would like to follow the advice of Mr. Bernd Helmle and change the
value of snd-bufsize.
The servers are cross connected with a common 100 Mbit/sec
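For reference, the two knobs discussed here (replication protocol and send buffer) live in the DRBD resource configuration. A hedged sketch, assuming DRBD 8.x syntax; the resource name and buffer value are illustrative:

```
resource r0 {
  protocol B;            # B acks once data reaches the peer's buffer;
                         # C waits for the peer's disk write
  net {
    sndbuf-size 512k;    # the send-buffer knob mentioned above
  }
}
```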
On Tue, Sep 11, 2007 at 04:47:40PM +0200, Maila Fatticcioni wrote:
The servers are cross connected with a common 100 Mbit/sec Ethernet so I
think they have a bandwidth around 80 Mbit/sec (even if I haven't yet
done any test on it). A rate of 70Mb seems reasonable to me.
Umm, seriously? Unless
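The unit arithmetic is worth making explicit, since Mbit and MByte are easy to conflate (the 80% effective utilisation is an assumption):

```python
LINK_MBIT_PER_S = 100             # raw Ethernet link speed
EFFECTIVE_FRACTION = 0.8          # assumed usable share of the link

effective_mbit = LINK_MBIT_PER_S * EFFECTIVE_FRACTION   # 80 Mbit/s
effective_mbyte = effective_mbit / 8                    # 10 MByte/s
```

So "70Mb" is plausible if it means megabits; as megabytes it would exceed the physical link by a factor of several.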
On Tue, 11 Sep 2007, Mindaugas wrote:
Now that both 4x4 out it's time for us to decide which one should be
better for our PostgreSQL and Oracle.
You're going to have to wait a bit for that. No one has had both to
compare for long enough yet to reach a strong conclusion, and you're
probably
Hi List;
I've recently started cleaning up some postgres DBs which, prior to my
recent arrival, had no DBA to care for them.
I quickly figured out that there were several tables which were grossly full
of dead space. One table in particular had 75G worth of dead pages (or the
equivalent in
Kevin Kempter wrote:
It's difficult to try and vacuum this table during the day as it seems to
begin blocking all the other queries against the database after some time.
Vacuum can generate so much I/O that it overwhelms all other
transactions, but it shouldn't block other queries otherwise.
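One way to keep a daytime vacuum from swamping other transactions with I/O is cost-based vacuum delay, available since 8.0. A postgresql.conf sketch; the values are illustrative starting points, not recommendations:

```
# postgresql.conf -- throttle vacuum's I/O appetite
vacuum_cost_delay = 20      # ms to sleep once the cost limit is reached
vacuum_cost_limit = 200     # accumulated page cost before sleeping
```

The trade-off is that a throttled vacuum takes longer, which matters if you are already struggling to finish one per day.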
Jean-David Beyer [EMAIL PROTECTED] writes:
Gregory Stark wrote (in part):
The extra spindles speed up sequential i/o too so the ratio between
sequential and random with prefetch would still be about 4.0. But the ratio
between sequential and random without prefetch would be even higher.
El-Lotso [EMAIL PROTECTED] writes:
Sorry... I sent this as I was about to go to bed, just as the EXPLAIN ANALYZE
of the query with 4 tables joined per subquery came out.
It's those factor-of-1000 misestimates of the join sizes that are
killing you, eg this one:
- Hash Join
Simon Riggs schrieb:
On Fri, 2007-09-07 at 11:37 +0200, Maila Fatticcioni wrote:
protocol C;
Try protocol B instead.
Sure? I've always heard that there has yet to be a case found where B is
better than C. We use DRBD with protocol C and are quite happy with it.
Hi,
I have a table containing some ~13 million rows. Queries on
indexed fields run fast, but unanchored pattern queries on a
text column are slow. Indexing the column doesn't help
(this is already mentioned in the manual).
http://www.postgresql.org/docs/8.2/interactive/indexes-types.html
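The manual's point is easy to see with a toy model: a btree keeps values sorted, so an anchored prefix (`foo%`) maps to one contiguous range, while an unanchored `%foo%` pattern matches rows scattered anywhere and forces a full scan. A sketch (pure illustration, not PostgreSQL code):

```python
import bisect

def prefix_search(sorted_vals, prefix):
    # Anchored pattern 'foo%': binary-search the contiguous range.
    lo = bisect.bisect_left(sorted_vals, prefix)
    hi = bisect.bisect_left(sorted_vals, prefix + "\uffff")
    return sorted_vals[lo:hi]

def unanchored_search(vals, needle):
    # Unanchored '%foo%': nothing for a sorted index to narrow down,
    # so every value must be inspected.
    return [v for v in vals if needle in v]
```

In practice the usual workarounds in that era were full-text search (tsearch2) or the contrib pg_trgm trigram module.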
Fernan Aguero schrieb:
Hi,
I have a table containing some ~13 million rows. Queries on
indexed fields run fast, but unanchored pattern queries on a
text column are slow. Indexing the column doesn't help
(this is already mentioned in the manual).
Gregory Stark wrote:
Jean-David Beyer [EMAIL PROTECTED] writes:
Gregory Stark wrote (in part):
The extra spindles speed up sequential i/o too so the ratio between
sequential and random with prefetch would still be about 4.0. But the ratio
Mindaugas wrote:
Hello,
Now that both 4x4 systems are out, it's time for us to decide which one would
be better for our PostgreSQL and Oracle. Especially for Oracle we really need
such a server to squeeze everything from the Oracle licenses. Both databases
handle an OLTP type of load.
Since
On Wed, Sep 05, 2007 at 11:06:03AM -0400, Carlo Stonebanks wrote:
Unfortunately, LINUX is not an option at this time. We looked into it; there
is no *NIX expertise in the enterprise. However, I have raised this issue in
various forums before, and when pressed no one was willing to say that *NIX
On Thu, Sep 06, 2007 at 11:26:46AM +0200, Willo van der Merwe wrote:
Richard Huxton wrote:
Willo van der Merwe wrote:
Hi guys,
I have the rare opportunity to spec the hardware for a new database
server. It's going to replace an older one, driving a social networking
web application. The
On Fri, Sep 07, 2007 at 02:10:32PM -0700, [EMAIL PROTECTED] wrote:
3. Easy to set up warm standby functionality. (Then again, if the
postgres server fails miserably, it's likely to be due to a disk
crash).
and if postgres dies for some other reason the image on disk needs repair,
unless
On Mon, Sep 10, 2007 at 12:06:40AM -0400, Greg Smith wrote:
On Sat, 8 Sep 2007, Joshua D. Drake wrote:
You would have to have lightning handed by God to your server to have a
total power failure without proper shutdown in the above scenario.
Do you live somewhere without thunderstorms?
On Tue, Sep 11, 2007 at 03:55:51PM -0500, Decibel! wrote:
Also, to reply to someone else's email... there is one big reason to use
a SAN over direct storage: you can do HA that results in 0 data loss.
Good SANs are engineered to be highly redundant, with multiple
controllers, PSUs, etc, so that
On Mon, Sep 10, 2007 at 12:54:37AM -0400, Tom Lane wrote:
Greg Smith [EMAIL PROTECTED] writes:
On Sat, 8 Sep 2007, Joshua D. Drake wrote:
You would have to have lightning handed by God to your server to have a
total power failure without proper shutdown in the above scenario.
Do you
On Tue, Sep 11, 2007 at 04:57:24PM +0200, Steinar H. Gunderson wrote:
On Tue, Sep 11, 2007 at 04:47:40PM +0200, Maila Fatticcioni wrote:
The servers are cross connected with a common 100 Mbit/sec Ethernet so I
think they have a bandwidth around 80 Mbit/sec (even if I haven't yet
done any
On Mon, Sep 10, 2007 at 06:22:06PM -0400, Mark Mielke wrote:
In my case, I set effective_cache_size to 25% of the RAM available to
the system (256 Mbytes), for a database that was about 100 Mbytes or
less. I found performance to increase when reducing random_page_cost
from 4.0 to 3.0.
Just
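A rough way to see why a mostly-cached database justifies a lower random_page_cost (this blend is a simplification, not the planner's actual formula): the observed cost of a "random" fetch is a mix of cached (~1.0) and uncached (~4.0) page reads.

```python
def effective_random_cost(cache_hit_frac, cached=1.0, uncached=4.0):
    # Weighted blend of cached and truly-random page fetch costs.
    return cache_hit_frac * cached + (1 - cache_hit_frac) * uncached
```

A 100 MB database under 256 MB of cache should sit near the fully-cached end, which is consistent with the reported gain from lowering the setting.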
On Tue, Sep 11, 2007 at 02:49:28PM -0400, Jean-David Beyer wrote:
It is tricky for me to find a big enough file to test. I tried one of the
dd if=/dev/zero of=bigfile bs=8192 count=100
--
Decibel!, aka Jim Nasby[EMAIL PROTECTED]
EnterpriseDB
Decibel! wrote:
On Mon, Sep 10, 2007 at 06:22:06PM -0400, Mark Mielke wrote:
In my case, I set effective_cache_size to 25% of the RAM available to
the system (256 Mbytes), for a database that was about 100 Mbytes or
less. I found performance to increase when reducing random_page_cost
from
On Tue, Sep 11, 2007 at 09:49:37AM +0200, Ruben Rubio wrote:
[EMAIL PROTECTED] wrote:
Last time I had this problem I solved it by stopping the website, restarting
the database, vacuuming it, and starting the website again. But I guess this
is going to happen
On Tue, Sep 11, 2007 at 10:57:43AM +0300, Mindaugas wrote:
Hello,
Now that both 4x4 systems are out, it's time for us to decide which one would
be better for our PostgreSQL and Oracle. Especially for Oracle we really need
such a server to squeeze everything from the Oracle licenses. Both of the
Decibel! [EMAIL PROTECTED] writes:
On Tue, Sep 11, 2007 at 02:49:28PM -0400, Jean-David Beyer wrote:
It is tricky for me to find a big enough file to test. I tried one of the
dd if=/dev/zero of=bigfile bs=8192 count=100
On Linux another useful trick is:
echo 1 > /proc/sys/vm/drop_caches
On Tue, Sep 11, 2007 at 05:09:00PM -0400, Michael Stone wrote:
On Tue, Sep 11, 2007 at 03:55:51PM -0500, Decibel! wrote:
Also, to reply to someone else's email... there is one big reason to use
a SAN over direct storage: you can do HA that results in 0 data loss.
Good SANs are engineered to be
On Tue, Sep 11, 2007 at 10:24:58AM -0600, Kevin Kempter wrote:
I let the cron vacuum run for 14 days. In those 14 days the time it takes to
vacuum the table grew from 1.2 hours directly after the rebuild to 8 hours
last night.
Sounds to me like daily isn't enough, and that your FSM is too
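The FSM connection can be made concrete (the default max_fsm_pages of that era was 20000; the 75G figure comes from the earlier message): if more pages hold reclaimable space than the free space map can remember, vacuum loses track of them and the table keeps growing no matter how often it runs.

```python
def fsm_sufficient(dead_bytes, page_bytes=8192, max_fsm_pages=20000):
    # Pages whose free space the FSM must track between vacuums.
    pages_needed = dead_bytes // page_bytes
    return pages_needed <= max_fsm_pages

# 75 GB of dead space needs ~9.8M FSM slots, far above the default.
```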
On Wed, Sep 12, 2007 at 12:02:46AM +0100, Gregory Stark wrote:
Decibel! [EMAIL PROTECTED] writes:
On Tue, Sep 11, 2007 at 02:49:28PM -0400, Jean-David Beyer wrote:
It is tricky for me to find a big enough file to test. I tried one of the
dd if=/dev/zero of=bigfile bs=8192 count=100
On Tue, 11 Sep 2007, Decibel! wrote:
On Tue, Sep 11, 2007 at 05:09:00PM -0400, Michael Stone wrote:
On Tue, Sep 11, 2007 at 03:55:51PM -0500, Decibel! wrote:
Also, to reply to someone else's email... there is one big reason to use
a SAN over direct storage: you can do HA that results in 0
On Wed, 12 Sep 2007, Gregory Stark wrote:
Also, it helps to run a vmstat 1 in another window and watch the bi and bo
columns.
Recently on Linux systems I've been using dstat (
http://dag.wieers.com/home-made/dstat/ ) for live monitoring in this sort
of situation. Once you get the command
Greg Smith wrote:
On Wed, 12 Sep 2007, Gregory Stark wrote:
Also, it helps to run a vmstat 1 in another window and watch the bi and bo
columns.
Recently on Linux systems I've been using dstat (
http://dag.wieers.com/home-made/dstat/ ) for
Decibel! wrote:
dons EE hat
Pretty much every surge suppressor out there is a POS... 99.9% of them
just wire a varistor across the line; like a $0.02 part is going to stop
a 10,000+ amp discharge.
The only use I have for those things is if they come with an equipment
guarantee, though I
Yeah, the DAS we are considering is the Dell MD3000. It has redundant
hot-swappable RAID controllers in active-active mode and provision for a hot
spare hard disk, and it can take up to 15 disks in 3U; you can attach two
more MD1000s to it, for a total of 45 disks.
-- Harsh
On 9/12/07, [EMAIL
[Decibel! - Tue at 06:07:44PM -0500]
It's still in the same chassis, though, which means if you lose memory
or mobo you're still screwed. In a SAN setup for redundancy, there's
very little in the way of a single point of failure; generally only the
backplane, and because there's very little