Perhaps we should put a link on the home page underneath LATEST
RELEASEs saying
7.2: de-supported
with a link to a scary note along the lines of the above.
ISTM that there are still too many people on older releases.
We probably need an explanation of why we support so
On Thu, Nov 17, 2005 at 09:40:42AM +0800, Christopher Kings-Lynne wrote:
In my experience not many pgsql admins have test servers or the skills
to build up test machines with the latest pg_dump, etc. (Seriously.)
In fact, few realise at all that they should use the 8.1 dumper.
Isn't your
Welty, Richard wrote:
David Boreham wrote:
I guess I've never bought into the vendor story that there are
two reliability grades. Why would they bother making two
different kinds of bearing, motor etc ? Seems like it's more
likely an excuse to justify higher prices.
then how to account for
On Nov 16, 2005, at 4:50 PM, Claus Guttesen wrote:
I'm (also) FreeBSD-biased but I'm not sure whether the 5 TB fs will
work so well if tools like fsck are needed. Gvinum could be one option
but I don't have any experience in that area.
Then look into an external filer and mount via NFS.
Isn't your distribution supposed to do this for you? Mine does these days...
A distribution that tries to automatically do a major postgresql update
is doomed to fail - spectacularly...
Chris
That way if someone wanted to upgrade from 7.2 to 8.1, they
can just grab the latest dumper from the website, dump their
old database, then upgrade easily.
But if they're upgrading to 8.1, don't they already have the new
pg_dump? How else are they going to dump their *new* database?
Erm.
Christopher Kings-Lynne wrote:
That most people don't know they should use the new one I understand
though. But I don't see how this will help against that :-)
It'll make it easy...
As the miscreant that caused this thread to get started, let me
*wholeheartedly* agree with Chris. An easy
That way if someone wanted to upgrade from 7.2 to 8.1, they can just
grab the latest dumper from the website, dump their old database, then
upgrade easily.
But if they're upgrading to 8.1, don't they already have the new
pg_dump? How else are they going to dump their *new*
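The upgrade path discussed above can be sketched as a shell session. This is
only a sketch: the install paths, hostnames, and database name are hypothetical,
and both servers must be reachable. The key point is that the *newer* (8.1)
pg_dump is pointed at the *old* (7.2) server.

```shell
# Hypothetical paths/hosts -- adjust for your installation.
# Dump the old 7.2 cluster using the 8.1 pg_dump binary:
/usr/local/pgsql-8.1/bin/pg_dump -h oldhost -p 5432 mydb > mydb.dump

# After installing 8.1 and creating the empty database, restore:
/usr/local/pgsql-8.1/bin/psql -h newhost -p 5432 -d mydb -f mydb.dump
```

A newer pg_dump knows how to read older server catalogs and emit a dump the new
server can load, which is why grabbing the latest dumper first matters.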
On Wed, 2005-11-16 at 19:40, Christopher Kings-Lynne wrote:
Perhaps we should put a link on the home page underneath LATEST RELEASEs
saying
7.2: de-supported
with a link to a scary note along the lines of the above.
ISTM that there are still too many people on older releases.
On Thu, Nov 17, 2005 at 11:07:42PM +0800, Christopher Kings-Lynne wrote:
Isn't your distribution supposed to do this for you? Mine does these
days...
A distribution that tries to automatically do a major postgresql update
is doomed to fail - spectacularly...
Automatically? Well, you can
Rich Doughty [EMAIL PROTECTED] writes:
However, the following query (which i believe should be equivalent)
SELECT *
FROM tokens.ta_tokenhist h
  INNER JOIN tokens.ta_tokens t ON h.token_id = t.token_id
  LEFT JOIN tokens.ta_tokenhist i ON t.token_id = i.token_id AND
On 11/16/05, William Yu [EMAIL PROTECTED] wrote:
Alex Turner wrote:
Spend a fortune on dual core CPUs and then buy crappy disks... I bet
for most applications this system will be IO bound, and you will see a
nice lot of drive failures in the first year of operation with
consumer grade
Just pick up a SCSI drive and a consumer ATA drive.
Feel their weight.
You don't have to look inside to tell the difference.
Alex
On 11/16/05, David Boreham [EMAIL PROTECTED] wrote:
I suggest you read this on the difference between enterprise/SCSI and
desktop/IDE drives:
On 11/16/05, Joshua D. Drake [EMAIL PROTECTED] wrote:
The only questions would be:
(1) Do you need a SMP server at all? I'd claim yes -- you always need
2+ cores whether it's DC or 2P to avoid IO interrupts blocking other
processes from running.
I would back this up. Even for smaller
Remember - large DB is going to be IO bound. Memory will get thrashed
for file block buffers, even if you have large amounts, it's all gonna
be cycled in and out again.
'fraid I have to disagree here. I manage ERP systems for manufacturing
companies of various sizes. My systems are all
Alex Turner wrote:
Opteron 242 - $178.00
Opteron 242 - $178.00
Tyan S2882 - $377.50
Total: $733.50
Opteron 265 - $719.00
Tyan K8E - $169.00
Total: $888.00
You're comparing the wrong CPUs. The 265 is the 2x of the 244 so you'll
have to bump up the price more although not enough to make a
Hi all,
We have been operating a 1.5GB PostgreSQL database for a year, and we have
had problems for nearly a month. Usually everything is OK with the database;
queries execute fast even when they are complicated, but sometimes, and for
half an hour at a time, we see a general slowdown.
The server is a
On 11/17/05, William Yu [EMAIL PROTECTED] wrote:
No argument there. But it's pointless if you are IO bound.
Why would you just accept "we're IO bound, nothing we can do"? I'd do
everything in my power to make my app go from IO bound to CPU bound --
whether by optimizing my code or buying more
Alan Stange wrote:
Not sure I get your point. We would want the lighter one,
all things being equal, right ? (lower shipping costs, less likely
to break when dropped on the floor)
Why would the lighter one be less likely to break when dropped on the
floor?
They'd have less kinetic
Joshua Marsh wrote:
On 11/17/05, William Yu [EMAIL PROTECTED] wrote:
No argument there. But it's pointless if you are IO bound.
Why would you just accept we're IO bound, nothing we can do? I'd do
everything in my power to make my app go from IO bound
On Thu, Nov 17, 2005 at 06:47:09PM +0100, Guillaume Smet wrote:
queries execute fast even when they are complicated, but sometimes, and
for half an hour at a time, we see a general slowdown.
Is it exactly half an hour? What changes at the time that happens
(i.e. what else happens on the machine?).
On Thu, Nov 17, 2005 at 01:04:21PM -0800, Craig A. James wrote:
When I set statement_timeout in the config file, it just didn't do anything
- it never timed out (PG 8.0.3). I finally found in the documentation that
I can do set statement_timeout = xxx from PerlDBI on a per-client basis,
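The per-client approach above can be shown as a psql session (the database and
table names here are hypothetical, not from the original post). The SET applies
only to this connection, so nothing has to change in postgresql.conf:

```shell
# Hypothetical database/table; sets a timeout for this session only.
psql -d mydb <<'SQL'
SET statement_timeout = 60000;  -- milliseconds; 0 disables the timeout
SELECT count(*) FROM some_big_table;
SQL
```

The same `SET statement_timeout = ...` works from DBI's `$dbh->do(...)` on a
per-connection basis.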
Andrew,
Andrew Sullivan wrote:
Is it exactly half an hour? What changes at the time that happens
(i.e. what else happens on the machine?). Is this a time, for
example, when logrotate is killing your I/O with file moves?
No, it's not exactly half an hour. It's just that it slows down for
I forgot to give our non-default postgresql.conf parameters:
shared_buffers = 28800
sort_mem = 32768
vacuum_mem = 32768
max_fsm_pages = 35
max_fsm_relations = 2000
checkpoint_segments = 16
effective_cache_size = 27
random_page_cost = 2
Isn't sort_mem quite high? Remember that
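The reason sort_mem deserves caution is that it is allocated per sort
operation, not per server or even per backend, so the worst case scales with
concurrency. A rough estimate, assuming a hypothetical 50 concurrently sorting
backends (the concurrency figure is an assumption, not from the post):

```shell
# sort_mem = 32768 (KB) is taken from the posted config;
# 50 concurrent sorting backends is an assumed load, for illustration.
sort_mem_kb=32768
backends=50
echo "$(( sort_mem_kb * backends / 1024 )) MB"   # prints "1600 MB"
```

1.6 GB of potential sort memory is easy to reach on a busy server, which is
why raising sort_mem per-session for known-heavy queries is often safer than
a high global value.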
Christopher Kings-Lynne wrote:
Quite seriously, if you're still using 7.2.4 for production purposes
you could justifiably be accused of negligence
Perhaps we should put a link on the home page underneath LATEST RELEASEs
saying
7.2: de-supported
with a link to a scary note along the
I do not really see why all the distributions could not do something like
this, instead of mucking around with special statically compiled pg_dumps
and the like...
Contrib modules and tablespaces.
Plus, no version of pg_dump before 8.0 is able to actually perform such
reliable dumps and reloads
Joshua Marsh [EMAIL PROTECTED] writes:
We all want our systems to be CPU bound, but it's not always possible.
Sure it is, let me introduce you to my router, a 486DX100...
Ok, I guess that wasn't very helpful, I admit.
--
greg
Joshua Marsh [EMAIL PROTECTED] writes:
We all want our systems to be CPU bound, but it's not always possible.
Remember, he is managing a 5 TB database. That's quite a bit different from a
100 GB or even a 500 GB database.
Ok, a more productive point: it's not really the size of the database
[Please copy the mailing list on replies.]
On Thu, Nov 17, 2005 at 05:38:13PM -0800, Craig A. James wrote:
You probably shouldn't set statement_timeout on a global basis
anyway
The server is a one trick pony so setting a global timeout value is
actually appropriate.
Beware that