I have the following query involving a view that I really need to optimise:
SELECT *
FROM
    tokens.ta_tokenhist h INNER JOIN
    tokens.vw_tokens t ON h.token_id = t.token_id
WHERE
    h.sarreport_id = 9
;
Where vw_tokens is defined as:
CREATE VIEW tokens.vw_tokens AS SELECT
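(Since the view body is cut off above, only a hedged starting point: check whether the planner flattens the view into the join, and whether the filter column is indexed. The index name below is a hypothetical assumption, not something from the original post.)
EXPLAIN ANALYZE
SELECT *
FROM tokens.ta_tokenhist h
INNER JOIN tokens.vw_tokens t ON h.token_id = t.token_id
WHERE h.sarreport_id = 9;

-- If the plan shows a sequential scan on ta_tokenhist, indexing the
-- filter column is the usual first step (hypothetical index name):
CREATE INDEX ta_tokenhist_sarreport_id_idx
    ON tokens.ta_tokenhist (sarreport_id);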
Perhaps we should put a link on the home page underneath LATEST RELEASEs
saying
7.2: de-supported
with a link to a scary note along the lines of the above.
ISTM that there are still too many people on older releases.
We probably need an explanation of why we support so many releases (in
Update to 7.4 or later ;-)
Quite seriously, if you're still using 7.2.4 for production purposes
you could justifiably be accused of negligence. There are three or four
data-loss-grade bugs fixed in the later 7.2.x releases, not to mention
security holes; and that was before we abandoned support
On Wed, Nov 16, 2005 at 12:59:21PM -0800, Craig A. James wrote:
> eval {
> local $SIG{ALRM} = sub {die("Timeout");};
> $time = gettimeofday;
> alarm 20;
> $sth = $dbh->prepare("a query that may take a long time...");
> $sth->execute();
> alarm 0;
> };
> if ($@ && $@ =~ /Timeout/) { ... }
> I am mystified by the behavior of "alarm" in conjunction with
> Postgres/perl/DBD. Here is roughly what I'm doing:
> A
On 15-11-2005 15:18, Steve Wampler wrote:
Magnus Hagander wrote:
(This is after putting an index on the (id,name,value) tuple.) That outer seq
scan
is still annoying, but maybe this will be fast enough.
I've passed this on, along with the (strong) recommendation that they
upgrade PG.
Have yo
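For reference, a minimal sketch of the multicolumn index being discussed above; the table and index names are hypothetical stand-ins, since the original schema is not shown:
-- One index covering the (id, name, value) tuple (hypothetical names):
CREATE INDEX attributes_id_name_value_idx
    ON attributes (id, name, value);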
Hi, I just have a little question: does PgPool keep the same session
between different connections? I ask because I have a server with the
following specifications:
P4 3.2 GHz
80 GB SATA drives x 2
1 GB RAM
5 IPs
1200 GB bandwidth
100 Mbit/s port speed.
I am running a PgSQL 8.1 server with 100 m
> At 5TB data, I'd vote that the application is disk I/O bound, and the
> difference in CPU speed at the level of dual opteron vs. dual-core
> opteron is not gonna be noticed.
>
> to maximize disk, try getting a dedicated high-end disk system like
> nstor or netapp file servers hooked up to fiber channel
Yeah, those big disk arrays are real sweet.
One day last week I was in a data center in Arizona when the big LSI/Storagetek
array in the cage next to mine had a hard drive failure. So the alarm shrieked
at like 13225535 decibels continuously for hours. BEEEP BP BP BP.
Of course sinc
Amendment: there are graphs where the 1GB Areca 1160's do not do as
well. Given that they are MySQL specific and that similar usage
scenarios not involving MySQL (as well as most of the usage scenarios
involving MySQL; as I said these did not follow the pattern of the
rest of the benchmarks) s
William Yu wrote:
Our SCSI drives have failed maybe a little less than our IDE drives.
Microsoft, in their database showcase TerraServer project, has
had the same experience. They studied multiple configurations
including a SCSI/SAN solution as well as a cluster of SATA boxes.
They measured a
I am mystified by the behavior of "alarm" in conjunction with
Postgres/perl/DBD. Here is roughly what I'm doing:
use DBI;                           # $dbh below is an already-connected handle
use Time::HiRes qw(gettimeofday);
eval {
    local $SIG{ALRM} = sub { die("Timeout") };   # raise an exception on SIGALRM
    $time = gettimeofday;
    alarm 20;
    $sth = $dbh->prepare("a query that may take a long time...");
    $sth->execute();
    alarm 0;                                     # cancel the alarm on success
};
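If the ALRM handler never fires because the process is blocked inside a C-level driver call, a server-side timeout is one alternative worth trying. A minimal sketch, assuming a PostgreSQL release that has statement_timeout (7.3 or later); the query text is a placeholder:
-- Abort any statement in this session running longer than 20 seconds.
SET statement_timeout = 20000;  -- value is in milliseconds
SELECT ...;                     -- the potentially long-running query
RESET statement_timeout;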
You _ARE_ kidding right? In what hallucination?
The performance numbers for the 1GB cache version of the Areca 1160
are the _grey_ line in the figures, and were added after the original
article was published:
"Note: Since the original Dutch article was published in late
January, we have fin
Arjen van der Meijden wrote:
> On 15-11-2005 15:18, Steve Wampler wrote:
>
>> Magnus Hagander wrote:
>> (This is after putting an index on the (id,name,value) tuple.) That
>> outer seq scan
>> is still annoying, but maybe this will be fast enough.
>>
>> I've passed this on, along with the (strong
On Wed, 2005-11-16 at 12:51, Steinar H. Gunderson wrote:
> On Wed, Nov 16, 2005 at 11:06:25AM -0600, Scott Marlowe wrote:
> > There was a big commercial EMC style array in the hosting center at the
> > same place that had something like a 16 wide by 16 tall array of IDE
> > drives for storing pdf /
On Mon, 2005-11-14 at 18:42 -0500, Tom Lane wrote:
> Steve Wampler <[EMAIL PROTECTED]> writes:
> > We've got an older system in production (PG 7.2.4).
>
> Update to 7.4 or later ;-)
>
> Quite seriously, if you're still using 7.2.4 for production purposes
> you could justifiably be accused of negligence
On Nov 15, 2005, at 3:28 AM, Claus Guttesen wrote:
Hardware-wise I'd say dual core opterons. One dual-core-opteron
performs better than two single-core at the same speed. Tyan makes
At 5TB data, I'd vote that the application is disk I/O bound, and the
difference in CPU speed at the level of
David Boreham wrote:
> I guess I've never bought into the vendor story that there are
> two reliability grades. Why would they bother making two
> different kinds of bearings, motors, etc.? Seems like it's more
> likely an excuse to justify higher prices.
Then how to account for the fact that bleedin
AMD added quad-core processors to their public roadmap for 2007.
Beyond 2007, the quad-cores will scale up to 32 sockets
(using Direct Connect Architecture 2.0)
Expect Intel to follow.
douglas
On Nov 16, 2005, at 9:38 AM, Steve Wampler wrote:
[...]
Got it - the cpu is only
On 11/16/05, Steinar H. Gunderson <[EMAIL PROTECTED]> wrote:
> If you have a cool SAN, it alerts you and removes all data off a disk
> _before_ it starts giving hard failures :-)
>
> /* Steinar */
> --
> Homepage: http://www.sesse.net/
Good point. I have avoided data loss *twice* this year by usin
On Wed, Nov 16, 2005 at 11:06:25AM -0600, Scott Marlowe wrote:
> There was a big commercial EMC style array in the hosting center at the
> same place that had something like a 16 wide by 16 tall array of IDE
> drives for storing pdf / tiff stuff on it, and we had at least one
> failure a month in i
On 11/16/05, David Boreham <[EMAIL PROTECTED]> wrote:
> >Spend a fortune on dual core CPUs and then buy crappy disks... I bet
> >for most applications this system will be IO bound, and you will see a
> >nice lot of drive failures in the first year of operation with
> >consumer grade drives.
>
On Wed, 2005-11-16 at 11:47, Luke Lonergan wrote:
> Scott,
Some cutting for clarity... I agree on the OLTP versus OLAP
discussion.
> Here are the facts so far:
> * Postgres can only use 1 CPU on each query
> * Postgres I/O for sequential scan is CPU limited to 110-120
> MB/s
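For anyone wanting to verify that sequential-scan figure on their own hardware, a minimal sketch; 'bigtable' is a hypothetical table name, and the reported runtime divided into the table's on-disk size approximates MB/s:
-- Forces a full sequential scan and reports the actual execution time.
EXPLAIN ANALYZE SELECT count(*) FROM bigtable;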
Re: [PERFORM] Hardware/OS recommendations for large databases
Oops,
Last point should be worded: “All CPUs on all machines used by a parallel database”
- Luke
On 11/16/05 9:47 AM, "Luke Lonergan" <[EMAIL PROTECTED]> wrote:
Scott,
On 11/16/05 9:09 AM, "Scott Marlowe" <[EMAIL PROTE
Re: [PERFORM] Hardware/OS recommendations for large databases
Scott,
On 11/16/05 9:09 AM, "Scott Marlowe" <[EMAIL PROTECTED]> wrote:
The biggest gain is going from 1 to 2 CPUs (real cpus, like the DC
Opterons or genuine dual CPU mobo, not "hyperthreaded"). Part of the
issue isn't jus
Yes - that very benchmark shows that for a MySQL Datadrive in RAID 10,
the 3ware controllers beat the Areca card.
Alex.
On 11/16/05, Ron <[EMAIL PROTECTED]> wrote:
> Got some hard numbers to back your statement up? IME, the Areca
> 1160's with >= 1GB of cache beat any other commodity RAID
> con
On Wed, 2005-11-16 at 09:33, William Yu wrote:
> Alex Turner wrote:
> > Spend a fortune on dual core CPUs and then buy crappy disks... I bet
> > for most applications this system will be IO bound, and you will see a
> > nice lot of drive failures in the first year of operation with
> > consumer gr
On Wed, 2005-11-16 at 08:51, David Boreham wrote:
> >Spend a fortune on dual core CPUs and then buy crappy disks... I bet
> >for most applications this system will be IO bound, and you will see a
> >nice lot of drive failures in the first year of operation with
> >consumer grade drives.
>
> I
The only questions would be:
(1) Do you need an SMP server at all? I'd claim yes -- you always need
2+ cores whether it's DC or 2P to avoid IO interrupts blocking other
processes from running.
I would back this up. Even for smaller installations (single RAID 1, 1
GB of RAM). Why? Well becaus
I suggest you read this on the difference between enterprise/SCSI and
desktop/IDE drives:
http://www.seagate.com/content/docs/pdf/whitepaper/D2c_More_than_Interface_ATA_vs_SCSI_042003.pdf
This is exactly the kind of vendor propaganda I was talking about
and it proves my point quite
I guess I've never bought into the vendor story that there are
two reliability grades. Why would they bother making two
different kinds of bearings, motors, etc.? Seems like it's more
likely an excuse to justify higher prices. In my experience the
expensive SCSI drives I own break frequently while t
David Boreham wrote:
>Spend a fortune on dual core CPUs and then buy crappy disks... I bet
>for most applications this system will be IO bound, and you will see a
>nice lot of drive failures in the first year of operation with
>consumer grade drives.
I guess I've never bought into the vendo
Alex Turner wrote:
Spend a fortune on dual core CPUs and then buy crappy disks... I bet
for most applications this system will be IO bound, and you will see a
nice lot of drive failures in the first year of operation with
consumer grade drives.
Spend your money on better Disks, and don't bother
>Spend a fortune on dual core CPUs and then buy crappy disks... I bet
>for most applications this system will be IO bound, and you will see a
>nice lot of drive failures in the first year of operation with
>consumer grade drives.
I guess I've never bought into the vendor story that there are
two
Alex Stapleton wrote:
You're going to have to factor in the increased failure rate in your cost
measurements, including any downtime or performance degradation whilst
rebuilding parts of your RAID array. It depends on how long you're
planning for this system to be operational as well, of course.
David Boreham wrote:
> Steve Wampler wrote:
>
>> Joshua D. Drake wrote:
>>
>>
>>> The reason you want the dual core cpus is that PostgreSQL can only
>>> execute 1 query per cpu at a time,...
>>>
>>
>>
>> Is that true? I knew that PG only used one cpu per query, but how
>> does PG know how ma
Steve Wampler wrote:
Joshua D. Drake wrote:
The reason you want the dual core cpus is that PostgreSQL can only
execute 1 query per cpu at a time,...
Is that true? I knew that PG only used one cpu per query, but how
does PG know how many CPUs there are to limit the number of queries?
Joshua D. Drake wrote:
> The reason you want the dual core cpus is that PostgreSQL can only
> execute 1 query per cpu at a time,...
Is that true? I knew that PG only used one cpu per query, but how
does PG know how many CPUs there are to limit the number of queries?
--
Steve Wampler -- [EMAIL P
On 16 Nov 2005, at 12:51, William Yu wrote:
Alex Turner wrote:
Not at random access in RAID 10 they aren't, and anyone with their
head screwed on right is using RAID 10. The 9500S will still beat the
Areca cards at RAID 10 database access pattern.
The max 256MB onboard for 3ware cards is
Got some hard numbers to back your statement up? IME, the Areca
1160's with >= 1GB of cache beat any other commodity RAID
controller. This seems to be in agreement with at least one
independent testing source:
http://print.tweakers.net/?reviews/557
RAID HW from Xyratex, Engenio, or Dot Hill
I agree - you can get a very good one from www.acmemicro.com or
www.rackable.com with 8x 400GB SATA disks and the new 3Ware 9550SX SATA
RAID controller for about $6K with two Opteron 272 CPUs and 8GB of RAM
on a Tyan 2882 motherboard. We get about 400MB/s sustained disk read
performance on thes
James Mello wrote:
Unless there was a way to guarantee consistency, it would be hard at
best to make this work. Convergence on large data sets across boxes is
non-trivial, and diffing databases is difficult at best. Unless there
was some form of automated way to ensure consistency, going 8 ways i
Alex Turner wrote:
Not at random access in RAID 10 they aren't, and anyone with their
head screwed on right is using RAID 10. The 9500S will still beat the
Areca cards at RAID 10 database access pattern.
The max 256MB onboard for 3ware cards is disappointing though. While
good enough for 95% o
Hi Luke,
> It is very important with the 3Ware cards to match the driver to the
> firmware revision.
> So, if you can get your “dd bigfile” test to write data at 50MB/s+
> with a blocksize of 8KB, you should be doing well enough.
I recompiled my kernel, added the driver and:
[EMAIL PROTECTED]:~