On 2003-07-22 09:04:42 +0200, Alexander Priem wrote:
Hi all,
Vincent, you said that using RAID1 you don't have real redundancy. But
RAID1 is mirroring, right? So if one of the two disks should fail, there
should be no data lost, right?
Right. But the proposal was a single disk for WAL,
On Mon, 2003-07-21 at 04:33, Shridhar Daithankar wrote:
Hi Alexander ,
On 21 Jul 2003 at 11:23, Alexander Priem wrote:
[snip]
I use ext3 filesystem, which probably is not the best performer, is it?
No. You also need to check ext2, reiser and XFS. There is no agreement between
users as
Wow, I never figured how many different RAID configurations one could think
of :)
After reading lots of material, forums and of course, this mailing-list, I
think I am going for a RAID5 configuration of 6 disks (18 GB, 15,000 rpm
each); one of those six disks will be a 'hot spare'. I will just
AP == Alexander Priem [EMAIL PROTECTED] writes:
AP Hmmm. I keep changing my mind about this. My Db would be mostly
AP 'selecting', but there would also be pretty much inserting and
AP updating done. But most of the work would be selects. So would
AP this config be OK?
I'm about to order a new
Mindaugas Riauba wrote:
I missed your orig. post, but AFAIK multiprocessing kernels will handle HT
CPUs as 2 CPUs each. Thus, our dual Xeon 2.4 is recognized as 4 Xeon 2.4
CPUs.
This way, I don't think HT would improve any single query (AFAIK no
postgres process uses more
On Tue, Jul 22, 2003 at 11:40:35 +0200,
Vincent van Leeuwen [EMAIL PROTECTED] wrote:
About RAID types: the fastest RAID type by far is RAID-10. However, this
will cost you a lot of usable disk space, so it isn't for everyone. You
need at least 4 disks for a RAID-10 array. RAID-5 is a nice
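To make the space trade-off concrete, here is a small illustrative sketch (not from the thread; the formulas are the standard capacity rules for each level, assuming equal-sized disks and no hot spare):

```python
def usable_gb(level, disks, size_gb):
    """Rough usable capacity for common RAID levels,
    assuming equal-sized disks and no hot spare."""
    if level == "raid10":      # striped mirrors: half the disks hold copies
        return disks // 2 * size_gb
    if level == "raid5":       # one disk's worth of capacity goes to parity
        return (disks - 1) * size_gb
    if level == "raid1":       # full mirror of a single disk's capacity
        return size_gb
    raise ValueError(level)

# Six 18 GB disks, as in the configuration discussed above:
print(usable_gb("raid5", 6, 18))   # 90
print(usable_gb("raid10", 6, 18))  # 54
```

With the same six disks, RAID-10 gives up 36 GB of capacity relative to RAID-5 in exchange for its speed.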
by default -- do you mean there is a way to tell Linux to favor the second
real cpu over the HT one? how?
G.
- Original Message -
From: Bruce Momjian [EMAIL PROTECTED]
Sent: Tuesday, July 22, 2003 6:26 PM
Subject:
SZUCS Gábor wrote:
by default -- do you mean there is a way to tell Linux to favor the second
real cpu over the HT one? how?
Right now there is no way the kernel can tell which virtual CPUs are on
each physical CPU, and that is the problem. Once there is a way,
hyperthreading will be more
Gaetano,
QUERY PLAN
Hash Join (cost=265.64..32000.76 rows=40612 width=263) (actual
time=11074.21..11134.28 rows=10 loops=1)
  Hash Cond: ("outer".id_user = "inner".id_user)
  ->  Seq Scan on user_logs ul (cost=0.00..24932.65 rows=1258965 width=48)
      (actual time=0.02..8530.21 rows=1258966
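The Hash Join node in the plan above builds an in-memory hash table on the smaller input, then probes it while scanning the larger one once. A minimal sketch of that strategy (illustrative only, not PostgreSQL's implementation; the sample rows are made up):

```python
def hash_join(small, large, key):
    """Build a hash table on the smaller relation, probe with the larger.
    Rows are dicts; 'key' is the join column name."""
    table = {}
    for row in small:
        table.setdefault(row[key], []).append(row)
    out = []
    for row in large:                 # one sequential scan of the big side
        for match in table.get(row[key], []):
            out.append({**match, **row})
    return out

users = [{"id_user": 10943, "name": "a"}, {"id_user": 10942, "name": "b"}]
logs = [{"id_user": 10943, "msg": "x"}, {"id_user": 99, "msg": "y"}]
print(hash_join(users, logs, "id_user"))
```

This is why the big seq scan dominates the actual time in the plan: every one of the ~1.25M user_logs rows is read once to probe the hash.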
BM == Bruce Momjian [EMAIL PROTECTED] writes:
BM I know Linux has pageable shared memory, and you can resize the maximum
BM in a running kernel, so it seems they must have abandoned the linkage
BM between shared page tables and the kernel. This looks interesting:
Thanks for the info. You can
Jord Tanner wrote:
On Tue, 2003-07-22 at 10:39, Bruce Momjian wrote:
But CPU affinity isn't related to hyperthreading, as far as I know.
CPU affinity tries to keep processes on the same cpu in case there is
still valuable info in the cpu cache.
It is true that CPU affinity is
On Tue, 2003-07-22 at 11:50, Bruce Momjian wrote:
Jord Tanner wrote:
On Tue, 2003-07-22 at 10:39, Bruce Momjian wrote:
But CPU affinity isn't related to hyperthreading, as far as I know.
CPU affinity tries to keep processes on the same cpu in case there is
still valuable info in the
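The affinity mechanism being discussed can be exercised per-process on Linux; a small sketch using Python's os.sched_setaffinity (Linux-only; CPU 0 is just an example, and on a hyperthreaded box the numbered CPUs include the virtual siblings, which is exactly why the kernel needs the topology information Bruce mentions):

```python
import os

# Pin the current process (pid 0 = self) to CPU 0 so any still-valuable
# data in that CPU's cache keeps being reused by this process.
os.sched_setaffinity(0, {0})
print(os.sched_getaffinity(0))   # -> {0}
```

Note that affinity only keeps a process on a given CPU number; it cannot by itself distinguish a real core from its hyperthreaded sibling.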
Gaetano,
SELECT * from user_logs where id_user in (
10943, 10942, 10934, 10927, 10910, 10909
);
[SNIPPED]
Why does the planner or the executor (I don't know which) not follow
the same strategy?
It is, actually, according to the query plan.
Can you post the EXPLAIN ANALYZE for the
Josh Berkus [EMAIL PROTECTED]
Gaetano,
SELECT * from user_logs where id_user in (
10943, 10942, 10934, 10927, 10910, 10909
);
[SNIPPED]
Why does the planner or the executor (I don't know which) not follow
the same strategy?
It is, actually, according to the query plan.
Can
Josh Berkus [EMAIL PROTECTED]
Gaetano,
QUERY PLAN
Hash Join (cost=265.64..32000.76 rows=40612 width=263) (actual
time=11074.21..11134.28 rows=10 loops=1)
  Hash Cond: ("outer".id_user = "inner".id_user)
  ->  Seq Scan on user_logs ul (cost=0.00..24932.65 rows=1258965
      width=48)
On Tue, 2003-07-22 at 20:34, Castle, Lindsay wrote:
Hi all,
I'm working on a project that has a data set of approximately 6 million rows
with about 12,000 different elements; each element has 7 columns of data.
Are these 7 columns the same for each element?
Ok.. Unless I'm missing something, the data will be static (or near
static). It also sounds as if the structure is common for elements, so
you probably only want 2 tables.
One with 6 million rows and any row information. The other with 6
million * 12000 rows with the element data linking to the
Apologies, let me clear this up a bit (hopefully) :-)
The data structure looks like this:
element
date
num1
num2
num3
num4
units
There are approx 12,000 distinct elements for a total of about 6 million
rows of data.
The scanning technology
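A single-table layout matching the structure above can be sketched like this (illustrative only; the column types and index name are assumptions, shown with Python's built-in sqlite3 so it runs anywhere, though the thread itself is about PostgreSQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE element_data (
        element TEXT,      -- ~12,000 distinct values
        date    TEXT,
        num1    REAL, num2 REAL, num3 REAL, num4 REAL,
        units   TEXT
    )""")
# An index on the element column keeps per-element scans cheap
# even with ~6 million rows in one table.
conn.execute("CREATE INDEX idx_element ON element_data (element)")
conn.execute(
    "INSERT INTO element_data VALUES ('ABC', '2003-07-22', 1, 2, 3, 4, 'kg')")
rows = conn.execute(
    "SELECT num1 FROM element_data WHERE element = 'ABC'").fetchall()
print(rows)   # -> [(1.0,)]
```

One wide table plus an index on element is the simplest starting point; splitting per element into 12,000 tables would make cross-element scans much harder.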
Thanks Rod
My explanations will be better next time. :-)
-Original Message-
From: Rod Taylor [mailto:[EMAIL PROTECTED]
Sent: Wednesday, 23 July 2003 11:41 AM
To: Castle, Lindsay
Cc: Postgresql Performance
Subject: Re: One table or many tables for data set
On Tue, 2003-07-22 at 21:50,
On Tue, 2003-07-22 at 21:50, Rod Taylor wrote:
Ok.. Unless I'm missing something, the data will be static (or near
static). It also sounds as if the structure is common for elements, so
you probably only want 2 tables.
I misunderstood. Do what Joe suggested.
Hi all,
I'm working on a project that has a data set of approximately 6 million rows
with about 12,000 different elements; each element has 7 columns of data.
I'm wondering what would be faster from a scanning perspective (SELECT
statements with some calculations) for this type of set up;
Castle, Lindsay wrote:
The data structure looks like this:
element
date
num1
num2
num3
num4
units
There are approx 12,000 distinct elements for a total of about 6 million
rows of data.
Ahh, that helps! So are the elements evenly distributed,