1= RAID 1 improves data *integrity*, not IO performance.
Your HD IO performance is essentially that of one 160GB HD, at whatever
performance one of those HDs has.
(what kind of HDs are they anyway? For instance 7200rpm 160GB HDs
are not particularly high performance)
BEST case is streaming IO
On 4/4/07, Alexandre Vasconcelos [EMAIL PROTECTED] wrote:
We have an application supposed to sign documents and store them
somewhere. The file sizes may vary from KB to MB. Developers are
arguing about the reasons to store files directly on the operating
system file system or in the database, as
On 12.04.2007, at 07:26, Ron wrote:
You need to buy RAM and HD.
Before he does that, wouldn't it be more useful to find out WHY he
has so much IO?
Have I missed that or has nobody suggested finding the slow queries
(when you have much IO on them, they might be slow at least with a
At 10:08 AM 4/12/2007, Guido Neitzer wrote:
On 12.04.2007, at 07:26, Ron wrote:
You need to buy RAM and HD.
Before he does that, wouldn't it be more useful to find out WHY he
has so much IO?
1= Unless I missed something, the OP described pg being used as a
backend DB for a webserver.
I
On 12.04.2007, at 08:59, Ron wrote:
1= Unless I missed something, the OP described pg being used as a
backend DB for a webserver.
Yep.
I know the typical IO demands of that scenario better than I
sometimes want to.
:-(
Yep. Same here. ;-)
2= 1GB of RAM + effectively 1 160GB HD = p*ss
On Thu, 12 Apr 2007, Jason Lustig wrote:
0 -- BM starts here
10 0 180 700436 16420 9174000 0 176 278 2923 59 41 0 0 0
11 0 180 696736 16420 9174000 0 0 254 2904 57 43 0 0 0
12 0 180 691272 16420 9174000 0 0 255 3043 60
Steve [EMAIL PROTECTED] writes:
- What is temp_buffers used for exactly?
Temporary tables. Pages of temp tables belonging to your own backend
don't ever get loaded into the main shared-buffers arena, they are read
into backend-local memory. temp_buffers is the max amount (per backend)
of
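A minimal sketch of how that plays out in a session (the size shown is an arbitrary example, not a recommendation; note that temp_buffers can only be changed before the session first touches a temporary table):

```sql
-- must run before any temporary table is used in this session
SET temp_buffers = '32MB';

-- pages of this table are buffered in backend-local memory,
-- never in the shared-buffers arena
CREATE TEMP TABLE scratch (id integer, payload text);
```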
On Thu, 2007-04-12 at 10:19, Guido Neitzer wrote:
On 12.04.2007, at 08:59, Ron wrote:
Depends. As I said - if the whole DB fits into the remaining space,
and a lot of website backend DBs do, it might just work out. But this
seems not to be the case - either the site is chewing on seq
On Thu, 12 Apr 2007, Scott Marlowe wrote:
On Thu, 2007-04-12 at 10:19, Guido Neitzer wrote:
On 12.04.2007, at 08:59, Ron wrote:
Depends. As I said - if the whole DB fits into the remaining space,
and a lot of website backend DBs do, it might just work out. But this
seems not to be the case
On Wednesday 04 April 2007 07:51, Arnau wrote:
Hi Ansgar ,
On 2007-04-04 Arnau wrote:
Josh Berkus wrote:
Is there anything similar in PostgreSQL? The idea behind this is: how
can I do this in PostgreSQL, to have tables that I query on very
often, something like every few seconds, and
Jeff Frost wrote:
You know, I should answer emails at night...
Indeed you shouldN'T ;-)
Carlos
Hey there;
On a Postgres 8.2.3 server, I've got a query that is running very slow in
some cases. With some work, I've determined the 'slow part' of the query.
:) This is a query on a table with like 10 million rows or something like
that. encounter_id is an integer and receipt is of type
Steve [EMAIL PROTECTED] writes:
On a Postgres 8.2.3 server, I've got a query that is running very slow in
some cases.
Could we see the exact definition of that table and its indexes?
It looks like the planner is missing the bitmap scan for some reason,
but I've not seen a case like that
Could we see the exact definition of that table and its indexes?
It looks like the planner is missing the bitmap scan for some reason,
but I've not seen a case like that before.
Also, I assume the restriction on receipt date is very nonselective?
It doesn't seem to have changed the estimated
On Thu, 2007-04-12 at 16:03, Steve wrote:
Hey there;
On a Postgres 8.2.3 server, I've got a query that is running very slow in
some cases. With some work, I've determined the 'slow part' of the query.
:) This is a query on a table with like 10 million rows or something like
that.
Hi all,
Wow! That's a lot to respond to. Let me go through some of the
ideas... First, I just turned on autovacuum, I forgot to do that. I'm
not seeing a major impact though. Also, I know that it's not optimal
for a dedicated server. It's not just for postgres, it's also got our
apache
Oy vey ... I hope this is a read-mostly table, because having that many
indexes has got to be killing your insert/update performance.
Hahaha yeah these are read-only tables. Nightly inserts/updates.
Takes a few hours, depending on how many records (between 4 and 10
usually). But during
Seq Scan on detail_summary ds (cost=0.00..1902749.83 rows=9962 width=4)
Filter: ((receipt = '1998-12-30'::date) AND (encounter_id = ANY
On Thu, 2007-04-12 at 17:04, Steve wrote:
Seq Scan on detail_summary ds (cost=0.00..1902749.83 rows=9962 width=4)
Filter: ((receipt = '1998-12-30'::date) AND (encounter_id = ANY
On 12.04.2007, at 15:58, Jason Lustig wrote:
Wow! That's a lot to respond to. Let me go through some of the
ideas... First, I just turned on autovacuum, I forgot to do that.
I'm not seeing a major impact though. Also, I know that it's not
optimal for a dedicated server.
Hmm, why not?
Steve [EMAIL PROTECTED] writes:
Datum ddd_text_revcmp(PG_FUNCTION_ARGS){
char* arg1=(char*)VARDATA(PG_GETARG_TEXT_P(0));
char* arg2=(char*)VARDATA(PG_GETARG_TEXT_P(1));
if((*arg1) != (*arg2)){
PG_RETURN_INT32(*arg2 - *arg1);
[ itch... ] That code is just completely wrong, because the contents
of a TEXT datum aren't guaranteed null-terminated. It'd be better to
invoke bttextcmp and negate its result.
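For illustration, the negate-bttextcmp idea can even be sketched at the SQL level (the function name is hypothetical, and the thread's actual fix would be done in C for speed):

```sql
-- reverse-order text comparison by negating the result of the
-- built-in btree comparison function bttextcmp
CREATE FUNCTION text_revcmp_sql(text, text) RETURNS integer AS
  'SELECT -bttextcmp($1, $2)'
LANGUAGE sql IMMUTABLE STRICT;
```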
That's not relevant to your immediate problem, but if you've noticed
any strange behavior with your text_revop
So there's a misjudgment of the number of rows returned by a factor of
about 88. That's pretty big. Since you had the same number without the
receipt date (I think...) then it's the encounter_id that's not being
counted right.
Try upping the stats target on that column and running analyze
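That suggestion, sketched as SQL against the table from the thread (the target value is just an example; the 8.2-era default was 10):

```sql
-- raise the per-column statistics target, then re-gather
-- statistics so the planner sees the new histogram
ALTER TABLE detail_summary ALTER COLUMN encounter_id SET STATISTICS 100;
ANALYZE detail_summary;
```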
Scott Marlowe [EMAIL PROTECTED] writes:
So there's a misjudgment of the number of rows returned by a factor of
about 88. That's pretty big. Since you had the same number without the
receipt date (I think...) then it's the encounter_id that's not being
counted right.
I don't think that's
Here's my planner parameters:
seq_page_cost = 1.0 # measured on an arbitrary scale
random_page_cost = 1.5 # same scale as above
cpu_tuple_cost = 0.001 # same scale as above
cpu_index_tuple_cost = 0.0005 # same scale as above
Steve [EMAIL PROTECTED] writes:
Here's my planner parameters:
I copied all these, and my 8.2.x still likes the bitmap scan a lot
better than the seqscan. Furthermore, I double-checked the CVS history
and there definitely haven't been any changes in that area in REL8_2
branch since 8.2.3. So
Steve [EMAIL PROTECTED] writes:
... even if I force it to use the indexes
(enable_seqscan=off) it doesn't make it any faster really :/
Does that change the plan, or do you still get a seqscan?
BTW, how big is this table really (how many rows)?
regards, tom lane
It's a redhat enterprise machine running AMD x64 processors.
Linux ers3.dddcorp.com 2.6.9-42.0.10.ELsmp #1 SMP Fri Feb 16 17:13:42 EST 2007 x86_64 x86_64 x86_64 GNU/Linux
It was compiled by me, straight up, nothing weird at all, no odd compiler
options or whatever :)
So yeah :/ I'm quite
Table size: 16,037,728 rows
With enable_seqscan=off I get:
Bitmap Heap Scan on detail_summary ds (cost=4211395.20..4213045.32 rows=1099 width=4)
Recheck Cond: ((receipt = '1998-12-30'::date) AND (encounter_id = ANY
Here's the explain analyze with seqscan = off:
Bitmap Heap Scan on detail_summary ds (cost=4211395.20..4213045.32 rows=1099 width=4) (actual time=121288.825..121305.908 rows=112 loops=1)
Recheck Cond: ((receipt = '1998-12-30'::date) AND (encounter_id = ANY
Steve [EMAIL PROTECTED] writes:
With enable_seqscan=off I get:
- Bitmap Index Scan on detail_summary_receipt_encounter_idx (cost=0.00..4211395.17 rows=1099 width=0)
Index Cond: ((receipt = '1998-12-30'::date) AND (encounter_id = ANY ...
The explain analyze is pending,
If the other indexes are removed, with enable_seqscan=on:
Bitmap Heap Scan on detail_summary ds (cost=154.10..1804.22 rows=1099 width=4)
Recheck Cond: (encounter_id = ANY
Steve [EMAIL PROTECTED] writes:
Either way, it runs perfectly fast. So it looks like the indexes are
confusing this query, like you suspected. Any advice?
Wow --- sometimes grasping at straws pays off. I was testing here with
just a subset of the indexes to save build time, but I bet that
Okay -- I started leaving indexes on one by one.
The explain broke when the detail_summary_receipt_encounter_idx index was
left on (receipt, encounter_id).
Just dropping that index had no effect, but there's a LOT of indexes that
refer to receipt. So on a hunch I tried dropping all indexes
Steve [EMAIL PROTECTED] writes:
Okay -- I started leaving indexes on one by one.
...
So does this mean I should experiment with dropping those indexes?
No, I think this means there's a planner bug to fix. I haven't quite
scoped out what it is yet, though.
On Tue, 10 Apr 2007, Steve wrote:
- I've set up a configuration (I'll show important values below), and I'm
wondering if there's any way I can actually see the distribution of memory in
the DB and how the memory is being used.
I didn't notice anyone address this for you yet. There is a tool
Steve [EMAIL PROTECTED] writes:
Just dropping that index had no effect, but there's a LOT of indexes that
refer to receipt. So on a hunch I tried dropping all indexes that refer
to receipt date and that worked -- so it's the indexes that contain
receipt date that are the problem.
I'm
Here you go:
detail_summary_b_record_status_idx
detail_summary_batch_id_idx
detail_summary_batchnum_idx
detail_summary_carrier_id_idx
detail_summary_duplicate_id_idx
detail_summary_e_record_status_idx
detail_summary_encounter_id_idx
detail_summary_encounternum_idx
I didn't notice anyone address this for you yet. There is a tool in
contrib/pg_buffercache whose purpose in life is to show you what the shared
buffer cache has inside it. The documentation in that directory walks you
through installing it. The additional variable you'll likely never know is
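Once the module is installed, a typical inspection query looks something like this (modeled on the contrib module's own example; assumes the 8.2-era view columns):

```sql
-- count cached buffers per relation in the current database
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = c.relfilenode
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;
```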