[EMAIL PROTECTED] writes:
> If anyone knows what may cause this problem, or has any other ideas, I
> would be grateful.
Submit the command "VACUUM ANALYZE VERBOSE locations;" on both
servers, and post the output of that. That might help us tell for
sure whether the table is bloated (and needs VAC
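As a point of reference, the requested command and the kind of output to look at (a sketch; the INFO line below is illustrative of the 8.x format, not copied from a real run):

```sql
VACUUM ANALYZE VERBOSE locations;
-- 8.x emits per-table INFO lines roughly of the form:
--   INFO:  "locations": found N removable, M nonremovable row versions in P pages
-- A page count far larger than the live rows warrant is the sign of bloat.
```

Comparing the page counts between the two servers is what makes the bloat visible.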
At 11:02 AM 12/7/2006, Gene wrote:
I'm building a SuperServer 6035B server (16 scsi drives). My schema
has basically two large tables (million+ per day) each which are
partitioned daily, and queried independently of each other. Would
you recommend a raid1 system partition and 14 drives in a raid 10 or
should I create separate parti
I was working on a project that was considering using a Dell/EMC (dell's
rebranded emc hardware) and here's some thoughts on your questions based
on that.
> 1. Is iscsi a decent way to do a san? How much performance do I lose
> vs connecting the hosts directly with a fiber channel controller?
On 12/6/06, Brian Wipf <[EMAIL PROTECTED]> wrote:
> Hmmm. Something is not right. With a 16 HD RAID 10 based on 10K
> rpm HDs, you should be seeing higher absolute performance numbers.
>
> Find out what HW the Areca guys and Tweakers guys used to test the
> 1280s.
> At LW2006, Areca was demons
* Bill Moran ([EMAIL PROTECTED]) wrote:
> What I'm fuzzy on is how to discretely know when I'm overflowing
> work_mem? Obviously, if work_mem is exhausted by a particular
> query, temp files will be created and performance will begin to suck,
I don't believe this is necessarily *always* the case.
Bill Moran <[EMAIL PROTECTED]> writes:
> I haven't been able to find anything regarding how much of the
> shared buffer space PostgreSQL is actually using, as opposed to
> simply allocating.
In 8.1 and up, contrib/pg_buffercache/ would give you some visibility
of this.
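A hedged sketch of what that looks like (assumes the contrib/pg_buffercache module is installed and you are connected to the database of interest; column names per the 8.1 contrib version):

```sql
-- Which relations occupy the most shared buffers right now:
SELECT c.relname, count(*) AS buffers
  FROM pg_buffercache b
  JOIN pg_class c ON b.relfilenode = c.relfilenode
 GROUP BY c.relname
 ORDER BY buffers DESC
 LIMIT 10;

-- Buffers never used since startup have a NULL relfilenode:
SELECT count(*) AS unused_buffers
  FROM pg_buffercache
 WHERE relfilenode IS NULL;
```

A large unused_buffers count suggests shared_buffers is allocated well beyond what the workload touches.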
regards,
Bill Moran <[EMAIL PROTECTED]> writes:
> Does the creation of a temp file trigger any logging?
No; but it wouldn't be hard to add some if you wanted. I'd do it at
deletion, not creation, so you could log the size the file reached.
See FileClose() in src/backend/storage/file/fd.c.
> That leads to
I'm gearing up to do some serious investigation into performance for
PostgreSQL with regard to our application. I have two issues that I
have questions about, and I'll address them in two separate emails.
This one regards tuning shared_buffers.
I believe I have a good way to monitor database acti
I'm gearing up to do some serious investigation into performance for
PostgreSQL with regard to our application. I have two issues that I
have questions about, and I'll address them in two separate emails.
This email regards the tuning of work_mem.
I'm planning on going through all of the queries
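One common way to probe work_mem query by query (a sketch: the table name is hypothetical, and setting work_mem with a unit string needs 8.2 or later; on 8.1 give the value in kB):

```sql
-- Raise work_mem for this session only, then check whether the plan's
-- sorts and hashes still spill to disk (hypothetical table "orders"):
SET work_mem = '64MB';
EXPLAIN ANALYZE
SELECT customer_id, sum(total)
  FROM orders
 GROUP BY customer_id;
```

Repeating the EXPLAIN ANALYZE at different work_mem settings shows where the in-memory/on-disk crossover for that query lies.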
I'm building a SuperServer 6035B server (16 scsi drives). My schema has
basically two large tables (million+ per day) each which are partitioned
daily, and queried independently of each other. Would you recommend a raid1
system partition and 14 drives in a raid 10 or should I create separate
parti
One thing that is clear from what you've posted thus far is that you
are going to need more HDs if you want to have any chance of fully
utilizing your Areca HW.
Do you know off hand where I might find a chassis that can fit 24[+]
drives? The last chassis we ordered was through Supermicro, and t
Arjen van der Meijden <[EMAIL PROTECTED]> writes:
> I've been mailing off-list with Tom and we found at least one
> query that in some circumstances takes a lot more time than it should,
due to it mistakenly choosing to do a bitmap index scan rather than a
> normal index scan.
Just to clue fol
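For anyone who wants to reproduce that kind of comparison: the planner can be steered away from bitmap scans per session (enable_bitmapscan is a real planner GUC; the query below is a hypothetical stand-in for the problem query):

```sql
-- Plan with bitmap scans allowed:
EXPLAIN ANALYZE SELECT * FROM mytable WHERE indexed_col = 42;

-- Same query with bitmap scans disabled, to compare actual times:
SET enable_bitmapscan = off;
EXPLAIN ANALYZE SELECT * FROM mytable WHERE indexed_col = 42;
```

Comparing the two EXPLAIN ANALYZE outputs shows whether the bitmap scan choice is actually the slower one.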
07/12/2006 04:31
SQL_CALC_FOUND_ROWS in POSTGRESQL
In MySQL I'm using the SQL_CALC_FOUND_ROWS command with the following syntax:
SELECT SQL_CALC_FOUND_ROWS name, email, tel FROM mytable WHERE name
<> '' LIMIT 0, 10
to get the recordset data, and
SELECT FOUND_ROWS();
to get the total number of records found
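PostgreSQL has no direct SQL_CALC_FOUND_ROWS equivalent; the usual answer is to run a second count query with the same WHERE clause (a sketch using the poster's own table; note PostgreSQL spells the limit clause LIMIT n OFFSET m, not LIMIT m, n):

```sql
-- Fetch the page of rows:
SELECT name, email, tel FROM mytable WHERE name <> '' LIMIT 10 OFFSET 0;

-- Then get the total separately:
SELECT count(*) FROM mytable WHERE name <> '';
```

The count query can be expensive on large tables, since it has to scan every qualifying row; an approximate count from pg_class.reltuples is sometimes good enough.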
At 03:37 AM 12/7/2006, Brian Wipf wrote:
On 6-Dec-06, at 5:26 PM, Ron wrote:
All this stuff is so leading edge that it is far from clear what
the RW performance of DBMS based on these components will be
without extensive testing of =your= app under =your= workload.
I want the best performance
On 7-12-2006 12:05 Mindaugas wrote:
Now about 2 core vs 4 core Woodcrest. For HP DL360 I see similarly
priced dual core [EMAIL PROTECTED] and four core [EMAIL PROTECTED] According to the
article's scaling data PostgreSQL performance should be similar (1.86GHz
* 2 * 80% = ~3GHz). And the quad core has
These benchmarks are all done using 64 bit linux:
http://tweakers.net/reviews/646
I see. Thanks.
Now about 2 core vs 4 core Woodcrest. For HP DL360 I see similarly priced
dual core [EMAIL PROTECTED] and four core [EMAIL PROTECTED] According to the article's
scaling data PostgreSQL performanc
We're planning new server or two for PostgreSQL and I'm wondering Intel
Core 2 (Woodcrest for servers?) or Opteron is faster for PostgreSQL now?
When I look through hardware sites Core 2 wins. But I believe those tests
mostly are being done in 32 bits. Does the picture change in 64 bits?
We
These benchmarks are all done using 64 bit linux:
http://tweakers.net/reviews/646
Best regards,
Arjen
On 7-12-2006 11:18 Mindaugas wrote:
Hello,
We're planning new server or two for PostgreSQL and I'm wondering Intel
Core 2 (Woodcrest for servers?) or Opteron is faster for PostgreSQL now?
Hello,
We're planning new server or two for PostgreSQL and I'm wondering Intel
Core 2 (Woodcrest for servers?) or Opteron is faster for PostgreSQL now?
When I look through hardware sites Core 2 wins. But I believe those tests
mostly are being done in 32 bits. Does the picture change in 64 bi
On 7-12-2006 7:01 Jim C. Nasby wrote:
Can you post them on the web somewhere so everyone can look at them?
No, it's not (only) the size that matters, it's the confidentiality I'm
not allowed to just break by myself. Well, at least not on a scale like
that. I've been mailing off-list with Tom and
On 6-Dec-06, at 5:26 PM, Ron wrote:
At 06:40 PM 12/6/2006, Brian Wipf wrote:
I appreciate your suggestions, Ron. And that helps answer my question
on processor selection for our next box; I wasn't sure if the lower
MHz speed of the Kentsfield compared to the Woodcrest but with double
the cores w