No problem, hope it helps. The single most important part of any
fast, transactional server is the RAID controller and its cache.
On Sat, Aug 25, 2012 at 3:26 PM, Felix Schubert wrote:
> Don't know but I forwarded the question to the System Administrator.
>
> Anyhow thanks for the information up to now!
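To put rough numbers on the cache point: without a battery-backed write
cache, every COMMIT has to wait for the platters, which caps a 15K RPM
drive at roughly 250 single-row transactions per second; with write-back
caching the controller acknowledges the flush immediately and the same
box can do thousands. A minimal way to see the effect from psql (table
name invented for illustration):

  CREATE TABLE commit_test (id int);
  \timing on
  INSERT INTO commit_test VALUES (1);   -- autocommit: one fsync per statement
  BEGIN;
  INSERT INTO commit_test SELECT g FROM generate_series(1, 10000) g;
  COMMIT;                               -- one fsync for the whole batch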
Don't know but I forwarded the question to the System Administrator.
Anyhow thanks for the information up to now!
best regards,
Felix
On 25.08.2012 at 14:59, Scott Marlowe wrote:
> Well it sounds like it does NOT have a battery-backed caching module on
> it, am I right?
On Sat, Aug 25, 2012 at 6:53 AM, Felix Schubert wrote:
> Hi Scott,
>
> the controller is a HP i410 running 3x300GB SAS 15K / Raid 5
Well it sounds like it does NOT have a battery-backed caching module on
it, am I right?
Hi Scott,
the controller is a HP i410 running 3x300GB SAS 15K / Raid 5
Best regards,
Felix Schubert
Sent from my iPhone :-)
On 25.08.2012 at 14:42, Scott Marlowe wrote:
On Sat, Aug 25, 2012 at 6:07 AM, Felix Schubert wrote:
> Hello List,
>
> I've got a system at a customer's location which has a XEON E5504 @ 2.00GHz
> processor (HP ProLiant).
>
> It's PostgreSQL 8.4 on a Debian Squeeze system with 8GB of RAM:
>
> The Postgres Performance on this system measu
Hello List,
I've got a system at a customer's location which has a XEON E5504 @ 2.00GHz
processor (HP ProLiant).
It's PostgreSQL 8.4 on a Debian Squeeze system with 8GB of RAM.
The PostgreSQL performance on this system, measured with pgbench, is very poor:
transaction type: TPC-B (sort of)
sca
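When pgbench numbers come out this poor, the write-path settings are the
first thing to check; these are all standard SHOW commands
(checkpoint_segments is the pre-9.5 name, which fits this 8.4 system):

  SHOW fsync;
  SHOW synchronous_commit;
  SHOW shared_buffers;
  SHOW checkpoint_segments;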
I am going to test another database to check the performance of the
hardware.
-Kai
"Kai Otto" wrote:
> Time taken:
>
> 35.833 ms (i.e. roughly 35 seconds)
Which is it? 35 ms or 35 seconds?
> Number of rows:
>
> 121830
>
> Number of columns:
>
> 38
> This is extremely slow for a database server.
>
> Can anyone help me in finding the problem?
> "Seq Scan on "Frame"
On August 31, 2011 11:26:57 AM Andy Colson wrote:
> When you ran it, did it really feel like 30 seconds? Or did it come
> right back real quick?
>
> Because your report says:
> > 35.833 ms
>
> That's ms, or milliseconds, or 0.035 seconds.
>
I think the "." is a thousands separator in some locales.
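Whether "." groups thousands or marks the decimal depends on the locale
of whatever formatted the number, and that is typically the client tool
rather than the server; still, the server-side setting is easy to check
(the value in the comment is only an example):

  SHOW lc_numeric;   -- e.g. 'de_DE.UTF-8', where 35.833 would mean 35833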
When you ran it, did it really feel like 30 seconds? Or did it come
right back real quick?
Because your report says:
> 35.833 ms
That's ms, or milliseconds, or 0.035 seconds.
-Andy
Hi all,
I am running a simple query:
SELECT * FROM public."Frame"
Time taken:
35.833 ms (i.e. roughly 35 seconds)
Number of rows:
121830
Number of columns:
38
This is extremely slow for a database server.
Can anyone help me in finding the problem?
Thanks,
KOtto
Client
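One way to settle the ms-versus-seconds question is to let the server
report its own timing, which is always in milliseconds with a "." decimal
point. A sketch only; the plan and numbers below are illustrative, not
Kai's actual output:

  EXPLAIN ANALYZE SELECT * FROM public."Frame";
  -- Seq Scan on "Frame" (cost=0.00..6000.30 rows=121830 width=200)
  --   (actual time=0.014..95.210 rows=121830 loops=1)
  -- Total runtime: 130.448 ms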
On 06/28/2011 07:50 PM, Craig McIlwee wrote:
I was thinking that shared buffers controlled the amount of data,
primarily table and index pages, that the database could store in
memory at once. Based on that assumption, I thought that a larger
value would enable an entire table + index to be in
On 06/28/2011 07:26 PM, Craig McIlwee wrote:
Yes, the data import is painfully slow but I hope to make up for that
with the read performance later.
You can probably improve that with something like this:
shared_buffers=512MB
checkpoint_segments=64
Maybe bump up maintenance_work_mem too, if th
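Putting those suggestions together, a bulk-import setup might look like
the following; the values, table name, and file path are illustrative
only:

  shared_buffers = 512MB
  checkpoint_segments = 64
  maintenance_work_mem = 256MB   -- speeds up index builds after the load

and loading with COPY rather than row-by-row INSERTs:

  COPY traffic_counts FROM '/tmp/counts.csv' WITH CSV;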
On 29.6.2011 01:50, Craig McIlwee wrote:
>> > work_mem: 512MB
>> > shared_buffers: 64MB, 512MB, and 1024MB, each yielded the same query
>> > plan and took the same amount of time to execute give or take a few
>> > seconds
>>
>> shared_buffers doesn't normally impact the query plan; it impacts
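work_mem, on the other hand, is the knob that does change plans: the
planner compares it against the estimated size of each sort or hash. A
hedged sketch of how to watch that happen (table name invented, and the
crossover point depends entirely on the data):

  SET work_mem = '4MB';
  EXPLAIN SELECT link_id, count(*) FROM traffic GROUP BY link_id;
  -- may pick a GroupAggregate fed by an external sort
  SET work_mem = '512MB';
  EXPLAIN SELECT link_id, count(*) FROM traffic GROUP BY link_id;
  -- may switch to an in-memory HashAggregate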
On 29.6.2011 01:26, Craig McIlwee wrote:
>> On 28.6.2011 23:28, Craig McIlwee wrote:
>> Are you sure those two queries are exactly the same? Because the daily
>> case output says the width is 50B, while the half-month case says it's
>> 75B. This might be why the sort/aggregate steps are s
On 06/28/2011 05:28 PM, Craig McIlwee wrote:
Autovacuum is disabled for these tables since the data is never
updated. The tables that we are testing with at the moment will not
grow any larger and have been both clustered and analyzed.
Note that any such prep to keep from ever needing to main
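For reference, the prep Craig describes amounts to the following two
statements (8.3+ syntax; the table and index names here are invented):

  CLUSTER daily_counts USING daily_counts_time_idx;  -- rewrite the table in index order
  ANALYZE daily_counts;                              -- refresh planner statistics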
On 28.6.2011 23:28, Craig McIlwee wrote:
> Daily table explain analyze: http://explain.depesz.com/s/iLY
> Half month table explain analyze: http://explain.depesz.com/s/Unt
Are you sure those two queries are exactly the same? Because the daily
case output says the width is 50B, while the half-month case says it's 75B.
Hello,
I have a handful of queries that are performing very slowly. I realize that I
will be hitting hardware limits at some point, but want to make sure I've
squeezed out every bit of performance I can before calling it quits.
Our database is collecting traffic data at the rate of about 3 mill
Hello
> > Filter: ((COALESCE((at_type)::integer, 1) = 1) AND
> > (COALESCE(at_language, 0::numeric) = 0::numeric))
>
> If this is slow, it must be that the scan of fpuarticletext actually
> returns many more rows than the single row the planner is expecting.
> The reason the estimat
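The COALESCE wrapper is what hides the column from its statistics, so
the planner falls back on a generic guess. If NULL is meant to count as
the default value, spelling that out usually estimates much better; a
sketch, not a tested rewrite of this schema:

  -- as written: opaque to column statistics
  WHERE COALESCE(at_type::integer, 1) = 1
    AND COALESCE(at_language, 0) = 0
  -- equivalent, and the planner can use the stats on at_type and at_language
  WHERE (at_type = 1 OR at_type IS NULL)
    AND (at_language = 0 OR at_language IS NULL)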
"Marten Verhoeven" <[EMAIL PROTECTED]> writes:
> This is the query analysis:
> Nested Loop Left Join (cost=1796.69..3327.98 rows=5587 width=516)
> Join Filter: (fpuarticle.a_no = fpuarticletext.at_a_no)
> Filter: (strpos(lower((COALESCE(fpuarticle.a_code, ''::character
> varying))::text,
Hello
please send the output of your EXPLAIN ANALYZE statement
Regards
Pavel Stehule
On 21/01/2008, Marten Verhoeven <[EMAIL PROTECTED]> wrote:
>
>
> Hi,
>
> Since I moved from PostgreSQL 7.3 to 8.2 I have a query which suddenly runs
> very slow. In 7.3 it was really fast. It seems that the query analyser
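For this case that request would be, reconstructing the query from
Marten's original message:

  EXPLAIN ANALYZE
  SELECT * FROM fpuArticle
  LEFT OUTER JOIN fpuArticleText ON a_No = at_a_No;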
Hi,
Since I moved from PostgreSQL 7.3 to 8.2 I have a query which suddenly runs
very slow. In 7.3 it was really fast. It seems that the query analyser makes
other choices, which I don't understand.
I have the query:
SELECT * FROM fpuArticle
LEFT OUTER JOIN fpuArticleText ON a_No=at_a_No
On 3/29/06, Greg Quinn <[EMAIL PROTECTED]> wrote:
> > how many rows does it return ? a few, or a lot ?
>
> 3000 Rows - 7 seconds - very slow
>
> Which client library may have a problem? I am using OleDb, though haven't
> tried the .NET connector yet.
esilo=# create temp table use_npgsql as select
3000 Rows - 7 seconds - very slow
On my PC (athlon 64 3000+ running Linux), selecting 3000 rows with 4
columns out of a 29 column table takes about 105 ms, including time to
transfer the results and convert them to native Python objects. It takes
about 85 ms on a test table with only th
Hi, Greg,
Greg Quinn wrote:
>>> I populate 3000 records into the table to test PostGreSql's speed.
>>> It takes about 3-4 seconds.
>> When you do the population, is it via inserts or copy?
> Via insert
Are those inserts encapsulated into a single transaction? If not, that's
the reason why it's so slow.
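Markus is pointing at autocommit: without an explicit transaction, each
of the 3000 INSERTs pays for its own commit and fsync. Batched, the
whole load is one commit (values invented for the sketch):

  BEGIN;
  INSERT INTO users VALUES ('a', 'b', 'c', 'd');  -- repeated 3000 times
  COMMIT;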
Greg Quinn wrote:
The query is,
select * from users
which returns 4 varchar fields, there is no where clause
Yes, I am running the default postgres config. Basically I have been a
MySQL user and thought I would like to check out PostGreSql. So I did
a quick performance test. The performance
Via insert

> When you do the population, is it via inserts or copy?
>
> Joshua D. Drake
>
> --
> === The PostgreSQL Company: Command Prompt, Inc. ===
> Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
> Providing the most comprehensive PostgreSQL solutions since 1997
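The reason for the question: COPY streams every row in a single
statement and a single transaction, and for a load like this it is
typically an order of magnitude faster than individual INSERTs. A
sketch, with a hypothetical file path the server can read:

  COPY users FROM 'C:/data/users.txt';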
On 3/28/06, Greg Quinn <[EMAIL PROTECTED]> wrote:
> I am using the OleDb connection driver. In my .NET application, I populate
> 3000 records into the table to test PostGreSql's speed. It takes about 3-4
> seconds.
have you tried:
1. npgsql .net data provider
2. odbc ado.net bridge
merlin
Hello,
I have just installed PostGreSql 8.1 on my Windows XP PC. I created a simple
table called users with 4 varchar fields.
I am using the OleDb connection driver. In my .NET application, I populate
3000 records into the table to test PostGreSql's speed. It takes about 3-4
seconds.
Even
Erik Norvelle <[EMAIL PROTECTED]> writes:
>>> it=> explain select codelemm, sectref, count(codelemm) from indethom
>>> group by codelemm, sectref;
>>> QUERY PLAN
>>> ------------------------------------------------------------------
>>> GroupAggregate (cost=2339900.60..2444149.44
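On a PostgreSQL of that vintage (7.x), a GroupAggregate over 10 million
rows is dominated by the sort that feeds it, so the session's sort
memory is the first thing to raise; sort_mem (renamed work_mem in 8.0)
is set in kilobytes. A sketch:

  SET sort_mem = 65536;   -- 64MB for this session; the default was 1024 (1MB)
  EXPLAIN SELECT codelemm, sectref, count(codelemm)
  FROM indethom GROUP BY codelemm, sectref;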
Greetings all,
This question has probably been asked many times, but I was unable to
use the list archives to search, since the term "Group" matches
thousands of of messages with the names of user groups in them... so
sorry if I'm repeating!
Here's the problem: I have a table of 10,000,000
I ran ANALYZE and the problem was resolved.
Thanks
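That is the classic failure mode: without statistics the planner guesses
row counts and can pick a terrible join strategy, and on 7.3 there is no
autovacuum to gather them for you. After any sizable load:

  ANALYZE;      -- whole database
  ANALYZE t2;   -- or per table, e.g. the tables from the query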
I am in the process of adding PostgreSQL support for an application,
in addition to Oracle and MS SQL.
I am using PostgreSQL version 7.3.2, Red Hat 9.0 on Intel Pentium III
board.
I have a query that generally looks like this:
SELECT t1.col1, t2.col1 FROM t1, t2 WHERE t1.x=t2.y AND t2.p='str