On 2011-03-18 01:51, Oliver Charles wrote:
Hello,
At MusicBrainz we're looking to get a new database server, and are
hoping to buy this in the next couple of days. I'm mostly a software
guy, but I'm posting this on behalf of Rob, who's actually going to be
buying the hardware. Here's a quote of
Hello list,
I am working on an Entity-Attribute-Value (EAV) database using PostgreSQL
8.4.7. The basic problem is that when joining different entities multiple
times, the planner thinks there are vastly fewer rows to join than there
are in reality, and decides to use multiple nested loops for
Hello
for example, queries with a LIMIT clause can be significantly faster with
a nested loop. But you don't need to disable nested loops globally.
You can wrap your query in SQL functions and disable nested loops just
for those functions.
Regards
Pavel Stehule
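Pavel's suggestion can be sketched like this. The table and column names are hypothetical, and the per-function SET clause requires PostgreSQL 8.3 or later; the setting applies only while the function runs:

```sql
-- Illustrative wrapper: disable nested loops for this function only.
CREATE OR REPLACE FUNCTION get_entity_values(p_entity_id integer)
RETURNS TABLE (attr_id integer, value text) AS $$
    SELECT a.attr_id, a.value
    FROM entity e
    JOIN entity_attribute a ON a.entity_id = e.id
    WHERE e.id = p_entity_id;
$$ LANGUAGE sql STABLE
SET enable_nestloop = off;   -- reverts automatically when the call returns
```

The rest of the session keeps its normal planner settings, so other queries are unaffected.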
2011/3/18 Anssi Kääriäinen
On 18-3-2011 4:02 Scott Marlowe wrote:
On Thu, Mar 17, 2011 at 6:51 PM, Oliver Charles
postgresql-p...@ocharles.org.uk wrote:
Another point. My experience with 1U chassis and cooling is that they
don't move enough air across their cards to make sure they stay cool.
You'd be better off
On Fri, Mar 18, 2011 at 1:16 AM, Arjen van der Meijden
acmmail...@tweakers.net wrote:
On 18-3-2011 4:02 Scott Marlowe wrote:
On Thu, Mar 17, 2011 at 6:51 PM, Oliver Charles
postgresql-p...@ocharles.org.uk wrote:
Another point. My experience with 1U chassis and cooling is that they
don't
On 03/18/2011 09:02 AM, Pavel Stehule wrote:
for example, queries with a LIMIT clause can be significantly faster with
a nested loop. But you don't need to disable nested loops globally.
You can wrap your query in SQL functions and disable nested loops just
for those functions.
Thank you for your
18.03.11 09:15, Anssi Kääriäinen wrote:
Hello list,
I am working on an Entity-Attribute-Value (EAV) database using
PostgreSQL 8.4.7. The basic problem is that when joining different
entities multiple times, the planner thinks there are vastly fewer
rows to join than there are in
Anssi Kääriäinen, 18.03.2011 08:15:
Hello list,
I am working on an Entity-Attribute-Value (EAV) database using
PostgreSQL 8.4.7. The basic problem is that when joining different
entities multiple times, the planner thinks there are vastly fewer
rows to join than there are in reality and decides
On 03/18/2011 12:52 PM, Vitalii Tymchyshyn wrote:
If your queries work on a single attribute, you can try adding partial
indexes for different attributes. Note that in this case parameterized
statements may prevent index usage, so also check with the attribute id inlined.
Best regards, Vitalii
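Vitalii's partial-index idea, as a sketch — the table, column, and attribute id here are made up for illustration:

```sql
-- One partial index per frequently queried attribute of the EAV table.
CREATE INDEX entity_attr_color_idx
    ON entity_attribute (entity_id, value)
    WHERE attr_id = 42;              -- 42 standing in for the 'color' attribute

-- The planner can only match the index predicate when attr_id is a
-- literal in the query, which is why a parameterized $1 may not use it:
SELECT entity_id, value
FROM entity_attribute
WHERE attr_id = 42 AND value = 'red';
```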
On 03/18/2011 01:14 PM, Thomas Kellerer wrote:
Did you consider using hstore instead?
I think in the PostgreSQL world this is a better alternative to EAV, and most
probably faster as well.
No, we did not. The reason is that we want to track each attribute with
bi-temporal timestamps. The
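For reference, the hstore approach Thomas mentions collapses all attributes into one column. Names are illustrative; on 8.4 the module is installed by running the contrib hstore.sql script rather than CREATE EXTENSION (which is 9.1+):

```sql
CREATE TABLE entity (
    id    serial PRIMARY KEY,
    attrs hstore                       -- all attributes in one column
);

INSERT INTO entity (attrs) VALUES ('color => red, size => large');

-- Attribute lookup; a GIN index makes this fast:
CREATE INDEX entity_attrs_idx ON entity USING gin (attrs);
SELECT id FROM entity WHERE attrs -> 'color' = 'red';
```

The trade-off, as the reply above notes, is that per-attribute metadata such as bi-temporal timestamps no longer has a natural home.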
On Mar 17, 2011, at 9:39 PM, Scott Marlowe wrote:
My experience puts the 23xx series opterons in a same general
neighborhood as the E5300 and a little behind the E5400 series Xeons.
OTOH, the newer Magny Cours Opterons stomp both of those into the
ground.
Do any of those machines have
On 18-3-2011 10:11, Scott Marlowe wrote:
On Fri, Mar 18, 2011 at 1:16 AM, Arjen van der Meijden
acmmail...@tweakers.net wrote:
On 18-3-2011 4:02 Scott Marlowe wrote:
We have several 1U boxes (mostly Dell and Sun) running and had several in
the past. And we've never had any heating problems
On 03/18/2011 12:17 AM, Adarsh Sharma wrote:
Thanks, it works now :-)
Here is the output :
pdc_uima=# SELECT count(*) from page_content WHERE publishing_date like
'%2010%' and
pdc_uima-# content_language='en' and content is not null and isprocessable = 1
and
pdc_uima-#
Hi all,
Our system has a postgres database with a statistics table that is
updated every hour by about 10K clients. Each client only updates its
own row in the table. So far I am only seeing one core out of eight on
my server being active, which tells me that the update is
Red Maple redmaplel...@gmail.com wrote:
Our system has a postgres database with a statistics table
that is updated every hour by about 10K clients. Each client only
updates its own row in the table. So far I am only seeing
one core out of eight on my server being active
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Red Maple
Sent: Friday, March 18, 2011 9:05 AM
To: pgsql-performance@postgresql.org
Subject: [PERFORM] Help: massive parallel update to the same table
Hi all,
Our system has a postgres
Andy Colson a...@squeakycode.net wrote:
On 03/17/2011 09:25 AM, Michael Andreasen wrote:
I've been looking around for information on doing a pg_restore as
fast as possible.
I am using a twin processor box with 2GB of memory
shared_buffers = 496MB
Probably about right.
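For a one-off restore, the settings commonly suggested in threads like this trade crash safety for speed. A sketch (illustrative values; revert everything afterwards, since fsync = off can lose or corrupt data on a crash):

```
# postgresql.conf, temporarily, for a bulk pg_restore only:
fsync = off                    # DANGEROUS: no crash safety during the load
full_page_writes = off
maintenance_work_mem = 512MB   # speeds up the index builds at the end
checkpoint_segments = 64       # fewer checkpoints during the bulk load
autovacuum = off               # no point analyzing half-loaded tables
```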
On Fri, Mar 18, 2011 at 7:52 AM, Vitalii Tymchyshyn tiv...@gmail.com wrote:
18.03.11 09:15, Anssi Kääriäinen wrote:
Hello.
If your queries work on a single attribute, you can try adding partial indexes
for different attributes. Note that in this case parameterized statements
may prevent
Thanks , it works now .. :-)
Here is the output :
pdc_uima=# SELECT count(*) from page_content WHERE publishing_date like
'%2010%' and
pdc_uima-# content_language='en' and content is not null and
isprocessable = 1 and
pdc_uima-# to_tsvector('english',content) @@
On Fri, Mar 18, 2011 at 3:19 AM, Jesper Krogh jes...@krogh.cc wrote:
* Depending on your workload, of course, you're typically not
bottlenecked by the number of CPU cores, so strive for fewer,
faster cores.
Depending on your workload again, but faster memory is even more
important than faster
[rearranged - please don't top-post]
[also, bringing this back to the list - please keep the list copied]
Red Maple redmaplel...@gmail.com wrote:
Kevin Grittner kevin.gritt...@wicourts.gov wrote:
It should be parallel by default. Are you taking out any
explicit locks?
my clients use
On Fri, Mar 18, 2011 at 6:44 AM, Arjen van der Meijden
acmmail...@tweakers.net wrote:
On 18-3-2011 10:11, Scott Marlowe wrote:
On Fri, Mar 18, 2011 at 1:16 AM, Arjen van der Meijden
acmmail...@tweakers.net wrote:
On 18-3-2011 4:02 Scott Marlowe wrote:
We have several 1U boxes (mostly Dell
Hi,
Here is my function. If I comment out the update then it runs on all the
cores; if not, then only one core runs
CREATE OR REPLACE FUNCTION my_update_device(this_mac text, number_of_devices
integer, this_sysuptime integer)
RETURNS integer AS
$BODY$
DECLARE
fake_mac
On 3/18/2011 9:38 AM, Kevin Grittner wrote:
Andy Colson a...@squeakycode.net wrote:
On 03/17/2011 09:25 AM, Michael Andreasen wrote:
I've been looking around for information on doing a pg_restore as
fast as possible.
bgwriter_lru_maxpages = 0
I hadn't thought much about that last one --
Red Maple redmaplel...@gmail.com wrote:
Here is my function. If I comment out the update then it runs on
all the cores; if not, then only one core runs
CREATE OR REPLACE FUNCTION
[...]
select sysuptime
into this_sysuptime
from ap_sysuptime
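One way the update in a function like this can serialize 10K concurrent callers is if every transaction locks the same row (or all rows). A sketch of the suspicion — column names are guessed from the fragment above, not taken from the actual function:

```sql
-- If the function's UPDATE has no (or a non-selective) WHERE clause,
-- every client queues on the same row locks and runs one at a time:
UPDATE ap_sysuptime
   SET sysuptime = this_sysuptime;        -- locks every row in the table

-- A per-client predicate lets the updates proceed in parallel,
-- since row-level locks on distinct rows don't conflict:
UPDATE ap_sysuptime
   SET sysuptime = this_sysuptime
 WHERE mac = this_mac;                    -- touches only the caller's row
```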
On Fri, Mar 18, 2011 at 10:32 AM, Scott Marlowe scott.marl...@gmail.com wrote:
On Fri, Mar 18, 2011 at 6:44 AM, Arjen van der Meijden
acmmail...@tweakers.net wrote:
On 18-3-2011 10:11, Scott Marlowe wrote:
On Fri, Mar 18, 2011 at 1:16 AM, Arjen van der Meijden
acmmail...@tweakers.net wrote:
I have a large table but not as large as the kind of numbers that get
discussed on this list. It has 125 million rows.
REINDEXing the table takes half a day, and it's still not finished.
To write this post I did SELECT COUNT(*), and here's the output -- so long!
select count(*) from links;
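If plain REINDEX is too slow partly because it holds a lock on the table, a workaround often suggested (available since 8.2) is to build a replacement index without blocking writes and swap it in. The index and column names below are illustrative, not from the poster's schema:

```sql
-- Build the new index without taking an exclusive lock on "links":
CREATE INDEX CONCURRENTLY links_url_idx_new ON links (url);

-- Once it is valid, drop the bloated old index and take over its name:
DROP INDEX links_url_idx;
ALTER INDEX links_url_idx_new RENAME TO links_url_idx;
```

Note that CREATE INDEX CONCURRENTLY takes longer in total than a plain build, but the table stays writable throughout.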
On Fri, Mar 18, 2011 at 9:07 PM, Phoenix Kiula phoenix.ki...@gmail.com wrote:
I have a large table but not as large as the kind of numbers that get
discussed on this list. It has 125 million rows.
REINDEXing the table takes half a day, and it's still not finished.
To write this post I did
On Fri, Mar 18, 2011 at 9:07 PM, Phoenix Kiula phoenix.ki...@gmail.com wrote:
autovacuum = on
autovacuum_vacuum_cost_delay = 20
vacuum_cost_delay = 20
vacuum_cost_limit = 600
autovacuum_naptime = 10
also, if vacuum can't keep up you can
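Typical knobs for making (auto)vacuum more aggressive than the settings quoted above, as a sketch — illustrative values, not a recommendation for this particular system:

```
# postgresql.conf: let vacuum do more work per cycle
vacuum_cost_delay = 0              # manual VACUUM runs at full speed
autovacuum_vacuum_cost_delay = 10  # shorter naps than the default 20ms
autovacuum_vacuum_cost_limit = 1000
autovacuum_max_workers = 5         # 8.3+: more tables vacuumed in parallel
```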