Hello
For example, queries with a LIMIT clause can be significantly faster with
a nested loop. But you don't need to disable nested loops globally: you
can wrap the affected queries in SQL functions and disable nested loops
just for those functions.
Regards
Pavel Stehule
2011/3/18 Anssi Kääriäinen:
> Hello list,
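A minimal sketch of what Pavel describes, with made-up table, column and
function names (the per-function SET clause requires 8.3 or later; the
original poster runs 8.4.7):

CREATE OR REPLACE FUNCTION eav_lookup(p_entity integer)
RETURNS TABLE (value_a text, value_b text) AS
$$
    -- The slow, mis-planned query goes here; names are illustrative only.
    SELECT av1.value, av2.value
    FROM attribute_values av1
    JOIN attribute_values av2 ON av2.entity_id = av1.entity_id
    WHERE av1.entity_id = $1
      AND av1.attribute_id = 1
      AND av2.attribute_id = 2;
$$
LANGUAGE sql STABLE
SET enable_nestloop = off;   -- applies only while this function runs

-- Nested loops stay enabled everywhere else, e.g. for fast LIMIT queries.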
On 18-3-2011 4:02 Scott Marlowe wrote:
On Thu, Mar 17, 2011 at 6:51 PM, Oliver Charles wrote:
Another point. My experience with 1U chassis and cooling is that they
don't move enough air across their cards to make sure they stay cool.
You'd be better off ordering a 2U chassis with 8 3.5" drive bays.
On Fri, Mar 18, 2011 at 1:16 AM, Arjen van der Meijden wrote:
> On 18-3-2011 4:02 Scott Marlowe wrote:
>>
>> On Thu, Mar 17, 2011 at 6:51 PM, Oliver Charles wrote:
>>
>> Another point. My experience with 1U chassis and cooling is that they
>> don't move enough air across their cards to make sure they stay cool.
On 03/18/2011 09:02 AM, Pavel Stehule wrote:
For example, queries with a LIMIT clause can be significantly faster with
a nested loop. But you don't need to disable nested loops globally: you
can wrap the affected queries in SQL functions and disable nested loops
just for those functions.
Thank you for your help.
18.03.11 09:15, Anssi Kääriäinen wrote:
Hello list,
I am working on an Entity-Attribute-Value (EAV) database using
PostgreSQL 8.4.7. The basic problem is that when joining different
entities multiple times, the planner thinks there are vastly fewer
rows to join than there are in reality
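The mis-estimate is easiest to see by comparing the planner's row estimate
with the actual row count in EXPLAIN ANALYZE output; a generic check against
a hypothetical EAV layout (not Anssi's actual schema):

EXPLAIN ANALYZE
SELECT av1.entity_id
FROM attribute_values av1
JOIN attribute_values av2 ON av2.entity_id = av1.entity_id
JOIN attribute_values av3 ON av3.entity_id = av1.entity_id
WHERE av1.attribute_id = 1
  AND av2.attribute_id = 2
  AND av3.attribute_id = 3;
-- Each plan node prints "rows=<estimate>" next to "actual ... rows=<real>".
-- The planner cannot see that an entity having attribute 1 usually also has
-- attributes 2 and 3, so every extra self-join shrinks the estimate further.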
Anssi Kääriäinen, 18.03.2011 08:15:
Hello list,
I am working on an Entity-Attribute-Value (EAV) database using
PostgreSQL 8.4.7. The basic problem is that when joining different
entities multiple times, the planner thinks there are vastly fewer
rows to join than there are in reality and decides
On 03/18/2011 12:52 PM, Vitalii Tymchyshyn wrote:
If your queries work on a single attribute, you can try adding partial
indexes for the different attributes. Note that in this case parameterized
statements may prevent index usage, so also check with the attribute id inlined.
Best regards, Vitalii Tymchyshyn
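A sketch of Vitalii's partial-index idea with invented names: one small index
per frequently queried attribute_id, matched only when the attribute id
appears as a literal:

-- Hypothetical EAV table: attribute_values(entity_id, attribute_id, value)
CREATE INDEX attr_42_idx
    ON attribute_values (entity_id, value)
    WHERE attribute_id = 42;

-- Uses the partial index because "attribute_id = 42" is written literally;
-- the same condition passed as a bound parameter cannot be matched at plan time.
SELECT entity_id, value
FROM attribute_values
WHERE attribute_id = 42
  AND value = 'red';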
On 03/18/2011 01:14 PM, Thomas Kellerer wrote:
Did you consider using hstore instead?
I think in the PostgreSQL world, this is a better alternative than EAV and most
probably faster as well.
No, we did not. The reason is that we want to track each attribute with
bi-temporal timestamps. The act
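For comparison, a minimal sketch of the hstore layout Thomas suggests (hstore
is a contrib module in 8.4; the table and key names are invented). It keeps
all of an entity's attributes in one row, which is exactly what makes
per-attribute bi-temporal history awkward:

CREATE TABLE entity (
    id    serial PRIMARY KEY,
    attrs hstore
);

INSERT INTO entity (attrs) VALUES ('color => red, size => XL');

-- A GIN (or GiST) index supports containment (@>) and key-existence (?) tests.
CREATE INDEX entity_attrs_idx ON entity USING gin (attrs);

SELECT id, attrs -> 'color' AS color
FROM entity
WHERE attrs @> 'size => XL';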
On Mar 17, 2011, at 9:39 PM, Scott Marlowe wrote:
My experience puts the 23xx series opterons in a same general
neighborhood as the E5300 and a little behind the E5400 series Xeons.
OTOH, the newer Magny Cours Opterons stomp both of those into the
ground.
Do any of those machines have zone.re
On 18-3-2011 10:11, Scott Marlowe wrote:
On Fri, Mar 18, 2011 at 1:16 AM, Arjen van der Meijden wrote:
On 18-3-2011 4:02 Scott Marlowe wrote:
We have several 1U boxes (mostly Dell and Sun) running and had several in
the past. And we've never had any heating problems with them. That includes
ma
On 03/18/2011 12:17 AM, Adarsh Sharma wrote:
Thanks, it works now :-)
Here is the output :
pdc_uima=# SELECT count(*) from page_content WHERE publishing_date like
'%2010%' and
pdc_uima-# content_language='en' and content is not null and isprocessable = 1
and
pdc_uima-# to_tsvector('engli
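Full-text predicates like this one are only fast with a matching expression
index on the same to_tsvector() call, configuration included; a generic
example (the index name is invented):

CREATE INDEX page_content_fts_idx
    ON page_content
    USING gin (to_tsvector('english', content));

-- The planner uses it only when the query repeats the identical expression,
-- i.e. to_tsvector('english', content) @@ to_tsquery(...).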
Hi all,
Our system has a postgres database that has a table for statistics which is
updated every hour by about 10K clients. Each client only makes updates to its
own row in the table. So far I am only seeing one core out of eight cores on
my server being active, which tells me that the updates are being serialized.
Red Maple wrote:
> Our system has a postgres database that has a table for statistic
> which is updated every hour by about 10K clients. Each client only
> make update to its own row in the table. So far I am only seeing
> one core out of eight cores on my server being active which tells
> me th
>From: pgsql-performance-ow...@postgresql.org
>[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Red Maple
>Sent: Friday, March 18, 2011 9:05 AM
>To: pgsql-performance@postgresql.org
>Subject: [PERFORM] Help: massive parallel update to the same table
>
>Hi all,
>
>Our system has a postg
Andy Colson wrote:
> On 03/17/2011 09:25 AM, Michael Andreasen wrote:
>> I've been looking around for information on doing a pg_restore as
>> fast as possible.
>> I am using a twin processor box with 2GB of memory
>> shared_buffers = 496MB
Probably about right.
>> maintenance_work_mem = 1
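For reference, the settings that usually dominate bulk-restore speed (values
are illustrative, several are safe only because a failed restore can simply be
re-run, and all of them should be reverted afterwards); on 8.4, pg_restore -j N
additionally restores with N parallel jobs:

maintenance_work_mem = 512MB     # faster index builds and foreign key checks
checkpoint_segments = 64         # fewer, larger checkpoints during the load
fsync = off                      # only acceptable if the restore can be redone
full_page_writes = off
autovacuum = off                 # run ANALYZE manually once the load finishes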
On Fri, Mar 18, 2011 at 7:52 AM, Vitalii Tymchyshyn wrote:
> 18.03.11 09:15, Anssi Kääriäinen wrote:
> Hello.
>
> If your queries work on single attribute, you can try adding partial indexes
> for different attributes. Note that in this case parameterized statements
> may prevent index usage
> Thanks, it works now :-)
>
> Here is the output :
>
> pdc_uima=# SELECT count(*) from page_content WHERE publishing_date like
> '%2010%' and
> pdc_uima-# content_language='en' and content is not null and
> isprocessable = 1 and
> pdc_uima-# to_tsvector('english',content) @@
> to_tsquery('eng
On Fri, Mar 18, 2011 at 3:19 AM, Jesper Krogh wrote:
> * Dependent on your workload of-course, you're typically not
> bottlenecked by the amount of cpu-cores, so strive for fewer
> faster cores.
Depending on your workload again, but faster memory is even more
important than faster math.
So go
[rearranged - please don't top-post]
[also, bringing this back to the list - please keep the list copied]
Red Maple wrote:
> Kevin Grittner wrote:
>> It should be parallel by default. Are you taking out any
>> explicit locks?
> my clients use psql to remotely run an update function on the
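A server-side way to check for the locking Kevin asks about, using the 8.4
catalogs (nothing here is specific to this schema):

-- Backends currently waiting on a lock, and what they are running:
SELECT procpid, current_query
FROM pg_stat_activity
WHERE waiting;

-- Lock requests that have not been granted yet:
SELECT pid, locktype, relation::regclass AS relation, mode
FROM pg_locks
WHERE NOT granted;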
On Fri, Mar 18, 2011 at 6:44 AM, Arjen van der Meijden wrote:
> On 18-3-2011 10:11, Scott Marlowe wrote:
>>
>> On Fri, Mar 18, 2011 at 1:16 AM, Arjen van der Meijden wrote:
>>>
>>> On 18-3-2011 4:02 Scott Marlowe wrote:
>>> We have several 1U boxes (mostly Dell and Sun) running and had several in the past.
Hi,
Here is my function. If I comment out the update, it runs on all the
cores; if not, only one core runs.
CREATE OR REPLACE FUNCTION my_update_device(this_mac text, number_of_devices
integer, this_sysuptime integer)
RETURNS integer AS
$BODY$
DECLARE
fake_mac m
On 3/18/2011 9:38 AM, Kevin Grittner wrote:
Andy Colson wrote:
On 03/17/2011 09:25 AM, Michael Andreasen wrote:
I've been looking around for information on doing a pg_restore as
fast as possible.
bgwriter_lru_maxpages = 0
I hadn't thought much about that last one -- do you have benchmarks
Red Maple wrote:
> Here is my function. If I comment out the update then it would run
> all the cores, if not then only one core will run
> CREATE OR REPLACE FUNCTION
> [...]
> select sysuptime
> into this_sysuptime
> from ap_sysuptime
> where ap_id = this_
On Fri, Mar 18, 2011 at 10:32 AM, Scott Marlowe wrote:
> On Fri, Mar 18, 2011 at 6:44 AM, Arjen van der Meijden wrote:
>> On 18-3-2011 10:11, Scott Marlowe wrote:
>>>
>>> On Fri, Mar 18, 2011 at 1:16 AM, Arjen van der Meijden wrote:
On 18-3-2011 4:02 Scott Marlowe wrote:
We
I have a large table but not as large as the kind of numbers that get
discussed on this list. It has 125 million rows.
REINDEXing the table takes half a day, and it's still not finished.
To write this post I did "SELECT COUNT(*)", and here's the output -- so long!
select count(*) from links;
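A general way to avoid the long exclusive lock of a plain REINDEX (standard
practice, not advice taken from this thread): build a replacement index with
CREATE INDEX CONCURRENTLY and swap it in. The column and index names below are
guesses, and indexes backing PRIMARY KEY or UNIQUE constraints need extra steps:

SET maintenance_work_mem = '1GB';   -- speeds up the build, this session only

-- Writes to the table continue while the new index is built.
CREATE INDEX CONCURRENTLY links_url_idx_new ON links (url);

DROP INDEX links_url_idx;
ALTER INDEX links_url_idx_new RENAME TO links_url_idx;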
On Fri, Mar 18, 2011 at 9:07 PM, Phoenix Kiula wrote:
> I have a large table but not as large as the kind of numbers that get
> discussed on this list. It has 125 million rows.
>
> REINDEXing the table takes half a day, and it's still not finished.
>
> To write this post I did "SELECT COUNT(*)", a
On Fri, Mar 18, 2011 at 9:07 PM, Phoenix Kiula wrote:
> autovacuum = on
> autovacuum_vacuum_cost_delay = 20
>
> vacuum_cost_delay = 20
> vacuum_cost_limit = 600
>
> autovacuum_naptime = 10
also, if vacuum can't keep up you can increase the vacuum
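The usual knobs for letting (auto)vacuum keep up are the cost limit and delay;
illustrative values relative to the settings posted above:

vacuum_cost_limit = 2000               # up from the posted 600
autovacuum_vacuum_cost_delay = 10ms    # down from the posted 20
autovacuum_vacuum_cost_limit = -1      # -1 = inherit vacuum_cost_limit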