On Thu, Dec 6, 2012 at 1:28 AM, Shaun Thomas wrote:
> This isn't a question, but a kind of summary over a ton of investigation
> I've been doing since a recent "upgrade". Anyone else out there with
> "big iron" might want to confirm this, but it seems pretty reproducible.
> This seems to affect t
On Wed, Dec 5, 2012 at 4:04 PM, Shaun Thomas wrote:
> On 12/05/2012 04:41 PM, Bruce Momjian wrote:
>
>> Ah, that is interesting about 2.6. I had wondered how Debian stable
>> would have performed, 2.6.32-5. This relates to a recent discussion
>> about the appropriateness of Ubuntu for database s
On 12/05/2012 04:41 PM, Bruce Momjian wrote:
> Ah, that is interesting about 2.6. I had wondered how Debian stable
> would have performed, 2.6.32-5. This relates to a recent discussion
> about the appropriateness of Ubuntu for database servers:
Hmm. I may have to recant. I just removed our fusionI
On Wed, Dec 5, 2012 at 04:25:28PM -0600, Shaun Thomas wrote:
> On 12/05/2012 04:19 PM, Daniel Farina wrote:
>
> >Is 3.2 a significant regression from previous releases, or is 3.4 just
> >faster? Your wording only indicates that "older kernel is slow," but
> >your tone would suggest that you feel
On 12/05/2012 04:19 PM, Daniel Farina wrote:
> Is 3.2 a significant regression from previous releases, or is 3.4 just
> faster? Your wording only indicates that "older kernel is slow," but
> your tone would suggest that you feel this is a regression, cf.
It's definitely a regression. I'm trying to
On Wed, Dec 5, 2012 at 10:28 AM, Shaun Thomas wrote:
> Hey guys,
>
> This isn't a question, but a kind of summary over a ton of investigation
> I've been doing since a recent "upgrade". Anyone else out there with
> "big iron" might want to confirm this, but it seems pretty reproducible.
> This see
While I can't say I have tried out the 3.4 kernel yet, I can say that I am
running 3.2 too, and maybe there is a connection to the past issues of strange
CPU behavior I have had (which, as you know, you have been so kind to try to
help me solve). I will without a doubt try out 3.4 or 3.6 within the coming
Hi,
I have been struggling with a query for some time, and the major problem with
the query is that the row estimate is way off for a particular operation:
-> Nested Loop (cost=3177.72..19172.84 rows=*2* width=112) (actual
time=139.221..603.929 rows=*355331* loops=1)
Join Filter: (l.location_id = r.l
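A common first step for a join misestimate like this is to raise the per-column
statistics target on the join key and re-ANALYZE, so the planner gets a better
picture of the column. A minimal sketch, not from the thread: the table/column
names are guessed from the aliases in the truncated plan, and the target value
1000 is an arbitrary example (the default is 100).

    -- Minimal sketch, assuming a table "location" with join key "location_id"
    -- (names guessed from the plan aliases above).
    ALTER TABLE location ALTER COLUMN location_id SET STATISTICS 1000;
    ANALYZE location;
    SHOW default_statistics_target;  -- instance-wide default, for comparison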
Hey guys,
This isn't a question, but a kind of summary over a ton of investigation
I've been doing since a recent "upgrade". Anyone else out there with
"big iron" might want to confirm this, but it seems pretty reproducible.
This seems to affect the latest 3.2 mainline and by extension, any
platf
Jeff Janes writes:
> I now see where the cost is coming from. In commit 21a39de5809 (first
> appearing in 9.2) the "fudge factor" cost estimate for large indexes
> was increased by about 10 fold, which really hits this index hard.
> This was fixed in commit bf01e34b556 "Tweak genericcostestimate
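For a rough sense of scale, that fudge term can be computed for a given index
straight from the catalog. The query below is back-of-envelope only: the
divisors reflect the "about 10 fold" change described above but are assumptions
rather than the planner's exact code, and 'my_big_index' is a placeholder name.

    -- Back-of-envelope only: assumes the fudge term is roughly
    --   index_pages * random_page_cost / divisor,
    -- with the divisor about 10x smaller as of 9.2.
    SELECT relpages,
           relpages * current_setting('random_page_cost')::float8 / 100000 AS fudge_pre_9_2,
           relpages * current_setting('random_page_cost')::float8 / 10000  AS fudge_9_2
    FROM pg_class
    WHERE relname = 'my_big_index';  -- placeholder index name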
On Wed, Dec 5, 2012 at 2:39 PM, Jeff Janes wrote:
> I'm not sure that this change would fix your problem, because it might
> also change the costs of the alternative plans in a way that
> neutralizes things. But I suspect it would fix it. Of course, a
> correct estimate of the join size would al
On Tue, Dec 4, 2012 at 3:42 PM, Jeff Janes wrote:
(Regarding http://explain.depesz.com/s/4MWG, wrote)
>
> But I am curious about how the cost estimate for the primary key look
> up is arrived at:
>
> Index Scan using cons_pe_primary_key on position_effect
> (cost=0.00..42.96 rows=1 width=16)
>
>
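To see the raw inputs behind that estimate, one can pull the index and table
sizes plus the relevant cost settings. The query below is illustrative only
(the object names are taken from the quoted plan; the query itself is not part
of the thread):

    -- Inspect the statistics and settings that feed the index scan estimate.
    SELECT relname, relpages, reltuples
    FROM pg_class
    WHERE relname IN ('cons_pe_primary_key', 'position_effect');
    SHOW random_page_cost;
    SHOW cpu_index_tuple_cost;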
On 12/05/2012 11:51 AM, Jean-David Beyer wrote:
> I thought that PostgreSQL did its own journalling, if that is the
> proper term, so why not use an ext2 file system to lower overhead?
Postgres journalling will not save you from a corrupt file system.
cheers
andrew
On Wed, Dec 5, 2012 at 1:51 PM, Jean-David Beyer wrote:
> I thought that PostgreSQL did its own journalling, if that is the proper
> term, so why not use an ext2 file system to lower overhead?
Because you can still have metadata-level corruption.
On 12/05/2012 10:34 AM, Andrea Suisani wrote:
> [sorry for resuming an old thread]
>
> [cut]
>
>>>> Question is... will that remove the performance penalty of
>>>> HyperThreading?
>>>
>>> So I've added to my todo list to perform a test to verify this claim :)
>>
>> done.
>
> on this box:
>
>> in a
[sorry for resuming an old thread]
[cut]
Question is... will that remove the performance penalty of HyperThreading?
So I've added to my todo list to perform a test to verify this claim :)
done.
on this box:
in brief: the box is a Dell PowerEdge r720 with 16GB of RAM,
the CPU is a Xeon
Hello Suhas,
You need to supply good information for an accurate answer. Please have a look
at this link:
http://wiki.postgresql.org/wiki/Slow_Query_Questions
Kind regards,
Willem
> Date: Wed, 5 Dec 2012 00:10:10 -0800
> From: suha...@verse.in
> To: pgsql-performance@postgresql.org
> Su
Hi,
I have a partitioned table (partitioned on date). There are about 1 million
insertions per day. There is a column called mess_id. This column is updated,
but the update query is taking a huge amount of time. When I checked, this
column is not unique, and most of the time it is null. Say every day out of
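Since the message is cut off, only a generic sketch is possible: time the
update with EXPLAIN (ANALYZE, BUFFERS) inside a transaction that is rolled
back, and if rows are located via the mostly-NULL mess_id column, consider a
partial index that excludes the NULLs. The partition name, literal value, and
WHERE clause below are invented placeholders, not taken from the thread.

    -- Generic sketch only; "messages_2012_12_05" and the predicates are placeholders.
    BEGIN;
    EXPLAIN (ANALYZE, BUFFERS)
    UPDATE messages_2012_12_05
       SET mess_id = 42
     WHERE mess_id IS NULL;
    ROLLBACK;

    -- If lookups are by mess_id itself, a partial index keeps the mostly-NULL
    -- column cheap to search (again, purely illustrative):
    CREATE INDEX CONCURRENTLY messages_2012_12_05_mess_id_idx
        ON messages_2012_12_05 (mess_id)
     WHERE mess_id IS NOT NULL;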