> On Aug 25, 2017, at 17:07, Tom Lane wrote:
>
> Felix Geisendörfer writes:
>> I recently came across a performance difference between two machines that
>> surprised me:
>> ...
>> As you can see, Machine A spends 5889ms on the Sort Node
Why does Sort benefit so massively from the advancement here (~10x), but Seq
Scan, Unique and HashAggregate don't benefit as much (~2x)?
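For anyone wanting to reproduce this kind of machine-to-machine comparison, a minimal sketch of isolating the Sort node's cost (the table and column names here are made up, not from the original plans):

```sql
-- Hypothetical reproduction: time a sort-heavy query in isolation.
-- work_mem is raised so the sort stays in memory on both machines,
-- making the comparison about CPU/RAM speed rather than disk spills.
SET work_mem = '256MB';
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM measurements ORDER BY recorded_at;
```

Comparing the Sort node's "actual time" across machines with identical data and settings is what surfaces differences like the ~10x above.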
As you can probably tell, my hardware knowledge is very superficial, so I
apologize if this is a stupid question. But I'd genuinely like to improve my
understanding and intuition.
on the XEON than on the Intel i7.
Any idea where to search for the bottleneck?
Kind regards,
Felix Schubert
FEScon
... and work flows!
felix schubert
haspelgasse 5
69117 heidelberg
mobile: +49-151-25337718
mail: in...@fescon.de
skype: fesmac
best regards,
Felix Schubert
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
Hi Scott,
the controller is a HP i410 running 3x300GB SAS 15K / Raid 5
Kind regards,
Felix Schubert
Sent from my iPhone :-)
On 25 Aug 2012, at 14:42, Scott Marlowe scott.marl...@gmail.com wrote:
On Sat, Aug 25, 2012 at 6:07 AM, Felix Schubert in...@fescon.de wrote:
Hello
Don't know but I forwarded the question to the System Administrator.
Anyhow thanks for the information up to now!
best regards,
Felix
On 25 Aug 2012, at 14:59, Scott Marlowe scott.marl...@gmail.com wrote:
Well it sounds like it does NOT have a battery-backed caching module on
it, am I right?
Hi, I am running a 9.1 server on Ubuntu. When I upgraded to the current version
I did a pg_dump followed by pg_restore and found that the db was much faster.
But it slowed down again after two days. I did the dump-restore again and could
now compare the two (actually identical) databases. This is
without the cluster?
regards,
Felix
I posted many weeks ago about a severe problem with a table that was
obviously bloated and was stunningly slow. Up to 70 seconds just to get a
row count on 300k rows.
I removed the text column, so it really was just a few columns of fixed
data.
Still very bloated. Table size was 450M
The advice
, and then use the init script in /etc/init.d to restart it.
I'm getting another Slicehost slice; hopefully I can clone the whole thing
over without doing a full install, and go screw around with it there.
It's a fairly complicated install, even with buildout doing most of the
configuration.
=felix
+1
this is exactly what I was looking for at the time: a -t (configtest)
option to pg_ctl.
And I think it should fall back to lower shared buffers and log it;
SHOW ALL; would then show the value actually in use.
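As an aside for readers on newer releases: this thread predates it, but PostgreSQL 9.5 and later ship a built-in way to spot configuration-file entries that failed to apply, which covers part of what a configtest option would do:

```sql
-- PostgreSQL 9.5+: list postgresql.conf entries that could not be
-- applied, with the file and line where each problem occurs.
SELECT sourcefile, sourceline, name, setting, error
FROM pg_file_settings
WHERE error IS NOT NULL;
```

It still won't catch a shared_buffers value the kernel refuses at startup, which is the fallback behavior being asked for above.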
On Mon, Feb 7, 2011 at 11:30 AM, Marti Raudsepp ma...@juffo.org wrote:
On Mon, Feb 7, 2011 at
On Mon, Feb 7, 2011 at 6:05 AM, Shaun Thomas stho...@peak6.com wrote:
That’s one of the things I talked about. To be safe, PG will start to shut
down but disallow new connections, and **that’s all**. Old connections are
grandfathered in until they disconnect, and when they all go away, it
BRUTAL
http://www.postgresql.org/docs/8.3/interactive/runtime-config-resource.html
max_fsm_pages
See Section 17.4.1
(http://www.postgresql.org/docs/8.3/interactive/kernel-resources.html#SYSVIPC)
for information on how to adjust those parameters, if necessary.
I see absolutely nothing in there
On Sun, Feb 6, 2011 at 4:23 PM, Scott Marlowe scott.marl...@gmail.com wrote:
Let's review:
1: No test or staging system used before production
no, I do not have a full ubuntu machine replicating the exact memory and
application load of the production server.
this was changing one
yeah, it already uses memcached with db save. nothing important in session
anyway
the session table is not the issue
and I never clustered that one or ever will
thanks for the tip, also the other one about HOT
On Sun, Feb 6, 2011 at 8:19 PM, Pierre C li...@peufeu.com wrote:
I have
I am having huge performance problems with a table. Performance deteriorates
every day and I have to run REINDEX and ANALYZE on it every day. Autovacuum
is on. Yes, I am reading the other thread about count(*) :)
but obviously I'm doing something wrong here
explain analyze select count(*)
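When an exact count isn't required, the planner's own estimate sidesteps the full scan that makes count(*) slow on a bloated table. A sketch, using the table name from later in this thread:

```sql
-- Approximate row count from planner statistics; accuracy depends on
-- how recently the table was vacuumed or analyzed.
SELECT reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname = 'fastadder_fastadderstatus';
```

A large gap between reltuples and relpages-implied density is also a quick bloat indicator.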
sorry, reply was meant to go to the list.
-- Forwarded message --
From: felix crucialfe...@gmail.com
Date: Fri, Feb 4, 2011 at 5:17 PM
Subject: Re: [PERFORM] Really really slow select count(*)
To: stho...@peak6.com
On Fri, Feb 4, 2011 at 4:00 PM, Shaun Thomas stho...@peak6.com
reply was meant for the list
-- Forwarded message --
From: felix crucialfe...@gmail.com
Date: Fri, Feb 4, 2011 at 4:39 PM
Subject: Re: [PERFORM] Really really slow select count(*)
To: Greg Smith g...@2ndquadrant.com
On Fri, Feb 4, 2011 at 3:56 PM, Greg Smith g
On Fri, Feb 4, 2011 at 5:35 PM, Shaun Thomas stho...@peak6.com wrote:
vacuumdb -a -v -z > vacuum.log 2>&1
And at the end of the log, it'll tell you how many pages it wants, and how
many pages were available.
this is the dev, not live. but this is after it gets done with that table:
CPU
vacuumdb -a -v -z -U postgres -W > vacuum.log 2>&1
that's all, isn't it?
it did each db
8.3 in case that matters
the very end:
There were 0 unused item pointers.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO: analyzing public.seo_partnerlinkcategory
On 02/04/2011 11:44 AM, felix wrote:
the very end:
There were 0 unused item pointers.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO: analyzing public.seo_partnerlinkcategory
INFO: seo_partnerlinkcategory: scanned 0 of 0 pages, containing 0 live rows
and 0 dead
commit;
that sounds like a good approach.
gentlemen, 300,000 + thanks for your generous time !
(a small number, I know)
-felix
ah right, duh.
yes, I did it as -U postgres, verified as a superuser
just now did it from inside psql as postgres
\c djns4
vacuum verbose analyze;
still no advice on the pages
On Fri, Feb 4, 2011 at 8:34 PM, Scott Marlowe scott.marl...@gmail.com wrote:
On Fri, Feb 4, 2011 at 12:26 PM, felix
management every once in a while. thanks guys.
it won't run now because it's too big, I can delete them from psql though
well just think how sprightly my website will run tomorrow once I fix these.
On Fri, Feb 4, 2011 at 9:00 PM, Shaun Thomas stho...@peak6.com wrote:
On 02/04/2011 01:59 PM, felix wrote:
thanks for the replies !,
but actually I did figure out how to kill it,
though pg_cancel_backend didn't work. Here are some notes:
this has been hung for 5 days:
ns | 32681 | nssql | IDLE in transaction | f | 2010-12-01 15
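For sessions stuck like this, a sketch of finding and terminating long-idle transactions. Note this uses modern column names; on the 8.3-era server in this thread the columns were procpid and current_query, and state had to be inferred from current_query = '<IDLE> in transaction':

```sql
-- PostgreSQL 9.2+ column names: terminate sessions that have sat
-- idle inside a transaction for more than a day.
SELECT pid, xact_start,
       pg_terminate_backend(pid) AS terminated
FROM pg_stat_activity
WHERE state = 'idle in transaction'
  AND xact_start < now() - interval '1 day';
```

pg_terminate_backend kills the whole backend; pg_cancel_backend only cancels the current query, which does nothing for a session that is idle, which is likely why it "didn't work" here.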
resulting in: fastadder_fastadderstatus: scanned 3000 of
at 8:34 AM, felix crucialfe...@gmail.com wrote:
thanks !
of course now, 2 hours later, the queries run fine.
the first one was locked up for so long that I interrupted it.
maybe that caused it to get blocked
saved your query for future reference, thanks again !
On Fri, Nov 26, 2010 at 5:00
Hello,
I have a very large table that I'm not too fond of. I'm revising the design
now.
Up until now it's been insert-only, storing tracking codes from incoming
web traffic.
It has 8m rows
It appears to insert fine, but simple updates using psql are hanging.
update ONLY traffic_tracking2010 set
.
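When a simple UPDATE hangs like this, the usual first diagnostic is to check whether another session holds a conflicting lock. A sketch for modern servers (pg_blocking_pids was added in 9.6; it did not exist when this was posted):

```sql
-- PostgreSQL 9.6+: show blocked sessions, who blocks them, and
-- what they are trying to run.
SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;
```

On older versions the same answer comes from joining pg_locks to itself on the locked relation where granted is false.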
--
... _._. ._ ._. . _._. ._. ___ .__ ._. . .__. ._ .. ._.
Felix Finch: scarecrow repairman rocket surgeon / [EMAIL PROTECTED]
GPG = E987 4493 C860 246C 3B1E 6477 7838 76E9 182E 8151 ITAR license #4933
I've found a solution to Fermat's Last Theorem but I see I've run out of room o