tion of worker processes], you can limit the user that
> owns the processes (in /etc/security/limits.conf) - which usually is
> "postgres" when PostgreSQL is run as a service.
The easiest way to impose a limit on the entire Postgres cluster is to
run it in a container using Docker. For example, you could use the official
image from hub.docker.com and run it with the "--memory" argument.
https://hub.docker.com/_/postgres/
https://docs.docker.com/engine/reference/commandline/run/
--
Jonathan Rogers
Socialserve.com by Emphasys Software
jrog...@emphasys-software.com
On 10/20/2015 03:45 AM, Pavel Stehule wrote:
>
>
> 2015-10-20 8:55 GMT+02:00 Thomas Kellerer <spam_ea...@gmx.net>:
>
> Jonathan Rogers schrieb am 17.10.2015 um 04:14:
> >>> Yes, I have been looking at both plans and can see where they
>
On 10/14/2015 05:01 AM, Pavel Stehule wrote:
> Hi
>
> 2015-10-14 9:38 GMT+02:00 Jonathan Rogers <jrog...@socialserve.com>:
>
> I have a very complex SELECT for which I use PREPARE and then EXECUTE.
> The first five times I run "explain (a
On 10/16/2015 08:37 AM, Albe Laurenz wrote:
> Jonathan Rogers wrote:
>>> Look at the EXPLAIN ANALYZE output for both the custom plan (one of the
>>> first five executions) and the generic plan (the one used from the sixth
>>> time on) and see if you can find and fix
On 10/14/2015 05:00 AM, Albe Laurenz wrote:
> Jonathan Rogers wrote:
>> I have a very complex SELECT for which I use PREPARE and then EXECUTE.
>> The first five times I run "explain (analyze, buffers) execute ..." in
>> psql, it takes about 1s. Starting wit
the plans themselves yet, what could possibly cause
the prepared statement to be re-planned? I have seen the same behavior
on Postgres 9.2.10 and 9.4.1.
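The switch described above can be reproduced directly in psql (a minimal sketch; the table and query are hypothetical stand-ins for the complex SELECT):

```sql
-- Hypothetical stand-in for the complex SELECT.
PREPARE q(integer) AS
    SELECT * FROM some_table WHERE id = $1;

-- Executions 1 through 5 build a fresh "custom" plan for the supplied
-- parameter value; from the sixth execution on, the server may switch to
-- a cached "generic" plan if its estimated cost looks no worse.
EXPLAIN (ANALYZE, BUFFERS) EXECUTE q(42);
```

On 9.2 and 9.4 there is no setting to control this choice; if the generic plan is worse, the usual workaround is to avoid PREPARE for that query or to interpolate the constant into the SQL.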
--
Sent via pgsql-performance mailing list (pgsql-p
nsight would be
greatly appreciated. A summary of explain (analyze, buffers) can be found
at http://explain.depesz.com/s/qx7f.
Thanks
On Thu, May 23, 2013 at 5:21 PM, Jonathan Morra wrote:
> Sorry for the messy query, I'm very new to writing these complex queries.
> I'll try and
te:
> On 23/05/2013 22:57, Jonathan Morra wrote:
>
> I'm not sure I understand your proposed solution. There is also the
> case to consider where the same patient can be assigned the same device
> multiple times. In this case, the value may be reset at each assi
n Thu, May 23, 2013 at 1:01 PM, Steve Crawford <
scrawf...@pinpointresearch.com> wrote:
> On 05/23/2013 10:57 AM, Jonathan Morra wrote:
>
>> Ultimately I'm going to deploy this to Heroku on a Linux machine (my
>> tests have so far indicated that Heroku is MUCH slower than m
1. Reads is constantly inserted upon. It should never be updated or
deleted.
2. I suppose I can, but that will make my insertion logic very
complicated. I cannot guarantee the order of any of this data, so I might
get reads at any time and also get assignments at any time (historical as
well).
out them now.
On Thu, May 23, 2013 at 10:47 AM, Steve Crawford <
scrawf...@pinpointresearch.com> wrote:
> On 05/23/2013 10:19 AM, Jonathan Morra wrote:
>
>> I am fairly new to squeezing performance out of Postgres, but I hope this
>> mailing list can help me. I h
I am fairly new to squeezing performance out of Postgres, but I hope this
mailing list can help me. I have read the instructions found at
http://wiki.postgresql.org/wiki/Slow_Query_Questions and have tried to
abide by them the best that I can. I am running "PostgreSQL 9.1.7,
compiled by Visual C+
ck to the
server which uses it in a where clause. This way I had just the LIMIT
and was able to remove the OFFSET and things ran great. I don't know
how feasible it is for you to change things application side but it
worked well for me.
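The approach described above is commonly called keyset pagination; a sketch with hypothetical table and column names:

```sql
-- First page: plain LIMIT, no OFFSET.
SELECT id, title
FROM items
ORDER BY id
LIMIT 50;

-- Later pages: the client sends back the last id it received, so an
-- index on id can seek straight to the next page instead of counting
-- and discarding OFFSET rows.
SELECT id, title
FROM items
WHERE id > 1050          -- last id seen by the client
ORDER BY id
LIMIT 50;
```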
Jonathan
I am working on a fuzzy search of a large dataset. Basically, it is a list
of all of the songs, albums, artists, movies, and celebrities exported from
Freebase. Anyway, I was hoping to get a fuzzy search that was nearly as
fast as the full-text search with the new nearest-neighbor GIST indexes,
b
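The nearest-neighbor idea mentioned above is what the pg_trgm distance operator provides (a sketch; the table and column names are hypothetical):

```sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- GiST index that supports the <-> trigram-distance operator.
CREATE INDEX titles_title_trgm_idx
    ON titles USING gist (title gist_trgm_ops);

-- Fuzzy search: rows ordered by trigram distance, served by the index.
SELECT title, title <-> 'beetles' AS dist
FROM titles
ORDER BY title <-> 'beetles'
LIMIT 10;
```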
Does anyone have any suggestions for my problem? (I have to wonder if
I'm somehow just not getting people's attention or what. This is my
second question this week on a public mailing list that has gotten
exactly 0 replies.)
Jonathan
On 7/5/2011 8:18 PM, Jonathan wrote:
I have a query
the planner to use the fast plan?
I found some stuff in the mailing list archives that looks related but I
didn't see any fixes. Apparently the planner hopes the merge join will
find the LIMIT # of rows fairly quickly but instead it winds up scanning
almost the entire table.
Thanks,
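One common workaround for the situation described above is to fence the LIMIT off from the join with OFFSET 0, so the planner costs the merge join for the full row set (a sketch; table names are hypothetical):

```sql
SELECT *
FROM (
    SELECT big.id, big.val
    FROM big
    JOIN other ON other.big_id = big.id
    ORDER BY big.id
    OFFSET 0             -- optimization fence: the LIMIT can't be pushed in
) sub
LIMIT 20;
```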
Another couple of possible ways:
Select groupfield, value
From tbl x1
Where number = (select max(number) from tbl x2
                where x2.groupfield = x1.groupfield)

Select groupfield, value
From tbl x1
Where (groupfield, number) in (select groupfield, max(number)
                               from tbl group by groupfield)
Which is qui
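A third, PostgreSQL-specific formulation of the same greatest-per-group query uses DISTINCT ON:

```sql
-- One row per groupfield: the one with the largest number.
SELECT DISTINCT ON (groupfield) groupfield, value
FROM tbl
ORDER BY groupfield, number DESC;
```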
s TCP connections.
--
Jonathan Gardner
jgard...@jonathangardner.net
t;
I may have stumbled upon this by my ignorance, but I thought I read
that synchronous_commit controlled whether it tries to line up commits
or has a more free-for-all that may cause some intermediate weirdness.
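For the record, synchronous_commit controls whether COMMIT waits for its WAL record to reach disk; with it off, a crash can lose the last few transactions but cannot corrupt the database. It can be changed per session or per transaction:

```sql
-- Session-wide: COMMIT returns before the WAL flush completes.
SET synchronous_commit = off;

-- Or scoped to a single transaction:
BEGIN;
SET LOCAL synchronous_commit = off;
-- ... work ...
COMMIT;
```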
"EXECUTE ins(%s)", (tx,));
> #~ conn.commit()
> #~ cursor.execute("EXECUTE sel" );
> conn.commit()
> d = time() - start
> tx += N
> print "result : %d tps" % (tx / d)
> cursor.execute("DROP TABLE t
ly forgot about
pgbench. I'm going to dump my cheesy python script and play with that
for a while.
s with minor version changes...
On Tue, Dec 8, 2009 at 11:22 AM, Tom Lane wrote:
> Jonathan Foy writes:
> > My vacuums have suddenly started to fail, seemingly at random. I am
> > confused.
>
> > I'm running 8.1.3, with close to a dozen servers, up to 150 datab
Hello
My vacuums have suddenly started to fail, seemingly at random. I am
confused.
I'm running 8.1.3, with close to a dozen servers, up to 150 databases each.
I have 8GB of RAM. Vacuums have started to fail on all servers (though only
the occasional vacuum) with the following error:
VACUUM,ER
Definitely after.
Jonathan
-Original Message-
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Matthew
Wakeling
Sent: Monday, November 23, 2009 1:00 PM
To: Jonathan Blitz
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM
Many thanks.
I'll give it a try and see what happens.
-Original Message-
From: Craig Ringer [mailto:cr...@postnewspapers.com.au]
Sent: Sunday, November 22, 2009 3:25 PM
To: Jonathan Blitz
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Why is the query not using the
performs a sort.
Am I missing something?
Jonathan Blitz
esponses...I've been gradually getting deeper
and deeper into postgres, and am still very much learning as I go. All
advice is very helpful.
Thanks..
2009/11/20 Віталій Тимчишин
>
>
> On 20 November 2009 at 17:01, Jonathan Foy wrote:
>
> This seems to result in the sam
This seems to result in the same problem; should I attempt to pull for a
specific id_nbr/id_qfr, postgres uses the index without a problem. If I try
to join the two tables/views however, it insists on doing a sequential scan
(actually two in this case) and will not use the index. Any other
ideas/e
Hello,
I've inherited some very...interestingly...
designed tables, and am trying to figure out how to make them usable. I've
got an ugly hack in place, but it will not use an index properly, and I'm
hoping someone will be able to point me in the right direction.
Production is running 8.1.3, but
Oops, forgot to CC my reply to the list. Sorry if this gets messed up.
On Thu, Mar 5, 2009 at 12:30 PM, Tom Lane wrote:
> Jonathan Hseu writes:
> > Sort (cost=11684028.44..11761274.94 rows=30898601 width=40)
> >Sort Key: "time"
> >-> Bitmap H
ed setting random_page_cost=1, but it still gave me the bitmap
plan.
Thanks,
Jonathan Hseu
access to the file...
>
> You may want to actually get that to stop. Syslog is a notorious
> performance bottleneck for postgresql.
Can you elaborate? The only reference to this I could find was a
thread from 2004 where someone wasn't rotating his logs.
nditions were in the correct order but again
appears to not matter.
I will try to get a more complex/sophisticated test case running. I'm not
able to post my actual structure or queries but I'll try to produce a better
example of the other (multiple table) case tomorrow.
Thanks.
Jonatha
to efficiently handle it (do something just
like the above query but not have to do the full LIMIT 5 for each set, some
kind of in order merge/heap join?)
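On PostgreSQL 9.3 and later (well after this thread), the per-set LIMIT 5 can be expressed directly with LATERAL, which is essentially the in-order per-group probe being asked about (table and column names are hypothetical):

```sql
SELECT g.id, top.*
FROM groups g
CROSS JOIN LATERAL (
    SELECT i.item_id, i.score
    FROM items i
    WHERE i.group_id = g.id
    ORDER BY i.score DESC
    LIMIT 5              -- top 5 per group via an index on (group_id, score)
) AS top;
```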
Jonathan Gray
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Chris
Sent: Tuesday, July 24, 2
On 4/6/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> The problem seems to be that clan_members_v contains a call to an
> expensive function:
I'll bet that the function is marked VOLATILE. 8.2 is more conservative
about optimizing away volatile functions than previous releases. If
it has no side ef
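The fix implied above can be sketched as follows (the function name and signature are hypothetical):

```sql
-- STABLE promises the function has no side effects and returns the same
-- result for the same arguments within one statement, so the planner may
-- flatten the view and skip calls for rows that are filtered out.
ALTER FUNCTION expensive_fn(integer) STABLE;
```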
On 4/6/07, Tom Lane <[EMAIL PROTECTED]> wrote:
"Jonathan Ellis" <[EMAIL PROTECTED]> writes:
> I can do that... you don't think the fact I mentioned, that
> redefining the view to leave out the expensive function fixes the
> problem, is relevant?
Hm, I&
On 4/6/07, Tom Lane <[EMAIL PROTECTED]> wrote:
"Jonathan Ellis" <[EMAIL PROTECTED]> writes:
> On 4/6/07, Tom Lane <[EMAIL PROTECTED]> wrote:
>> Yeah, it sure is the same plan, and 8.2 seems to be a tad faster right
>> up to the hash join on user_id. I
On 4/6/07, Tom Lane <[EMAIL PROTECTED]> wrote:
"Jonathan Ellis" <[EMAIL PROTECTED]> writes:
> On 4/5/07, Tom Lane <[EMAIL PROTECTED]> wrote:
>>> Is this a regression, or a "feature" of 8.2?
>>
>> Hard to say without EXPLAIN ANALYZE
On 4/5/07, Tom Lane <[EMAIL PROTECTED]> wrote:
"Jonathan Ellis" <[EMAIL PROTECTED]> writes:
> I have a query hitting a view that takes 0.15s on 8.1 but 6s on 8.2.3:
> ...
> Is this a regression, or a "feature" of 8.2?
Hard to say without EXPLAIN ANALYZ
I have a query hitting a view that takes 0.15s on 8.1 but 6s on 8.2.3:
select party_id from clan_members_v cm, clans c
where cm.clan_id = c.id
and c.type = 'standard'
The problem seems to be that clan_members_v contains a call to an
expensive function:
create or replace view clan_member
How about trying:

Select *
From (Select * from t28 where t28.o = 'spec') t28a
Left outer join (t1 JOIN t11 ON
    (t11.o = '<http://example.org>' AND t11.s = t1.o)) ON t28a.s = t1.s

In this way, I think, the where clause on t28 would be performed before the
join rathe
an we expect with them, do anybody have done any experiments with
those processors ?
Regards,
Jonathan Ballet
---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [
suppose I could do but I need to install PostgreSQL there and then copy
over the database.
Maybe I will give it a try.
Jonathan
--
No virus found in this outgoing message.
Checked by AVG Free Edition.
Version: 7.1.394 / Virus Database: 268.7.4/351 - Release Date: 05/29/2006
---
RAM and 8 disks in RAID 10 (SATA).
> Doing an update on a 5 million record table took quite a while, but it did
> fininish. :-)
I am using a laptop :).
Pentium 4 (not 4M) with 1GB of memory - 2 GHz
Must do it on that since the program is aimed for use at home.
Jonathan
h NULL as the default value.
There were, in fact, over 2 million rows in the table rather than 1/4 of a
million so that was part of the problem.
Jonathan
802158) order by batterydescription.
How about changing it into a standard join:
select t1.batterycode, t1.batterydescription, t2.observationdate
from Battery t1,
     (select batterycode, max(observationdate) as observationdate
      from Battery t2
      where patientidentifier = 611802158
      group by batterycode) AS t2
where t1. bat
should be fast enough.
Any ideas?
Jonathan Blitz
On Tuesday 24 February 2004 01:48 pm, Robert Treat wrote:
> On Tue, 2004-02-24 at 12:11, Richard Huxton wrote:
> > On Tuesday 24 February 2004 16:11, Jonathan M. Gardner wrote:
> > > I've written a summary of my findings on
I'm not sure if my original reply made it through. Ignore the last one if
it did.
On Tuesday 24 February 2004 1:48 pm, Robert Treat wrote:
> On Tue, 2004-02-24 at 12:11, Richard Huxton wrote:
> > On Tuesday 24 February 2004 16:
Week: http://redweek.com/. As a result, some queries
that were taking longer than 30 seconds to run now run in a fraction of a
millisecond.
You can view my summary at
http://jonathangardner.net/PostgreSQL/materialized_views/matviews.html
Comments and suggestions are definitely welcome.
- --
Jonathan Gard
. However, for a small set of data in the IN group,
no tablescan is performed.
I assume that (a) works at O(ln(N)) for large N, and O(N) for small N,
while (b) works at O(N) universally. Therefore, (a) is the superior
algorithm. I welcome criticism and correction.
- --
Jonathan Gardner
[EMAIL
I've spent a couple of days playing with this problem and searching the
mailing lists and docs, but have come up with nothing. Any help would be much
appreciated.
Setup is postgres 7.3.2 on redhat 7.1 on a 1.3GHz Athlon machine with 1G pc133 ram and
SCSI.
Here is the same query with the addition of a l
> I think I have found out why: I have a where clause on an ID field, but it
> seems like I need to cast this integer to the same integer type as the field
> is defined with in the table, else it will do a tablescan.
Yes, this is correct
> Is this assumption correct? And if it is, do I then need to change al
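This is the classic pre-8.0 limitation: an int2 or int8 column compared with a plain integer literal (which defaults to int4) would not use its index. A sketch against a hypothetical table:

```sql
-- Sequential scan on old releases: 42 is typed int4, the column is int8.
SELECT * FROM t WHERE id8 = 42;

-- Index scan: cast (or quote) the literal so the types match.
SELECT * FROM t WHERE id8 = 42::int8;
SELECT * FROM t WHERE id8 = '42';
```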
er on. Everything else
is tweaking.
Is it absolutely necessary to store 8MB files in the database? I find it
cumbersome. Storing them on a file server has been a better alternative for
me.
- --
Jonathan Gardner <[EMAIL PROTECTED]>
Live Free, Use Linux!
nel
to cache, the more responsive it is going to be.
- --
Jonathan Gardner <[EMAIL PROTECTED]>
(was [EMAIL PROTECTED])
Live Free, Use Linux!