On Sat, Jan 25, 2014 at 12:02:59PM +0530, RAMAKRISHNAN KANDASAMY wrote:
Hi All,
I have configured the below parameters for a 32 GB server. Is this correct?
shared_buffers = 6GB
going over 2GB probably doesn't help
work_mem = 24MB
maintenance_work_mem = 250MB
work_mem depends a lot on
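Pulling the numbers from this exchange together, a commonly cited starting point for a 32 GB machine looks like the sketch below (values are illustrative, not a recommendation from this thread; benchmark before adopting):

```
# postgresql.conf sketch for a 32 GB server (illustrative values)
shared_buffers = 6GB              # ~25% of RAM is the usual upper bound
effective_cache_size = 24GB       # planner hint only, not an allocation
work_mem = 24MB                   # per sort/hash node, per backend: keep modest
maintenance_work_mem = 250MB      # used by VACUUM, CREATE INDEX
```

Note that work_mem is multiplied by the number of concurrent sorts and hashes, so high connection counts argue for a small value.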
On Tue, Oct 15, 2013 at 08:59:08AM -0700, Tony Kay wrote:
On Mon, Oct 14, 2013 at 4:42 PM, Tomas Vondra t...@fuzzy.cz wrote:
On 15.10.2013 01:00, Tony Kay wrote:
Hi,
I'm running 9.1.6 w/22GB shared buffers, and 32GB overall RAM on a
16 Opteron 6276 CPU box. We limit connections to
On Tue, Oct 15, 2013 at 12:07:38PM -0700, Tony Kay wrote:
On Tue, Oct 15, 2013 at 10:26 AM, Julien Cigar jci...@ulb.ac.be wrote:
for two reasons:
- some of the overhead of bgwriter and checkpoints is more or less linear
in the size of shared_buffers, for example it could be possible
On Tue, Sep 10, 2013 at 02:04:57PM -0400, David Whittaker wrote:
Hi Andrew,
On Tue, Sep 10, 2013 at 11:26 AM, Andrew Dunstan and...@dunslane.net wrote:
On 09/10/2013 11:04 AM, David Whittaker wrote:
Hi All,
I've been seeing a strange issue with our Postgres install for about a
On 06/25/2013 00:18, Ben wrote:
hello postgresql experts --
i have a strange row estimate problem, which unfortunately I have trouble
reproducing for anything but very large tables, which makes boiling it down to a
simple example hard. I'm using version 9.1.1; all tables have had ANALYZE run on
On 05/03/2013 01:11, Mike McCann wrote:
Hello,
Hello,
We are in the fortunate situation of having more money than time to
help solve our PostgreSQL 9.1 performance problem.
Our server hosts databases that are about 1 GB in size, with the
largest tables having on the order of 10 million 20-byte
try to increase cpu_tuple_cost to 0.1
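A planner cost setting like this can be tested per session before putting it in postgresql.conf; a sketch, with a hypothetical table and predicate:

```sql
-- Session-level experiment (table and WHERE clause are made up):
SET cpu_tuple_cost = 0.1;          -- default is 0.01
EXPLAIN ANALYZE SELECT * FROM measurements WHERE depth > 100;
RESET cpu_tuple_cost;              -- back to the default
```

If the plan improves, the setting can then be applied globally.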
On 04/06/2013 03:50, Joe Van Dyk wrote:
If I disable sequential scans, hash joins, and merge joins, the query
plans become the same and performance on the first slow one is much
improved.
Is there something else I can do to avoid this problem?
below
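Disabling those plan types is useful as a diagnostic, not a fix; a session-only sketch (the query shown is hypothetical):

```sql
-- Diagnostic session only: never disable these globally
SET enable_seqscan   = off;
SET enable_hashjoin  = off;
SET enable_mergejoin = off;
EXPLAIN ANALYZE SELECT * FROM orders o JOIN items i ON i.order_id = o.id;
RESET ALL;                         -- restore all planner settings
```

Comparing the forced plan's estimated rows against the default plan's usually points at the misestimate that caused the bad choice.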
On 03/29/2013 15:20, Franck Routier wrote:
Hi,
Hello,
I have a postgresql database (8.4) running in production whose
performance is degrading.
There is no single query that underperforms, all queries do.
Another interesting point is that a generic performance test
On 03/06/2013 00:51, Niels Kristian Schjødt wrote:
Hi, thanks for answering. See comments inline.
On 05/03/2013 at 15:26, Julien Cigar jci...@ulb.ac.be wrote:
On 03/05/2013 15:00, Niels Kristian Schjødt wrote:
Hi,
I'm running a rails app, where I have a model called Car that has_many
On 03/05/2013 15:00, Niels Kristian Schjødt wrote:
Hi,
I'm running a rails app, where I have a model called Car that has_many Images.
Now when I tell rails to include those images, when querying say 50 cars, then
it often decides to use a SELECT * from images WHERE car_id IN
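The query Rails' `includes(:images)` typically issues, and the index that makes it fast, look roughly like this (column names are assumed from the model names):

```sql
-- What loading images for 50 cars typically issues (illustrative):
SELECT * FROM images WHERE car_id IN (1, 2, 3, 4, 5);  -- ... up to 50 ids

-- An index on the foreign key is what keeps this cheap:
CREATE INDEX idx_images_car_id ON images (car_id);
```

Without the foreign-key index, each such lookup degrades to a sequential scan of the whole images table.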
On 02/28/2013 16:11, Ao Jianwang wrote:
Hi,
Can anyone tell me why the same query runs slower against smaller
data than against a bigger table? Thanks very much.
I am using PostgreSQL9.1.8.
t_apps_1 and t_estimate_1 are about 300M each, while *_list_1 is
about 10M, more or less.
The biggest difference in performance between text and integer keys is
usually down to whether you're inserting in order or not. Inserting in
order is tons faster regardless of the type, since it keeps the index
unfragmented and doesn't cause page splits.
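The effect is easy to demonstrate; a sketch with made-up table names:

```sql
-- Ordered vs. unordered inserts (illustrative):
CREATE TABLE t_ordered   (k integer PRIMARY KEY);
CREATE TABLE t_unordered (k integer PRIMARY KEY);

-- Sequential keys append at the right edge of the btree: few page splits
INSERT INTO t_ordered   SELECT g FROM generate_series(1, 1000000) g;

-- Random order touches pages all over the index: many splits, more bloat
INSERT INTO t_unordered SELECT g FROM generate_series(1, 1000000) g
ORDER BY random();
```

Comparing the resulting index sizes (e.g. with pg_relation_size on each index) shows the fragmentation cost of unordered loading.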
On 02/04/2013 22:52, Anne Rosset
On 11/16/2012 14:04, David Popiashvili wrote:
All right, after some discussion on StackOverflow
http://stackoverflow.com/questions/13407555/postgresql-query-taking-too-long/13415984#13415984,
we found out that an incorrect query plan is generated due to the fact
that there is a LIMIT keyword in
On 11/16/2012 17:35, David Popiashvili wrote:
Thanks Craig. Yes I already tried it but it didn't work. I don't see
any solution other than fixing this bug. Take a
look at http://www.postgresql.org/search/?m=1&q=LIMIT&l=8&d=365&s=r.
There are too many bug reports about LIMIT slowing down queries.
On 10/10/2012 09:12, Strahinja Kustudić wrote:
Hi everyone,
Hello,
I have a Postgresql 9.1 dedicated server with 16 cores, 96GB RAM and
RAID10 15K SCSI drives which is running CentOS 6.2 x64. This server is
mainly used for inserting/updating large amounts of data via
copy/insert/update
random_page_cost to 3.5 (instead of 4.0)
I also recommend raising cpu_tuple_cost to 0.05 (instead of 0.01), setting
vm.swappiness to 0 and vm.overcommit_memory to 2, and finally raising the
read-ahead (to something like 8192)
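The OS-level settings mentioned above translate to something like the following on Linux (requires root; the device name is an assumption for your system):

```shell
# Illustrative Linux tuning for a dedicated Postgres box
sysctl -w vm.swappiness=0          # avoid swapping out Postgres memory
sysctl -w vm.overcommit_memory=2   # fail allocations instead of invoking the OOM killer
blockdev --setra 8192 /dev/sda     # read-ahead in 512-byte sectors; device is an assumption
```

To persist across reboots, the sysctl values belong in /etc/sysctl.conf and the read-ahead in a boot script or udev rule.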
Strahinja Kustudić| System Engineer | Nordeus
On Wed, Oct 10, 2012 at 10:11 AM, Julien
On 10/08/2012 17:26, Navaneethan R wrote:
Hi all,
Hello,
I have 10 million records in my Postgres table. I am running the database
on an Amazon EC2 medium instance. I need to access the last week's data from
the table.
It takes a huge amount of time to process a simple query, so it throws a timeout
On 09/26/2012 15:03, FFW_Rude wrote:
Here is the answer to Ray Stell, who sent me the wiki page on Slow Query. I
hope I detailed everything you wanted (I basically pasted the page and added my
answers).
Full Table and Index Schema:
schema tables_adresses
Tables
tables_adresses.adresses_XX (id (serial),
On 09/26/2012 15:36, FFW_Rude wrote:
Hi,
Thank you for your answer.
It was already at 16MB and I upped it just this morning to 64MB. Still
no change.
that's normal, please configure shared_buffers and effective_cache_size
properly
On 09/26/2012 16:14, FFW_Rude wrote:
Thank for you answer.
shared_buffers is at 24MB
effective_cache_size at 2048MB
What do you mean properly ? That's not really helping a novice...
from my previous mail:
before looking further, please configure shared_buffers and
effective_cache_size
On 09/26/2012 16:41, FFW_Rude wrote:
Ok, done: 512MB and 2048MB.
I'm relaunching. See you in a few hours (so tomorrow)
with 250 000 rows and proper indexes it should run in less than a second.
be sure your indexes are set properly and that they're used (use EXPLAIN
ANALYZE for that) within
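Checking index use with EXPLAIN ANALYZE looks roughly like this (the table name below is a hypothetical instance of the tables_adresses.adresses_XX pattern from the schema):

```sql
-- Check whether the index is actually used:
EXPLAIN ANALYZE
SELECT * FROM tables_adresses.adresses_01 WHERE id = 12345;

-- Look for "Index Scan using ... on adresses_01" in the output;
-- "Seq Scan on adresses_01" over 250 000 rows means the index is
-- missing, unusable, or being skipped by the planner.
```

The actual-vs-estimated row counts in the same output also reveal whether stale statistics are misleading the planner.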
Hello,
1) upgrade your PostgreSQL installation, there have been numerous
bugfixes releases since 8.4.2
2) you'll have to show us an EXPLAIN ANALYZE of the slow queries. If I
take a look at those you provided, everything runs in less than 1 ms.
3) with 200 records you'll always have a seqscan
4)
On 09/24/2012 14:34, Andres Freund wrote:
On Monday, September 24, 2012 02:21:09 PM Julien Cigar wrote:
5) synchronous_commit = off should only be used if you have a
battery-backed write cache.
Huh? Are you possibly confusing this with full_page_writes?
indeed...! sorry for that
(note
On 09/24/2012 15:51, Kiriakos Tsourapas wrote:
Hi,
Thank you for your response.
Please find below my answers/comments.
On Sep 24, 2012, at 15:21, Julien Cigar wrote:
Hello,
1) upgrade your PostgreSQL installation, there have been numerous bugfixes
releases since 8.4.2
Not possible right
if you have millions of files in data/pgsql_tmp it means that you're
using temporary tables (very) heavily, or you have a huge sorting
activity (on large tables) and the sort happens on disk (you can
verify that with an EXPLAIN ANALYZE of the query; you'll see something
like external disk
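An on-disk sort is visible directly in EXPLAIN ANALYZE output; a sketch with a hypothetical table, and output lines shown as comments:

```sql
-- A sort that spills to disk reports something like:
EXPLAIN ANALYZE SELECT * FROM big_table ORDER BY created_at;
--   Sort Method: external merge  Disk: 102400kB

-- Raising work_mem for the session can move the sort back into memory:
SET work_mem = '256MB';
EXPLAIN ANALYZE SELECT * FROM big_table ORDER BY created_at;
--   Sort Method: quicksort  Memory: ...
```

The Disk figure reported is roughly the work_mem needed to keep that sort in memory, which makes it a useful sizing guide.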
On 04/05/2012 21:47, Ants Aasma wrote:
On Thu, Apr 5, 2012 at 2:47 PM, Julien Cigarjci...@ulb.ac.be wrote:
- http://www.pastie.org/3731956 : with default config
- http://www.pastie.org/3731960 : this is with enable_seq_scan = off
It looks like the join selectivity of
Hello,
I have an extremely bad plan for one of my colleague's queries. Basically
PostgreSQL chooses a seq scan instead of an index scan. This is on:
antabif=# select version();
version