Yeah, thanks. We'll take that into account.
On Wed, Sep 6, 2017 at 9:45 PM, Moreno Andreo <moreno.and...@evolu-s.it> wrote:
> On 06/09/2017 10:12, Soni M wrote:
>
>> Let's say I have 10 years of data, and the commonly used data is only the
>> last 1 year. This data is quite big, so each table and index file is
>> divided into several files in PGDATA/base.
In our environment, the OS cache is much bigger than the Postgres buffers:
shared_buffers is around 8 GB, while the OS cache is more than 100 GB. Maybe
we should inspect it with pgfincore.
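A minimal sketch of what such an inspection could look like, assuming the
pgfincore extension is installed in the database; the table name `activity`
is hypothetical:

```sql
-- Load the extension once per database.
CREATE EXTENSION pgfincore;

-- For each on-disk segment of the relation, pgfincore reports how many
-- OS pages are resident in the kernel page cache (pages_mem) out of the
-- total pages the segment occupies (rel_os_pages).
SELECT * FROM pgfincore('activity');
```

Comparing pages_mem to rel_os_pages per segment shows which parts of a big
table actually sit in the OS cache, independently of shared_buffers.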
On Wed, Sep 6, 2017 at 9:13 PM, Gerardo Herzig <gher...@fmed.uba.ar> wrote:
>
>
> - Original message -
> >
Hello All, I would like to know how the OS cache works for Postgres table
and index files.
Let's say I have 10 years of data, and the commonly used data is only the
last 1 year. This data is quite big, so each table and index file is divided
into several files in PGDATA/base.
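As a side note, the path of the file backing a relation under the data
directory can be looked up directly (a sketch; `activity` is a hypothetical
table name — segments beyond 1 GB get `.1`, `.2`, ... suffixes next to the
base file):

```sql
-- Show the file, relative to the data directory, that backs a relation.
SELECT pg_relation_filepath('activity');
```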
Let's say 1 index named
3 read=30175
Now, the index scan on activity_pkey takes much longer. Can someone please
explain this?
Thanks
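One way to see how much of each index scan is served from shared buffers
versus requested from the OS is EXPLAIN with the BUFFERS option (available
since 9.0). A sketch, with a hypothetical table and predicate:

```sql
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM activity WHERE activity_id = 12345;
-- Each plan node then carries a line such as:
--   Buffers: shared hit=3 read=30175
-- "hit" counts pages found in shared_buffers; "read" counts pages
-- requested from the OS (which may still be served by the OS page cache).
```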
On Tue, Sep 5, 2017 at 8:46 PM, Soni M <diptat...@gmail.com> wrote:
> It's Postgres 9.1.24 on RHEL 6.5
>
> On Tue, Sep 5, 2017 at 8:24 PM, Soni M <diptat...@gmail.c
It's Postgres 9.1.24 on RHEL 6.5
On Tue, Sep 5, 2017 at 8:24 PM, Soni M <diptat...@gmail.com> wrote:
> Consider these two index scans produced by a query:
>
> -> Index Scan using response_log_by_activity on public.response_log rl2
> (cost=0.00..51.53 rows=21 width=8) (actual t
Consider these two index scans produced by a query:
-> Index Scan using response_log_by_activity on public.response_log rl2
(cost=0.00..51.53 rows=21 width=8) (actual time=9.017..9.056 rows=0
loops=34098)
Output: rl2.activity_id, rl2.feed_id
Thanks all for the responses; we finally figured it out. The slowness was due
to a high number of dead rows on the main table; repacking these tables wiped
out the issue.
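For anyone hitting the same thing, a sketch of how dead rows can be spotted
from the statistics views (the pg_repack invocation assumes the tool is
installed; `mydb` is a hypothetical database name):

```sql
-- Tables carrying the most dead row versions.
SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;

-- Then, from the shell, rewrite a bloated table online, e.g.:
--   pg_repack -t response_log mydb
```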
On Mar 6, 2015 9:31 PM, Merlin Moncure mmonc...@gmail.com wrote:
On Wed, Mar 4, 2015 at 1:31 PM, Soni M diptat...@gmail.com wrote:
Hello All,
Master db size is 1.5 TB.
All Postgres 9.1.13, installed from the RHEL package.
It has a streaming replica and a Slony replica to other servers.
Server performance is slower than usual; before that, a big query got
cancelled, and performance has been slow since.
No sign of I/O wait in sar.
Changing to a CPU with a higher clock rate would be more helpful if you run
fewer than 32 queries at a time.
On Tue, Aug 26, 2014 at 8:51 AM, Jeff Janes jeff.ja...@gmail.com wrote:
On Monday, August 25, 2014, Jeison Bedoya Delgado
jeis...@audifarma.com.co wrote:
Hi, recently I changed the hardware of my
[.] perf_evsel__parse_sample
0.05% perf [.] rb_insert_color
0.05% [kernel] [k] pointer
On Mon, Jun 30, 2014 at 2:05 PM, Heikki Linnakangas hlinnakan...@vmware.com
wrote:
On 06/29/2014 03:43 PM, Soni M wrote:
top and sar
On Tue, Jul 1, 2014 at 12:14 AM, Andres Freund and...@2ndquadrant.com
wrote:
My guess is it's a spinlock, probably xlogctl->info_lck via
RecoveryInProgress(). Unfortunately inline assembler doesn't always seem
to show up correctly in profiles...
What worked for me was to build with
Hello Everyone ...
We have 6 PG 9.1.12 installations: one master (Ubuntu 10.04), one slony
slave (Ubuntu 10.04), and four streaming replicas (2 on Ubuntu 10.04 and 2 on
RHEL 6.5 (Santiago), which sit in a different datacenter). All the Ubuntu
machines are in the same datacenter. The master sends WAL archives to the
slony slave.
On .../2014 11:14 AM, Soni M wrote:
Everything worked fine until Thursday, when we had high load on the master;
after that, every streaming replica lags further and further behind the
master, even at night and on weekends when all server load is low. But the
slony slave is not lagging at all.
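A sketch of how that lag can be quantified on 9.1 (the function names below
are the pre-9.2 "xlog" spellings; run the first query on the master and the
others on a standby):

```sql
-- On the master: current WAL write position.
SELECT pg_current_xlog_location();

-- On a standby: last WAL position received and last position replayed.
SELECT pg_last_xlog_receive_location(), pg_last_xlog_replay_location();

-- On a standby: approximate time lag, meaningful while the master
-- is actively committing transactions.
SELECT now() - pg_last_xact_replay_timestamp() AS replay_delay;
```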
What does 'top' on the standby