Hi,
I have a strange case where a SELECT for a primary key returns 0 rows on one
slave, while it returns the correct 1 row on another slave and on the master.
It does however return that row on all slaves when queried with LIKE and
trailing or leading wildcard.
psql version is 9.3.5, because
Dear Concern,
My server memory utilization always remains at 99%. I have 4 DB servers,
with 32 GB, 64 GB, 64 GB and 132 GB of RAM respectively. On every server
the memory utilization is always 99%. Kindly suggest why this happens and
which parameter will resolve it.
I
Hello,
First: I have set checkpoint_segments = 200 (as per my knowledge, only
200*2 + 3 = 403 WAL files should be generated), so why is it generating so
many more files? Each of my WAL files is 16 MB.
The formula is
(2 + checkpoint_completion_target) * checkpoint_segments + 1
or
checkpoint_segments +
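Plugging checkpoint_segments = 200 into the first formula, assuming the
default checkpoint_completion_target of 0.5, gives the expected ceiling;
the arithmetic can be checked straight from psql:

```sql
-- Worked example, assuming checkpoint_completion_target = 0.5 (the default):
SELECT (2 + 0.5) * 200 + 1          AS max_wal_segments,   -- 501
       ((2 + 0.5) * 200 + 1) * 16   AS approx_total_mb;    -- ~8016 MB
```

So with these settings roughly 500 segments (about 8 GB of pg_xlog) is
normal before old segments get recycled, noticeably more than the 403
estimated above.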
Sameer Thakur wrote:
You could disable fsync as write reliability is not relevant
That is bad advice.
If there are no writes, fsync won't hurt anyway.
Never disable fsync for anything but test systems.
Yours,
Laurenz Albe
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
What is the output of 'free -m'? Look at the third column of the second row
(the '-/+ buffers/cache' line); that is the most relevant number, since
memory held by the page cache is reclaimable. Swapping can also happen with
plenty of RAM available if 'swappiness' is set too high.
Sent from my iPhone
On May 20, 2015, at 2:25 AM, Sachin Srivastava
Could you also please check whether Transparent Huge Pages (THP) are
enabled on the server? They can result in intermittent poor performance,
along with high system CPU time counted against the database processes.
This can be confirmed with the command below:
cat
--
Each of my WAL files is 16 MB.
---
Are you saying that you want WAL segments to be smaller? WAL segments are
always the same size, which can be set with --with-wal-segsize at build time
but not at runtime.
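For reference, the compiled-in value can be read from a running server;
wal_segment_size is exposed as a read-only preset parameter:

```sql
SHOW wal_segment_size;   -- typically '16MB' unless changed at build time
SHOW wal_block_size;     -- typically '8192'
```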
Regards
Mathew
Sent from my iPhone
On May 20,
(I'm not going to try to fix the top-posting)
I've seen what I think you're describing on several occasions.
What we basically discovered is very simple: disks have a top speed and
you can't exceed it.
If I understand you correctly, you see the # of wal segments grow very
large during and
Hi all,
I've been using PostgreSQL (9.3) for a while, but I'm new to
partitioned tables.
1. I have noticed that when the partition trigger function returns
NULL (to stop operation on the parent table), the operation always
reports no (zero) rows affected - even when rows have been affected.
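For what it's worth, that zero-row report is inherent to trigger-based
routing: returning NULL from the BEFORE INSERT trigger cancels the insert
on the parent, so the command tag counts zero rows even though the child
partition received the row. A minimal sketch, with hypothetical table names:

```sql
CREATE OR REPLACE FUNCTION parent_insert_trigger()
RETURNS trigger AS $$
BEGIN
    -- Route the row to a child partition (child_2015 is a placeholder).
    INSERT INTO child_2015 VALUES (NEW.*);
    RETURN NULL;  -- cancels the insert on the parent,
                  -- so the client sees "INSERT 0 0"
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER route_insert
    BEFORE INSERT ON parent
    FOR EACH ROW EXECUTE PROCEDURE parent_insert_trigger();
```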
Hello,
In addition to what has already been suggested:
1. Use
VACUUM FREEZE ANALYZE;
Otherwise you will still have some trickle of write-activity going on,
not always efficiently, despite being in read-only mode. It's because
of what's referred to as Hint Bits:
Hello,
I was dreaming more of something like disabling read/write locks or
mutexes when accessing the database in read-only mode, but sadly this
case seems to be unhandled.
You could use transactions in read-only mode. They do not generate
XIDs, which reduces the need to do VACUUM to protect against
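To make that concrete, a read-only transaction is requested explicitly
(the table name is a placeholder):

```sql
BEGIN TRANSACTION READ ONLY;
SELECT count(*) FROM some_table;
COMMIT;
```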
Hello,
You could disable fsync as write reliability is not relevant
regards
Sameer
--
View this message in context:
http://postgresql.nabble.com/Optimizing-a-read-only-database-tp5849746p5850103.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.
Dear List,
Thank you all for your advice; even if there's no direct, magical
solution, I now have some paths to try.
I really enjoy the PostgreSQL community.
Wishing you a nice day/night, best regards.
Hello All,
I have a problem for which a view seems like a nice solution. Basically we
want to see all records in some table that are older than 5 days and
haven't yet gone through further processing.
This particular view is probably simple enough that it doesn't require unit
tests but I wonder
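Such a view might look roughly like this; every table and column name
here is invented for illustration:

```sql
CREATE VIEW stale_unprocessed AS
SELECT r.*
FROM   records r
WHERE  r.created_at < now() - interval '5 days'
AND    NOT EXISTS (SELECT 1
                   FROM   processing_log p
                   WHERE  p.record_id = r.id);
```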
Sachin Srivastava wrote:
My server memory utilization always remains at 99%. I have 4 DB servers,
with 32 GB, 64 GB, 64 GB and 132 GB of RAM respectively. On every server
the memory utilization is always 99%.
Kindly suggest why this happens and which parameter will resolve
Hello
That is bad advice.
If there are no writes, fsync won't hurt anyway.
Never disable fsync for anything but test systems.
Yep. It's a bad way to speed up writes: not relevant in this context, and
bad anyway.
regards
Sameer
On 20/05/15 19:47, Andy Chambers wrote:
Hello All,
I have a problem for which a view seems like a nice solution.
Basically we want to see all records in some table that are older than
5 days and haven't yet gone through further processing.
This particular view is probably simple enough that
On 05/19/2015 11:52 PM, Musall Maik wrote:
Hi,
I have a strange case where a SELECT for a primary key returns 0 rows on one
slave, while it returns the correct 1 row on another slave and on the master.
It does however return that row on all slaves when queried with LIKE and
trailing or
On 20.05.2015 at 15:17, Adrian Klaver adrian.kla...@aklaver.com wrote:
On 05/19/2015 11:52 PM, Musall Maik wrote:
Hi,
I have a strange case where a SELECT for a primary key returns 0 rows on one
slave, while it returns the correct 1 row on another slave and on the
master. It does
First, are your postgresql.conf options the same on master and slave2 with
regard to memory allocation and all Planner Method Configuration options?
Next, is it possible you have a corrupted index on slave2?
I would suggest verifying there is no difference in the query plan between
master and
Hi Melvin,
thanks for the response.
On 20.05.2015 at 14:26, Melvin Davidson melvin6...@gmail.com wrote:
First, are your postgresql.conf options the same on master and slave2 with
regard to memory allocation and all Planner Method Configuration options?
slave2 has less shared_buffers
Adrian Klaver adrian.kla...@aklaver.com writes:
The reason I ask is that the machine you are having problems with has OS
X. Over the years I have seen quite a few reports on this list of OS X
locale/encoding issues.
Yes. Here's the problem: OS X UTF8 locales (other than C) don't sort the
On 05/20/2015 06:34 AM, Musall Maik wrote:
On 20.05.2015 at 15:17, Adrian Klaver adrian.kla...@aklaver.com wrote:
On 05/19/2015 11:52 PM, Musall Maik wrote:
Hi,
I have a strange case where a SELECT for a primary key returns 0 rows on one
slave, while it returns the correct 1 row on
Hi Tom,
that's very useful information. Very cool to get such results so quickly here.
Thanks
Maik
On 20.05.2015 at 16:05, Tom Lane t...@sss.pgh.pa.us wrote:
Adrian Klaver adrian.kla...@aklaver.com writes:
The reason I ask is that the machine you are having problems with has OS
X. Over
Hello,
Just had an idea and could use some feedback. If we start a transaction,
leave it idle, and use pg_export_snapshot() to get its snapshot_id MVCC
will hold all the tuples as of that transaction's start and any other
transaction can see the state of the database as of that time using SET
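Sketched out across two psql sessions (the snapshot id shown is only an
example of the format pg_export_snapshot() returns):

```sql
-- Session 1: keep this transaction open to pin the snapshot.
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT pg_export_snapshot();          -- e.g. '000004F6-1'

-- Session 2: adopt the exported snapshot.
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SET TRANSACTION SNAPSHOT '000004F6-1';
-- queries here see the database as of session 1's snapshot
```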
The documentation for pg_dump says that dump files are created in a
consistent state.
Is that true across multiple tables in the same pg_dump command?
(Obviously it would not be true if I dumped tables using separate pg_dump
commands. But if I put the database into a backup state using
Yes. The entire dump is performed within a single transaction.
On Wed, May 20, 2015 at 9:24 AM, Michael Nolan htf...@gmail.com wrote:
The documentation for pg_dump says that dump files are created in a
consistent state.
Is that true across multiple tables in the same pg_dump command?
On Wed, May 20, 2015 at 12:40 PM, David G. Johnston
david.g.johns...@gmail.com wrote:
Yes. The entire dump is performed within a single transaction.
On Wed, May 20, 2015 at 9:24 AM, Michael Nolan htf...@gmail.com wrote:
The documentation for pg_dump says that dump files are created in a
2015-05-20 22:16 GMT+02:00 Stefan Stefanov stefanov...@abv.bg:
Hi,
I have been using COPY .. FROM a lot these days for reading in tabular
data and it does a very good job. Still there is an inconvenience when a
(large) text file contains more columns than the target table or the
columns’
Hi,
I have been using COPY .. FROM a lot these days for reading in tabular data and
it does a very good job. Still there is an inconvenience when a (large) text
file contains more columns than the target table or the columns’ order differs.
I can imagine three ways around this, and none is really
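For the column-order case, COPY's column list already helps; extra
columns in the file usually mean going through a staging table. A sketch
with invented names:

```sql
-- Reordered columns: name the target columns in the order they
-- appear in the file.
COPY target_table (col_b, col_a) FROM '/tmp/data.txt';

-- Extra columns in the file: stage everything, then copy across
-- only what is wanted.
CREATE TEMP TABLE staging (col_a text, col_b text, extra text);
COPY staging FROM '/tmp/data.txt';
INSERT INTO target_table (col_a, col_b)
SELECT col_a, col_b FROM staging;
```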
On 5/20/2015 10:44 AM, Michael Nolan wrote:
When I moved to a new release of pg (9.3) last December, I stopped all
transaction processing first so that pg_dumpall had no consistency
issues.
the only possible consistency issue would be if you have applications
doing 2-phase commits to two
I'm creating a foreign table (foo_table) in database_a. foo_table lives in
database_b. foo_table has an enum (bar_type) as one of its columns. Because
this enum is defined in database_b, the creation of the foreign table fails
in database_a: database_a doesn't understand the column type. Running the
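Since type definitions are per-database, the usual workaround is to
recreate an identical enum locally before defining the foreign table
(the labels below are placeholders and must match database_b's
definition), or to declare the foreign column as plain text:

```sql
-- Run in database_a:
CREATE TYPE bar_type AS ENUM ('label_1', 'label_2');
```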
On Wed, May 20, 2015 at 4:12 PM, Sachin Srivastava ssr.teleat...@gmail.com
wrote:
Dear Venkata,
I have not set the archive_timeout parameter.
I think that is also the reason.
What value should I assign to this parameter if my checkpoint_segments = 200
and checkpoint_timeout = 5min? Kindly
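For reference, archive_timeout forces a WAL segment switch after the given
interval, so on a quiet system a low value multiplies the number of (16 MB)
archived segments. A postgresql.conf sketch with illustrative values only:

```
# postgresql.conf -- illustrative values, not a recommendation
archive_timeout = 300        # force a segment switch at most every 300 seconds
checkpoint_timeout = 5min
checkpoint_segments = 200
```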