On 3/30/2016 10:41 PM, Alvaro Aguayo Garcia-Rada wrote:
What I don't understand is the need of a shared storage in this case. It would
be a lot better to have the data folder inside each server virtual disk to
avoid troubles with the shared storage; I really see no reason for such
We are overlapping mails :P
What I don't understand is the need of a shared storage in this case. It would
be a lot better to have the data folder inside each server virtual disk to
avoid troubles with the shared storage; I really see no reason for such
configuration here.
Now, getting into
Hello Alvaro,
We are running BDR where each PostgreSQL VM is a master, and shared storage is only at the hypervisor level. All VMs live on their own virtual disks. Right now we have 2 servers for redundancy which have a shared network between them. The issue is that BDR is not working right now; see the log below. And my
What's the purpose of such a configuration? It doesn't make sense to me. The only
reasonable case where you would want to put the data folder on shared storage
is for use with a warm standby, where you can have a secondary server which
serves as a read-only replica and can be promoted to master
Hello Everyone,
Is recovery from my situation possible at all? I was looking for a tool which
might help, and found only bdr_init_copy. Is it possible to initialize the second
node again? Also, is it a good idea to enable WAL archiving with BDR?
volga629
From: "volga629"
To: "John
In my case only the virtual hosts use shared storage (fed from glusterfs), but the
actual virtual machines have their own separate disks, and all PostgreSQL instances
run on separate data directories.
volga629
From: "John R Pierce"
To: "pgsql-general"
Sent:
Hello Craig,
Is any shared storage technology recommended for PostgreSQL in a virtual
environment?
OK, what I will do is take backups, shut down both virtual servers, and have all
VMs use local disks on the server only.
volga629
From: "Craig Ringer"
To: "volga629"
On 31 March 2016 at 10:43, Slava Bendersky wrote:
> Hello Craig,
> The current setup is two servers which run libvirt, and a storage server which
> runs glusterfs (the storage server feeds the two virtual servers). Right now
> there is no fencing in place. Each of the nodes has one
Hello Craig,
The current setup is two servers which run libvirt, and a storage server which runs
glusterfs (the storage server feeds the two virtual servers). Right now there is no
fencing in place. Each of the nodes has one PostgreSQL VM with BDR.
volga629.
From: "Craig Ringer"
To:
On 31 March 2016 at 09:38, Slava Bendersky wrote:
> Hello Everyone,
> I am looking for suggestions on how to recover BDR replication.
> The short story: we have 2 virtual nodes with shared storage.
>
Can you describe the "shared storage" setup in more detail?
In general,
Hello Everyone,
I am looking for suggestions on how to recover BDR replication.
The short story: we have 2 virtual nodes with shared storage. The shared storage
lost power, and after I brought everything back online BDR doesn't work properly.
Here are some logs:
2016-03-30 20:47:26 EDT
On 3/30/2016 2:52 PM, Francisco Reyes wrote:
On 03/30/2016 05:44 PM, John R Pierce wrote:
and what if commit db2 fails for any number of reasons? you've
already committed db1, so you can't roll it back. this sort of
work requires '2pc' (2-phase commit), which is rather gnarly to
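For reference, a minimal sketch of what two-phase commit looks like at the SQL level in PostgreSQL. The transaction identifier 'xfer_001' is made up for illustration, and `max_prepared_transactions` must be set above zero on both servers:

```sql
-- On the connection to db1:
BEGIN;
-- ... work on db1 ...
PREPARE TRANSACTION 'xfer_001';  -- work is now durable but not yet committed

-- On the connection to db2:
BEGIN;
-- ... work on db2 ...
PREPARE TRANSACTION 'xfer_001';

-- Only after BOTH prepares succeed, commit each side:
COMMIT PREPARED 'xfer_001';      -- run on db1
COMMIT PREPARED 'xfer_001';      -- run on db2

-- If either PREPARE fails, roll back the side that did prepare:
-- ROLLBACK PREPARED 'xfer_001';
```

The gnarly part is the coordinator: if the client dies between the two COMMIT PREPARED calls, an orphaned prepared transaction is left behind (visible in pg_prepared_xacts) and keeps holding locks until it is resolved manually.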
On 3/30/2016 2:02 PM, Francisco Reyes wrote:
Have users that are familiar with plpgsql and have a process which
needs to be done within transactions in 2 databases.
begin in db1
begin in db2
work on 1
work on 2
if work on both dbs worked
commit db1
commit db2
and what if commit db2
On Wed, Mar 30, 2016 at 5:18 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:
> On Wednesday, March 30, 2016, Francisco Reyes wrote:
>
>> Is it possible to have a plpgsql program connect to 2 databases?
>>
>> This seems to imply it,
>>
On Wednesday, March 30, 2016, Francisco Reyes wrote:
> Is it possible to have a plpgsql program connect to 2 databases?
>
> This seems to imply it,
> http://stackoverflow.com/questions/18257003/pl-sql-querying-a-table-on-multiple-databases
>
> However looking at the postgresql
Pavlov, Vladimir wrote:
> Yes, VACUUM helps to solve the problem and the WARNING went away.
> But, the problem is that the VACUUM for the entire database (2.4T) takes over
> 7 hours, and it has to run every 15-20 hours (about 300 million
> transactions), otherwise:
> ERROR: multixact "members"
On 03/30/2016 02:02 PM, Francisco Reyes wrote:
Is it possible to have a plpgsql program connect to 2 databases?
This seems to imply it,
http://stackoverflow.com/questions/18257003/pl-sql-querying-a-table-on-multiple-databases
Well, the above is referring to Oracle, so it is not applicable to
On 03/30/2016 08:03 AM, Pavlov, Vladimir wrote:
Yes, VACUUM helps to solve the problem and the WARNING went away.
Okay, so now we are on a different problem.
But, the problem is that the VACUUM for the entire database (2.4T) takes over 7
hours, and it has to run every 15-20 hours (about 300
Hi Pushkar:
On Wed, Mar 30, 2016 at 8:40 AM, Deole, Pushkar (Pushkar)
wrote:
> Does PostgreSQL support a query to fetch the last ‘n’ records that match the
> selection criteria? I am trying to fetch records from a table with a start
> date that falls in the last 30 days; however, I want
Yes, VACUUM helps to solve the problem and the WARNING went away.
But, the problem is that the VACUUM for the entire database (2.4T) takes over 7
hours, and it has to run every 15-20 hours (about 300 million transactions),
otherwise:
ERROR: multixact "members" limit exceeded - and the server stops
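As a sketch (parameter values below are illustrative, not recommendations): rather than a full manual VACUUM every day, the multixact freeze thresholds can be lowered so autovacuum spreads the freezing work out, and the multixact consumption can be watched per database:

```sql
-- postgresql.conf, illustrative values only:
--   autovacuum_multixact_freeze_max_age = 100000000
--   vacuum_multixact_freeze_min_age     = 1000000
--   vacuum_multixact_freeze_table_age   = 75000000

-- Watch multixact age per database (mxid_age() is available from 9.5):
SELECT datname, mxid_age(datminmxid) AS multixact_age
FROM pg_database
ORDER BY multixact_age DESC;
```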
On Wed, Mar 30, 2016 at 8:33 AM, Tom Lane wrote:
> Merlin Moncure writes:
>> On Wed, Mar 30, 2016 at 3:52 AM, Alexander Reshetov
>> wrote:
>>> As far as I know there is no LIMIT clause pushdown in FDW API.
>>> Is there some
Amit Langote writes:
> destdb=# ALTER TABLE c ADD CONSTRAINT p_a_check CHECK (a IN ('a', 'b', 'c'));
> destdb=# \d c
> ...
> Check constraints:
> "p_a_check" CHECK (a::text = ANY (ARRAY['a'::character varying,
> 'b'::character varying, 'c'::character
On Wed, Mar 30, 2016 at 6:45 AM, Tom Lane wrote:
> Joshua Ma writes:
>> This might not be a common case, but we're using pg_dump in a testing
>> environment to check migrations - 1) we initialize the db from HEAD,
>> pg_dump it, 2) we initialize the db
On 03/30/2016 06:24 AM, Pavlov, Vladimir wrote:
Hello,
Is there any news?
Now I have to do VACUUM every night so that the server keeps working.
So has the WARNING gone away?:
WARNING: oldest multixact is far in the past
HINT: Close open transactions with multixacts soon to avoid wraparound
Merlin Moncure writes:
> On Wed, Mar 30, 2016 at 3:52 AM, Alexander Reshetov
> wrote:
>> As far as I know there is no LIMIT clause pushdown in FDW API.
>> Is there some reasons not to support LIMIT clause pushdown?
It was impossible to do it
Hello,
Is there any news?
Now I have to do VACUUM every night so that the server keeps working.
Maybe run VACUUM FREEZE?
Kind regards,
Vladimir Pavlov
-Original Message-
From: Pavlov Vladimir
Sent: Friday, March 25, 2016 9:55 AM
To: 'Alvaro Herrera'
Cc: 'Adrian Klaver';
On Wed, Mar 30, 2016 at 3:52 AM, Alexander Reshetov
wrote:
> Hello,
>
> As far as I know there is no LIMIT clause pushdown in FDW API.
>
> Are there reasons not to support LIMIT clause pushdown?
> Is there a bug report regarding this (I didn't find one, though)
> or
Sridhar N Bamandlapally wrote:
> Is there any way we can change the segment file size,
>
> I am trying to look into the possibility of segment file size vs. the bytea size
> limitation
>
> PostgreSQL installation
>
> step 1: ./configure --enable-largefile --with-segsize (
Is there any way we can change the segment file size,
I am trying to look into the possibility of segment file size vs. the bytea size
limitation.
PostgreSQL installation
step 1: ./configure --enable-largefile --with-segsize ( throwing error
"configure: error: Large file support is not enabled.
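For what it's worth, `--with-segsize` takes a value in gigabytes that must be a power of 2; a sketch of an invocation (the value 4 is just an example):

```shell
# Segment size is given in GB and must be a power of 2.
# Values larger than 1 GB require large-file support in the OS/toolchain,
# and a server built this way needs a freshly initdb'ed cluster.
./configure --with-segsize=4
```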
On 30 March 2016 at 11:36, Deole, Pushkar (Pushkar)
wrote:
> I am sorry I didn’t clarify my requirement properly.. I want the ‘n’
> oldest records; however, they should be sorted with the most recent record first
> and I want this to happen in the query itself so I don’t have to care
On 30/03/2016 11:36, Deole, Pushkar (Pushkar) wrote:
I am sorry I didn’t clarify my requirement properly.. I want the ‘n’ oldest
records; however, they should be sorted with the most recent record first
I am sorry I didn’t clarify my requirement properly.. I want the ‘n’ oldest
records; however, they should be sorted with the most recent record first, and I
want this to happen in the query itself so I don’t have to care about sorting in
the application.
From: pgsql-general-ow...@postgresql.org
On 30/03/2016 11:19, Deole, Pushkar (Pushkar) wrote:
select * from t order by record_date desc limit 5;
this will return the recent 5 records..
what I want is the oldest 5 records (in last 30
On 30 March 2016 at 11:19, Deole, Pushkar (Pushkar)
wrote:
> select * from t order by record_date desc limit 5;
>
>
>
> this will return the recent 5 records.. what I want is the oldest 5
> records (in last 30 days)
>
>
>
> *From:* pgsql-general-ow...@postgresql.org [mailto:
>
select * from t order by record_date desc limit 5;
this will return the recent 5 records.. what I want is the oldest 5 records (in
last 30 days)
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of John R Pierce
Sent: Wednesday, March 30, 2016 12:38
Hello,
As far as I know there is no LIMIT clause pushdown in the FDW API.
Are there reasons not to support LIMIT clause pushdown?
Is there a bug report regarding this (I didn't find one, though),
or should one be filed?
On 3/29/2016 11:40 PM, Deole, Pushkar (Pushkar) wrote:
WITH t AS (
SELECT * FROM mytable ORDER BY record_date ASC LIMIT 5
)
SELECT * FROM t ORDER BY record_date DESC;
why do it twice when you can just do
select * from t order by record_date desc limit 5;
--
john r
Hi,
Does PostgreSQL support a query to fetch the last 'n' records that match the
selection criteria? I am trying to fetch records from a table with a start date
that falls in the last 30 days; however, I want to fetch the oldest 'n' records and
not the recent ones. I know there is a LIMIT clause which I
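One way to express this in a single query, as a sketch assuming a table `t` with a `record_date` column (names taken from the thread): pick the oldest 5 rows from the last 30 days in a subquery, then re-sort them newest-first in the outer query:

```sql
SELECT *
FROM (
    SELECT *
    FROM t
    WHERE record_date >= now() - interval '30 days'  -- last 30 days only
    ORDER BY record_date ASC                         -- oldest first
    LIMIT 5                                          -- keep the oldest 5
) oldest
ORDER BY record_date DESC;                           -- newest of those first
```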