On 3/29/2016 11:40 PM, Deole, Pushkar (Pushkar) wrote:
WITH t AS (
SELECT * FROM mytable ORDER BY record_date ASC LIMIT 5
)
SELECT * FROM t ORDER BY record_date DESC;
why do it twice when you can just do
select * from t order by record_date desc limit 5;
--
john r pierce
Hello,
As far as I know there is no LIMIT clause pushdown in the FDW API.
Is there some reason not to support LIMIT clause pushdown?
Is there a bug report regarding this (I didn't find one),
or should one be filed?
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To ma
select * from t order by record_date desc limit 5;
this will return the 5 most recent records... what I want is the 5 oldest
records (in the last 30 days)
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of John R Pierce
Sent: Wednesday, March 30, 2016 12:38
On 30 March 2016 at 11:19, Deole, Pushkar (Pushkar)
wrote:
> select * from t order by record_date desc limit 5;
>
>
>
> this will return the recent 5 records.. what I want is the oldest 5
> records (in last 30 days)
>
>
>
> *From:* pgsql-general-ow...@postgresql.org [mailto:
> pgsql-general-ow...
On 30/03/2016 11:19, Deole, Pushkar
(Pushkar) wrote:
select * from t order by record_date desc
limit 5;
this will return the recent 5 records..
what I want is the oldest 5 records (in last 30 d
I am sorry I didn’t clarify my requirement properly... I want the ‘n’ oldest
records; however, they should be sorted with the most recent record first, and
I want this to happen in the query itself so I don’t have to care about sorting
in the application.
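A minimal way to express this in one query (table and column names taken from the earlier examples in the thread; LIMIT 5 stands in for ‘n’):

```sql
-- Oldest 5 rows from the last 30 days, returned newest-first:
SELECT *
FROM (
    SELECT *
    FROM mytable
    WHERE record_date >= now() - interval '30 days'
    ORDER BY record_date ASC   -- inner query picks the oldest rows
    LIMIT 5
) oldest
ORDER BY record_date DESC;     -- outer query presents them newest-first
```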
From: pgsql-general-ow...@postgresql.org
[
On 30/03/2016 11:36, Deole, Pushkar
(Pushkar) wrote:
I
am sorry I didn’t clarify my requirement properly.. I want
the ‘n’ oldest records, however, they should sorted with the
recent record first
On 30 March 2016 at 11:36, Deole, Pushkar (Pushkar)
wrote:
> I am sorry I didn’t clarify my requirement properly.. I want the ‘n’
> oldest records, however, they should sorted with the recent record first
> and I want this to happen in the query itself so I don’t have to care about
> sorting thro
Is there any way we can change the segment file size?
I am trying to look into the segment file size vs. the bytea size
limitation.
PostgreSQL installation
step 1: ./configure --enable-largefile --with-segsize (throwing error
"configure: error: Large file support is not enabled. Segme
Sridhar N Bamandlapally wrote:
> Is there any way we can change the segment file size,
>
> I am trying to look into the possibility of segment file size Vs bytea size
> limitation
>
> PostgreSQL installation
>
> step 1: ./configure --enable-largefile --with-segsize ( throwing error
> "config
On Wed, Mar 30, 2016 at 3:52 AM, Alexander Reshetov
wrote:
> Hello,
>
> As far as I know there is no LIMIT clause pushdown in FDW API.
>
> Is there some reasons not to support LIMIT clause pushdown?
> Is there bug-report regarding this (didn't found it though)
> or should it be filled down?
Worki
Hello,
Is there any news?
For now I have to run VACUUM every night so that the server keeps working.
Maybe run VACUUM FREEZE?
Kind regards,
Vladimir Pavlov
-Original Message-
From: Pavlov Vladimir
Sent: Friday, March 25, 2016 9:55 AM
To: 'Alvaro Herrera'
Cc: 'Adrian Klaver'; pgsql-general@postgre
Merlin Moncure writes:
> On Wed, Mar 30, 2016 at 3:52 AM, Alexander Reshetov
> wrote:
>> As far as I know there is no LIMIT clause pushdown in FDW API.
>> Is there some reasons not to support LIMIT clause pushdown?
It was impossible to do it before 3fc6e2d7f5b652b4, committed 3 weeks
ago. Now t
On 03/30/2016 06:24 AM, Pavlov, Vladimir wrote:
Hello,
There is no news?
Now I have to do VACUUM every night, so that the server worked.
So has the WARNING gone away?:
WARNING: oldest multixact is far in the past
HINT: Close open transactions with multixacts soon to avoid wraparound
problem
On Wed, Mar 30, 2016 at 6:45 AM, Tom Lane wrote:
> Joshua Ma writes:
>> This might not be a common case, but we're using pg_dump in a testing
>> environment to check migrations - 1) we initialize the db from HEAD,
>> pg_dump it, 2) we initialize the db from migration_base.sql, apply
>> migrations
Amit Langote writes:
> destdb=# ALTER TABLE c ADD CONSTRAINT p_a_check CHECK (a IN ('a', 'b', 'c'));
> destdb=# \d c
> ...
> Check constraints:
> "p_a_check" CHECK (a::text = ANY (ARRAY['a'::character varying,
> 'b'::character varying, 'c'::character varying]::text[]))
Hm. It seems like the
On Wed, Mar 30, 2016 at 8:33 AM, Tom Lane wrote:
> Merlin Moncure writes:
>> On Wed, Mar 30, 2016 at 3:52 AM, Alexander Reshetov
>> wrote:
>>> As far as I know there is no LIMIT clause pushdown in FDW API.
>>> Is there some reasons not to support LIMIT clause pushdown?
>
> It was impossible to d
Yes, VACUUM helps solve the problem and the WARNING has gone away.
But the problem is that a VACUUM of the entire database (2.4 TB) takes over 7
hours, and it has to run every 15-20 hours (about 300 million transactions),
otherwise:
ERROR: multixact "members" limit exceeded - and the server stops
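If it helps, the multixact pressure can be monitored so that VACUUM FREEZE can be targeted at the worst tables instead of run database-wide (a sketch; `mxid_age()` is available in 9.5+, and the catalogs queried are standard):

```sql
-- Per-database multixact age; autovacuum forces wraparound scans as this
-- approaches autovacuum_multixact_freeze_max_age (default 400 million).
SELECT datname, mxid_age(datminmxid) AS multixact_age
FROM pg_database
ORDER BY multixact_age DESC;

-- Per-table, to pick targets for VACUUM FREEZE instead of
-- vacuuming the whole 2.4 TB database:
SELECT relname, mxid_age(relminmxid) AS multixact_age
FROM pg_class
WHERE relkind = 'r'
ORDER BY multixact_age DESC
LIMIT 10;
```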
Hi Pushkar:
On Wed, Mar 30, 2016 at 8:40 AM, Deole, Pushkar (Pushkar)
wrote:
> Does PostgreSQL support a query to fetch last ‘n’ records that match the
> selection criteria. I am trying to fetch records from a table with start
> date that falls in last 30 days, however, I want to fetch the oldest
Is it possible to have a plpgsql program connect to 2 databases?
This seems to imply it,
http://stackoverflow.com/questions/18257003/pl-sql-querying-a-table-on-multiple-databases
However, looking at the PostgreSQL plpgsql reference,
http://www.postgresql.org/docs/current/static/plpgsql-cursors.h
On 03/30/2016 08:03 AM, Pavlov, Vladimir wrote:
Yes, VACUUM helps to solve the problem and the WARNING gone away.
Okay, so now we are on a different problem.
But, the problem is that the VACUUM for the entire database (2.4T) takes over 7
hours, and it has to run every 15-20 hours (about 300
On 03/30/2016 02:02 PM, Francisco Reyes wrote:
Is it possible to have a plpsql program connect to 2 databases?
This seems to imply it,
http://stackoverflow.com/questions/18257003/pl-sql-querying-a-table-on-multiple-databases
Well the above is referring to Oracle, so is not applicable to Postgr
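In Postgres the usual route for this is the `dblink` extension (or `postgres_fdw`); a minimal sketch, with the connection string and table names purely illustrative:

```sql
CREATE EXTENSION IF NOT EXISTS dblink;

-- Run a query on another database and treat the result as a local row set.
SELECT id, name
FROM dblink('dbname=otherdb host=localhost',
            'SELECT id, name FROM users')
     AS remote(id integer, name text);
```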
Pavlov, Vladimir wrote:
> Yes, VACUUM helps to solve the problem and the WARNING gone away.
> But, the problem is that the VACUUM for the entire database (2.4T) takes over
> 7 hours, and it has to run every 15-20 hours (about 300 millions
> transactions), otherwise:
> ERROR: multixact "members"
On Wednesday, March 30, 2016, Francisco Reyes wrote:
> Is it possible to have a plpsql program connect to 2 databases?
>
> This seems to imply it,
> http://stackoverflow.com/questions/18257003/pl-sql-querying-a-table-on-multiple-databases
>
> However looking at the postgresql plpsql reference,
>
On Wed, Mar 30, 2016 at 5:18 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:
> On Wednesday, March 30, 2016, Francisco Reyes wrote:
>
>> Is it possible to have a plpsql program connect to 2 databases?
>>
>> This seems to imply it,
>> http://stackoverflow.com/questions/18257003/pl-sql-q
On 3/30/2016 2:02 PM, Francisco Reyes wrote:
Have users that are familiar with plpgsql and have a process which
needs to be done within transactions in 2 databases.
begin in db1
begin in db2
work on 1
work on 2
if work on both dbs worked
commit db1
commit db2
and what if commit db2 fails
On 03/30/2016 05:44 PM, John R Pierce wrote:
and what if commit db2 fails for any number of reasons? you've
already committed db1, so you can't roll it back. this sort of
work requires '2pc' (2-phase commit), which is rather gnarly to
implement.
You mean when executing the actual commi
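The pattern John describes can be sketched with `PREPARE TRANSACTION` plus `dblink`, assuming `max_prepared_transactions > 0` on both servers; all object names and the transaction GID are illustrative:

```sql
-- Requires: CREATE EXTENSION dblink; on the local database.
SELECT dblink_connect('db2', 'dbname=db2');

BEGIN;                                    -- local transaction (db1)
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
SELECT dblink_exec('db2', 'BEGIN');       -- remote transaction (db2)
SELECT dblink_exec('db2',
  'UPDATE accounts SET balance = balance + 100 WHERE id = 2');

-- Phase 1: prepare both sides; either can still be rolled back here.
PREPARE TRANSACTION 'xfer_001';
SELECT dblink_exec('db2', $$PREPARE TRANSACTION 'xfer_001'$$);

-- Phase 2: only once both prepares succeeded.
COMMIT PREPARED 'xfer_001';
SELECT dblink_exec('db2', $$COMMIT PREPARED 'xfer_001'$$);
SELECT dblink_disconnect('db2');
-- On failure: ROLLBACK PREPARED 'xfer_001' on whichever side prepared.
```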
On 3/30/2016 2:52 PM, Francisco Reyes wrote:
On 03/30/2016 05:44 PM, John R Pierce wrote:
and what if commit db2 fails for any number of reasons? you've
already committed db1, so you can't roll it back. this sort of
work requires '2pc' (2-phase commit), which is rather gnarly to
implemen
Hello Everyone,
I am looking for suggestions on how to recover BDR replication.
The short story: we have 2 virtual nodes with shared storage. The shared
storage lost power, and after I brought everything back online BDR doesn't
work properly.
Here are some logs
2016-03-30 20:47:26 EDT postgres@10.12.160.2(43714):fre
On Tue, Mar 29, 2016 at 3:47 AM, Geoff Winkless wrote:
> On 28 March 2016 at 20:23, I wrote:
>
>> Table pa has 7522676 rows, 4834042 of which have field1 NULL, so it's
>> absolutely not reasonable to expect this to be an optimal strategy.
>>
>>
> It occurred to me that even though the majority
On 31 March 2016 at 09:38, Slava Bendersky wrote:
> Hello Everyone,
> I am looking for suggestion how to recover bdr replication.
> The short story we have 2 virtual nodes with share storage.
>
Can you describe the "shared storage" setup in more detail?
In general, with PostgreSQL "shared stora
Hello Craig,
The current setup is two servers running libvirt, plus a storage server
running glusterfs (the storage server feeds the two virtual servers). Right
now there is no fencing in place. Each of the nodes has one PostgreSQL VM with
BDR.
volga629.
From: "Craig Ringer"
To: "volga629"
Cc: "pgsql-gene
On 31 March 2016 at 10:43, Slava Bendersky wrote:
> Hello Craig,
> The current setup is two server which run libvirt and for storage which
> run glusterfs (storage server feed two virtual servers). Right now is no
> fencing in place. Each of the nodes have one PostgreSQL vm with bdr.
>
That's
Hello Craig,
Is any shared storage technology recommended for PostgreSQL in a virtual
environment?
OK, what I will do is take backups, shut down both virtual servers, and have
all VMs use local disk on the server only.
volga629
From: "Craig Ringer"
To: "volga629"
Cc: "pgsql-general"
Sent
On 3/30/2016 8:09 PM, Slava Bendersky wrote:
Is any share storage technology recommended for PostgreSQL in virtual
environment ?
Ok what I will do is going take backups, shutdown both virtual servers
and place all vm use local disk on server only.
'shared storage technology'... um, that's
In my case only the virtual hosts use shared storage (fed from glusterfs), but
the actual virtual machines have their own separate disks, and all PostgreSQL
instances run on separate data directories.
volga629
From: "John R Pierce"
To: "pgsql-general"
Sent: Thursday, 31 March, 2016 00:34:55
Subject: Re: [GE
Hello Everyone,
Is recovery from my situation possible at all? I was looking for a tool that
might help, and found only bdr_init_copy. Is it possible to initialize the
second node again? Also, is it a good idea to enable WAL archiving with BDR?
volga629
From: "volga629"
To: "John R Pierce"
Cc: "pgsql-ge
What's the purpose of such a configuration? It doesn't make sense to me. The
only reasonable case where you would want to put the data folder on shared
storage is for use with a warm standby, where you can have a secondary server
which serves as a read-only replica and can be promoted to master
Hello Alvaro,
We are running BDR, where each PostgreSQL VM is a master and shared storage is
only at the hypervisor level. All VMs live on their own virtual disks. Right
now we have 2 servers for redundancy, with a shared network between them. The
issue is with how BDR is working right now; see the log below. And my question
Our mails are overlapping :P
What I don't understand is the need for shared storage in this case. It would
be a lot better to have the data folder inside each server's virtual disk, to
avoid trouble with the shared storage; I really see no reason for such a
configuration here.
Now, getting into the
On 3/30/2016 10:41 PM, Alvaro Aguayo Garcia-Rada wrote:
What I don't understand is the need of a shared storage in this case. It would
be a lot better to have the data folder inside each server virtual disk to
avoid troubles with the shared storage; I really see no reason for such
configuratio
Hello Alvaro,
That sounds like a good plan. I will try it tomorrow, because I need a
maintenance window.
To do a backup I can use something like
bdr_dump -Fp -h localhost -U postgres mydb -f /tmp/mydb.data --data-only
--exclude-table='bdr_*
volga629
From: "Alvaro Aguayo Garcia-Rada"
To: "
I'd rather use something like this:
bdr_dump -N bdr -h localhost -U postgres -W mydb > /backup.sql
That's for each database. To restore it, you must first create the empty
database:
CREATE DATABASE mydb WITH ENCODING 'UTF-8' TEMPLATE template0;
(change encoding according to your needs.