On Wed, Nov 30, 2016 at 6:43 AM, Raymond O'Donnell wrote:
>
> I seem to remember that it was actually deprecated at some point - this is
> going back quite a few years. This was later reversed, though I don't know
> why.
>
>
Because it's pointless to deprecate something that you
On Wed, Nov 30, 2016 at 11:05 AM, George wrote:
> On Wed, Nov 30, 2016 at 6:45 PM, Merlin Moncure wrote:
>> On Wed, Nov 30, 2016 at 10:42 AM, Tom Lane wrote:
>>> George writes:
explain select * from
On Wed, 30 Nov 2016 15:00:16 +0100 (CET)
Marcin Giedz wrote:
> Hello,
>
> Does anyone know any reliable open source HA solution with stream replication
> and IP live migration in case of master node failure? Recently I've been
> attending Fujitsu PostgreSQL webinar and
On Wed, Nov 30, 2016 at 6:45 PM, Merlin Moncure wrote:
> On Wed, Nov 30, 2016 at 10:42 AM, Tom Lane wrote:
>> George writes:
>>> explain select * from wg3ppbm_transaction where partner_uuid in (
>>> select p.uuid
>>>
My use case:
I have a table which I expect to reach a size of more than 10M rows.
This table will have a column "partner_uuid" which will have a maximum
envisioned cardinality of 10.
I want different users of my web application to see different subsets
of that table. I am using row-level
On Wed, Nov 30, 2016 at 7:43 AM, Raymond O'Donnell wrote:
> On 30/11/16 12:05, Thomas Kellerer wrote:
>>
>> Tobia Conforto schrieb am 30.11.2016 um 12:15:
>>>
>>> I think MONEY is a great datatype, at least in theory.
>>
>>
>> I personally find it pretty useless to be honest -
----- Original Message -----
> From: "Howard News"
> Sent: Wednesday, November 30, 2016 9:03:28 AM
> Subject: Re: [GENERAL] select function alias
> On 30/11/2016 13:42, Timoteo Blanco wrote:
> > I've a series of timestamp columns I'd like to alias in select
On Wed, Nov 30, 2016 at 10:42 AM, Tom Lane wrote:
> George writes:
>> explain select * from wg3ppbm_transaction where partner_uuid in (
>> select p.uuid
>> from wg3ppbm_userpartner up
>> join wg3ppbm_partner p on
On Wed, Nov 30, 2016 at 8:04 AM, Cachique wrote:
> You can try pg_cron.
> https://github.com/citusdata/pg_cron
> "pg_cron is a simple cron-based job scheduler for PostgreSQL (9.5 or
> higher) that runs inside the database as an extension. It uses the same
> syntax as regular
On 11/30/2016 06:00 AM, Marcin Giedz wrote:
Hello,
Does anyone know any reliable open source HA solution with stream
replication and IP live migration in case of master node failure?
Recently I've been attending a Fujitsu PostgreSQL webinar and they
presented a so-called "mirroring controller" +
>> I think MONEY is a great datatype, at least in theory.
>
> I personally find it pretty useless to be honest - especially because
> the currency symbol depends on the client.
I should have been more clear: I find the underlying idea of storing a
fixed-scale decimal number as a pre-scaled int8
On Tue, Nov 29, 2016 at 6:56 PM, rob stone wrote:
> Hello Michael,
> On Tue, 2016-11-29 at 19:10 -0500, Michael Sheaver wrote:
>> Greetings,
>>
>> I have two tables that are populated using large datasets from
>> disparate external systems, and I am trying to match records
George writes:
> explain select * from wg3ppbm_transaction where partner_uuid in (
> select p.uuid
> from wg3ppbm_userpartner up
> join wg3ppbm_partner p on p.id = up.partner_id
> );
> "Hash Semi Join (cost=2.07..425.72 rows=2960
----- Original Message -----
> From: "Merlin Moncure"
> To: "Raymond O'Donnell"
> Cc: "Thomas Kellerer" , "PostgreSQL General"
>
> Sent: Wednesday, November 30, 2016 11:41:39 AM
> Subject: Re: [GENERAL] About
2016-12-01 5:54 GMT+13:00 Melvin Davidson :
>
> On Wed, Nov 30, 2016 at 8:04 AM, Cachique wrote:
>
>> You can try pg_cron.
>> https://github.com/citusdata/pg_cron
>> "pg_cron is a simple cron-based job scheduler for PostgreSQL (9.5 or
>> higher) that
The actual upgrade will be that I have an existing server running 9.3 on RHEL
6.2. We’ll have to do a backup of the database, then deploy a new OVA to get to
RHEL 7 with PG 9.6, then restore the database. Are there any specific steps
that I’ll have to execute in the scripts that will restore
On 11/30/2016 10:27 AM, Sinclair, Ian D (Ian) wrote:
The actual upgrade will be that I have an existing server running 9.3 on
RHEL 6.2. We’ll have to do a backup of the database, then deploy a new
OVA to get to RHEL 7 with PG 9.6, then restore the database. Are there
any specific steps that I’ll
On Wed, Nov 30, 2016 at 8:15 AM, bto...@computer.org wrote:
> 5. Use a CTE:
>
> with shortnames as (
> select to_char(impressions_create_date,'yyyy-mm-dd') as ymd from
> impressionsdb
> ) select ymd from shortnames where ymd like '2016-11%';
>
>
Except that
Merlin Moncure writes:
> On Wed, Nov 30, 2016 at 11:05 AM, George wrote:
>> So there is definitely something wrong here. This situation makes many
>> row-level security use cases cumbersome since you need to have
>> almost the same WHERE clause both in
On Wed, Nov 30, 2016 at 12:15 PM, David G. Johnston
wrote:
> On Wed, Nov 30, 2016 at 8:15 AM, bto...@computer.org
> wrote:
>>
>> 5. Use a CTE:
>>
>> with shortnames as (
>> select to_char(impressions_create_date,'yyyy-mm-dd') as ymd
On 11/30/2016 10:27 AM, Sinclair, Ian D (Ian) wrote:
The actual upgrade will be that I have an existing server running 9.3
on RHEL 6.2. We’ll have to do a backup of the database, then deploy a
new OVA to get to RHEL 7 with PG 9.6, then restore the database. Are
there any specific steps that
On Wed, Nov 30, 2016 at 2:16 PM, John McKown
wrote:
> On Wed, Nov 30, 2016 at 1:23 PM, bto...@computer.org
> wrote:
> Speaking generically, I guess maybe MONEY needs to be somewhat like a
> TIMESTAMP. At least in PostgreSQL, a TIMESTAMP can
On 11/30/2016 11:57 AM, Patrick B wrote:
but there are queries like this:
select now() - pg_last_xact_replay_timestamp() AS replication_delay;
that need to be run on a slave.. how can I insert that data into a
table on the slave?
you would insert that data into a table on the MASTER,
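A hedged sketch of that suggestion: run the measurement on the standby and push it to the master, where writes are allowed. This assumes the dblink extension is installed; the connection string and the replication_delay_log table are hypothetical names, not anything from the thread.

```sql
-- Run on the standby. dblink_exec sends the INSERT to the master;
-- host, dbname, and table name are assumptions for illustration.
SELECT dblink_exec(
    'host=master-host dbname=mydb',
    format(
        'INSERT INTO replication_delay_log (measured_at, delay) VALUES (now(), %L)',
        (SELECT now() - pg_last_xact_replay_timestamp())
    )
);
```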
On 11/30/2016 12:16 PM, John McKown wrote:
Speaking generically, I guess maybe MONEY needs to be somewhat like a
TIMESTAMP. At least in PostgreSQL, a TIMESTAMP can contain a
TIMEZONE. I guess a MONEY type should contain a modifier identifying
the issuer of the currency (E.g. U.S. Dollar vs
On Wed, 30 Nov 2016 08:58:13 -0800
"Joshua D. Drake" wrote:
> On 11/30/2016 06:00 AM, Marcin Giedz wrote:
> > Hello,
> >
> > Does anyone know any reliable open source HA solution with stream
> > replication and IP live migration in case of master node failure?
> >
On Wed, Nov 30, 2016 at 8:44 PM, Tom Lane wrote:
> Merlin Moncure writes:
>> On Wed, Nov 30, 2016 at 11:05 AM, George wrote:
>>> So there is definitely something wrong here. This situation makes many
>>> row-level security use
On Wed, Nov 30, 2016 at 1:23 PM, bto...@computer.org wrote:
>
> I recall a number of years ago reading about a money implementation that
> included different currency bases and exchange rate calculation. A quick
> Google search turned up
>
>
>
Is there any reason why for the same data set, and same indexes, that
the data in postgres would be significantly larger than
innodb/mariadb?
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
On Thu, Dec 01, 2016 at 12:38:37AM +1300, Samuel Williams wrote:
> Is there any reason why for the same data set, and same indexes, that
> the data in postgres would be significantly larger than
> innodb/mariadb?
Sure: because they do entirely different things on-disk.
Regards,
Karsten
--
GPG
Hi,
recently I had problems with a corrupt pg_dump file. The problem with
the file was due to a faulty disk. The trouble with this is that I was
unaware of the disk problem and the pg_dump file corruption so I did not
have a full valid backup. In order to reduce the chances of this I was
Tobia Conforto schrieb am 30.11.2016 um 12:15:
> I think MONEY is a great datatype, at least in theory.
I personally find it pretty useless to be honest - especially because the
currency symbol depends on the client.
So if I store a money value in the database, some clients see CHF, some see
I think MONEY is a great datatype, at least in theory.
It's stored as a 64 bit binary integer with an implied, fixed decimal
scale. This means that storage is conserved and operations are as fast
and exact as possible (to the implied decimal scale.)
Unfortunately it has a couple of significant
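A small illustration of the client-dependent symbol complained about above: money output formatting follows the session's lc_monetary setting, so the same stored value prints differently for different clients. The locale names are assumptions about what is installed on the server.

```sql
-- Same stored value, different rendering depending on lc_monetary:
SET lc_monetary = 'en_US.UTF-8';
SELECT 12.34::money;      -- rendered as $12.34

SET lc_monetary = 'de_CH.UTF-8';
SELECT 12.34::money;      -- rendered with CHF formatting
```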
On Wed, Nov 30, 2016 at 12:00:07PM +, Howard News wrote:
> recently I had problems with a corrupt pg_dump file. The problem with the
> file was due to a faulty disk. The trouble with this is that I was unaware
> of the disk problem and the pg_dump file corruption so I did not have a full
>
You can try pg_cron.
https://github.com/citusdata/pg_cron
"pg_cron is a simple cron-based job scheduler for PostgreSQL (9.5 or
higher) that runs inside the database as an extension. It uses the same
syntax as regular cron, but it allows you to schedule PostgreSQL commands
directly from the
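A minimal sketch of scheduling with pg_cron, assuming the extension has already been added to shared_preload_libraries and can be created in the target database; the schedule and command are illustrative only.

```sql
CREATE EXTENSION pg_cron;

-- Same five-field syntax as regular cron:
-- run VACUUM every Saturday at 3:30am.
SELECT cron.schedule('30 3 * * 6', $$VACUUM$$);

-- cron.schedule returns a job id, which can later be removed:
SELECT cron.unschedule(1);
```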
On 30/11/2016 12:27, Karsten Hilbert wrote:
You can try to suitably combine "pg_dump --format=plain" with
"tee" and "md5sum" such that the output stream is diverted to
both a file and a pipe-into-CRC-algorithm and eventually
compare the pipe's sum with the sum generated from the file.
But the
On Wed, Nov 30, 2016 at 01:11:58PM +, Howard News wrote:
> > You can try to suitably combine "pg_dump --format=plain" with
> > "tee" and "md5sum" such that the output stream is diverted to
> > both a file and a pipe-into-CRC-algorithm and eventually
> > compare the pipe's sum with the sum
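The tee + md5sum pattern described above can be sketched as follows. Here printf stands in for the real `pg_dump --format=plain` producer so the pipeline itself is visible; in practice the first command would be pg_dump against your database.

```shell
# Producer (pg_dump in real use) | tee writes the dump file while md5sum
# checksums the same byte stream; afterwards the file on disk is
# re-checksummed and the two sums compared.
printf 'fake dump data\n' | tee dump.sql | md5sum | awk '{print $1}' > dump.sql.md5
md5sum dump.sql | awk '{print $1}' > dump.sql.check
cmp -s dump.sql.md5 dump.sql.check && echo "checksum OK"
```

If the sums differ, the file on disk does not match what pg_dump emitted, which is exactly the silent-corruption case the thread is worried about.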
Hello,
I have a problem with my Postgres server. There is a DB of approx. 30
GB. The traffic per day is between 20-50 GB on the outgoing interface.
One client has ~20 IDLE connections. I have sniffed these connections
with nfdump and got this traffic:
2016-11-29 12:05:56.165 5854.304 TCP
On 30/11/2016 13:05, basti wrote:
My question is: what sends so much traffic on an idle connection
within an hour?
Presumably they are only idle at the point you are viewing the
connections. In my typical usage, the connections are in a connection
pool and are mostly idle. Monitored
Howdy,
I've a series of timestamp columns I'd like to alias in select statements.
psql indicates my alias doesn't exist after
I define it.
Example -> select to_char(impressions_create_date,'yyyy-mm-dd') as ymd from
impressionsdb where ymd like '2016-11%' ;
psql always complains column ymd does
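The reason, for what it's worth, is that WHERE is evaluated before the select list, so output aliases are not yet visible there. The CTE workaround suggested elsewhere in this digest sidesteps that:

```sql
-- The alias ymd becomes a real column of the CTE's output,
-- so the outer WHERE clause can reference it.
WITH shortnames AS (
    SELECT to_char(impressions_create_date, 'yyyy-mm-dd') AS ymd
    FROM impressionsdb
)
SELECT ymd FROM shortnames WHERE ymd LIKE '2016-11%';
```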
On Wed, Nov 30, 2016 at 6:05 PM, Patrick B wrote:
> https://www.postgresql.org/docs/9.2/static/runtime-config-replication.html
>
> wal_keep_segments is the parameter responsible for streaming replication
> be able to recover itself without using wal_files, is that
On 11/30/2016 3:01 PM, George Weaver wrote:
I have set up 2 instances of PostgreSQL 9.6 on the same Windows
server. I noticed when I ran initdb for the second instance that it
did not create a pg_log folder in the new cluster, and that all
logging is going to the pg_log folder in the original
Michael Rasmussen writes:
> I have encountered an issue in testing our upgrade from 9.4.4 to 9.6.1. Per
> the changes detailed in
> https://www.postgresql.org/message-id/E1aGVwY-0002Pu-Uk%40gemulon.postgresql.org,
> the argument types of the to_reg*() functions were changed
On 30/11/2016 9:16 PM, John R Pierce wrote:
On 11/30/2016 3:01 PM, George Weaver wrote:
I have set up 2 instances of PostgreSQL 9.6 on the same Windows
server. I noticed when I ran initdb for the second instance that it
did not create a pg_log folder in the new cluster, and that all
logging
I have encountered an issue in testing our upgrade from 9.4.4 to 9.6.1. Per the
changes detailed in
https://www.postgresql.org/message-id/E1aGVwY-0002Pu-Uk%40gemulon.postgresql.org,
the argument types of the to_reg*() functions were changed from cstring to
text.
We have some plpgsql helper
2016-11-29 23:59 GMT+13:00 Patrick B :
>
>
> 2016-11-29 16:36 GMT+13:00 David G. Johnston :
>
>> On Mon, Nov 28, 2016 at 8:22 PM, Patrick B
>> wrote:
>>
>>>
>>> How is that even possible?? I don't
Good afternoon,
I have set up 2 instances of PostgreSQL 9.6 on the same Windows server.
I noticed when I ran initdb for the second instance that it did not
create a pg_log folder in the new cluster, and that all logging is going
to the pg_log folder in the original cluster.
Is it possible
On Wed, Nov 30, 2016 at 4:01 PM, George Weaver wrote:
> Good afternoon,
>
> I have set up 2 instances of PostgreSQL 9.6 on the same Windows server. I
> noticed when I ran initdb for the second instance that it did not create a
> pg_log folder in the new cluster, and that all
On 11/30/2016 03:38 AM, Samuel Williams wrote:
Is there any reason why for the same data set, and same indexes, that
the data in postgres would be significantly larger than
innodb/mariadb?
Hard to say without the table/index definitions and some indication of
what the data is.
What version
2016-12-01 14:15 GMT+13:00 David G. Johnston :
> On Wed, Nov 30, 2016 at 6:05 PM, Patrick B
> wrote:
>
>> https://www.postgresql.org/docs/9.2/static/runtime-config-
>> replication.html
>>
>> wal_keep_segments is the parameter responsible for
On 30/11/16 12:05, Thomas Kellerer wrote:
Tobia Conforto schrieb am 30.11.2016 um 12:15:
I think MONEY is a great datatype, at least in theory.
I personally find it pretty useless to be honest - especially because
the currency symbol depends on the client.
So if I store a money value in the
Regarding the filesystem solution, the dump is currently written to a HP
RAID 10 array with an NTFS partition. What filesystems / raid arrays have
this ability?
If you can't trust your RAID 10 (1 meaning mirrored) to
actually store what you told it to, you've got problems beyond
somehow
Hello,
Does anyone know any reliable open source HA solution with stream replication
and IP live migration in case of master node failure? Recently I've been
attending a Fujitsu PostgreSQL webinar and they presented a so-called "mirroring
controller" + sync stream replication. I'm wondering if
On 30/11/2016 13:42, Timoteo Blanco wrote:
Howdy,
I've a series of timestamp columns I'd like to alias in select
statements. psql indicates my alias doesn't exist after
I define it.
Example -> select to_char(impressions_create_date,'yyyy-mm-dd') as ymd
from impressionsdb where ymd like
On Wed, Nov 30, 2016 at 01:53:21PM +, Howard News wrote:
> Regarding the filesystem solution, the dump is currently written to a HP
> RAID 10 array with an NTFS partition. What filesystems / raid arrays have
> this ability?
> > If you can't trust your RAID 10 (1 meaning mirrored) to
>
Also this
https://en.wikipedia.org/wiki/Silent_data_corruption#Countermeasures
--
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346