On 11/18/2015 08:54 AM, Will McCormick wrote:
Re-sending to group as well Jim :D
Regarding testing backups: well said, Jim. Thanks for taking the time to
respond. I will regularly test whatever we decide to put in place.
The below is from the 0.9.3 BDR documentation:
"Because logical
On 11/18/15 10:53 AM, Will McCormick wrote:
Regarding testing backups: well said, Jim. Thanks for taking the time to
respond. I will regularly test whatever we decide to put in place.
The below is from the 0.9.3 BDR documentation:
"Because logical replication is only supported in streaming mode
On 11/17/15 5:33 PM, anj patnaik wrote:
The pg log files apparently log error lines every time a user inserts a
duplicate. I implemented a composite primary key, and when I see the
exception in my client app I update the row with the recent data.
However, I don't want the log file to fill
From my work with it, and as mentioned before, Ora2Pg is a highly
recommended and very powerful tool for migrating databases from Oracle to
PostgreSQL.
Ran Fedida.
On 18 Nov 2015 12:25, "Thomas Kellerer" wrote:
> Sachin Srivastava wrote on 18.11.2015 at 10:41:
>
> >
Hello,
Through Java JDBC, is it possible to do the following steps without an
ACCESS EXCLUSIVE lock for the index:
setAutoCommit(false);
drop index 1, 2, ...;
insert millions of records;
recreate index 1, 2, ...;
commit;
Found this post, but it says that only within a psql begin/commit block
are users able to do
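Regardless of the client (psql or JDBC), DROP INDEX acquires an ACCESS
EXCLUSIVE lock on the indexed table and holds it until the transaction
commits, so the lock cannot be avoided with this pattern, only confined to
one transaction. A minimal sketch of the pattern itself (table, index, and
file names are hypothetical):

```sql
BEGIN;
-- Acquires ACCESS EXCLUSIVE on the table; held until COMMIT.
DROP INDEX idx_orders_customer;
DROP INDEX idx_orders_date;
-- Bulk-load without per-row index maintenance overhead.
COPY orders FROM '/tmp/orders.csv' WITH (FORMAT csv);
-- Rebuild the indexes in bulk, usually faster than maintaining
-- them row by row during the load.
CREATE INDEX idx_orders_customer ON orders (customer_id);
CREATE INDEX idx_orders_date ON orders (order_date);
COMMIT;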
On 11/18/15 10:48 AM, Emi wrote:
Hello,
Through Java JDBC, is it possible to do the following steps without an
ACCESS EXCLUSIVE lock for the index:
setAutoCommit(false);
drop index 1, 2, ...;
insert millions of records;
recreate index 1, 2, ...;
commit;
Found this post, but it says only within a psql block
And as always, what is the OS and the version of PostgreSQL?
On Tue, Nov 17, 2015 at 7:18 AM, Ramesh T
wrote:
> The query is big; it's selecting 20 rows from two tables, as I mentioned
> in the explain analyze above.
>
> What should I do? Any help?
>
> On
Re-sending to group as well Jim :D
Regarding testing backups, Well said Jim. Thanks for taking the time to
respond. I will test regularly whatever we decide to put in place.
The below is from the 0.9.3 BDR documentation:
"Because logical replication is only supported in streaming mode (rather
The query is big; it's selecting 20 rows from two tables, as I mentioned
in the explain analyze above.
What should I do? Any help?
On Wed, Nov 4, 2015 at 4:27 AM, Adrian Klaver
wrote:
> On 11/03/2015 06:42 AM, Ramesh T wrote:
>
>> I have a Query it taking a lot of time to
The pg log files apparently log error lines every time a user inserts a
duplicate. I implemented a composite primary key and then when I see the
exception in my client app I update the row with the recent data.
however, I don't want the log file to fill out with these error messages
since it's
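For what it's worth, the error noise can be avoided altogether with an
upsert: PostgreSQL 9.5's INSERT ... ON CONFLICT (in beta at the time of
this thread) does the insert-or-update in a single statement without ever
raising a duplicate-key error. A sketch, with a hypothetical table and
composite key:

```sql
-- Insert-or-update on the composite primary key without raising a
-- duplicate-key error (requires PostgreSQL 9.5+).
INSERT INTO readings (sensor_id, taken_at, value)
VALUES (42, '2015-11-17 17:33:00', 3.14)
ON CONFLICT (sensor_id, taken_at)
DO UPDATE SET value = EXCLUDED.value;
```

With this, nothing reaches the server log and no exception reaches the
client app.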
On 11/17/2015 03:33 PM, anj patnaik wrote:
The pg log files apparently log error lines every time a user inserts a
duplicate. I implemented a composite primary key and then when I see the
exception in my client app I update the row with the recent data.
however, I don't want the log file to
On 11/17/2015 04:18 AM, Ramesh T wrote:
The query is big; it's selecting 20 rows from two tables, as I mentioned
in the explain analyze above.
What should I do? Any help?
Please do not top post.
I must be missing a post, as I see no explanation of what the query is
doing.
On Wed, Nov 4, 2015 at
On Tue, Nov 17, 2015 at 05:48:36PM +0530, Ramesh T wrote:
> The query is big; it's selecting 20 rows from two tables, as I mentioned
> in the explain analyze above.
>
> What should I do? Any help?
Posting the query might be a reasonable first step.
Karsten
> On Wed, Nov 4, 2015 at 4:27
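When posting, including the plan makes it much easier for others to help.
A sketch with hypothetical table names; replace with the real two-table
query and paste the full output:

```sql
EXPLAIN (ANALYZE, BUFFERS)
SELECT o.*, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id
LIMIT 20;
```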
On 11/18/2015 09:31 AM, Will McCormick wrote:
Ccing list
Thanks Adrian. I think I have it
Let's say we have 2 nodes:
Node A
Node B
GOOD
Application writes only occurring against Node A
1) Node A base backup taken
2) User error occurs and replicates
Can restore and recover Node A via PITR
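For the single-master scenario above, standard PITR applies: restore the
base backup, then replay archived WAL up to just before the mistake. A
recovery.conf sketch (PostgreSQL 9.x parameter names; the archive path and
timestamp are hypothetical):

```
restore_command = 'cp /archive/%f %p'
recovery_target_time = '2015-11-18 08:50:00'
```

Whether the recovered node can then rejoin a BDR group is the open
question in this thread.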
What viable options exist for backup & recovery in a BDR environment? From
the reading I have done, PITR recovery is not an option with BDR. It's
important to preface this by saying that I have almost no exposure to
Postgres backup and recovery. Is PITR not an option with BDR?
If a user fat fingers
On 11/18/15 9:46 AM, Will McCormick wrote:
What viable options exist for backup & recovery in a BDR environment?
From the reading I have done, PITR recovery is not an option with BDR.
It's important to preface this by saying that I have almost no exposure
to Postgres backup and recovery. Is PITR not an
IN Folks:
I will be visiting Indianapolis in early December, so we will have our
first-ever PostgreSQL meetup in that city on December 2. It's being
hosted by the Big Data Meetup at ElevenFifty academy. RSVP here:
http://www.meetup.com/IndyBigData/events/224656314/
--
Josh Berkus
PostgreSQL
Hi,
One of my co-workers came out of a NIST cyber-security type meeting today and
asked me to delve into postgres and zeroization.
I am casually aware of mvcc issues and vacuuming
I believe the concern, based on my current understanding of postgres inner
workings, is that when a dead
On Wed, Nov 18, 2015 at 12:45 PM, Day, David wrote:
> Hi,
>
>
>
> One of my co-workers came out of a NIST cyber-security type meeting today
> and asked me to delve into postgres and zeroization.
>
>
>
> I am casually aware of mvcc issues and vacuuming
>
>
>
> I believe the
On 11/18/2015 11:45 AM, Day, David wrote:
I believe the concern, based on my current understanding of
postgres inner workings, is that when a dead tuple is reclaimed by
vacuuming: Is that reclaimed space initialized in some fashion that
would shred any sensitive data that was formerly
On 11/18/2015 11:45 AM, Day, David wrote:
Hi,
One of my co-workers came out of a NIST cyber-security type meeting
today and asked me to delve into postgres and zeroization.
I am casually aware of mvcc issues and vacuuming
I believe the concern, based on my current understanding of
On Wed, Nov 18, 2015 at 9:35 AM, Jim Nasby wrote:
> On 11/17/15 5:33 PM, anj patnaik wrote:
>>
>> The pg log files apparently log error lines every time a user inserts a
>> duplicate. I implemented a composite primary key and then when I see the
>> exception in my client
On 11/18/2015 11:45 AM, Day, David wrote:
Hi,
One of my co-workers came out of a NIST cyber-security type meeting
today and asked me to delve into postgres and zeroization.
I am casually aware of mvcc issues and vacuuming
I believe the concern, based on my current understanding of
-----Original Message-----
From: Adrian Klaver [mailto:adrian.kla...@aklaver.com]
Sent: Wednesday, November 18, 2015 3:47 PM
To: Day, David; pgsql-general@postgresql.org
Subject: Re: [GENERAL] postgres zeroization of dead tuples ? i.e scrubbing dead
tuples with sensitive data.
On 11/18/2015
David G. Johnston wrote:
> On Wed, Nov 18, 2015 at 12:45 PM, Day, David wrote:
> > I believe the concern, based on my current understanding of postgres
> > inner workings, is that when a dead tuple is reclaimed by vacuuming: Is
> > that reclaimed space initialized in some
Which begs the question, what is more important, the old/vacuumed data, or
the current valid data?
If someone can hack into the freed data, then they certainly have the
ability to hack into the current valid data.
So ultimately, the best thing to do is to secure the system from being
hacked, not
Alvaro Herrera writes:
> David G. Johnston wrote:
>> On Wed, Nov 18, 2015 at 12:45 PM, Day, David wrote:
>>> I believe the concern, based on my current understanding of postgres
>>> inner workings, is that when a dead tuple is reclaimed by
On Wed, Nov 18, 2015 at 10:58 AM, dinesh kumar
wrote:
> On Wed, Nov 18, 2015 at 10:41 AM, Sachin Srivastava <
> ssr.teleat...@gmail.com> wrote:
>
>> Hi,
>>
>> Please inform which is the best tool for SQL conversion, because I have to
>> migrate an Oracle database into
On Wed, Nov 18, 2015 at 11:24 AM, Thomas Kellerer
wrote:
> Sachin Srivastava wrote on 18.11.2015 at 10:41:
>
> > Please inform which is the best tool for SQL conversion, because I have
> > to migrate an Oracle database into PostgreSQL.
>
> Ora2Pg works quite well
Hi,
Please inform which is the best tool for SQL conversion, because I have to
migrate an Oracle database into PostgreSQL.
Regards,
SS
Sachin Srivastava wrote on 18.11.2015 at 10:41:
> Please inform which is the best tool for SQL conversion, because I have to
> migrate an Oracle database into PostgreSQL.
Ora2Pg works quite well: http://ora2pg.darold.net/
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
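For reference, Ora2Pg is driven by a configuration file; a minimal sketch
of the main directives (the DSN, credentials, and schema below are
placeholders, not values from this thread):

```
# ora2pg.conf (minimal sketch)
ORACLE_DSN      dbi:Oracle:host=orahost;sid=ORCL
ORACLE_USER     system
ORACLE_PWD      manager
SCHEMA          MYSCHEMA
TYPE            TABLE
OUTPUT          output.sql
```

Running ora2pg with this configuration exports the schema's tables as a
PostgreSQL-compatible SQL script.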
On Wed, Nov 18, 2015 at 10:41 AM, Sachin Srivastava wrote:
> Hi,
>
> Please inform which is the best tool for SQL conversion, because I have to
> migrate an Oracle database into PostgreSQL.
>
>
Pentaho is a tool you should have a look at. Also, Talend works great.
You
On Wed, Nov 18, 2015 at 04:46:11PM -0500, Melvin Davidson wrote:
> I'm still trying to understand why you think someone can access old data but
> not current/live data.
I don't. It's just another risk. When you're making a list of risks,
you need to list them all. It turns out that in Postgres,
As a temporary fix I need to write some uploaded image files to PostgreSQL
until a task server can read/process/delete them.
The problem I've run into (via server load tests that model our production
environment), is that these read/writes end up pushing the indexes used by
other queries out
On Wed, Nov 18, 2015 at 03:22:44PM -0500, Tom Lane wrote:
> It's quite unclear to me what threat model such a behavior would add
> useful protection against.
If you had some sort of high-security database and deleted some data
from it, it's important for the threat modeller to know whether the
On 11/18/2015 01:46 PM, Michael Nolan wrote:
On Wed, Nov 18, 2015 at 4:38 PM, Adrian Klaver
> wrote:
Alright, I was following you up to this. Seems to me deleted data
would represent stale/old data and would be less
On Wed, Nov 18, 2015 at 01:38:47PM -0800, Adrian Klaver wrote:
> Alright, I was following you up to this. Seems to me deleted data would
> represent stale/old data and would be less valuable.
If the data that was deleted is sensitive, then the fact that you
deleted it but that it didn't actually
On 11/18/2015 01:49 PM, John McKown wrote:
On Wed, Nov 18, 2015 at 3:38 PM, Adrian Klaver
>wrote:
On 11/18/2015 01:34 PM, Andrew Sullivan wrote:
On Wed, Nov 18, 2015 at 03:22:44PM -0500, Tom Lane wrote:
It's
On 11/18/2015 01:34 PM, Andrew Sullivan wrote:
On Wed, Nov 18, 2015 at 03:22:44PM -0500, Tom Lane wrote:
It's quite unclear to me what threat model such a behavior would add
useful protection against.
If you had some sort of high-security database and deleted some data
from it, it's important
I'm still trying to understand why you think someone can access old data but
not current/live data.
If you encrypt the live data, wouldn't that solve both concerns?
On Wed, Nov 18, 2015 at 4:38 PM, Adrian Klaver
wrote:
> On 11/18/2015 01:34 PM, Andrew Sullivan wrote:
>
On Wed, Nov 18, 2015 at 4:38 PM, Adrian Klaver
wrote:
>
>> Alright, I was following you up to this. Seems to me deleted data would
> represent stale/old data and would be less valuable.
>
>>
>>
It may depend on WHY the data was deleted. If it represented, say, Hillary
On 11/18/2015 01:51 PM, Andrew Sullivan wrote:
On Wed, Nov 18, 2015 at 01:38:47PM -0800, Adrian Klaver wrote:
Alright, I was following you up to this. Seems to me deleted data would
represent stale/old data and would be less valuable.
If the data that was deleted is sensitive, then the fact
On 11/18/2015 5:10 PM, Jonathan Vanasco wrote:
As a temporary fix I need to write some uploaded image files to PostgreSQL
until a task server can read/process/delete them.
The problem I've run into (via server load tests that model our production
environment), is that these read/writes end up
On Wed, Nov 18, 2015 at 3:38 PM, Adrian Klaver
wrote:
> On 11/18/2015 01:34 PM, Andrew Sullivan wrote:
>
>> On Wed, Nov 18, 2015 at 03:22:44PM -0500, Tom Lane wrote:
>>
>>> It's quite unclear to me what threat model such a behavior would add
>>> useful protection
On 11/18/2015 12:57 PM, Day, David wrote:
-----Original Message-----
From: Adrian Klaver [mailto:adrian.kla...@aklaver.com]
Sent: Wednesday, November 18, 2015 3:47 PM
To: Day, David; pgsql-general@postgresql.org
Subject: Re: [GENERAL] postgres zeroization of dead tuples ? i.e scrubbing dead