Re: [Veritas-bu] Policy settings are reverting

2007-09-30 Thread Bobby Williams
Make sure that someone else does not have the console open.  If multiple
people have the console open (run bpps and grep for java to check), tell
everyone to rescan the policies whenever you make a change.
 
If you open a policy without rescanning for updates and then click OK, the
stale copy you opened is the version that gets saved.
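The console check mentioned above can be run from the master server's command
line. A minimal sketch, assuming a default UNIX install path for bpps:

```shell
# List NetBackup processes and flag any running Java admin consoles
/usr/openv/netbackup/bin/bpps -a | grep -i java
```

Any java process in the output means someone has an admin console open that
could save a stale copy of the policy over your change.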
 



Bobby Williams 
2205 Peterson Drive 
Chattanooga, Tennessee  37421 
423-296-8200 

 


From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Steve Hoenir
Sent: Friday, September 28, 2007 12:08 PM
To: Veritas Netbackup
Cc: veritas-bu@mailman.eng.auburn.edu
Subject: Re: [Veritas-bu] Policy settings are reverting


No -- this is a new system for me, so I'm sticking to using the master for
everything (even just viewing job activity) until I can get a better handle
on how this system is configured.  But that is a good point.
 
-Skip

- Original Message - 
From: Veritas Netbackup mailto:[EMAIL PROTECTED]
To: Steve Hoenir mailto:[EMAIL PROTECTED]
Cc: veritas-bu@mailman.eng.auburn.edu 
Sent: Friday, September 28, 2007 6:52 AM
Subject: Re: [Veritas-bu] Policy settings are reverting

Did you use separate consoles?  I mean, did you change the settings using the
Windows console and are now trying to view them using a Java console?  Go
back to the console that you used to make the changes.  I have had problems
in this situation.
 
Try verifying the settings using commands.
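For example (MyPolicy is a hypothetical policy name; bppllist normally ships
under the admincmd directory on UNIX masters):

```shell
# Dump a policy's saved attributes, including the
# "Allow multiple data streams" setting, in readable form
/usr/openv/netbackup/bin/admincmd/bppllist MyPolicy -U
```

What the command prints is what is actually stored on the master, regardless
of what either GUI console is showing.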
 
Regards,
PP BIJU KRISHNAN
 
On 9/28/07, Steve Hoenir [EMAIL PROTECTED] wrote: 

Hello all,
 
I haven't been on this list for quite some time, but somehow I got pulled
back into the fun world of NBU admin...  I'm working on a system that I
haven't encountered before, so I'm not fully sure how it's configured yet,
but I ran into something really screwy, and I'm having trouble finding out
why.  The system is running NBU 5.0MP7, with only the one master and no
media servers.
 
I made changes to a number of policies that were set to Allow multiple data
streams -- the amount of data didn't require it, so I cleared the checkbox.
I went back into the system today, and all the checkboxes I had cleared were
repopulated.  However, when looking at the job activity, all of the jobs are
acting properly (one data stream).  I tried clicking OK instead of
Cancel on the policy, and the setting reactivated itself. 
 
Anyone know why these settings tried to revert?
 
Thanks,
Skip
 



___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
mailto:Veritas-bu@mailman.eng.auburn.edu 
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu






Re: [Veritas-bu] Tapeless backup environments?

2007-09-30 Thread Curtis Preston
Chris Freemantle said:
It's interesting that the probability of any 2 randomly selected hashes 
being the same is quoted, rather than the probability that at least 2 
out of a whole group are the same. That's probably because the minutely
small chance becomes rather bigger when you consider many hashes. This 
will still be small, but I suspect not as reassuringly small.
To illustrate this, consider the 'birthday paradox'. 

I'm really glad you pointed this out.  The way I interpret it is that
the odds of there being a hash collision in your environment increase
with every new block of data you submit to the de-duplication system.
I've talked to somebody who has researched this mathematically, and he
says he's going to share his calculations with me.  I'll pass them along
if/when he does.  As a proponent of these systems, I certainly don't
want to misrepresent the odds involved.

For our data I would certainly not use de-duping, even if it did work 
well on image data.

I think you're under the misconception that all de-dupe systems use ONLY
hashes to identify redundant data.  While there are products that do
this (and I still trust them more than you do), there are also products
that do a full block comparison of the supposedly matching blocks before
throwing one of them away.

In addition, there are ways to completely remove the risk you're worried
about.  If you back up to a de-dupe backup system, regardless of its
design, and then use your backup software to copy from it to tape (or
anywhere else), you verify the de-duped data, as any good backup software
checks all data it copies against its own stored checksums.



Re: [Veritas-bu] Tapeless backup environments?

2007-09-30 Thread Curtis Preston
Bob,

I'll try to respond as best as I can.

It doesn't matter.  The length of the checksum/hash/fingerprint and the
sophistication of its algorithm only affect how frequently--not
whether--the incorrect answer is generated.

You and I don't disagree on this.  The only thing we differ on is the
odds of the event.  I think the odds are small enough not to worry
about, and you think they're larger than that.

(I also think it's important to restate what I said in my other reply:
most de-dupe systems do not rely only on hashes.  So if you can't get
past this whole hashing thing, there's no reason to reject de-dupe
altogether.  Just make sure your vendor uses an alternate method.)

The notion that the bad guys will never figure out a way to plant a
silent data-change based on checksum/hash/fingerprint collisions is,
IMO, naive.

So someone is going to exploit the hash collision possibilities in my
backup system to do what, exactly?  As much as I've spoken and written
about storage security, I can't for the life of me figure out what
someone would hope to gain or how they would gain it this way.

Those are impressive, and dare I guess, vendor-supplied, numbers.  And
they're meaningless.  

Those odds come from the size of the key space.  A 160-bit hash has
2^160 possible values, so two specific blocks have a 1-in-2^160 chance
of colliding.

What _is_ important?  To me, it's important that if I read
back any of the N terabytes of data I might store this week, I get the
same data that was written, not a silently changed version because the
checksum/hash/fingerprint of one block that I wrote collides with
another checksum/hash/fingerprint.  

This is referring to the birthday paradox.  As I stated in another post,
I haven't thought about this before, and am looking into what the real
odds are.  I'm trying to translate it into actual numbers.
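A rough version of that calculation, using the standard birthday-bound
approximation (an h-bit hash gives N = 2^h possible values; n is the number
of unique blocks stored):

```latex
P(\text{collision}) \approx 1 - e^{-n(n-1)/(2N)} \approx \frac{n^2}{2N},
\qquad N = 2^{160}
```

For example, n = 10^12 unique blocks (about 8 PB at 8 KB per block) against a
160-bit hash gives roughly 10^24 / (2 * 2^160), on the order of 3 x 10^-25.
So the birthday effect does grow the odds with every block, but for a
160-bit space they stay astronomically small at any realistic data volume.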

I can NOT have that happen to any
block--in a file clerk's .pst, a directory inode or the finance
database.  "Probably it won't happen" is not acceptable.

Couldn't agree more.

 Let's compare those odds with the odds of an unrecoverable 
 read error on a typical disk--approximately 1 in 100 trillion

Bogus comparison.  In this straw man, that 1/100,000,000,000,000 read
error a) probably doesn't affect anything

I thought "probably" wasn't acceptable?  I'm sorry, that was just too
close to your previous use of "probably" in a very different context.

probably doesn't affect anything because of the higher-level
RAID array it's in and b) if it does, there's an error, a
we-could-not-read-this-data, you-can't-proceed, stop, fail,
get-it-from-another-source error--NOT a silent changing of the data from
foo to bar on every read with no indication that it isn't the data that
was written.

I think Darren's other posts about this point are sufficient.  It
happens.  It happens all the time, and is well documented.  And yet the
industry's ok with this.  On the other hand, the odds of what we're
talking about are significantly smaller and people are freaking out.

 If you want to talk about the odds of something bad happening and not
 knowing it, keep using tape. Everyone who has worked with tape for any
 length of time has experienced a tape drive writing something that it
 then couldn't read.

That's not news, and why we've been making copies of data for, oh, 50
years or so.

I'm just saying that a hash collision, however unlikely, would basically
translate into a failed backup that looks good.  Do you have any idea
how many failed backups that look good happen every single day with
tape?  And since you bring up making copies: making copies of your
de-duped data removes this concern, because the copy verifies the
original data.

 Compare that to successful deduplication disk
 restores. According to Avamar Technologies Inc. (recently acquired by
 EMC Corp.), none of its customers has ever had a failed restore.

Now _there's_ an unbiased source.

Touché.  Anyone who has actually experienced a hash collision in their
de-duplication backup system, please stand up.  Given the hype that
de-dupe has generated, don't you think that anyone who had experienced
such a thing would have reported it, and that such a report would have
gotten big press?  I sure do.  And yet there has been nothing.

