Hello,
Has anyone used this bpimagelist option? The command-line reference says this:
LIST_COMPLETE_COPIES - Do not report fragments of a duplicate copy that is
still in process.
The command I try to use is bpimagelist -option LIST_COMPLETE_COPIES
I cannot get this command to work while other
Thanks for highlighting this problem to the list, as we, for one, hadn't
even picked up that our ZFS filesystems were no longer being backed up. We
now have the engineering binaries, and our account manager at Symantec
has kindly chased up the TechAlert which should have been issued.
cheers, Phil
I think it should be COMPLETE_COPIES, i.e. no LIST_ prefix.
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Rolf C
Sent: 08 August 2008 08:55
To: veritas-bu@mailman.eng.auburn.edu
Subject: [Veritas-bu] bpimagelist option LIST_ALL_COPIES
Looks like a bug. All the other options seem to work.
Oops, I should have tried Mr Cheney's suggestion first; it works. So it is not
a bug, but a documentation error. The man page does indeed show what you were
trying to run.
Maybe I need to read up a little more on this media sharing. Any suggestions
on using it with Vault? We are using VTLs, and then using Vault to duplicate
and send off-site physical tapes.
We are converting to capacity based licensing, and SSO is included in the
Enterprise tier that we are getting, so no licensing concerns are involved.
The reason I'm looking at using SSO is because the VTL only supports up to 30
virtual drives or 30 streams of data at a time, and since it's
The only issue I can foresee is with SCSI reservations. The reservation request
is issued by the server, but the reservations are implemented at the tape
drive. Even with actual tape drives we ran into issues where a tape drive was
reserved by a server but when the release for the reservation
Ed,
As Phil and I have shown, there are backup admins out there who
have no idea that their ZFS filesystem data is unprotected. Not
everyone has the opportunity to test restores and it's only through
sheer luck that we'd been given a server to use as a test box about a
week before I
On Fri, Aug 8, 2008 at 9:53 AM, Mark Glazerman
[EMAIL PROTECTED] wrote:
I agree 100% that as a matter of urgency, Symantec should have distributed
the technote much earlier than they did once they identified this issue. I
hope that there were not too many other admins who had to go through the
Ah - well - before they supplied it, I just copied the rsh version and
edited it to change all the rsh to ssh and rcp to scp.
That works.
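For anyone making the same edit by hand, the substitution is mechanical. A minimal sketch in Python (the function name and word-boundary approach are mine, not Symantec's actual script; word boundaries keep substrings like "rshd" intact):

```python
import re

# Map the remote-shell commands to their secure equivalents.
REPLACEMENTS = {"rsh": "ssh", "rcp": "scp"}

def secure_script(text: str) -> str:
    """Swap whole-word rsh/rcp invocations for ssh/scp in a script's text."""
    for old, new in REPLACEMENTS.items():
        # \b ensures only whole words match, so "rshd" is left alone.
        text = re.sub(rf"\b{old}\b", new, text)
    return text
```

Run the install script through this once and the push-install logic carries over unchanged.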
These days, though, I usually just put the client install package out
there and install from there - easier to copy that out there than it is to
get all the
On Fri, Aug 8, 2008 at 7:43 AM, spaldam
[EMAIL PROTECTED]wrote:
since it also does de-duplication, we cannot use multiplexing or we
lose the effectiveness of the de-duplication.
I don't know why this would be true - what blocksize does your appliance use
to de-dupe on? If it's
I will be out of the office from 08.08.2008 to 17.08.2008.
I will be on vacation from 08.08 to 17.08. For all questions, please contact
D. S. Rzhechkovsky.
Not all dedupe vendors dedupe the way you're describing, but it is a
common recommendation to turn off multiplexing. (Only one or two
vendors claim not to care.)
Curtis Preston | VP Data Protection
GlassHouse Technologies, Inc.
T: +1
Ed Wilts wrote:
On Fri, Aug 8, 2008 at 9:53 AM, Mark Glazerman
[EMAIL PROTECTED] wrote:
Some of the Quality Engineering folks and product managers are reading
the list too... We're not complaining about the engineering on this -
although we would have liked for the
Quantum uses a variable block size for their dedup, which they claim gives them
a much higher de-duplication ratio. If you multiplex, the natural boundaries
that Quantum looks for get chopped up, and that lowers the effectiveness of the
de-duplication.
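The boundary-chopping effect is easy to demonstrate with a toy content-defined chunker. This is only a sketch with made-up parameters (a simple rolling hash, 64-byte interleave blocks); real appliances such as Quantum's use proprietary algorithms:

```python
import hashlib
import random

def chunks(data, window=16, mask=0x3F):
    """Split data at content-defined boundaries: a boundary fires when the
    low bits of a rolling hash over the last `window` bytes are zero."""
    out, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) + b) & 0xFFFF  # hash depends only on recent bytes
        if i - start >= window and (h & mask) == 0:
            out.append(data[start:i + 1])
            start = i + 1
    out.append(data[start:])
    return out

def unique_ratio(data):
    """Fraction of chunks stored after dedupe: 1.0 = nothing deduped,
    0.5 = every chunk seen exactly twice."""
    cs = chunks(data)
    return len({hashlib.sha1(c).hexdigest() for c in cs}) / len(cs)

random.seed(0)
stream = bytes(random.randrange(256) for _ in range(8000))
other = bytes(random.randrange(256) for _ in range(8000))

# Two copies of the same backup image: chunk boundaries line up, high dedupe.
plain = unique_ratio(stream + stream)

# Multiplexing interleaves 64-byte blocks from two jobs; a second pass with a
# different interleaving order carries the same data but different chunks.
mux1 = b"".join(stream[i:i + 64] + other[i:i + 64] for i in range(0, 8000, 64))
mux2 = b"".join(other[i:i + 64] + stream[i:i + 64] for i in range(0, 8000, 64))
muxed = unique_ratio(mux1 + mux2)
```

Two back-to-back copies of the same stream dedupe to roughly half the chunks, while the two multiplexed tapes, despite carrying identical data, mostly fail to match each other.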
As for having problems with SSO and