On Jan 10, 2008, at 5:13 PM, Jim Dunham wrote:
> Eric,
>
>>
>> On Jan 10, 2008, at 4:50 AM, Łukasz K wrote:
>>
>>> Hi
>>> I'm using ZFS on few X4500 and I need to backup them.
>>> The data on source pool keeps changing so the online replication
>>> would be the best solution.
>>>
>>> As I k
On Jan 10, 2008, at 07:50, Łukasz K wrote:
> As I know AVS doesn't support ZFS - there is a problem with
> mounting backup pool.
Two demo movies with AVS and ZFS together were posted a little while
ago:
http://blogs.sun.com/AVS/entry/avs_zfs_the_sndr_replication
I finally found the cause of the error.
Since my disks are mounted in cassettes of four, I had to disconnect
all the cables to them to replace the crashed disk.
When re-attaching the cables I accidentally reversed their order. In my
early tests this was not a problem since zfs iden
Kory,
> Yes, I get it now. You want to detach one of the disks and then readd
> the same disk, but lose the redundancy of the mirror.
>
> Just as long as you realize you're losing the redundancy.
>
> I'm wondering if zpool add will complain. I don't have a system to
> try this at the moment.
The
Eric,
>
> On Jan 10, 2008, at 4:50 AM, Łukasz K wrote:
>
>> Hi
>> I'm using ZFS on few X4500 and I need to backup them.
>> The data on source pool keeps changing so the online replication
>> would be the best solution.
>>
>> As I know AVS doesn't support ZFS - there is a problem with
>> moun
Hi Kory,
Yes, I get it now. You want to detach one of the disks and then re-add
the same disk, but lose the redundancy of the mirror.
Just as long as you realize you're losing the redundancy.
I'm wondering if zpool add will complain. I don't have a system to
try this on at the moment.
Cindy
Kory
Currently c2t2d0 and c2t3d0 are set up in a mirror. I want to break the mirror
and save the data on c2t2d0 (both drives are 73GB). Then I want to concatenate
c2t2d0 to c2t3d0 so I have a pool of 146GB, no longer mirrored, just
concatenated. But since they're mirrored right now I need the data sav
(I think Cindy is thinking of something else :-)
Kory Wheatley wrote:
> We have a ZFS mirror setup of two 73GB's disk, but we are running out of
> space. I want to break the mirror and join the disk to have a pool of 146GB,
> and of course not lose the data doing this. What are the commands?
>
Hey Kory,
I think you mean: can you detach one of the 73GB disks from moodle
and then add it to another pool of 146GB, and you want to keep the
data from the 73GB disk?
You can't do this and save the data. By using zpool detach, you are
removing any knowledge of ZFS from that disk.
If you wa
We have a ZFS mirror made up of two 73GB disks, but we are running out of space.
I want to break the mirror and join the disks to get a pool of 146GB, and of
course not lose the data doing this. What are the commands?
To break the mirror I would do
zpool detach moodle c1t3d0
NAME
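For reference, the detach-then-add sequence being discussed can be sketched as a dry run. The pool and device names are taken from the thread; the helper function that merely prints the commands is mine, so nothing here touches a real pool:

```shell
# plan_split prints (rather than runs) the two commands under discussion,
# so the sequence can be reviewed before touching a real pool.
plan_split() {
    pool=$1; disk=$2
    # Step 1: break the mirror -- the pool loses all redundancy here.
    echo "zpool detach $pool $disk"
    # Step 2: add the freed disk back as a second top-level vdev
    # (a concatenation/stripe, not a mirror).  Note that zpool detach
    # strips the ZFS labels, so nothing on this disk survives.
    echo "zpool add $pool $disk"
}

plan_split moodle c2t3d0
```

On a real system you would run the printed commands by hand, and only after confirming you can live without the mirror's redundancy.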
On Jan 10, 2008 3:46 AM, Toby Thain <[EMAIL PROTECTED]> wrote:
>
> On 9-Jan-08, at 10:26 PM, Noël Dellofano wrote:
>
> > As I mentioned, ZFS is still BETA, so there are (and likely will be)
> > some issues turning up with compatibility with the upper layers of the
> > system if that's what you're refe
On Jan 10, 2008, at 9:32 AM, Carson Gaspar wrote:
> eric kustarz wrote:
>> On Jan 10, 2008, at 9:18 AM, Łukasz K wrote:
>>
>>> I need automatic system. Now I'm using zfs send but it
>>> takes too much human resources to control it.
>>
>> cron job?
>
> Oh look, I did a zfs send in cron, and my res
eric kustarz wrote:
> On Jan 10, 2008, at 9:18 AM, Łukasz K wrote:
>
>> I need automatic system. Now I'm using zfs send but it
>> takes too much human resources to control it.
>
> cron job?
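A minimal sketch of what such a cron-driven job could look like. The filesystem, remote host, destination dataset, and snapshot naming scheme are all assumptions, and the function only prints the commands it would run:

```shell
# Hypothetical nightly replication step for cron: snapshot, then send
# only the blocks changed since the previous snapshot.  Every name here
# (tank/data, backuphost, backup/data) is a placeholder.
nightly_backup() {
    fs=$1; remote=$2; today=$3; prev=$4
    echo "zfs snapshot $fs@backup-$today"
    # Incremental send: carries only what changed between the snapshots.
    echo "zfs send -i $fs@backup-$prev $fs@backup-$today | ssh $remote zfs receive -F backup/data"
}

# Example invocation with fixed dates; a real script would compute these
# and handle the very first (full) send separately.
nightly_backup tank/data backuphost 20080111 20080110
```

A crontab entry would then run the script nightly, e.g. `0 2 * * * /usr/local/bin/nightly_backup.sh`.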
Oh look, I did a zfs send in cron, and my resilver never finished, and I
lost a second drive, and now my d
On Jan 10, 2008, at 9:18 AM, Łukasz K wrote:
> Dnia 10-01-2008 o godz. 17:45 eric kustarz napisał(a):
>> On Jan 10, 2008, at 4:50 AM, Łukasz K wrote:
>>
>>> Hi
>>> I'm using ZFS on few X4500 and I need to backup them.
>>> The data on source pool keeps changing so the online replication
>>> wo
Dnia 10-01-2008 o godz. 17:45 eric kustarz napisał(a):
> On Jan 10, 2008, at 4:50 AM, Łukasz K wrote:
>
> > Hi
> > I'm using ZFS on few X4500 and I need to backup them.
> > The data on source pool keeps changing so the online replication
> > would be the best solution.
> >
> > As I know AV
On Jan 9, 2008, at 7:38 PM, Noël Dellofano wrote:
> Yep, these issues are known and I've listed them with explanations
> on our website under "The State of the ZFS on OS X World":
> http://zfs.macosforge.org/
>
> We're working on them. The Trash and iTunes bugs should be fixed
> soon. The "y
On Jan 9, 2008, at 9:09 PM, Rob Logan wrote:
>
> fun example that shows NCQ lowers wait and %w, but doesn't have
> much impact on final speed. [scrubbing, devs reordered for clarity]
Here are the results I found when comparing random reads vs.
sequential reads for NCQ:
http://blogs.sun.com/eri
On Jan 10, 2008, at 4:50 AM, Łukasz K wrote:
> Hi
> I'm using ZFS on few X4500 and I need to backup them.
> The data on source pool keeps changing so the online replication
> would be the best solution.
>
> As I know AVS doesn't support ZFS - there is a problem with
> mounting backup pool
Dnia 10-01-2008 o godz. 16:11 Jim Dunham napisał(a):
> Łukasz K wrote:
>
> > Hi
> > I'm using ZFS on few X4500 and I need to backup them.
> > The data on source pool keeps changing so the online replication
> > would be the best solution.
> >
> > As I know AVS doesn't support ZFS - there is
Ross wrote:
> For slide 3, HA-ZFS is available now with HA-Storage+ if you're happy with
> Active/Passive. HA-iSCSI code was released just before Christmas I believe
> but is currently untested, and HA-CIFS is just a thought on the roadmap.
>
> The reason for the 2008/2009 timeline is because th
Łukasz K wrote:
> Hi
> I'm using ZFS on few X4500 and I need to backup them.
> The data on source pool keeps changing so the online replication
> would be the best solution.
>
> As I know AVS doesn't support ZFS - there is a problem with
> mounting backup pool.
This is not true, if replicat
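Jim's reply is cut off here, but the AVS demos linked earlier in the thread replicate the devices under the pool with SNDR. A heavily hedged sketch of enabling one SNDR set, shown as a dry run: the hosts, data volumes, and bitmap volumes below are all placeholders, and the exact syntax should be checked against sndradm(1M):

```shell
# Dry run: DRY_RUN=echo prints the sndradm invocation instead of
# executing it.  Every name below is a placeholder.
DRY_RUN=echo
# Enable a replica set: primary host/data-volume/bitmap, then secondary
# host/data-volume/bitmap, transport, and replication mode (async here).
$DRY_RUN sndradm -nE primaryhost /dev/rdsk/c1t1d0s0 /dev/rdsk/c1t1d0s1 \
    backuphost /dev/rdsk/c1t1d0s0 /dev/rdsk/c1t1d0s1 ip async
```

The demo movies linked above show the full procedure end to end.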
Guy wrote:
> Is there a way to know which blocks changed since the last snapshot?
> Is it metadata or something else?
>
> Usually there are only several hundred kilobytes in the last snapshot.
>
> Can you help me please?
I saw the same issue. Investi
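One common way to see roughly how much data a snapshot delta holds is to pipe an incremental send through wc. This is a sketch with placeholder filesystem and snapshot names that only prints the pipeline rather than running it:

```shell
# Print (don't run) a pipeline that counts the bytes an incremental
# send would transfer -- a rough measure of what changed between the
# two snapshots.  Filesystem and snapshot names are examples.
estimate_delta() {
    echo "zfs send -i $1 $2 | wc -c"
}

estimate_delta tank/fs@snap1 tank/fs@snap2
```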
Hello experts,
We have a large implementation of Symantec NetBackup 6.0 with disk staging.
Today, the customer is using VxFS as the file system inside the NetBackup 6.0
DSSU (disk staging).
The customer would like to know whether it is best to use ZFS or VxFS as the
file system inside NetBackup disk stag
On 10 January, 2008 - Rob Logan sent me these 1,9K bytes:
>
> fun example that shows NCQ lowers wait and %w, but doesn't have
> much impact on final speed. [scrubbing, devs reordered for clarity]
The final speed is limited by the slowest of your two raidz groups, so a
better example would be two
Hi
I'm using ZFS on a few X4500s and I need to back them up.
The data on the source pool keeps changing, so online replication
would be the best solution.
As far as I know, AVS doesn't support ZFS - there is a problem with
mounting the backup pool.
Other backup systems (disk-to-disk or block-to-block)
On 9-Jan-08, at 10:26 PM, Noël Dellofano wrote:
> As I mentioned, ZFS is still BETA, so there are (and likely will be)
> some issues turning up with compatibility with the upper layers of the
> system if that's what you're referring to.
Two potential areas come immediately to mind - case sensitivi
Robert wrote:
> Ok, not a single soul knows this either; this doesn't look promising.
>
> How can I list/edit the metadata(?) that is on my disks or the pool so that I
> may see/edit what each physical disk in the pool has registered?
To view it (but not edit) you can use /usr/sbin/zdb.
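For example (the device and pool names below are assumptions), two read-only zdb invocations, printed here as a dry run:

```shell
# Dry run: DRY_RUN=echo prints the commands instead of executing them.
DRY_RUN=echo
# Show the on-disk vdev labels of one physical disk -- i.e. what that
# disk has registered (pool GUID, vdev tree, device paths):
$DRY_RUN zdb -l /dev/dsk/c2d0s0
# Show the pool configuration as the host has it cached:
$DRY_RUN zdb -C tank
```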
--
Darren
For slide 3, HA-ZFS is available now with HA-Storage+ if you're happy with
Active/Passive. HA-iSCSI code was released just before Christmas, I believe,
but is currently untested, and HA-CIFS is just a thought on the roadmap.
The reason for the 2008/2009 timeline is because that's when I've been t
Hi all,
Please can you help with my ZFS troubles:
I currently have 3 x 400 GB Seagate NL35's and a 500 GB Samsung Spinpoint in a
RAIDZ array that I wish to expand by systematically replacing each drive with a
750 GB Western Digital Caviar.
After failing miserably, I'd like to start from scratc
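The drive-at-a-time growth procedure the poster describes can be sketched as a dry run. The pool name "tank" and the device names are assumptions; the loop only prints each replace command:

```shell
# Dry-run sketch of replacing each raidz member with a larger drive.
# Pool name "tank" and the device names are placeholders.
DRY_RUN=echo
for old in c1t1d0 c1t2d0 c1t3d0 c1t4d0; do
    # Swap in the new 750GB drive at the same location, then resilver:
    $DRY_RUN zpool replace tank $old
    # ...and wait for 'zpool status' to report the resilver complete
    # before moving on to the next drive.
done
```

Once every member has been replaced (and, on releases of that era, after an export/import), the pool can grow to the new capacity.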
Robert telia.com> writes:
>
> I simply need to rename/remove one of the erroneous c2d0 entries/disks in
> the pool so that I can use it in full again, since at this time I can't
> reconnect the 10th disk in my raid and if one more disk fails all my
> data would be lost (4 TB is a lot of disk to wa