Robert slask at telia.com writes:
I simply need to rename/remove one of the erroneous c2d0 entries/disks in
the pool so that I can use it in full again, since at this time I can't
reconnect the 10th disk in my raid and if one more disk fails all my
data would be lost (4 TB is a lot of disk to
Hi all,
Please can you help with my ZFS troubles:
I currently have 3 x 400 GB Seagate NL35s and a 500 GB Samsung Spinpoint in a
RAIDZ array that I wish to expand by systematically replacing each drive with a
750 GB Western Digital Caviar.
After failing miserably, I'd like to start from
For slide 3, HA-ZFS is available now with HA-Storage+ if you're happy with
Active/Passive. HA-iSCSI code was released just before Christmas, I believe, but
is currently untested, and HA-CIFS is just a thought on the roadmap.
The reason for the 2008/2009 timeline is because that's when I've been
Robert wrote:
OK, not a single soul knows this either; this doesn't look promising
How can I list/edit the metadata(?) that is on my disks or the pool so that I
may see/edit what each physical disk in the pool has registered?
To view but not edit you can use /usr/sbin/zdb
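For example (a minimal sketch; the pool name 'tank' and the device path are
placeholders, adjust for your own layout):

  # print the four ZFS labels stored on a device, showing what the pool
  # has registered for that disk
  zdb -l /dev/rdsk/c2d0s0

  # dump pool-wide metadata (read-only)
  zdb tank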
--
Darren J
On 9-Jan-08, at 10:26 PM, Noël Dellofano wrote:
As I mentioned, ZFS is still BETA, so there are (and likely will be)
some issues turning up with compatibility with the upper layers of the
system, if that's what you're referring to.
Two potential areas come immediately to mind - case sensitivity
Hi
I'm using ZFS on a few X4500s and I need to back them up.
The data on the source pool keeps changing, so online replication
would be the best solution.
As far as I know, AVS doesn't support ZFS - there is a problem with
mounting the backup pool.
Other backup systems (disk-to-disk or block-to-block)
On 10 January, 2008 - Rob Logan sent me these 1.9K bytes:
fun example that shows NCQ lowers wait and %w, but doesn't have
much impact on final speed. [scrubbing, devs reordered for clarity]
The final speed is limited by the slowest of your two raidz groups, so a
better example would be two
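(If you want to watch this effect yourself, a rough sketch - 'tank' is a
placeholder pool name:

  # kick off a scrub to generate a steady read workload
  zpool scrub tank

  # watch per-device wait queue length and %w at 5-second intervals
  iostat -xn 5

The wait and %w columns are where the NCQ difference shows up.)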
Hello experts,
We have a large implementation of Symantec NetBackup 6.0 with disk staging.
Today, the customer is using VxFS as the file system inside NetBackup 6.0 DSSU
(disk staging).
The customer would like to know whether it is better to use ZFS or VxFS as the
file system inside NetBackup disk
Guy wrote:
Is there a way to know which blocks changed since the last snapshot?
Is it metadata or something else?
Usually there are several hundred kilobytes in the last snapshot?
Can you help me, please?
I saw the same issue.
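(One rough way to see how much data a snapshot pair actually differs by is to
measure the size of the incremental send stream; dataset and snapshot names
here are placeholders:

  # byte count of everything that changed between the two snapshots
  zfs send -i tank/fs@snap1 tank/fs@snap2 | wc -c

This counts stream bytes rather than blocks, but it gives a feel for the delta.)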
Łukasz K wrote:
Hi
I'm using ZFS on a few X4500s and I need to back them up.
The data on the source pool keeps changing, so online replication
would be the best solution.
As far as I know, AVS doesn't support ZFS - there is a problem with
mounting the backup pool.
This is not true, if replication is
Ross wrote:
For slide 3, HA-ZFS is available now with HA-Storage+ if you're happy with
Active/Passive. HA-iSCSI code was released just before Christmas, I believe,
but is currently untested, and HA-CIFS is just a thought on the roadmap.
The reason for the 2008/2009 timeline is because that's
On 10-01-2008 at 16:11, Jim Dunham wrote:
Łukasz K wrote:
Hi
I'm using ZFS on a few X4500s and I need to back them up.
The data on the source pool keeps changing, so online replication
would be the best solution.
As far as I know, AVS doesn't support ZFS - there is a problem
On Jan 10, 2008, at 4:50 AM, Łukasz K wrote:
Hi
I'm using ZFS on a few X4500s and I need to back them up.
The data on the source pool keeps changing, so online replication
would be the best solution.
As far as I know, AVS doesn't support ZFS - there is a problem with
mounting the backup pool.
On Jan 9, 2008, at 9:09 PM, Rob Logan wrote:
fun example that shows NCQ lowers wait and %w, but doesn't have
much impact on final speed. [scrubbing, devs reordered for clarity]
Here are the results I found when comparing random reads vs.
sequential reads for NCQ:
On Jan 9, 2008, at 7:38 PM, Noël Dellofano wrote:
Yep, these issues are known and I've listed them with explanations
on our website under "The State of the ZFS on OS X World":
http://zfs.macosforge.org/
We're working on them. The Trash and iTunes bugs should be fixed
soon. The your
On 10-01-2008 at 17:45, eric kustarz wrote:
On Jan 10, 2008, at 4:50 AM, Łukasz K wrote:
Hi
I'm using ZFS on a few X4500s and I need to back them up.
The data on the source pool keeps changing, so online replication
would be the best solution.
As far as I know, AVS doesn't
On Jan 10, 2008, at 9:18 AM, Łukasz K wrote:
On 10-01-2008 at 17:45, eric kustarz wrote:
On Jan 10, 2008, at 4:50 AM, Łukasz K wrote:
Hi
I'm using ZFS on a few X4500s and I need to back them up.
The data on the source pool keeps changing, so online replication
would be the best
eric kustarz wrote:
On Jan 10, 2008, at 9:18 AM, Łukasz K wrote:
I need an automatic system. Now I'm using zfs send, but it
takes too much human effort to control it.
cron job?
Oh look, I did a zfs send in cron, and my resilver never finished, and I
lost a second drive, and now my data has
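(For reference, the sort of cron-driven loop being debated might look like the
sketch below - host and dataset names are invented, and it deliberately has no
error handling, which is exactly the failure mode being described:

  #!/bin/sh
  # minimal sketch: send the delta since the last snapshot to a backup host
  PREV=200801101200                 # previous snapshot label, tracked somehow
  NOW=`date +%Y%m%d%H%M`
  zfs snapshot tank/data@$NOW
  zfs send -i tank/data@$PREV tank/data@$NOW | \
      ssh backuphost zfs receive -F backup/data

Anything like this needs checks on the send/receive exit status and on pool
health before it is safe to leave unattended.)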
On Jan 10, 2008 3:46 AM, Toby Thain [EMAIL PROTECTED] wrote:
On 9-Jan-08, at 10:26 PM, Noël Dellofano wrote:
As I mentioned, ZFS is still BETA, so there are (and likely will be)
some issues turning up with compatibility with the upper layers of the
system, if that's what you're referring to.
We have a ZFS mirror setup of two 73 GB disks, but we are running out of space.
I want to break the mirror and join the disks to have a pool of 146 GB, and of
course not lose the data doing this. What are the commands?
To break the mirror I would do
zpool detach moodle c1t3d0
NAME
Hey Kory,
I think you must mean: can you detach one of the 73 GB disks from moodle
and then add it to another pool of 146 GB, and you want to save the
data from the 73 GB disk?
You can't do this and save the data. By using zpool detach, you are
removing any knowledge of ZFS from that disk.
If you
(I think Cindy is thinking of something else :-)
Kory Wheatley wrote:
We have a ZFS mirror setup of two 73 GB disks, but we are running out of
space. I want to break the mirror and join the disks to have a pool of 146 GB,
and of course not lose the data doing this. What are the commands?
To
Hi Kory,
Yes, I get it now. You want to detach one of the disks and then re-add
the same disk, but lose the redundancy of the mirror.
Just as long as you realize you're losing the redundancy.
I'm wondering if zpool add will complain. I don't have a system to
try this at the moment.
Cindy
Kory
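(If Kory does go that route, the sequence would presumably be the following -
pool and device names are from the earlier post:

  # detach one side of the mirror; this erases ZFS's knowledge of the disk
  zpool detach moodle c1t3d0

  # add it back as a second top-level vdev; the pool grows to ~146 GB
  # but now has no redundancy at all
  zpool add moodle c1t3d0

Since both vdevs end up as plain disks, add may not complain, but the lost
redundancy caveat above stands.)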
Eric,
On Jan 10, 2008, at 4:50 AM, Łukasz K wrote:
Hi
I'm using ZFS on few X4500 and I need to backup them.
The data on source pool keeps changing so the online replication
would be the best solution.
As I know AVS doesn't support ZFS - there is a problem with
mounting backup
Kory,
Yes, I get it now. You want to detach one of the disks and then re-add
the same disk, but lose the redundancy of the mirror.
Just as long as you realize you're losing the redundancy.
I'm wondering if zpool add will complain. I don't have a system to
try this at the moment.
The
I finally found the cause of the error.
Since my disks are mounted in cassettes with four in each, I had to disconnect
all the cables to them to replace the crashed disk.
When re-attaching the cables, I reversed their order by accident. In my
early tests this was not a problem since zfs
On Jan 10, 2008, at 07:50, Łukasz K wrote:
As far as I know, AVS doesn't support ZFS - there is a problem with
mounting the backup pool.
Two demo movies with AVS and ZFS together were posted a little while
ago:
http://blogs.sun.com/AVS/entry/avs_zfs_the_sndr_replication
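(From memory, enabling an SNDR set for one device under a pool looks roughly
like this - hosts, devices, and bitmap slices are placeholders, and the exact
set syntax should be checked against sndradm(1M):

  sndradm -n -e prodhost /dev/rdsk/c1t0d0s0 /dev/rdsk/c1t0d0s1 \
      backuphost /dev/rdsk/c1t0d0s0 /dev/rdsk/c1t0d0s1 ip async

One set per device in the pool, and the pool is only imported on the secondary
after replication is suspended.)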
On Jan 10, 2008, at 5:13 PM, Jim Dunham wrote:
Eric,
On Jan 10, 2008, at 4:50 AM, Łukasz K wrote:
Hi
I'm using ZFS on a few X4500s and I need to back them up.
The data on the source pool keeps changing, so online replication
would be the best solution.
As far as I know, AVS doesn't support