Brandon High wrote:
On Fri, Apr 25, 2008 at 4:48 AM, kilamanjaro <[EMAIL PROTECTED]> wrote:
> Is ZFS ready today to link a set of dispersed desktop computers (diverse
> operating systems) into a distributed RAID volume that supports desktops
It sounds like you'd want to use something like Lustre or Hadoop, both
of wh
Hi,
this is true, but it might still be possible to use ZFS in a
distributed setup, as you can build pools from plain files that may be
located anywhere on a network.
Automatic status notification would require some custom scripting, and
it is obviously not recommended by Sun ;-)
The other dra
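The file-backed-pool idea mentioned above can be sketched roughly as follows. The paths, sizes, and pool name here are invented for illustration; in a distributed setup the backing files could sit on NFS mounts from other hosts, which is explicitly unsupported for real data:

```shell
# Create two plain files to serve as vdevs (sizes/paths are placeholders;
# /net/... paths could point at NFS automounts from other machines).
mkfile 256m /net/hostA/export/vdev0 /net/hostB/export/vdev1

# Build a mirrored pool on top of the files and check its health.
zpool create filepool mirror /net/hostA/export/vdev0 /net/hostB/export/vdev1
zpool status filepool
```

The "custom scripting" for status notification would then amount to polling `zpool status -x` from cron and mailing the output when it reports anything other than a healthy pool.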
Hi Matt,
you can use
zpool replace pond ad4
after removing your old disk.
The problem is that if something goes wrong during the replace, you might
lose data, because you have to remove the old disk, and its redundancy, first.
Hope this helps,
ralf
On 25.04.2008 at 16:42, [EMAIL PROTECTED] wrote:
> [zfs-discuss]
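The replace procedure Ralf describes can be sketched end to end like this (pool name `pond` and device `ad4` are taken from the thread; the exact device naming depends on the platform):

```shell
# Take the failing disk out of service before pulling it.
zpool offline pond ad4

# ...physically swap the drive in the same slot...

# Single-argument replace: resilver onto the new disk at the same location.
zpool replace pond ad4

# Watch resilver progress until the pool reports ONLINE again.
zpool status pond
```

Until the resilver completes, a raidz vdev that has already lost one disk has no redundancy left, which is the data-loss window mentioned above.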
Hi Matt,
if you can remember the status information and are the only
administrator, you can just do a zpool clear and re-scrub your pool
regularly.
zpool clear does nothing more or less than reset the status and the
error counters.
(A reboot or export does the same thing, at least on Mac OS X.)
The p
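A minimal sketch of that clear-and-rescrub routine (pool name `pond` borrowed from elsewhere in the thread; substitute your own):

```shell
# Reset the pool's status and error counters.
zpool clear pond

# Re-read and verify checksums on all data in the pool.
zpool scrub pond

# Confirm whether any new errors surfaced during the scrub.
zpool status -v pond
```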
> I didn't cross post the following, it was sent only to the cifs discuss list:
> nge. Onboard nvidia nforce 570 Ultra MCP onboard gigabit nic for the Asus
> M2N-E board.
>
> nge: [ID 801725 kern.info] NOTICE: nge0: Using FIXED interrupt type
> mac: [ID 469746 kern.info] NOTICE: nge0 registered
>
I've just tried to bfu my system from b87 (which was in turn bfu'ed from OS DP
2, so has a ZFS root as installed by DP 2), and b88 is unbootable. I retrieved
the non-debug b88 bfu archives from the ON download page, so am not using a
local build.
I'm relatively new to debugging boot problems o
No :-)
The slightly longer answer is that I don't think ZFS is what you're
looking for; I believe there are other projects better suited (although I
don't recall the names off the top of my head).
For starters, ZFS only offers at best dual-parity RAID, so if more than two
computers went o
On Fri, 25 Apr 2008, Richard Elling wrote:
> No. ZFS is not a distributed file system.
While the results might not be pretty, if each PC exports a drive via
iSCSI and mirroring is used with plenty of PCs in each mirror, it
seems like it would "work" but with likely dismal performance if a PC
No. ZFS is not a distributed file system.
-- richard
kilamanjaro wrote:
> Hallo!
>
> [Asked by a technical managerial new person assembling a
> recommendation (sorry for my relative ignorance) and apologies for the
> long English sentence]:
>
> Is ZFS ready today to link a set of dispersed de
Hello andrew,
Thursday, April 24, 2008, 11:03:48 AM, you wrote:
a> What is the reasoning behind ZFS not enabling the write cache for
a> the root pool? Is there a way of forcing ZFS to enable the write cache?
The reason is that EFI labels are not supported for booting.
So from ZFS perspective you
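For a whole-disk pool ZFS enables the write cache by itself; to force it on manually, one at-your-own-risk route (not from the original mail) is the expert mode of format(1M), which exposes a cache menu on supporting devices:

```shell
# Start format in expert mode; the cache menu only appears with -e.
format -e
# Interactive steps from there (not scriptable as-is):
#   select the disk -> cache -> write_cache -> enable
```

Forcing the cache on a root pool sidesteps the safety reasoning above, so it should only be done on a disk dedicated entirely to ZFS.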
-snip-
> > What I'm doing is mounting the smb share with WinXP
> and pulling data from the ZFS mirror pool at 2.3MiB/s
> across the network. Writing to the same share from
> the WinXP host I get a fairly consistent 342KiB/s
> speed.
-snip-
> What performance are you getting with transfers over
>
Hallo!
[Asked by a technical managerial new person assembling a
recommendation (sorry for my relative ignorance) and apologies for
the long English sentence]:
Is ZFS ready today to link a set of dispersed desktop computers
(diverse operating systems) into a distributed RAID volume that
On Thu, 2008-04-24 at 09:46 -0700, Rick wrote:
> Recently I've installed SXCE nv86 for the first time in hopes of getting rid
> of my linux file server and using Solaris and ZFS for my new file server.
> After setting up a simple ZFS mirror of 2 disks, I enabled smb and set about
> moving over a
Say I have a raidz of 3 disks: ad4 ad6 ad8
I want to tell ZFS that I am pulling ad4 and replacing it with a new disk on
the same controller, but I can't get it to release its hold on ad4.
[EMAIL PROTECTED]:/home/matt]# zpool offline pond ad4
Bringing device ad4 offline
[EMAIL PROTECTED]:/home/ma
This is probably a FAQ but I have been unable to turn up the answer in
searches, thanks for your patience.
I have a zfs testbed set up with 3x 200 GB SATA drives in raidz. I pulled a
drive (ad4) and replaced it to experience the rebuild procedure. After
scrubbing/resilvering, I get the status