Hi. I've been reading the ZFS admin guide, and I don't understand the
distinction between adding a device and attaching a device to a pool.
TIA
George Plymale wrote:
Couple of questions regarding ZFS:
First, can slices and vdevs be removed from a pool? It appears that
only a hot spare can be removed from a pool, which makes sense; however,
is there some workaround that will migrate data off of a vdev and
thus allow you to remove it? (In
Rick Mann wrote:
Hi. I've been reading the ZFS admin guide, and I don't understand the distinction between
adding a device and attaching a device to a pool.
attach is used to create a mirror or to add a side to an existing mirror.
add is used to add a new top-level vdev, where that vdev can be a raidz,
a mirror, or a single disk.
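To make the difference concrete, a minimal sketch (pool and device names are made up):
# zpool create tank c1t0d0
(creates a pool with a single top-level vdev)
# zpool attach tank c1t0d0 c1t1d0
(attaches a second disk to c1t0d0, turning that vdev into a two-way mirror)
# zpool add tank mirror c2t0d0 c2t1d0
(adds a second top-level vdev; the pool now stripes across both vdevs)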
Ross Newell wrote:
What are the issues preventing the root directory from being stored on raidz?
I'm talking specifically about root, and not boot, which I can see would be
difficult.
Would it be something an amateur programmer could address in a weekend, or
is it more involved?
I believe this
Samuel Borgman wrote:
I just started to use zfs after wanting to try it out for a long while now. The problem
is that I've lost 240 GB out of 700 GB.
I have a single 700 GB pool on a 3510 HW RAID mounted on /nm4/data. Running
# du -sk /nm4/data
411025338 /nm4/data
while a
# df -hk
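A common cause of a du/df mismatch on ZFS is snapshot space; a quick check along these lines should show it (a hedged sketch, since whether this pool has any snapshots is an assumption):
# zfs list -r -t snapshot nm4
(any USED reported here is space that df counts as consumed but du never visits)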
[EMAIL PROTECTED] wrote:
After one aborted ufsrestore followed by some cleanup I tried
to restore again, but this time ufsrestore faltered with:
bad filesystem block size 2560
The reason was this return value for the stat of . of the
filesystem:
8339: stat(., 0xFFBFF818)
My question: What apps are these? I heard mention of some SunOS 4.x
library. I don't think that's anywhere near important enough to warrant
changing the current ZFS behavior.
Not apps; NFS clients such as *BSD.
On Solaris the issue is next to non-existent (SunOS 4.x binaries using
I went hunting for more apps in the hundreds of ports installed at my
shop to see what our exposure was to the scandir() problem - much to
my surprise, out of 700 or so ports, only a dozen or so used the libc
scandir(). A handful of mail programs had a vulnerable local
implementation of scandir()
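A rough way to repeat that kind of audit over a tree of unpacked port sources (a hedged sketch; the path, and the assumption that the sources are unpacked there, are made up):
# grep -rl scandir /usr/ports 2>/dev/null
(each hit then needs a manual look to see whether it is the vulnerable BSD-derived implementation or just a call into libc)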
I know it's a pain, but you have to spend money to download Apple's betas, that
is, pay their developer fee. If, however, this inspires you to do so,
you should know that zfs will run (read and write) on the latest build of
Leopard, as Apple has (somewhat cryptically) said. Apple also
On June 13, 2007 7:51:21 PM -0500 Al Hopper [EMAIL PROTECTED] wrote:
It seems wasteful to (determine the required part number and) order a
2530 JBOD expansion shelf and then return it if it does not work out.
It's a 2501, I think.
Obviously this is a huge hole in Sun's current storage
On June 13, 2007 11:26:07 PM -0400 Ed Ravin [EMAIL PROTECTED] wrote:
On Wed, Jun 13, 2007 at 09:42:26PM -0400, Ed Ravin wrote:
As mentioned before, NetBSD's scandir(3) implementation was one. The
NetBSD project has fixed this in their CVS. OpenBSD and FreeBSD's
scandir() looks like another,
Issue with statvfs()
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
mypool 108M 868M 26.5K /mypool
mypool/home 108M 868M 27.5K /mypool/home
mypool/home/user-2 108M 868M 108M /mypool/home/user-2
# df -h
Filesystem size
Intending to experiment with ZFS, I have been struggling with what
should be a simple download routine.
Sun Download Manager leaves a great deal to be desired.
In the Online Help for Sun Download Manager there's a section on
troubleshooting, but if it causes *anyone* this much trouble
Thanks, here is some more info
# zpool status
pool: nm4
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
nm4 ONLINE 0 0 0
Oh yeah, I'm running Solaris 10:
uname -a
SunOS jet 5.10 Generic_118855-36 i86pc i386 i86pc
/Samuel
Graham Perrin wrote:
Intending to experiment with ZFS, I have been struggling with what
should be a simple download routine.
Sun Download Manager leaves a great deal to be desired.
In the Online Help for Sun Download Manager there's a section on
troubleshooting, but if it causes *anyone*
On 14/6/07 11:16, Graham Perrin [EMAIL PROTECTED] wrote:
Intending to experiment with ZFS, I have been struggling with what
should be a simple download routine.
Sun Download Manager leaves a great deal to be desired.
In the Online Help for Sun Download Manager there's a section on
Frank Cusack [EMAIL PROTECTED] wrote:
On June 13, 2007 11:26:07 PM -0400 Ed Ravin [EMAIL PROTECTED] wrote:
On Wed, Jun 13, 2007 at 09:42:26PM -0400, Ed Ravin wrote:
As mentioned before, NetBSD's scandir(3) implementation was one. The
NetBSD project has fixed this in their CVS. OpenBSD
Intending to experiment with ZFS, I have been struggling with what
should be a simple download routine.
Sun Download Manager leaves a great deal to be desired.
In the Online Help for Sun Download Manager there's a section on
troubleshooting, but if it causes *anyone* this much
Strikes me that at the moment the Sun/ZFS team is missing a great opportunity.
Imagine Joe Bloggs has a historical machine with Just Any Old Bunch Of Discs...
(it's not me, no really).
He doesn't want to have to think too hard about pairing them up in mirrors or
in raids - and sometimes they die
Hi,
On 14.6.2007, at 9:15, G.W. wrote:
If someone knows how to modify Extensions.kextcache and
Extensions.mkext, please let me know. After the bugs are worked
out, Leopard should be a pretty good platform.
You can recreate the kext cache like this:
kextcache -k
Now that I know *what*, could you perhaps explain
to me *why*? I understood zpool import and export
operations much like mount and unmount, with maybe some
checks on the integrity of the pool and updates to
some structure in the OS to maintain the
imported/exported state of that pool. But now
I have 6 400GB discs and want to make two RAIDZ 2/1 vdevs out of them (i.e.
2 stripe + 1 parity).
The problem is that 4 are in use... so I want to do something like:
zpool create datadump raidz c1t0d0 c1t1d0 missing
Then move a bunch of data into datadump, to free up another two discs, then
Hi,
after creation of a raidz2 pool with
# zpool create -f mypool raidz2 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
the commands zpool list and df show different sizes for the created fs
# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
mypool
A bunch of disks of different sizes will make it a problem. I wanted to
post that idea to the mailing list before, but didn't do so, since it
doesn't make too much sense.
Say you have two disks, one 50 GB and one 100 GB: part of your data can
only be ditto'd within the upper 50 GB of the larger
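For reference, the ditto-block behaviour being discussed here is driven by the copies property, set per filesystem (a minimal sketch; the pool name and devices are made up):
# zpool create tank c0t0d0 c0t1d0
# zfs set copies=2 tank
(ZFS then tries to put the two copies of each block on different disks; with mismatched disk sizes that placement cannot always be honoured)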
On 6/14/07, Paul Hedderly [EMAIL PROTECTED] wrote:
What Joe really wants to say to ZFS is: Here is a bunch of discs. Use them any way
you like - but I'm setting 'copies=2' or 'stripes=5' and 'parity=2' so you just go
allocating space on any of these discs trying to make sure I always have
Trying some funky experiments, based on hearing about this read-only ZFS
in MacOSX, I'm playing around with creating file-based pools and then
burning them to a CD-ROM. When running zpool import, it does find the
pool on the CD, but then warns about a read-only device, followed by a
core dump.
Hi Mario,
On Thu, 2007-06-14 at 15:23 -0100, Mario Goebbels wrote:
[EMAIL PROTECTED]:~/LargeFiles zpool import -o readonly testpool
internal error: Read-only file system
Abort (core dumped)
[EMAIL PROTECTED]:~/LargeFiles
Interesting, I've just filed 6569720 for this behaviour - thanks for
spotting this! Regardless of whether ZFS supports this, we
Graham Perrin wrote:
Intending to experiment with ZFS, I have been struggling with what
should be a simple download routine.
Sun Download Manager leaves a great deal to be desired.
In the Online Help for Sun Download Manager there's a section on
troubleshooting, but if it causes *anyone*
On Thu, Jun 14, 2007 at 05:17:36AM -0700, Douglas Atique wrote:
Do you think this panic when the root pool is not visible is a bug?
Should I file one?
No. There is nothing else the OS can do when it cannot mount the root
filesystem. That being said, it should have a nicer message (using
more background below...
Richard Elling wrote:
Graham Perrin wrote:
Intending to experiment with ZFS, I have been struggling with what
should be a simple download routine.
Sun Download Manager leaves a great deal to be desired.
In the Online Help for Sun Download Manager there's a section
People,
indeed, even though interesting and a problem, this is OT. I suggest that
everyone who has trouble with SDM address it to the people who actually
work on it - especially if you're a (potential) customer.
cheers
Michael
Richard Elling wrote:
more background below...
Richard Elling
On Jun 14, 2007, at 8:58 AM, Michael Schuster wrote:
People,
indeed, even though interesting and a problem, this is OT. I
suggest that everyone who has trouble with SDM address it to the
people who actually work on it - especially if you're a (potential)
customer.
Michael, for the
John Martinez wrote:
On Jun 14, 2007, at 8:58 AM, Michael Schuster wrote:
People,
indeed, even though interesting and a problem, this is OT. I suggest
that everyone who has trouble with SDM address it to the people who
actually work on it - especially if you're a (potential) customer.
On 14 Jun 2007, at 12:22, Richard L. Hamilton wrote:
I wonder if you're all that interested in the first place.
I'm definitely interested but at
http://www.sun.com/download/faq.xml#q1 Sun shouts (with an
exclamation mark)
Use the Sun Download Manager (SDM) for all your SDLC downloads!
On 14 Jun 2007, at 16:58, Michael Schuster wrote:
People,
indeed, even though interesting and a problem, this is OT. I
suggest that everyone who has trouble with SDM address it to the
people who actually work on it - especially if you're a (potential)
customer.
cheers
Michael
Thanks
See:
6308817 discrepancy between zfs and zpool space accounting
- Eric
On Thu, Jun 14, 2007 at 02:22:35PM +0200, Ronny Kopischke wrote:
Hi,
after creation of a raidz2 pool with
# zpool create -f mypool raidz2 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
the commands zpool list and df show
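The short version of 6308817: zpool list reports raw pool capacity, parity included, while zfs list and df report usable space after parity. Worked out for the raidz2 above, assuming five 100 GB disks (the disk size is an assumption):
raw, as shown by zpool list: 5 * 100 GB = 500 GB
usable, as shown by zfs/df: (5 - 2) * 100 GB = 300 GB
(a raidz2 vdev always spends two disks' worth of space on parity)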
Pretorious wrote:
Issue with statvfs()
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
mypool 108M 868M 26.5K /mypool
mypool/home 108M 868M 27.5K /mypool/home
mypool/home/user-2 108M 868M 108M /mypool/home/user-2
# df -h
Filesystem
Chris Csanady wrote:
On 6/14/07, Paul Hedderly [EMAIL PROTECTED] wrote:
Is this currently possible?
You may be able to do this by specifying a sparse file for the last
device, and then immediately issuing a zpool offline of it after the
pool is created. It seems to work, and I was able to
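Spelled out, that trick might look like this (a hedged sketch: the file path, size, and device names are made up, and the pool runs with no redundancy until a real disk takes the sparse file's place):
# mkfile -n 400g /var/tmp/fakedisk
# zpool create datadump raidz c1t0d0 c1t1d0 /var/tmp/fakedisk
# zpool offline datadump /var/tmp/fakedisk
(move data in, free up a real disc, then swap it in:)
# zpool replace datadump /var/tmp/fakedisk c1t2d0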
On June 14, 2007 11:16:00 AM +0100 Graham Perrin [EMAIL PROTECTED]
wrote:
If it can't assuredly be fixed, then we should not be forced to use it.
You're not forced to use it. I use wget just fine.
-frank
Paul Hedderly wrote:
Now I can do that at the moment - well the copies/ditto kind anyway - but
if I lose or remove one of the discs, zfs will not start the zpool.
*That sucks!!!*
Agreed, that is a bug (perhaps related to 6540322).
--matt
Paul Hedderly wrote:
Strikes me that at the moment the Sun/ZFS team is missing a great opportunity.
Imagine Joe Bloggs has a historical machine with Just Any Old Bunch Of Discs...
(it's not me, no really).
He doesn't want to have to think too hard about pairing them up in mirrors or
in raids -
I have a problem with one of my zfs pools: every time I import it I get the error
below. I cannot destroy it because it will not allow me to import. I have
tried trashing the cache file, but that did not help. Is there a way to destroy the
config so I can start over? Also up to date on patches,
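If the pool really is beyond recovery, one blunt approach (a hedged sketch: overwriting the devices is destructive, and the device names are made up):
# rm /etc/zfs/zpool.cache
(the pool is no longer auto-imported at boot)
# zpool create -f newpool c1t0d0 c1t1d0
(-f forces creation over devices that still carry the old pool's labels)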
Hi Paul,
I think I have a similar problem. I have 4 disks, two empty, two full,
and want to create a RAIDZ1 from these disks.
But I'm new to zfs; maybe you can explain to me how you did it, if it
works :)
Thanks in advance :)
Greetings Cyron
I have 6 400GB discs and want to make two RAIDZ 2/1
Is there a way to get past this? I cannot re-create until I export it.
zpool export -f zonesHA2
cannot iterate filesystems: I/O error
zpool status
pool: zonesHA2
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be
On Thu, 2007-06-14 at 09:09 +0200, [EMAIL PROTECTED] wrote:
The implication of which, of course, is that any app built for Solaris 9
or before which uses scandir may have picked up a broken one.
or any app which includes its own copy of the BSD scandir code, possibly
under a different name,
Hi Rick,
what do you think about this configuration:
Partition all disks like this:
7 GiB
493 GiB
Make a RAIDZ1 out of the 493 GiB partitions and a RAID5 out of the 7 GiB
partitions. Create swap and root in the RAID5, and put the directories with the
user data in the ZFS storage.
Back up / daily to the ZFS
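In command form, that layout might look roughly like this (a hedged sketch: slice numbers and device names are made up, the RAID5 side uses SVM, and the SVM state databases would need to be set up first with metadb):
# metainit d10 -r c1t0d0s0 c2t0d0s0 c3t0d0s0
(SVM RAID5 across the small slices, for root and swap)
# zpool create tank raidz c1t0d0s1 c2t0d0s1 c3t0d0s1
(RAIDZ1 across the large slices, for the user data)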
Hello,
Somewhat off topic, but it seems that someone released a COW file
system for Linux (currently in 'alpha'):
* Extent based file storage (2^64 max file size)
* Space efficient packing of small files
* Space efficient indexed directories
* Dynamic inode
it's about time. this hopefully won't spark another license debate,
etc... ZFS may never get into linux officially, but there's no reason
a lot of the same features and ideologies can't make it into a
linux-approved-with-no-arguments filesystem...
as a more SOHO user i like ZFS mainly for its
On Jun 13, 2007, at 9:22 PM, Siegfried Nikolaivich wrote:
On 12-Jun-07, at 9:02 AM, eric kustarz wrote:
Comparing a ZFS pool made out of a single disk to a single UFS
filesystem would be a fair comparison.
What does your storage look like?
The storage looks like:
NAME
On June 14, 2007 3:57:55 PM -0700 mike [EMAIL PROTECTED] wrote:
as a more SOHO user i like ZFS mainly for its COW and integrity, and
huh. As a SOHO user, why do you care about COW?
-frank
because i don't want bitrot to destroy the thousands of pictures and
memories i keep? i keep important personal documents, etc. filesystem
corruption is not a feature to me. perhaps i spoke incorrectly but i
consider COW to be one of the reasons a filesystem can keep itself in
check, the disk
On June 14, 2007 4:40:18 PM -0700 mike [EMAIL PROTECTED] wrote:
because i don't want bitrot to destroy the thousands of pictures and
memories i keep?
COW doesn't stop that.
i keep important personal documents, etc. filesystem
corruption is not a feature to me. perhaps i spoke incorrectly but
Rick Mann wrote:
BTW, I don't mind if the boot drive fails, because it will be fairly easy to
replace, and this server is only mission-critical to me and my friends.
So...suggestions? What's a good way to utilize the power and glory of ZFS in
a 4x 500 GB system, without unnecessary waste?
On 6/14/07, Frank Cusack [EMAIL PROTECTED] wrote:
Yes, but there are many ways to get transactions, e.g. journalling.
ext3 is journaled. it doesn't seem to always be able to recover data.
it also takes forever to fsck. i thought COW might alleviate some of
the fsck needs... it just seems like
Ian Collins wrote:
Bung in (add a USB one if you don't have space) a small boot drive and
use all the others for ZFS.
Not a bad idea; I'll have to see where I can put one.
But, I thought I read somewhere that one can't use ZFS for swap. Or maybe I
read this:
Slices should only be used
On June 14, 2007 5:07:39 PM -0700 mike [EMAIL PROTECTED] wrote:
On 6/14/07, Frank Cusack [EMAIL PROTECTED] wrote:
Yes, but there are many ways to get transactions, e.g. journalling.
ext3 is journaled. it doesn't seem to always be able to recover data.
zfs is COW. it isn't always able to
Rick Mann wrote:
BTW, I don't mind if the boot drive fails, because it will be fairly easy
to replace, and this server is only mission-critical to me and my friends.
So...suggestions? What's a good way to utilize the power and glory of ZFS
in a 4x 500 GB system, without unnecessary
On Thu, Jun 14, 2007 at 17:19:18 -0700, Frank Cusack wrote:
: anyway, my point is that i didn't think COW was in and of itself a feature
: a home or SOHO user would really care about. it's more an implementation
: detail of zfs than a feature. i'm sure this is arguable.
I'm really not sure I
Ian Collins wrote:
Rick Mann wrote:
BTW, I don't mind if the boot drive fails, because it will be fairly easy to
replace, and this server is only mission-critical to me and my friends.
So...suggestions? What's a good way to utilize the power and glory of ZFS in a
4x 500 GB system, without
On Thu, 2007-06-14 at 17:45 -0700, Bart Smaalders wrote:
This is how I run my home server w/ 4 500GB drives - a small
40GB IDE drive provides the root and swap/dump device, and the 4 500 GB
drives are a RAIDZ containing all the data. I ran out of drive
bays, so I used one of those 5 1/4 - 3.5 adaptor
Ian Collins wrote:
Rick Mann wrote:
Ian Collins wrote:
Bung in (add a USB one if you don't have space) a small boot drive and
use all the others for ZFS.
Not a bad idea; I'll have to see where I can put one.
But, I thought I read somewhere that one can't use ZFS for swap. Or
Bart Smaalders wrote:
Ian Collins wrote:
Rick Mann wrote:
Ian Collins wrote:
Bung in (add a USB one if you don't have space) a small boot drive and
use all the others for ZFS.
Not a bad idea; I'll have to see where I can put one.
But, I thought I read somewhere that one can't
On Fri, Jun 15, 2007 at 12:27:15AM +0200, Joerg Schilling wrote:
Bill Sommerfeld [EMAIL PROTECTED] wrote:
On Thu, 2007-06-14 at 09:09 +0200, [EMAIL PROTECTED] wrote:
The implication of which, of course, is that any app built for Solaris 9
or before which uses scandir may have picked up