Hi gurus,
Customer is getting differences between zpool list and df -k :
Fresh creation of a ZFS pool based on 46 x 465 GB disks gives the following :
# df -k : 14TB
# zpool list : 20.5TB
-=-=-=-=- ccxrdsn002 -=-=-=-=-
ccxrdsn002_root# zpool status ;echo; zpool list; echo ;df -k /xrootd
pool: xrootd
state:
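A likely explanation for the gap, sketched below as shell arithmetic: zpool list reports raw pool capacity (parity included), while df -k reports usable space after parity. The raidz2 layout assumed here (2 parity disks per 6-disk group) is a guess for illustration, not taken from the original report; unit base (GB vs GiB) also accounts for part of the difference.

```shell
# Hedged sketch: raw vs usable capacity for 46 x 465 GB disks.
# The raidz2 grouping (4 data + 2 parity per 6 disks) is an assumption.
disks=46
disk_gb=465
raw_gb=$((disks * disk_gb))        # roughly what zpool list reports
usable_gb=$((raw_gb * 4 / 6))      # data fraction under the assumed raidz2 layout
echo "raw:    ${raw_gb} GB"
echo "usable: ${usable_gb} GB"
```

With those assumptions, raw comes out near 21 TB (close to the 20.5TB from zpool list) and usable near 14 TB (matching df -k).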
Nicolas Williams [EMAIL PROTECTED] wrote:
zfs send as backup is probably not generally acceptable: you can't
expect to extract a single file out of it (at least not out of an
incremental zfs send), but that's certainly done routinely with ufsdump,
tar, cpio, ...
Then an incremental star
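The single-file-restore point above can be illustrated with tar alone: an archive format lets you pull out one member, which a raw zfs send stream does not offer. All paths below are scratch paths invented for the demo.

```shell
# Demo: extract exactly one file from a tar archive -- the operation
# that makes tar/cpio/ufsdump usable as restore formats.
set -e
src=$(mktemp -d)
dst=$(mktemp -d)
echo hello > "$src/file1"
echo world > "$src/file2"
tar -cf "$src.tar" -C "$src" file1 file2
# Restore a single member, leaving the rest of the archive untouched:
tar -xf "$src.tar" -C "$dst" file1
cat "$dst/file1"
```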
Dennis Clarke [EMAIL PROTECTED] wrote:
I don't believe that there are any good/useful solutions which are free
that will store both the data and all the potential meta-data in the
filesystem in a recoverable way.
I think that star ( Joerg Schilling ) has a good grasp on all the
Hi,
I've now gone through both the opensolaris instructions:
http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/
and Tim Foster's script:
http://blogs.sun.com/timf/entry/zfs_bootable_datasets_happily_rumbling
for making my laptop ZFS bootable.
Both work well and here's a
Constantin Gonzalez Schmitz wrote:
2. After going through the zfs-bootification, Solaris complains on reboot that
/etc/dfs/sharetab is missing. Somehow this seems to have fallen through
the cracks of the find command. Well, touching /etc/dfs/sharetab just fixes
the issue.
Could it be an order problem? NFS trying to start before zfs is mounted?
Just a guess, of course. I'm not real savvy in either realm.
HTH,
Mike
Ben Miller wrote:
I have an Ultra 10 client running Sol10 U3 that has a zfs pool set up on the
extra space of the internal ide disk. There's just
[EMAIL PROTECTED] wrote:
Adam:
Does anyone have a clue as to where the bottlenecks are going to be with
this:
16x hot swap SATAII hard drives (plus an internal boot drive)
Tyan S2895 (K8WE) motherboard
Dual GigE (integral nVidia ports)
2x Areca 8-port PCIe (8-lane) RAID controllers
2x AMD
On Apr 19, 2007, at 1:38 AM, Ricardo Correia wrote:
Why doesn't zpool status -v display the byte ranges of permanent
errors anymore, like it used to (before snv_57)?
I think it was a useful feature. For example, I have a pool with 17
permanent errors in 2 files with 700 MB each, but no
I have seen a previous discussion with the same error. I don't think a
solution was posted though.
The libzfs_mount.c source indicates that the 'share' command returned
non-zero but specified no error. Can you run 'share' manually after a
fresh boot? There may be some insight if it fails, though
Hello Tim,
Thursday, April 19, 2007, 10:32:53 AM, you wrote:
TT Hi
TT This is a bit off topic...but as Bill mentioned SAM-FS...my job at Sun
TT is working in a group focused on ISV's in the archiving space (Symantec
TT Enterprise Vault, Open Text LEA, CA Message Manager, FileNet, Mobius,
TT
Nicholas Lee wrote:
On 4/19/07, Adam Lindsay [EMAIL PROTECTED] wrote:
16x hot swap SATAII hard drives (plus an internal boot drive)
Tyan S2895 (K8WE) motherboard
Dual GigE (integral nVidia ports)
2x Areca 8-port PCIe (8-lane) RAID controllers
2x AMD
eric kustarz wrote:
Two reasons:
1) cluttered the output (as the path name is variable length). We
could perhaps add another flag (-V or -vv or something) to display the
ranges.
2) I wasn't convinced that output was useful, especially to most
users/admins.
If we did provide the range
It does seem like an ordering problem, but nfs/server should be starting up
late enough with SMF dependencies. I need to see if I can duplicate the
problem on a test system...
This message posted from opensolaris.org
zfs-discuss mailing list
Hello Nicolas,
Wednesday, April 18, 2007, 10:12:17 PM, you wrote:
NW On Wed, Apr 18, 2007 at 03:47:55PM -0400, Dennis Clarke wrote:
Maybe with a definition of what a backup is and then some way to
achieve it. As far as I know the only real backup is one that can be
tossed into a vault and
Greetings,
In looking for inexpensive JBOD and/or RAID solutions to use with ZFS, I've
run across the recent VTrak SAS/SATA systems from Promise Technologies,
specifically their E-class and J-class series:
E310f FC-connected RAID:
Dennis Clarke wrote:
So now here we are ten years later with a new filesystem and I have no
way to back it up in such a fashion that I can restore it perfectly. I
can take snapshots. I can do a strange send and receive but the
process is not stable. From zfs(1M) we see :
The format of the
Is it possible to gracefully and permanently remove a vdev from a pool without
data loss? The type of pool in question here is a simple pool without
redundancies (i.e. JBOD). The documentation mentions offlining, for instance,
but without going into the end result of doing that. The thing I'm
On Thu, 19 Apr 2007, Mario Goebbels wrote:
Is it possible to gracefully and permanently remove a vdev from a pool
without data loss?
Is this what you're looking for?
http://bugs.opensolaris.org/view_bug.do?bug_id=4852783
If so, the answer is 'not yet'.
Regards,
markm
On 4/19/07, Mark J Musante [EMAIL PROTECTED] wrote:
On Thu, 19 Apr 2007, Mario Goebbels wrote:
Is it possible to gracefully and permanently remove a vdev from a pool
without data loss?
Is this what you're looking for?
http://bugs.opensolaris.org/view_bug.do?bug_id=4852783
If so, the
On Thu, 2007-04-19 at 11:59 -0700, Mario Goebbels wrote:
Is it possible to gracefully and permanently remove a vdev from a pool
without data loss?
Not yet. But it's on lots of people's wishlists, there's an open RFE,
and members of the zfs team have said on this list that they're working
on
Mario,
Until zpool remove is available, you don't have any options to remove a
disk from a non-redundant pool.
Currently, you can:
- replace or detach a disk in a ZFS mirrored storage pool
- replace a disk in a ZFS RAID-Z storage pool
Please see the ZFS best practices site for more info about
I was hoping that someone more well-versed in virtual machines
would respond to this so I wouldn't have to show my ignorance,
but no such luck, so here goes:
Is it even possible to build a virtual machine out of a
zfs storage pool? Note that it isn't just zfs as a root file system
we're trying
Marion Hakanson wrote:
In looking for inexpensive JBOD and/or RAID solutions to use with ZFS, I've
run across the recent VTrak SAS/SATA systems from Promise Technologies,
specifically their E-class and J-class series:
E310f FC-connected RAID:
Ricardo Correia wrote:
|compression |time-real |time-user |time-sys |compressratio
|------------|----------|----------|---------|-------------
|lzo         |6m39.603s |0m1.288s  |0m6.055s |2.99x
|gzip        |7m46.875s |0m1.275s  |0m6.312s |3.41x
|lzjb        |7m7.600s
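The table shows the usual tradeoff: gzip compresses better than lzo/lzjb but takes longer. A quick stand-in demo of the same effect, using gzip levels -1 and -9 since lzjb and lzo have no common command-line tool; the input is synthetic repetitive data, not the dataset benchmarked above.

```shell
# Stronger compression settings trade CPU time for a better ratio.
f=$(mktemp)
yes "zfs compression tradeoff demo line" | head -n 20000 > "$f"
size_fast=$(gzip -1 -c "$f" | wc -c)   # fastest gzip level
size_best=$(gzip -9 -c "$f" | wc -c)   # best-ratio gzip level
echo "gzip -1: ${size_fast} bytes"
echo "gzip -9: ${size_best} bytes"
```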
Hello Richard,
Thursday, April 19, 2007, 10:41:58 PM, you wrote:
RE Marion Hakanson wrote:
In looking for inexpensive JBOD and/or RAID solutions to use with ZFS, I've
run across the recent VTrak SAS/SATA systems from Promise Technologies,
specifically their E-class and J-class series:
Robert Milkowski wrote:
...
RE multi-path support is coming later (sorry, I don't know the details)
MPxIO for SAS went into b63 - see
http://www.opensolaris.org/os/community/on/flag-days/pages/2007041203/
Now the question is if it will make s10u4?
We're working on that right now. The
[EMAIL PROTECTED] said:
This looks similar to the recently announced Sun StorageTek 2500 Low Cost
Array product line. http://www.sun.com/storagetek/disk_systems/workgroup/2500/
Wonder how I missed those. Oh, probably because you can't see them
on store.sun.com/shop.sun.com. On paper, there
[EMAIL PROTECTED] said:
The scsi_vhci multipathing driver doesn't just work with Sun's FC stack, it
also works with SAS (at least, it does in snv_63 and ... soon .. with patches
for s10).
Yes, it's nice to see that's coming; and that FC and SAS are the same.
But I'm at S10U3 right now.
I'm not sure I understand the question.
Virtual machines are built by either running a virtualization technology
in a host operating system, such as running VMware Workstation in
Linux, running Parallels in Mac OS X, Linux or Windows, etc.
These are sometimes referred to as Type II VMMs, where
James W. Abendschan wrote:
On Fri, 20 Apr 2007, James C. McPherson wrote:
Richard Elling wrote:
...
Nice catch... timing is everything :-)
I'll infer from this that the SAS HBA from Sun is based on the mpt
driver which works with LSI controllers.
yes.
As of Solaris 10 update 2, this combo
We've used enclosures manufactured by Xyratex (http://www.xyratex.com/).
Several RAID vendors have used these disks in their systems. One reseller is
listed below (the one we used got bought out). I've been very happy with these
enclosures and a Qlogic HBA.
As we have retired some of the
Hi Tim,
I run a setup of SAM-FS for our main file server and we loved the
backup/restore parts that you described.
The main concern I have with SAM fronting the entire conversation is
data integrity. Unlike ZFS, SAM-FS does not do end-to-end checksumming.
We have considered the setup you
On April 19, 2007 4:45:20 PM -0700 Marion Hakanson [EMAIL PROTECTED]
wrote:
- VTrak can take SATA disks; ST-2500 lists only 15k rpm SAS disks.
I'd be surprised (and disappointed) if the 2500 can't accept a SATA disk.
Thanks for the SAS HBA and SAS MPXIO information. Now, anyone have any
Hello Wee,
Friday, April 20, 2007, 4:50:08 AM, you wrote:
WYT Hi Tim,
WYT I run a setup of SAM-FS for our main file server and we loved the
WYT backup/restore parts that you described.
WYT The main concern I have with SAM fronting the entire conversation is
WYT data integrity. Unlike ZFS,