That is weird. Did you run the label subcommand after modifying the partition
table? Did you try unsetting NOINUSE_CHECK before running format?
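For reference, a minimal sh/ksh sketch (per format(1M), setting the
NOINUSE_CHECK environment variable disables format's in-use checking;
unsetting it restores the default):

  $ NOINUSE_CHECK=1 format    # one-shot run with in-use checking disabled
  $ unset NOINUSE_CHECK       # back to normal checking on later runs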
Larry
Bill Casale wrote:
Sun Fire 280R
Solaris 10 11/06, KU Generic_125100-08
Created a ZFS pool with disk c5t0d5, format c5t0d5 shows the disk is
part
Probably not, my box has 10 drives and two very thirsty FX74 processors
and it draws 450W max.
At 1500W, I'd be more concerned about power bills and cooling than the UPS!
Yeah - good point, but I need my TV! - or so I tell my wife so I can
play with all this gear :-X
Cheers,
Kent
I have a build 62 system with a zone that NFS mounts a ZFS filesystem.
From the zone, I keep seeing issues with .nfs files remaining in
otherwise empty directories, preventing their deletion. The files appear
to be immediately replaced when they are deleted.
Is this an NFS or a ZFS issue?
Hi.
Only indirectly related to zfs. I need to test disk usage/performance
on zfs shared via nfs. I have installed Nevada b64a. Historically
uid/gid for user www has been 16/16, but when I try to add uid/gid www
via smc with the value 16, I'm not allowed to do so.
I'm coming from a FreeBSD
Ian Collins wrote:
I have a build 62 system with a zone that NFS mounts a ZFS filesystem.
From the zone, I keep seeing issues with .nfs files remaining in
otherwise empty directories, preventing their deletion. The files appear
to be immediately replaced when they are deleted.
Is
Hi All,
I have modified mdb so that I can examine data structures on disk using
::print.
This works fine for disks containing ufs file systems. It also works
for zfs file systems, but...
I use the DVA block number from the uberblock_t to print what is at the
block on disk. The problem I am
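A hedged note from the on-disk specification that often matters for this
kind of raw inspection: the DVA offset counts 512-byte sectors relative to
the start of the allocatable space, which begins 4 MB (0x400000) into the
device. A ksh sketch, with a hypothetical device and offset value:

  DVA_OFFSET=32                             # hypothetical sector count from the rootbp
  PHYS=$(( DVA_OFFSET * 512 + 4194304 ))    # add the 4 MB label/boot reserve
  dd if=/dev/rdsk/c5t0d0s0 bs=512 skip=$(( PHYS / 512 )) count=1 2>/dev/null | od -A x -x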
Ian Collins wrote:
I have a build 62 system with a zone that NFS mounts a ZFS filesystem.
From the zone, I keep seeing issues with .nfs files remaining in
otherwise empty directories, preventing their deletion. The files appear
to be immediately replaced when they are deleted.
Is
On 9/17/07, Darren J Moffat [EMAIL PROTECTED] wrote:
It is NFS that is doing that. It happens when a process on the NFS
client still has the file open. fuser(1) is your friend here.
... and if fuser doesn't tell you what you need to know, you can use
lsof (
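For example (a sketch; the path is hypothetical, fuser(1M) ships with
Solaris, lsof is a third-party add-on):

  $ fuser -u /export/zone/dir/.nfs1234    # PIDs (and users) holding the file open
  $ lsof /export/zone/dir/.nfs1234        # per-process detail, if lsof is installed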
Hi All,
Two- and three-node clusters with SC3.2 and S10u3 (120011-14).
If a node is rebooted when using SCSI3-PGR, the node is unable
to take over the zpool via HAStoragePlus due to a reservation conflict.
SCSI2-PGRE is okay.
Using the same SAN LUNs in a metaset (SVM) and HAStoragePlus
works okay with
Tuning should not be done in general and Best practices
should be followed.
So get very much acquainted with this first :
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
Then if you must, this could soothe or sting :
What led you to the assumption that it's ONLY those switches? Just because the
patch is ONLY for those switches doesn't mean the bug is limited to them.
The reason you only see the patch for 3xxx and newer is that the 2xxx was
EOL'd before the patch was released...
FabOS is FabOS, the nature
Hello zfs-discuss,
If you do 'zpool create -f test A B C spare D E' and D or E contains a
UFS filesystem, then despite -f the zpool command will complain that
there is a UFS file system on D.
workaround: create a test pool with -f on D and E, destroy it, and
then create the first pool with D and
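Spelled out as commands (a sketch; A through E are the placeholder device
names above, and 'scratch' is an arbitrary pool name):

  # zpool create -f scratch D E             # overwrites the stale UFS labels
  # zpool destroy scratch
  # zpool create -f test A B C spare D E    # now succeeds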
Claus Guttesen wrote:
Only indirectly related to zfs. I need to test disk usage/performance
on zfs shared via nfs. I have installed Nevada b64a. Historically
uid/gid for user www has been 16/16, but when I try to add uid/gid www
via smc with the value 16, I'm not allowed to do so.
I'm coming
On Mon, 17 Sep 2007, Robert Milkowski wrote:
If you do 'zpool create -f test A B C spare D E' and D or E contains a UFS
filesystem, then despite -f the zpool command will complain that there is
a UFS file system on D.
This was fixed recently in build 73. See CR 6573276.
Regards,
markm
On Mon, 17 Sep 2007, Kent Watsen wrote:
... snip ...
(Incidentally, I rarely see these discussions touch upon what sort of UPS is
being used. Power fluctuations are a great source of correlated disk
failures.)
Glad you brought that up - I currently have an APC 2200XL
On Mon, Sep 17, 2007 at 03:40:05PM +0200, Roch - PAE wrote:
Tuning should not be done in general and Best practices
should be followed.
So get very much acquainted with this first :
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
Then if you must, this
Pawel Jakub Dawidek writes:
On Mon, Sep 17, 2007 at 03:40:05PM +0200, Roch - PAE wrote:
Tuning should not be done in general and Best practices
should be followed.
So get very much acquainted with this first :
Roch - PAE wrote:
Pawel Jakub Dawidek writes:
On Mon, Sep 17, 2007 at 03:40:05PM +0200, Roch - PAE wrote:
Tuning should not be done in general and Best practices
should be followed.
So get very much acquainted with this first :
On Mon, Sep 17, 2007 at 05:22:04PM +0200, Pawel Jakub Dawidek wrote:
On Mon, Sep 17, 2007 at 03:40:05PM +0200, Roch - PAE wrote:
Tuning should not be done in general and Best practices
should be followed.
So get very much acquainted with this first :
Ian Collins wrote:
I have a build 62 system with a zone that NFS mounts a ZFS filesystem.
From the zone, I keep seeing issues with .nfs files remaining in
otherwise empty directories, preventing their deletion. The files appear
to be immediately replaced when they are deleted.
Is
On 9/17/07, Darren J Moffat [EMAIL PROTECTED] wrote:
Why not use the already assigned webservd/webservd 80/80 uid/gid pair?
Note that ALL uid and gid values below 100 are explicitly reserved for
use by the operating system itself and should not be used by end admins.
This is why smc
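A quick way to confirm the reserved pair exists (a sketch; getent(1M) is
standard on Solaris):

  $ getent passwd webservd    # shows the reserved uid 80 entry
  $ getent group webservd     # shows the matching gid 80 entry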
Yup...
With Leadville/MPXIO targets in the 32-digit range, identifying the new
storage/LUNs is not a trivial operation.
-- MikeE
-Original Message-
From: Russ Petruzzelli
Sent: Monday, September 17, 2007 1:51 PM
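For the identification step, the stock tools at least narrow things down
(a sketch):

  # cfgadm -al      # lists controllers and attached FC targets/LUNs
  # luxadm probe    # lists Leadville-visible arrays and their logical paths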
Hello Mark,
Monday, September 17, 2007, 3:04:03 PM, you wrote:
MJM On Mon, 17 Sep 2007, Robert Milkowski wrote:
If you do 'zpool create -f test A B C spare D E' and D or E contains a UFS
filesystem, then despite -f the zpool command will complain that there is
a UFS file system on D.
MJM This was
Just to answer one of my questions: df seems to work pretty well. That said,
I still think the zpool creation tool would do well to list what it can create
zpools out of.
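For what it's worth, a non-interactive sketch of the same idea:

  # echo | format    # prints the numbered disk list, then exits at the prompt
  # df -h            # shows which devices already back mounted filesystems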
Seconded!
MC wrote:
With the arrival of ZFS, the format command is well on its way to
deprecation station. But how else do you list the devices that zpool can
create pools out of?
Would it be reasonable to enhance zpool to list the vdevs that are available
to it? Perhaps as part of the
Great, that's the answer I was looking for.
My current emphasis is on storage rather than performance. So I just
wanted to make sure that mixing the two speeds would be just as safe
as using only one kind.
Thanks!
On 9/17/07, Eric Schrock [EMAIL PROTECTED] wrote:
Yes, the pool would run at the
Paul,
Scroll down a bit in this section to the default passwd/group tables:
http://docs.sun.com/app/docs/doc/819-2379/6n4m1vl99?a=view
Cindy
Paul Kraus wrote:
On 9/17/07, Darren J Moffat [EMAIL PROTECTED] wrote:
Why not use the already assigned webservd/webservd 80/80 uid/gid pair?
Note
I'm far from an expert, but my understanding is that the ZIL is spread
across the whole pool by default, so in theory the one drive could slow
everything down. I don't know what it would mean in this respect to keep
the PATA drive as a hot spare, though.
-Tim
Christopher Gibbs wrote:
Anyone?
Yes, performance will suffer, but it's a bit difficult to say by how much.
Both pool transaction group writes and ZIL writes are spread across
all devices. It depends on what applications you will run as to how much
use is made of the ZIL. Maybe you should experiment and see if performance
is good
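One way to run that experiment (a sketch; 'tank' is a placeholder pool
name):

  # zpool iostat -v tank 5    # per-vdev bandwidth every 5 seconds; watch the PATA drive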
Yes, the pool would run at the speed of the slowest drive. There is an
open RFE to better balance allocations across variable-latency top-level
vdevs, but within a top-level vdev there's not much we can do; we need to
make sure your data is on disk with sufficient replication before
returning
I also wanted to test a recovery of my pool, so I took my two-disk raidz pool
to a friend's FreeBSD box. It seems both systems use ZFS version 6, but the
import failed. I noticed in the boot logs:
GEOM: ad6: corrupt or invalid GPT detected.
GEOM: ad6: GPT rejected -- may not be
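For reference, the usual sequence on the receiving box looks like this
(a sketch; 'tank' is a placeholder pool name):

  # zpool import            # scans devices and lists importable pools
  # zpool import -f tank    # forces the import if the pool was not cleanly exported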
Mario Goebbels wrote:
Hi, thanks for the tips. I'm currently using a 2-disk raidz configuration and
it seems to work fine, but I'll probably take your advice and use mirrors
because I'm finding the raidz a bit slow.
What? How would a two-disk RAID-Z work, anyway? A three-disk RAID-Z
Ellis, Mike wrote:
With Leadville/MPXIO targets in the 32-digit range, identifying the new
storage/LUNs is not a trivial operation.
Have a look at my devid/guid presentation for some details on
how we use them with ZFS/SVM:
http://www.jmcp.homeunix.com/~jmcp/WhatIsAGuide.pdf
James C.
A Darren Dunham wrote:
On Tue, Sep 18, 2007 at 10:11:11AM +1000, James C. McPherson wrote:
Have a look at my devid/guid presentation for some details on
how we use them with ZFS/SVM:
http://www.jmcp.homeunix.com/~jmcp/WhatIsAGuide.pdf
Ah, a very silent 'e'... :-)
One of our Solaris 10 update 3 servers panicked today with the following error:
Sep 18 00:34:53 m2000ef savecore: [ID 570001 auth.error] reboot after
panic: assertion failed: ss != NULL, file:
../../common/fs/zfs/space_map.c, line: 125
The server saved a core file, and the resulting backtrace is
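For anyone wanting to pull the backtrace from a saved dump themselves
(a sketch; the paths are the savecore defaults):

  # cd /var/crash/`hostname`
  # mdb unix.0 vmcore.0
  > ::status    # summarizes the panic string
  > $C          # prints the panic thread's stack backtrace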
Hey Max - Check out the on-disk specification document at
http://opensolaris.org/os/community/zfs/docs/.
The illustration on page 32 shows the rootbp pointing to a dnode_phys_t
object (the first member of an objset_phys_t data structure).
The source code indicates ub_rootbp is a blkptr_t, which
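As a cross-check on what mdb shows, zdb can dump the active uberblock
(a sketch; 'tank' is a placeholder pool name):

  # zdb -uuu tank    # prints the uberblock, including the rootbp blkptr and its DVAs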
If your priorities were different, or for others pondering a similar question,
the PATA disk might make a good hot spare.
My system is currently running ZFS version 3,
and I just can't find the zpool history command.
Can anyone help me with the problem?
Hi, Sunnie,
'zpool history' was only introduced in ZFS pool version 4.
You could check the update info and pick bits from after Build 62,
which is where it landed. For example:
# zpool upgrade -v
This system is currently running ZFS pool version 8.
The following versions are supported:
VER DESCRIPTION
---
'zpool history' is the pool version 4 feature of ZFS in S10. You should
get it in S10U4.
--
Prabahar.
On Sep 17, 2007, at 8:01 PM, sunnie wrote:
My system is currently running ZFS version 3,
and I just can't find the zpool history command.
Can anyone help me with the problem?
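Once on a new enough release, the upgrade and check are quick (a sketch;
'tank' is a placeholder pool name):

  # zpool upgrade tank    # moves the pool to the current on-disk version
  # zpool history tank    # the command log is available from version 4 on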
Hi Matty,
From the stack I saw, that is 6454482.
That defect has been marked 'Not reproducible', so I have no idea how to
recover from it, but it looks like newer updates will not hit this issue.
Matty wrote:
One of our Solaris 10 update 3 servers panicked today with the following error: