Do you have a coredump? Or a stack trace of the panic?
On Wed, 19 May 2010, John Andrunas wrote:
Running ZFS on a Nexenta box, I had a mirror get broken and apparently
the metadata is corrupt now. If I try and mount vol2 it works but if
I try mount -a or mount vol2/vm2, it instantly kernel panics with an
error message.
Since removing that drive, we have not encountered that issue.
You might want to look at
http://bugs.opensolaris.org/bugdatabase/view_bug.do;jsessionid=7acda35c626180d9cda7bd1df451?bug_id=6894775
too.
-Mark
> Machine specs :
>
> Dell R710, 16 GB memory, 2 Intel Qua
On 23 Apr, 2010, at 8.38, Phillip Oldham wrote:
> The instances are "ephemeral"; once terminated they cease to exist, as do all
> their settings. Rebooting an image keeps any EBS volumes attached, but this
> isn't the case I'm dealing with - it's when the instance terminates
> unexpectedly. For
On 23 Apr, 2010, at 7.31, Phillip Oldham wrote:
> I'm not actually issuing any when starting up the new instance. None are
> needed; the instance is booted from an image which has the zpool
> configuration stored within, so simply starts and sees that the devices
> aren't available, which beco
On 23 Apr, 2010, at 7.06, Phillip Oldham wrote:
>
> I've created an OpenSolaris 2009.06 x86_64 image with the zpool structure
> already defined. Starting an instance from this image, without attaching the
> EBS volume, shows the pool structure exists and that the pool state is
> "UNAVAIL" (as
On 04/21/10 08:45 AM, Edward Ned Harvey wrote:
From: Mark Shellenbaum [mailto:mark.shellenb...@oracle.com]
You can create/destroy/rename snapshots via mkdir, rmdir, mv inside the
.zfs/snapshot directory, however, it will only work if you're running the
command locally. It will not
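A minimal sketch of that mechanism, assuming a hypothetical dataset tank/fs mounted at /tank/fs:
# mkdir /tank/fs/.zfs/snapshot/mysnap      (same effect as: zfs snapshot tank/fs@mysnap)
# mv /tank/fs/.zfs/snapshot/mysnap /tank/fs/.zfs/snapshot/newname   (zfs rename)
# rmdir /tank/fs/.zfs/snapshot/newname     (zfs destroy)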
On 4/21/10 6:49 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Nicolas Williams
And you can
On Sun, 18 Apr 2010, Michelle Bhaal wrote:
zpool lists my pool as having 2 disks which have identical names. One
is offline, the other is online. How do I tell zpool to replace the
offline one?
If you're lucky, the device will be marked as not being present, and then
you can use the GUID.
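As a sketch with hypothetical pool and device names (the GUID is the long number zpool status prints where the missing device's name would normally be):
# zpool status -v mypool
# zpool replace mypool 1234567890123456789 c2t3d0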
On Wed, 7 Apr 2010, Neil Perrin wrote:
There have previously been suggestions to read slogs periodically. I
don't know if there's a CR raised for this though.
Roch wrote up CR 6938883 "Need to exercise read from slog dynamically"
Regards,
markm
> It would be nice for Oracle/Sun to produce a separate
> script which resets system/devices back to an install-like
> beginning, so if you move an OS disk with the current
> password file and software from one system to
> another, it can rebuild the device tree on the
> new system.
You mean /usr/sbi
On Wed, 31 Mar 2010, Damon Atkins wrote:
Why do we still need "/etc/zfs/zpool.cache" file???
The cache file contains a list of pools to import, not a list of pools
that exist. If you do a "zpool export foo" and then reboot, we don't want
foo to be imported after boot completes.
Unfortunat
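To illustrate the behaviour described, assuming a hypothetical pool named foo:
# zpool export foo       (foo is dropped from /etc/zfs/zpool.cache; it will not import at boot)
# zpool import foo       (foo is imported again and re-added to the cache)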
On Mon, 29 Mar 2010, Jim wrote:
Thanks for the suggestion, but I have tried detaching and it refuses,
reporting no valid replicas. Capture below.
Could you run: zdb -ddd tank | awk '/^Dirty/ {output=1} /^Dataset/ {output=0}
{if (output) {print}}'
This will print the dirty time log of the pool
OK, I see what the problem is: the /etc/zfs/zpool.cache file.
When the pool was split, the zpool.cache file was also split - and the split
happens prior to the config file being updated. So, after booting off the
split side of the mirror, zfs attempts to mount rpool based on the information
in
On Mon, 29 Mar 2010, Victor Latushkin wrote:
On Mar 29, 2010, at 1:57 AM, Jim wrote:
Yes - but it does nothing. The drive remains FAULTED.
Try to detach one of the failed devices:
zpool detach tank 4407623704004485413
As Victor says, the detach should work. This is a known issue and I'
On Sat, 27 Mar 2010, Frank Middleton wrote:
Started with c0t1d0s0 running b132 (root pool is called rpool)
Attached c0t0d0s0 and waited for it to resilver
Rebooted from c0t0d0s0
zpool split rpool spool
Rebooted from c0t0d0s0, both rpool and spool were mounted
Rebooted from c0t1d0s0, only rpool
some screenshots that may help:
pool: tank
id: 5649976080828524375
state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:
  data        ONLINE
    mirror-0  ONLINE
      c27t2d0 ONLINE
      c27t0d0 ONLINE
m
where the pools
never got an error, same panic.
ANY ideas for volume rescue are welcome - if I missed some important
information, please tell me.
regards, mark
On Thu, 11 Mar 2010, Lars-Gunnar Persson wrote:
> Is it possible to convert a rz2 array to a rz1 array? I have a pool with
> two rz2 arrays. I would like to convert them to rz1. Would that be
> possible?
No, you'll have to create a second pool with raidz1 and do a "send | recv"
operation to copy th
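A rough sketch of that send | recv approach, with hypothetical pool and device names:
# zpool create newpool raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0
# zfs snapshot -r oldpool@migrate
# zfs send -R oldpool@migrate | zfs recv -Fd newpool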
On Mon, 8 Mar 2010, Tim Cook wrote:
Is there a way to manually trigger a hot spare to kick in?
Yes - just use 'zpool replace fserv 12589257915302950264 c3t6d0'. That's
all the fma service does anyway.
If you ever get your drive to come back online, the fma service should
recognize that an
On Sat, 6 Mar 2010, Richard Elling wrote:
On Mar 6, 2010, at 5:38 PM, tomwaters wrote:
My thought is this: I remove the 3rd mirror disk and offsite it as a backup.
To do this either:
1. upgrade to a later version where the "zpool split" command is
available
2. zfs send/receiv
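As a sketch of those two options, assuming a hypothetical three-way mirror pool named tank whose third disk is c1t3d0:
# zpool split tank tankbackup c1t3d0       (option 1: the split-off disk becomes its own pool)
# zfs snapshot -r tank@backup && zfs send -R tank@backup > /offsite/tank.zsend    (option 2)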
sage (shown in the /var/adm/messages) :
>
> scsi: [ID 107833 kern.warning] WARNING:
> /p...@0,0/pci8086,3...@4/pci1028,1...@0 (mpt0):
>
> Does anyone have any tips on how to start tracing the problem?
>
Have a look at Bug ID: 6894775
http://bugs.opensolaris.org/bugdatabase/view_
It looks like you're running into a DTL issue. ZFS believes that ad16p2 has
some data on it that hasn't been copied off yet, and it's not considering the
fact that it's part of a raidz group and ad4p2.
There is a CR on this,
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6909724 bu
On Wed, 24 Feb 2010, Gregory Gee wrote:
files
files/home
files/mail
files/VM
I want to move the files/VM to another zpool, but keep the same mount
point. What would be the right steps to create the new zpool, move the
data and mount in the same spot?
Create the new pool, take a snapshot of
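The reply is cut off above; one way the remaining steps might look, assuming a hypothetical new pool pool2 and that files/VM currently mounts at /files/VM:
# zpool create pool2 mirror c4t0d0 c4t1d0
# zfs snapshot files/VM@move
# zfs send files/VM@move | zfs recv pool2/VM
# zfs destroy -r files/VM
# zfs set mountpoint=/files/VM pool2/VM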
On Tue, 23 Feb 2010, patrik wrote:
I want to import my zpool's from FreeBSD 8.0 in OpenSolaris 2009.06.
  secure        UNAVAIL  insufficient replicas
    raidz1      UNAVAIL  insufficient replicas
      c8t1d0p0  ONLINE
      c8t2d0s2  ONLINE
      c8t3d0s8  UNAVAIL  c
On Mon, 22 Feb 2010, tomwaters wrote:
I have just installed OpenSolaris 2009.06 on my server using a 250G
laptop drive (using the entire drive).
So, 2009.06 was based on 111b. There was a fix that went into build 117
that allows you to mirror to smaller disks if the metaslabs in zfs are
sti
On Fri, 12 Feb 2010, Daniel Carosone wrote:
You can use zfs promote to change around which dataset owns the base
snapshot, and which is the dependent clone with a parent, so you can
delete the other - but if you want both datasets you will need to keep the
snapshot they share.
Right. The othe
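As a sketch, assuming a hypothetical clone tank/clone created from tank/orig@snap:
# zfs promote tank/clone
After the promote, the snapshot becomes tank/clone@snap and tank/orig is the dependent
clone, so tank/orig can now be destroyed while keeping tank/clone - but the shared
snapshot itself still has to stay.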
On Thu, 11 Feb 2010, Cindy Swearingen wrote:
On 02/11/10 04:01, Marc Friesacher wrote:
fr...@vault:~# zpool import
pool: zedpool
id: 10232199590840258590
state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:
zedpool    ONLINE
On Thu, 11 Feb 2010, Tony MacDoodle wrote:
I have a 2-disk/2-way mirror and was wondering if I can remove 1/2 the
mirror and plunk it in another system?
Intact? Or as a new disk in the other system?
If you want to break the mirror, and create a new pool on the disk, you
can just do 'zpool d
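The reply is truncated; a sketch of the detach-and-reuse path it starts to describe, with hypothetical names (note a detached disk cannot be imported as-is elsewhere - keeping the data intact would call for zpool split, where available):
# zpool detach tank c1t1d0          (on the original system)
# zpool create newpool c1t1d0       (on the other system, after moving the disk)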
Thomas Burgess wrote:
I've got a strange issue. If this is covered elsewhere, I apologize in
advance for my newbness.
I've got a couple of ZFS filesystems shared over CIFS and NFS, and I've managed to
get ACLs working the way I want, provided things are accessed via CIFS
and NFS.
If I create a new dir
On Fri, 5 Feb 2010, Alexander M. Stetsenko wrote:
  NAME        STATE     READ WRITE CKSUM
  mypool      DEGRADED     0     0     0
    mirror    DEGRADED     0     0     0
      c1t4d0  DEGRADED     0     0    28  too many errors
      c1t5d0  ONLINE       0     0     0
I
On Thu, 4 Feb 2010, Karl Pielorz wrote:
The reason for testing this is because of a weird RAID setup I have
where if 'ad2' fails, and gets replaced - the RAID controller is going
to mirror 'ad1' over to 'ad2' - and cannot be stopped.
Does the raid controller not support a JBOD mode?
Regards
Looks like I got the textbook response from Western Digital:
---
Western Digital technical support only provides jumper configuration and
physical installation support for hard drives used in systems running the
Linux/Unix operating systems. For setup questions beyond physical installation
of yo
> That's good to hear. Which revision are they: 00R6B0
> or 00P8B0? It's marked on the drive top.
Interesting. I wonder if this is the issue too with the 01U1B0 2.0TB drives?
I have 24 WD2002FYPS-01U1B0 drives under OpenSolaris with an LSI 1068E
controller that have weird timeout issues and I
SI IT mode firmware changing the disk
order so the bootable disk is no longer the one booted from with expanders?
It boots with only two disks installed(bootable zfs mirror). Add some more and
the target "boot disk" moves to one of them.
Mark.
luate alternative suppliers of low cost disks for low end
high volume storage.
Mark.
to be the only option available.
Mark.
ceived for target 13.
---
With no problems at all. I don't think that scrub takes
nearly that long (it was less than 12 hours previously),
and the percentage is barely moving, although it is increasing.
Even still, the exported volumes still appear to be
work
On Thu, 28 Jan 2010, TheJay wrote:
Attached the zpool history.
Did the resilver ever complete on the first c6t1d0? I see a second
replace here:
2010-01-27.20:41:15 zpool replace rzpool2 c6t1d0 c6t16d0
2010-01-28.07:57:27 zpool scrub rzpool2
2010-01-28.20:39:42 zpool clear rzpool2 c6t1d0
2
I thank each of you for all of your insights. I think if this was a production
system I'd abandon the idea of 2 drives and get a more capable system, maybe a
2U box with lots of SAS drives so I could use RAIDZ configurations. But in this
case, I think all I can do is try some things until I unde
256*512/4096=32
Mark.
many scsi cards.
Mark.
I have a 1U server that supports 2 SATA drives in the chassis. I have 2 750 GB
SATA drives. When I install opensolaris, I assume it will want to use all or
part of one of those drives for the install. That leaves me with the remaining
part of disk 1, and all of disk 2.
Question is, how do I be
> Also, I noticed you're using 'EARS' series drives.
> Again, I'm not sure if the WD10EARS drives suffer
> from a problem mentioned in these posts, but it might
> be worth looking into -- especially the last link:
Aren't the EARS drives the first ones using 4k sectors? Does OpenSolaris
support th
24 x WD10EARS in 6 disk vdev sets, 1 on 16 bay and 2 on 24 bay.
Mark
Hello,
Can anybody tell me if EMC's Replication software is supported using
ZFS, and if so is there any particular version
of Solaris that it is supported with?
thanks,
-mark
--
Mark Woelfel
Storage TSC Backline Volume Products
Sun Microsystems
Work: 78
On Wed, 27 Jan 2010, TheJay wrote:
Guys,
Need your help. My DEV131 OSOL build with my 21TB disk system somehow got
really screwed:
This is what my zpool status looks like:
NAME STATE READ WRITE CKSUM
rzpool2 DEGRADED 0 0 0
raidz2
Hi Giovanni,
I have seen these while testing the mpt timeout issue, and on other systems
during resilvering of failed disks and while running a scrub.
Once so far on this test scrub, and several on yesterday's.
I checked the iostat errors, and they weren't that high on that device,
compared to
ntroller and see
if that helps.
P.S. I have a client with a "suspect", nearly full, 20Tb zpool to try to scrub,
so this is a big issue for me. A resilver of a 1Tb disk takes up to 40 hrs., so
I expect a scrub to be a week (or two), and at present, would probably result
in multiple dis
I would definitely be interested to see if the newer firmware fixes the problem
for you. I have a very similar setup to yours, and finally forcing the
firmware flash to 1.26.00 of my on-board LSI 1068E on a SuperMicro H8DI3+
running snv_131 seemed to address the issue. I'm still waiting to see
> It may depend on the firmware you're running. We've
> got a SAS1068E based
> card in Dell R710 at the moment, connected to an
> external SAS JBOD, and
> we did have problems with the as shipped firmware.
Well, I may have misspoken. I just spent a good portion of yesterday upgrading
to the lat
t or hot
plug.
The robustness of ZFS certainly helps keep things running.
Mark.
As in they work without any possibility of mpt timeout issues? I'm at my wits
end with a machine right now that has an integrated 1068E and is dying almost
hourly at this point.
If I could spend three hundred dollars or so and have my problems magically go
away, I'd love to pull the trigger on
On Fri, 22 Jan 2010, Tony MacDoodle wrote:
Can I move the below mounts under / ?
rpool/export        /export
rpool/export/home /export/home
Sure. Just copy the data out of the directory, do a zfs destroy on the
two filesystems, and copy it back.
For example:
# mkdir /save
# cp -r /expo
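The example is cut off above; a fuller sketch of the same copy-out/destroy/copy-back steps, assuming nothing else is using /export during the move:
# mkdir /save
# cp -rp /export /save
# zfs destroy rpool/export/home
# zfs destroy rpool/export
# cp -rp /save/export /export
# rm -rf /save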
On Thu, 14 Jan 2010, Josh Morris wrote:
Hello List,
I am porting a block device driver (for a PCIe NAND flash disk) from
OpenSolaris to Solaris 10. On Solaris 10 (10/09) I'm having an issue
creating a zpool with the disk. Apparently I have an 'invalid argument'
somewhere:
% pfexec z
On Fri, 8 Jan 2010, Rob Logan wrote:
this one has me a little confused. Ideas?
j...@opensolaris:~# zpool import z
cannot mount 'z/nukeme': mountpoint or dataset is busy
cannot share 'z/cle2003-1': smb add share failed
j...@opensolaris:~# zfs destroy z/nukeme
internal error: Bad exchange descrip
Ben,
I have found that booting from cdrom and importing the pool on the new host,
then booting from the hard disk, will prevent these issues.
That will reconfigure the zfs to use the new disk device.
When running, zpool detach the missing mirror device and attach a new one.
Mark.
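A sketch of that detach/attach step with hypothetical device names (attach takes the surviving device and the new one):
# zpool detach rpool c0t1d0s0
# zpool attach rpool c0t0d0s0 c0t2d0s0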
uit needed in a few V240 PSU's.
Much cheaper than replacing the whole psu due to poor fan lifespan.
Mark.
Will,
sorry for picking an old thread, but you mentioned a psu monitor to supplement
the CSE-PTJBOD-CB1.
I have two of these and am interested in your design.
Oddly, the LSI backplane chipset supports 2 x i2c busses that Supermicro didn't
make use of for monitoring the psu's.
Mark
Check if your card has the latest firmware.
Mark.
backplane price difference diminishes when you get to 24
bays.
Mark.
I'd recommend a SAS non-raid controller (with sas backplane) over sata.
It has better hot plug support.
I use the Supermicro SC836E1 and an AOC-USAS-L4i with a UIO M/b.
Mark.
signed for the UIO slot. The cards are a mirror image of a normal
pci-e card and may overlap adjacent slots.
They "may" work in other servers, but I have found some Supermicro non-UIO
servers that wouldn't run them.
Mark.
Hi,
Is it possible to import a zpool and stop it mounting the zfs file systems, or
override the mount paths?
Mark.
Thanks, sounds like it should handle all but the worst faults OK then; I
believe the maximum retry timeout is typically set to about 60 seconds in
consumer drives.
From what I remember, the problem with the hardware RAID controller is that the
long delay before the drive responds causes the drive to be dropped from the
RAID, and then if you get another error on a different drive while trying to
repair the RAID, that disk is also marked failed and you
Yeah, this is my main concern with moving from my cheap Linux server with no
redundancy to ZFS RAID on OpenSolaris; I don't really want to have to pay twice
as much to buy the 'enterprise' disks which appear to be exactly the same
drives with a flag set in the firmware to limit read retries, but
Did you set autoexpand on? Alternatively, did you try doing a 'zpool online
bigpool <device>' for each disk after the replace completed?
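For example - a sketch with hypothetical device names; the -e (expand) flag is an addition here, beyond what was suggested above:
# zpool set autoexpand=on bigpool
# zpool online -e bigpool c1t0d0     (repeat for each replaced disk)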
On Mon, 7 Dec 2009, Alexandru Pirvulescu wrote:
Hi,
I've read before regarding zpool size increase by replacing the vdevs.
The initial pool was a raidz2 with 4 640
This may be a dup of 6881631.
Regards,
markm
On 1 Dec 2009, at 15:14, Cindy Swearingen
wrote:
I was able to reproduce this problem on the latest Nevada build:
# zpool create tank raidz c1t2d0 c1t3d0 c1t4d0
# zpool add -n tank raidz c1t5d0 c1t6d0 c1t7d0
would update 'tank' to the follow
Mark Johnson wrote:
Chad Cantwell wrote:
Hi,
I was using for quite awhile OpenSolaris 2009.06
with the opensolaris-provided mpt driver to operate a zfs raidz2 pool of
about ~20T and this worked perfectly fine (no issues or device errors
logged for several months, no hanging). A few days
Chad Cantwell wrote:
Hi,
I was using for quite awhile OpenSolaris 2009.06
with the opensolaris-provided mpt driver to operate a zfs raidz2 pool of
about ~20T and this worked perfectly fine (no issues or device errors
logged for several months, no hanging). A few days ago I decided to
reinsta
This is basically just a me too. I'm using different hardware but essentially
the same problems. The relevant hardware I have is:
---
SuperMicro MBD-H8Di3+-F-O motherboard with LSI 1068E onboard
SuperMicro SC846E2-R900B 4U chassis with two LSI SASx36 expander chips on the
backplane
24 Western D
card installed. The mptsas cards are not generally available yet (they're
2nd generation), so I would be surprised if you had one.
No...I had set the other two variables after Mark contacted me offline to
do some testing mainly to verify the problem was, indeed, not xVM specific.
I had
On 10 Nov, 2009, at 21.02, Ron Mexico wrote:
This didn't occur on a production server, but I thought I'd post
this anyway because it might be interesting.
This is CR 6895446 and a fix for it should be going into build 129.
Regards,
markm
Typically this is called "Sanitization" and could be done as part of
an evacuation of data from the disk in preparation for removal.
You would want to specify the patterns to write and the number of
passes.
-- mark
Brian Kolaci wrote:
Hi,
I was discussing the common practi
Ok. Thanks. Why does '/' show up in the newly created /BE/etc/vfstab but not
in the current /etc/vfstab? Should '/' be in the /BE/etc/vfstab?
btw, thank you for responding so quickly to this.
Mark
On Wed, Oct 21, 2009 at 12:49 PM, Enda O'Connor wrote:
> Mark Horst
Then why the warning on the lucreate. It hasn't done that in the past.
Mark
On Oct 21, 2009, at 12:41 PM, "Enda O'Connor"
wrote:
Hi
This will boot ok in my opinion, not seeing any issues there.
Enda
Mark Horstman wrote:
more input:
# lumount foobar /mnt
/mnt
#
# cat /mnt/etc/vfstab
#live-upgrade: updated boot environment
#device         device          mount           FS      fsck    mount   mount
#to mount       to fsck         point           type    pass    at boot options
#
fd -
More input:
# cat /etc/lu/ICF.1
sol10u8:-:/dev/zvol/dsk/rpool/swap:swap:67108864
sol10u8:/:rpool/ROOT/sol10u8:zfs:0
sol10u8:/appl:pool00/global/appl:zfs:0
sol10u8:/home:pool00/global/home:zfs:0
sol10u8:/rpool:rpool:zfs:0
sol10u8:/install:pool00/shared/install:zfs:0
sol10u8:/opt/local:pool00/shared
Neither the virgin SPARC sol10u8 nor the (up to date) patched SPARC sol10u7
have any local zones.
I'm seeing the same lucreate error on my fresh SPARC sol10u8 install
(and on my SPARC sol10u7 machine, which I keep up to date with patches), but I don't have a
separate /var:
# zfs list
NAME             USED  AVAIL  REFER  MOUNTPOINT
pool00          3.36G   532G    20K  none
pool00/global
On Mon, 19 Oct 2009, Espen Martinsen wrote:
Let's say I've chosen to live with a zpool without redundancy (SAN
disks, which actually have raid5 in the disk cabinet).
What benefit are you hoping zfs will provide in this situation? Examine
your situation carefully and determine what filesystem works best
en inheriting an ACL?
I just tried it locally and it appears to work.
# ls -ld test.dir
drwsr-sr-x   2 marks    storage        4 Oct 12 16:45 test.dir
my primary group is "staff"
$ touch file
$ ls -l file
-rw-r--r--   1 marks    storage
Sorry. My environment:
# uname -a
SunOS xx 5.10 Generic_141414-10 sun4v sparc SUNW,SPARC-Enterprise-T5220
I have a snapshot that I'd like to destroy:
# zfs list rpool/ROOT/be200909160...@200909160720
NAME USED AVAIL REFER MOUNTPOINT
rpool/ROOT/be200909160...@200909160720 1.88G - 4.18G -
But when I try it warns me of dependent clones:
# zfs destroy rpool
On Thu, 24 Sep 2009, Paul Archer wrote:
I may have missed something in the docs, but if I have a file in one FS,
and want to move it to another FS (assuming both filesystems are on the
same ZFS pool), is there a way to do it outside of the standard
mv/cp/rsync commands?
Not yet. CR 6483179
On 23 Sep, 2009, at 21.54, Ray Clark wrote:
My understanding is that if I "zfs set checksum=" to
change the algorithm that this will change the checksum algorithm
for all FUTURE data blocks written, but does not in any way change
the checksum for previously written data blocks.
I need to
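As a sketch, assuming a hypothetical dataset tank/data:
# zfs set checksum=sha256 tank/data
Only blocks written after this carry the new checksum; existing blocks keep the algorithm
they were written with until they are rewritten (e.g. by copying the files, or a
zfs send | zfs recv into a fresh dataset).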
Roland Mainz wrote:
Hi!
Does anyone know out-of-the-head whether tmpfs supports ACLs - and if
"yes" - which type(s) of ACLs (e.g. NFSv4/ZFS, old POSIX draft ACLs
etc.) are supported by tmpfs ?
tmpfs does not support ACLs
see _PC_ACL_ENABLED in [f]pathconf(2). You can query the file sy
On Mon, 14 Sep 2009, Marty Scholes wrote:
I really want to move back to 2009.06 and keep all of my files /
snapshots. Is there a way somehow to zfs send an older stream that
2009.06 will read so that I can import that into 2009.06?
Can I even create an older pool/dataset using 122? Ideall
On Sat, 12 Sep 2009, Jeremy Kister wrote:
scrub: resilver in progress, 0.12% done, 108h42m to go
[...]
  raidz1      DEGRADED     0     0     0
    c3t8d0    ONLINE       0     0     0
    c5t8d0    ONLINE       0     0     0
    c3t9d0    ONLINE       0     0     0
The device is listed with s0; did you try using c5t9d0s0 as the name?
On 12 Sep, 2009, at 17.44, Jeremy Kister wrote:
[sorry for the cross post to solarisx86]
One of my disks died that i had in a raidz configuration on a Sun
V40z with Solaris 10u5. I took the bad disk out, replaced the dis
avies
How are the parent and kids defined in the /etc/passwd file?
What do the ACLs look like?
Issues with the CIFS server are best served by asking on
cifs-disc...@opensolaris.org
-Mark
On Fri, 28 Aug 2009, Dave wrote:
Thanks, Trevor. I understand the RFE/CR distinction. What I don't
understand is how this is not a bug that should be fixed in all solaris
versions.
Just to get the terminology right: "CR" means Change Request, and can
refer to Defects ("bugs") or RFE's. Defe
Hi Stephen,
Have you got many zvols (or snapshots of zvols) in your pool? You could
be running into CR 6761786 and/or 6693210.
On Thu, 27 Aug 2009, Stephen Green wrote:
I'm having trouble booting with one of my zpools. It looks like this:
pool: tank
state: ONLINE
scrub: none requested
c
Zeroing and checking after a replace attempt shows it is being partitioned
during the replace attempt.
Obviously corrupt data suggests a read/write integrity problem, but is there any way to
get more detailed info or logs from zfs on what the reason for rejecting it is?
e.g. the sector that is failing
Mark
of an inconvenience, but it does make me wonder whether the
'used' figures on my other filesystems and zvols are correct.
You could be running into an instance of
6792701 Removing large holey file does not free space
A fix for this was
Robert Lawhead wrote:
I recently tried to post this as a bug, and received an auto-ack, but can't
tell whether it's been accepted. Does this seem like a bug to anyone else?
Default for zfs list is now to show only filesystems. However, a `zfs list` or
`zfs list -t filesystem` shows filesystem
nsolaris.org
They will start out by asking you to run:
http://opensolaris.org/os/project/cifs-server/files/cifs-gendiag
-Mark
I was wondering if this is a known problem..
I am running stock b118 bits. System has a UFS root
and a single zpool (with multiple nfs, smb, and iscsi
exports)
Powered off my machine last night.. Powered it on this
morning and it hung during boot. It hung when reading the
zpool disks.. It wou
On Wed, 29 Jul 2009, David Magda wrote:
Which makes me wonder: is there a programmatic way to determine if a
path is on ZFS?
Yes, if it's local. Just use df -n $path and it'll spit out the filesystem
type. If it's mounted over NFS, it'll just say something like nfs or
autofs, though.
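For example (output format approximate):
$ df -n /tank/foo
/tank/foo          : zfs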
Reg
On Wed, 29 Jul 2009, Glen Gunselman wrote:
Where would I see CR 6308817 my usual search tools aren't find it.
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6308817
Regards,
markm