The term 'stripe' has been so severely abused in this forum that it is
impossible to know what someone means when they use it. Seemingly
intelligent people continue to use the wrong terminology because they
think that prolonging the confusion somehow helps new
On Jun 22, 2010, at 8:40 AM, Jeff Bacon ba...@walleyesoftware.com wrote:
The term 'stripe' has been so severely abused in this forum that...
Anyone know why my ZFS filesystem might suddenly start
giving me an error when I try to ls -d the top of it?
i.e.: ls -d /tank/ws/fubar
/tank/ws/fubar: Operation not applicable
zpool status says all is well. I've tried snv_139 and snv_137
(my latest and previous installs). It's an amd64 box.
Gordon Ross wrote:
Anyone know why my ZFS filesystem might suddenly start giving me an
error when I try to ls -d the top of it? ...
lstat64(/tank/ws/fubar, 0x080465D0) Err#89 ENOSYS
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
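A hedged note for readers decoding the trace above: on Solaris, errno 89 is ENOSYS, and its message text there is "Operation not applicable", which is exactly the string ls printed, so the lstat64 failure and the ls error are the same fault. The symbolic name (though not the number 89, which is Solaris-specific) can be checked portably from any shell with Python available:

```shell
# Hedged sketch: confirm the symbolic errno name. The numeric value of
# ENOSYS varies by platform (89 on Solaris), but the name is portable.
python3 -c 'import errno; print(errno.errorcode[errno.ENOSYS])'
# prints: ENOSYS
```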
On Sun, 20 Jun 2010, Arne Jansen wrote:
In my experience the boot time mainly depends on the number of datasets,
not the number of snapshots. 200 datasets is fairly easy (we have 7000,
but did some boot-time tuning).
What kind of boot tuning are you referring to? We've got about 8k
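The two counts Arne distinguishes can be compared with a quick sketch like the following (assumes a system with ZFS; zfs list spans all imported pools, so no pool name is needed):

```shell
# Hedged sketch: dataset count, which reportedly drives boot time,
# versus snapshot count, which reportedly does not.
zfs list -H -o name -t filesystem | wc -l   # datasets
zfs list -H -o name -t snapshot | wc -l     # snapshots
```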
Paul B. Henson wrote:
What kind of boot tuning are you referring to? We've
Did a search, but could not find the info I am looking for.
I built out my OSOL system about a month ago and have been gradually making
changes before I move it into production. I have set up a mirrored rpool and a
6-drive raidz2 pool for data. In my system I have two 8-port SAS cards and 6
On Tue, 22 Jun 2010, Brian wrote:
Is what I did wrong? I was under the impression that zfs wrote a
label to each disk so you can move it around between controllers...?
You are correct. Normally exporting and importing the pool should
cause zfs to import the pool correctly. Moving disks
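Bob's advice can be sketched as the following procedure (the pool name "tank" is an assumption; requires root on a ZFS system):

```shell
# Hedged sketch: export before moving disks so the pool is cleanly
# quiesced; the per-disk labels let import find the vdevs on any controller.
zpool export tank
# ...power down, move the drives...
zpool import tank        # scans devices and matches vdevs by label
zpool status tank        # verify every vdev shows ONLINE
```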
On Tue, June 22, 2010 17:32, Bob Friesenhahn wrote:
You are correct. Normally exporting and importing the pool should cause
Did some more reading. Should have exported first... gulp...
So I powered down and moved the drives around until the system came back up
and zpool status is clean.
However, now I can't seem to boot. During boot it finds all 17 ZFS filesystems
and starts mounting them.
I have several file
Ok -
So I unmounted all the directories and deleted them from /media. Then I
rebooted, everything remounted correctly, and the system is functioning
again.
OK, time for a zpool scrub, then I will try my export and import...
whew :-)
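The verification step mentioned above, sketched with the same assumed pool name "tank":

```shell
# Hedged sketch: scrub reads every allocated block and verifies checksums,
# a reasonable integrity check after shuffling drives between controllers.
zpool scrub tank
zpool status tank   # progress and results appear on the scan/scrub line
```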
I ran into the same thing where I had to manually delete directories.
Once you export the pool you can plug in the drives anywhere else. Reimport the
pool and the file systems come right up — as long as the drives can be seen by
the system.
On Fri, Jun 18, 2010 at 9:53 AM, Jeff Bacon ba...@twinight.org wrote:
I know this has been well-discussed already, but it's been a few months
- WD Caviars with mpt/mpt_sas generating lots of retryable read errors,
spitting out lots of the beloved Log info 3108 received for target