your point has only a rhetorical meaning. Systems break regardless of the resources
you put into building them: bad hardware, typos, human mistakes, bugs. This
mailing list is full of examples. Having tools like zdb, mdb, zpool import
-fFX and labelfix available for analysis and repair is always a good thing.
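(As a concrete example of that kind of inspection, and assuming the slice mentioned later in this thread, zdb can dump the vdev labels on a device directly:

# zdb -l /dev/rdsk/c0d1s4

That prints the four copies of the vdev label, which is usually the first thing to look at before reaching for something like labelfix.)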
Hi,
Can anybody point me to the code snippet that does the block size
estimation?
I want to know when ZFS makes the decision on the block size used for a file.
Does ZFS estimate it based on the length of the file when the file's create
event is committed to disk during the txg commit?
If so, is the
On 02/09/2010 11:18, Zhu Han wrote:
Can anybody point me to the code snippet that does the block size
estimation?
See the zfs_write() function.
--
Darren J Moffat
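(If you would rather observe the result from userland than read zfs_write(), something along these lines shows the data block size ZFS picked; the pool/dataset name and its mountpoint are only illustrative:

# zfs get recordsize tank/fs
# dd if=/dev/zero of=/tank/fs/small bs=4k count=1
# dd if=/dev/zero of=/tank/fs/big bs=1024k count=4
# sync
# zdb -dddd tank/fs

The object dump from zdb has a dblk column per file object: the 4 KB file keeps a small data block, while the larger file is stored in recordsize blocks, 128 KB by default.)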
Thank you!
Here is my understanding; I'll leave it here as a reference for others. If it's
not correct, please point it out.
ZFS estimates the block size only while the file still consists of a single
block and that block is being extended. This is because
dmu_object_set_blocksize() only sets the block size when the object
On Wed, 1 Sep 2010, Benjamin Brumaire wrote:
your point has only a rhetorical meaning.
I'm not sure what you mean by that. I was asking specifically about your
situation. You want to run labelfix on /dev/rdsk/c0d1s4 - what happened
to that slice that requires a labelfix? Is there
looks similar to a crash I had here at our site a few months ago. Same
symptoms, no actual solution. We had to recover from an rsync backup
server.
Thanks Carsten. And on Sun hardware, too. Boy, that's comforting.
Three-way mirrors, anyone?
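(For what it's worth, a three-way mirror is just a mirror vdev with three devices; the device names here are purely illustrative:

# zpool create tank mirror c0t0d0 c0t1d0 c0t2d0

An existing two-way mirror can be grown the same way with zpool attach, which adds a third side to the mirror and resilvers it.)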
So, when you add a log device to a pool, it initiates a resilver.
What is it actually doing, though? Isn't the slog a copy of the
in-memory intent log? Wouldn't it simply replicate the data that's
in the other log, checked against what's in RAM? And presumably there
isn't that much data in
Is this the right forum to post a ZFS how-to question? If not, which forum
would you suggest I go to?
This is the right forum, fire away...
Feel free to review ZFS information in advance:
http://hub.opensolaris.org/bin/view/Community+Group+zfs/docs
ZFS Administration Guide (Solaris 10):
http://docs.sun.com/app/docs/doc/819-5461
ZFS Best Practices Guide:
On Tue, Aug 31, 2010 at 12:47:49PM -0700, Brandon High wrote:
On Mon, Aug 30, 2010 at 3:05 PM, Ray Van Dolson rvandol...@esri.com wrote:
I want to fix (as much as is possible) a misalignment issue with an
X-25E that I am using for both OS and as an slog device.
It's pretty easy to get the
What does 'zpool import' show? If that's empty, what about 'zpool import
-d /dev'?
I just tried
admin$ zpool replace BackupRAID /dev/disk0 /dev/disk1 /dev/disk2
too many arguments
As you can see, it didn't do what I need to accomplish.
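(For reference, zpool replace takes a single old/new device pair per invocation, so members have to be replaced one at a time; the new device name below is only an example:

# zpool replace BackupRAID /dev/disk0s2 /dev/disk3s2

That swaps one member out and resilvers onto the replacement. It cannot point an existing pool at a whole new set of disks in one go.)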
I think I just destroyed the information on the old raidz members by doing
zpool create BackupRAID raidz /dev/disk0s2 /dev/disk1s2 /dev/disk2s2
The pool mounted fine after that, but is empty. None of the old information is
present. Am I right?
My company changed our SAN and I migrated our zpool to the new LUNs with
attach/detach while the zpool stayed online. Once finished, we had a cluster
crash and the zpool (on the new LUNs) got corrupted. There was no way to
import it; zpool import -fFX failed.
The old LUNs are detached and probably sane
On Thu, 2 Sep 2010, Dominik Hoffmann wrote:
I think I just destroyed the information on the old raidz members by doing
zpool create BackupRAID raidz /dev/disk0s2 /dev/disk1s2 /dev/disk2s2
It should have warned you that two of the disks were already formatted
with a zfs pool. Did it not do
There was no warning. This is the output:
admin$ sudo zpool create BackupRAID raidz /dev/disk0s2 /dev/disk1s2 /dev/disk2s2
Password:
admin$ zpool status
pool: BackupRAID
state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
still be used, but some
I can only tell you what it is now:
admin$ zpool import
no pools available to import
admin$ zpool import -d /dev
no pools available to import
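(One more thing that is sometimes worth trying at this point, in case the old pool was destroyed rather than overwritten: zpool import -D lists pools whose labels are still intact but which are marked destroyed, and -d can point it at a specific device directory; the paths here are only an example:

admin$ zpool import -D
admin$ zpool import -d /dev -D

If the labels really were overwritten by the later zpool create, neither will find anything.)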
Also, I am quite sure that I was using the actual drives from the old raidz.
This is my drive listing:
admin$ diskutil list
/dev/disk0
   #:                       TYPE NAME                 SIZE        IDENTIFIER
   0:      GUID_partition_scheme                     *465.8 Gi    disk0
Those of you who have read my previous post know that I was trying to
reassemble a raidz after a complete reinstall of the OS on a Mac running
zfs-119. In a fit of impatience, I executed the zpool create command on the
three volumes, two of which were part of the old raidz, the third one having
Dominik,
You overwrote your data when you recreated a pool with the same
name and the same disks with zpool create.
If I try to recreate a pool that already exists, or at least one that is
exported, I will see a message similar to the following:
# zpool create tank c3t3d0
invalid vdev specification
use '-f'
Yes, I did try to import the pool. However, the response of the command was
"no pools available to import".
Yes, I did try to import the pool. However, the response of the command was
"no pools available to import".
I'm not sure what happened to your pool, but I think it is possible
that the pool information on these disks was removed accidentally.
I'm not sure what the diskutil command does but if
Folks,
Has anyone seen a panic traceback like the following? This is Solaris-10u7
on a Thumper, acting as an NFS server. The machine had been up for nearly a
year. I added a dataset to an existing pool, set compression=on for the
first time on this system, loaded some data into it (via rsync),
then