The reason I asked was just to understand how those attributes play with
ufs/vxfs...
ZFS scrub needs to access all written data on all disks and is usually disk-seek or disk I/O bound, so it is difficult to keep it from hogging the disk resources. A pool based on mirror devices will behave much more nicely while being scrubbed than one based on RAIDz2.
Experience
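(For reference, a minimal sketch of driving a scrub by hand; 'tank' is an illustrative pool name. If a running scrub is hurting production I/O, it can be stopped and rerun later:)

  zpool scrub tank       # start scrubbing pool 'tank'
  zpool status tank      # shows scrub progress so far
  zpool scrub -s tank    # stop the scrub if it is hogging the disks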
Greetings All,
This might be an old question!
Does anyone know how to use ZFS with MySQL, i.e. how to make MySQL use a ZFS file system, and how to point it at tank/myzfs?
Thanks
--
Abdullah Al-Dahlawi
George Washington University
Department of Electrical & Computer Engineering
Check
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
Look up the inode number of README. (for example, ls -i README)
(suppose it's inode 12345)
find /tank/.zfs/snapshot -inum 12345
Problem is, the find
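(Putting those two steps together, a minimal sketch; README and /tank are the example names from the post. The catch, as noted, is that find must walk every snapshot directory, which can take a very long time on a pool with many snapshots:)

  ls -i README                          # prints the inode number, e.g. 12345
  find /tank/.zfs/snapshot -inum 12345  # look for that inode in every snapshot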
On 28.04.10 14:06, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
Look up the inode number of README. (for example, ls -i README)
(suppose it's inode 12345)
find
The original drive pool was configured with 144 1TB drives and a hardware RAID 0 stripe across every 4 drives to create 4TB LUNs. These LUNs were then combined into 6 raidz2 vdevs and added to the ZFS pool. I would like to delete the original hardware RAID 0 stripes and add the 144 drives
We are running the latest dev release.
I was hoping to just mirror the ZFS volumes and not the whole pool. The original pool is around 100TB in size. The spare disks I have come up with will total around 40TB. We only have 11TB of space in use on the original ZFS pool.
On Apr 28, 2010, at 1:34 AM, Tonmaus wrote:
ZFS scrub needs to access all written data on all disks and is usually disk-seek or disk I/O bound, so it is difficult to keep it from hogging the disk resources. A pool based on mirror devices will behave much more nicely while being scrubbed
On Wed, Apr 28 at 1:34, Tonmaus wrote:
ZFS scrub needs to access all written data on all disks and is usually disk-seek or disk I/O bound, so it is difficult to keep it from hogging the disk resources. A pool based on mirror devices will behave much more nicely while being scrubbed than one
On Apr 28, 2010, at 6:40 AM, Wolfraider wrote:
We are running the latest dev release.
I was hoping to just mirror the ZFS volumes and not the whole pool. The original pool is around 100TB in size. The spare disks I have come up with will total around 40TB. We only have 11TB of space
On Apr 28, 2010, at 6:37 AM, Wolfraider wrote:
The original drive pool was configured with 144 1TB drives and a hardware RAID 0 stripe across every 4 drives to create 4TB LUNs.
For the archives, this is not a good idea...
These LUNs were then combined into 6 raidz2 vdevs and added to the ZFS
On Apr 28, 2010, at 6:37 AM, Wolfraider wrote:
The original drive pool was configured with 144 1TB drives and a hardware RAID 0 stripe across every 4 drives to create 4TB LUNs.
For the archives, this is not a good idea...
Exactly. This is the reason I want to blow away all the old configuration
Mirrors are made with vdevs (LUs or disks), not pools. However, the vdev attached to a mirror must be the same size (or nearly so) as the original. If the original vdevs are 4TB, then a migration to a pool made with 1TB vdevs cannot be done by replacing vdevs (mirror method).
-- richard
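(To illustrate the point, a minimal sketch with hypothetical device names. zpool attach works against a single-disk or mirror vdev only, never raidz, and it refuses a device smaller than the original:)

  zpool attach tank c1t0d0 c2t0d0   # mirror c1t0d0 with the same-size c2t0d0
  # attaching a device smaller than c1t0d0 is rejected, hence the 4TB-vs-1TB problem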
Hi Eric,
While there may be some possible optimizations, I'm sure everyone would love the random performance of mirror vdevs, combined with the redundancy of raidz3 and the space of a raidz1. However, as in all systems, there are tradeoffs.
I think we all may agree that the topic here is
On 28 April, 2010 - Eric D. Mudama sent me these 1,6K bytes:
On Wed, Apr 28 at 1:34, Tonmaus wrote:
ZFS scrub needs to access all written data on all disks and is usually disk-seek or disk I/O bound, so it is difficult to keep it from hogging the disk resources. A pool based on mirror
Looks like I've hit this bug: http://bugs.opensolaris.org/view_bug.do?bug_id=6782540 However, none of the workarounds listed in that bug, or any of the related bugs, works. :(
Going through the zfs-discuss and freebsd-fs archives, I see that others have run into this issue, and managed to solve
On Wed, 28 Apr 2010, Richard Elling wrote:
the disk resources. A pool based on mirror devices will behave much more nicely while being scrubbed than one based on RAIDz2.
The data I have does not show a difference in the disk loading while scrubbing for different pool configs. All HDDs
Hi Abdullah,
You can review the ZFS/MySQL presentation at this site:
http://forge.mysql.com/wiki/MySQL_and_ZFS#MySQL_and_ZFS
We also provide some ZFS/MySQL tuning info on our wiki, here:
http://hub.opensolaris.org/bin/view/Community+Group+zfs/zfsanddatabases
Thanks,
Cindy
On 04/28/10
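(In outline, the usual recipe is just a dedicated dataset plus a datadir change; a minimal sketch using the tank/myzfs name from the original question. The recordsize=16k setting is the commonly cited match for InnoDB's 16K pages, so verify it against your storage engine and version:)

  zfs create -o recordsize=16k -o mountpoint=/var/mysql tank/myzfs
  chown mysql:mysql /var/mysql
  # then point MySQL at it in my.cnf:
  #   [mysqld]
  #   datadir = /var/mysql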
On Apr 28, 2010, at 8:39 AM, Wolfraider wrote:
Mirrors are made with vdevs (LUs or disks), not pools. However, the vdev attached to a mirror must be the same size (or nearly so) as the original. If the original vdevs are 4TB, then a migration to a pool made with 1TB vdevs cannot be done
adding on...
On Apr 28, 2010, at 8:57 AM, Tomas Ögren wrote:
On 28 April, 2010 - Eric D. Mudama sent me these 1,6K bytes:
On Wed, Apr 28 at 1:34, Tonmaus wrote:
ZFS scrub needs to access all written data on all disks and is usually disk-seek or disk I/O bound, so it is difficult to keep
For this type of migration a downtime is required. However, it can be reduced to somewhere between a few hours and a few minutes, depending on how much change needs to be synced.
I have done this many times on a NetApp filer, but it can be applied to ZFS as well. The first thing to consider is to only do the migration once
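(A minimal sketch of that sync-then-cutover pattern in ZFS terms, with illustrative pool and dataset names; only the final incremental needs the writers stopped:)

  zfs snapshot tank/data@mig1                          # full copy, system online
  zfs send tank/data@mig1 | zfs receive newpool/data
  zfs snapshot tank/data@mig2                          # later: sync only the changes
  zfs send -i @mig1 tank/data@mig2 | zfs receive -F newpool/data
  # cutover: stop writers, take a final snapshot, send the last increment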
I took a snapshot of one of my Oracle filesystems this week, and when someone tried to add data to it, it filled up.
I tried to remove some data, but the snapshot seemed to keep reclaiming it as I deleted it. I had taken the snapshot days earlier. Does this make sense?
So, on the point of not needing a migration back:
Even at 144 disks, they won't all be in the same RAID group. So figure out the best RAID group size for you, since ZFS doesn't support changing the number of disks in a raidz yet. I usually use the number of slots per shelf, or a good number is
I took a snapshot of one of my Oracle filesystems this week, and when someone tried to add data to it, it filled up.
I tried to remove some data, but the snapshot seemed to keep reclaiming it as I deleted it. I had taken the snapshot days earlier. Does this make sense?
Snapshots are completed
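(This is expected: deleting files cannot free blocks that a snapshot still references, so the space stays accounted to the snapshot until it is destroyed. A minimal sketch for confirming and recovering the space; dataset and snapshot names are illustrative:)

  zfs list -o space tank/oradata    # USEDSNAP column shows space held by snapshots
  zfs destroy tank/oradata@monday   # destroying the snapshot releases those blocks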
Sorry, I need to correct myself. Mirroring LUNs on the Windows side to switch the storage pool under it is a great idea, and I think you can do this without downtime.
On Wed, 28 Apr 2010, Jim Horng wrote:
So, on the point of not needing a migration back:
Even at 144 disks, they won't all be in the same RAID group. So figure out the best RAID group size for you, since ZFS doesn't support changing the number of disks in a raidz yet. I usually use the number of
New to Solaris/ZFS and having a difficult time getting ZFS, NFS and ACLs all working together properly. I am trying to access/use ZFS shared filesystems from a Linux client. When I access the dirs/files on the Linux client, my permissions do not carry over, nor do the newly created files, and I can
3 shelves with 2 controllers each. 48 drives per shelf. These are Fibre Channel attached. We would like all 144 drives added to the same large pool.
I understand your point. However, in most production systems the shelves are added incrementally, so it makes sense to tie the group size to the number of slots per shelf. And in most cases, being able to withstand a shelf failure is too much overhead on the storage anyway. For example, in his case he would have to configure 1+0
On 28 apr 2010, at 14.06, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
Look up the inode number of README. (for example, ls -i README)
(suppose it's inode 12345)
find
On Wed, April 28, 2010 10:16, Eric D. Mudama wrote:
On Wed, Apr 28 at 1:34, Tonmaus wrote:
ZFS scrub needs to access all written data on all disks and is usually disk-seek or disk I/O bound, so it is difficult to keep it from hogging the disk resources. A pool based on mirror devices will
Sorry for the double post, but I think this was better suited for the zfs forum.
I am running OpenSolaris snv_134 as a file server in a test environment, testing deduplication. I am transferring a large amount of data from our production server using rsync.
The Data pool is on a separate raidz1-0
On Apr 28, 2010, at 8:00 PM, Freddie Cash wrote:
Looks like I've hit this bug: http://bugs.opensolaris.org/view_bug.do?bug_id=6782540 However, none of the workarounds listed in that bug, or any of the related bugs, works. :(
Going through the zfs-discuss and freebsd-fs archives, I see
On Wed, 28 Apr 2010, Jim Horng wrote:
I understand your point. However, in most production systems the shelves are added incrementally, so it makes sense to tie the group size to the number of slots per shelf. And in most cases, being able to withstand a shelf failure is too much overhead on the storage anyway. For example, in
I had a pool which I created using zfs-fuse, which uses a March code base (the exact version I don't know; if someone can tell me the command to find the zpool format version, I would be grateful).
I exported it, and now I have tried to import it in OpenSolaris, which is running Feb bits because it
On Wed, Apr 28, 2010 at 1:51 PM, Jim Horng jho...@stretchinc.com wrote:
I have now turned dedup off on the pools, and the rsync seems to be going further than before. Is this a known bug? Is there a workaround for this without rebooting the system? I am not a Solaris expert and I haven't
Hi Mary Ellen,
We were looking at this problem and are unsure what the problem is...
To rule out NFS as the root cause, could you create and share a test ZFS
file system without any ACLs to see if you can access the data from the
Linux client?
Let us know the result of your test.
Thanks,
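(A minimal sketch of such a test, names illustrative; the point is to share a plain filesystem with no ACLs set and see whether ownership and modes survive the round trip:)

  zfs create -o sharenfs=rw tank/acltest
  chmod 755 /tank/acltest
  # on the Linux client:
  #   mount -t nfs server:/tank/acltest /mnt
  #   touch /mnt/testfile; ls -l /mnt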
On 04/29/10 10:21 AM, devsk wrote:
I had a pool which I created using zfs-fuse, which uses a March code base (the exact version I don't know; if someone can tell me the command to find the zpool format version, I would be grateful).
Try [zfs|zpool] upgrade.
These commands will tell you
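(Spelled out, assuming a pool named tank; with no arguments, zpool upgrade also lists any pools still at older on-disk versions:)

  zpool upgrade            # lists pools formatted with older versions
  zpool get version tank   # the pool's on-disk format version
  zfs get version tank     # the filesystem (ZPL) version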
On Thu, 29 Apr 2010, Ian Collins wrote:
You can create pools and filesystems with older versions if you want them to
be backwards compatible. I have done this when I was sending data to a
backup server running an older Solaris version.
From the zpool manual page, it seems that it should be
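(A minimal sketch, with illustrative names and version numbers; check zpool upgrade -v and the manual pages for which version matches the older release you need to stay compatible with. Whether the zfs version property can be set at create time may depend on the release:)

  zpool create -o version=14 backup c2t0d0   # pool at an older on-disk version
  zfs create -o version=3 backup/data        # filesystem at an older ZPL version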
One quick question: when will the next formal release be out?
Does Oracle plan to support the OpenSolaris community as Sun did before?
What is the direction of ZFS in the future?
From: Ragnar Sundblad [mailto:ra...@csc.kth.se]
Sent: Wednesday, April 28, 2010 3:49 PM
What indicators do you have that ONTAP/WAFL has inode-name lookup functionality?
I don't have any such indicator, and if that's the way my words came out, sorry for that. Allow me to clarify:
In
On 04/29/10 11:02 AM, autumn Wang wrote:
One quick question: when will the next formal release be out?
Of what?
Does Oracle plan to support the OpenSolaris community as Sun did before? What is the direction of ZFS in the future?
Do you really expect answers to those questions
On Apr 29, 2010, at 3:03 AM, Edward Ned Harvey wrote:
From: Ragnar Sundblad [mailto:ra...@csc.kth.se]
Sent: Wednesday, April 28, 2010 3:49 PM
What indicators do you have that ONTAP/WAFL has inode-name lookup functionality?
I don't have any such indicator, and if that's the way my words
This is not a performance issue. The rsync will hang hard, and one of the child processes cannot be killed (I assume it's the one running on the ZFS). By "the command gets slower" I am referring to the output of the file system commands (zpool, zfs, df, du, etc.) from a different shell. I left the
On Wed, Apr 28, 2010 at 5:09 PM, Jim Horng jho...@stretchinc.com wrote:
This is not a performance issue. The rsync will hang hard, and one of the child processes cannot be killed (I assume it's
I've seen a similar issue on a b133 host that has a large DDT, but I haven't waited very long to see
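(When a dedup pool wedges like this, the size of the dedup table is the first thing worth checking, since a DDT that outgrows RAM makes every dedup write painfully slow. A minimal sketch; the pool name is illustrative:)

  zpool status -D tank   # summarizes DDT entries and their in-core/on-disk size
  zdb -DD tank           # prints a full DDT histogram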
Ian: Of course they expected answers to those questions here. It seems many people do not read the forums or mailing list archives to see their questions previously asked (and answered) many, many times over, or the flames that erupt from them. It's scary how often people don't check historical
[...]
There is a way to do this kind of object-to-name mapping, though there's no documented public interface for it. See the zfs_obj_to_path() function and the ZFS_IOC_OBJ_TO_PATH ioctl.
I think it should also be possible to extend it to handle multiple names (in case of multiple hardlinks) in
On 28/04/10 11:07 AM, Brad wrote:
What's the default size of the file system cache for Solaris 10 x86, and can it be tuned?
I read various posts on the subject and it's confusing...
http://dlc.sun.com/osol/docs/content/SOLTUNEPARAMREF/soltuneparamref.html
should have all the answers you need.
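(In short: by default the ARC is allowed to grow until it nears the size of physical memory, shrinking under pressure, and the usual cap on Solaris 10 is the zfs_arc_max tunable in /etc/system, applied at the next reboot. A minimal sketch; the 4 GB figure is only an example:)

  # in /etc/system, cap the ZFS ARC at 4 GB (value in bytes):
  set zfs:zfs_arc_max = 0x100000000

  kstat -n arcstats        # watch the live ARC size ('size') and target ('c')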
Today, Compellent announced their zNAS addition to their unified storage line. zNAS uses ZFS behind the scenes.
http://www.compellent.com/Community/Blog/Posts/2010/4/Compellent-zNAS.aspx
Congrats, Compellent!
-- richard
ZFS storage and performance consulting at http://www.RichardElling.com
ZFS
3 shelves with 2 controllers each. 48 drives per shelf. These are Fibre Channel attached. We would like all 144 drives added to the same large pool.
I would do either 12- or 16-disk raidz3 vdevs and spread the disks across controllers within each vdev. You may also want to leave at least 1 spare
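(A minimal sketch of one such vdev with hypothetical device names: two disks from each of the six controllers, so losing a single controller costs the vdev only two disks, within raidz3's three-disk tolerance:)

  zpool create tank \
    raidz3 c1t0d0 c1t1d0 c2t0d0 c2t1d0 c3t0d0 c3t1d0 \
           c4t0d0 c4t1d0 c5t0d0 c5t1d0 c6t0d0 c6t1d0 \
    spare c1t2d0
  # repeat 'zpool add tank raidz3 ...' for each additional 12-disk group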
On Apr 28, 2010, at 9:48 PM, Jim Horng wrote:
3 shelves with 2 controllers each. 48 drives per shelf. These are Fibre Channel attached. We would like all 144 drives added to the same large pool.
I would do either 12- or 16-disk raidz3 vdevs and spread the disks across controllers
I'm looking for a way to back up my entire system, the rpool ZFS pool, to an external HDD so that it can be recovered in full if the internal HDD fails. Previously, with Solaris 10 using UFS, I would use ufsdump and ufsrestore, which worked so well I was very confident with it. Now ZFS doesn't
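(The rough ZFS equivalent is a recursive snapshot of rpool sent to a pool on the external disk. A minimal sketch, assuming the external pool is named 'backup'; unlike ufsrestore, a full recovery also involves boot-media steps, so test the restore path before trusting it:)

  zfs snapshot -r rpool@backup1
  zfs send -R rpool@backup1 | zfs receive -Fdu backup
  # -R preserves descendant datasets and properties; -u skips mounting on receive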