Hi.
I have tested zfs for a while and am very impressed with the ease with
which one can create filesystems (tanks). I'm about to try it out on an
atabeast with 42 ata 400 GB disks for internal use, mainly as a
fileserver. If this goes well (as I assume it will) I'll consider
deploying zfs on a larger
Our main storage is a HDS 9585V Thunder with vxfs and raid5 on 400 GB
sata disks handled by the storage system. If I were to migrate to zfs,
that would mean 390 JBODs.
How so?
Do you mean whether the hds can be turned into a (rather large) jbod
storage system? I just found out yesterday that
Hi.
I have a nexsan atabeast with 2 raid-controllers, each with 21 disks @ 400 GB.
Each raid-controller has five raid-5 LUNs and one hotspare. The
Solaris release is 2006/11. I have created a single raidz2 tank from the
ten LUNs.
The raid-controller is connected to a Dell PE 2650 with two qlogic
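For reference, creating a single raidz2 pool from ten LUNs looks roughly
like this (a sketch only; the device names are hypothetical, not the ones
from the original post):
zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 \
    c2t5d0 c2t6d0 c2t7d0 c2t8d0 c2t9d0
zpool status tank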
Try throttling back the max # of IOs. I saw a number of errors similar to this
on Pillar and EMC.
In /etc/system, set:
set sd:sd_max_throttle=20
and reboot.
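One way to confirm the value took effect after the reboot (a sketch; the
output format varies by release):
echo "sd_max_throttle/D" | mdb -k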
I have added the setting and rebooted. I'm doing the same tests now
and will know in a day or so if I can avoid the error (from the
And then you complain you can't get zfs or nvidia or wifi or ...
drivers, because you want those drivers and you want to force those
companies to give them to you under GPLv2. Some companies try to work
around that problem, and there's still no consensus on whether it's
legal or not - but everyone is happy
Gents, how come this thread - without any relation to zfs at all - is
being discussed on this list? Do move this irrelevant thread to another
forum.
My intention in subscribing to this list was *not* to read about
laymen's perceptions of this or that license!
regards
Claus
On 4/18/07, Shawn Walker
Over the weekend I got ZFS up and running under FreeBSD and have
had much the same experience with it that I have with Solaris - it works
great out of the box and once configured, it is easy to forget about.
So far the only real difference is that anything you might tune via
/etc/system (or mdb) is
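On FreeBSD the rough equivalent of /etc/system is /boot/loader.conf; for
example (a sketch with purely illustrative values, not recommendations):
vm.kmem_size="1024M"
vfs.zfs.arc_max="512M"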
I'm currently using 4 TB partitions with vxfs. When hosted on FreeBSD
I was limited to 2 TB but using UFS2/FreeBSD was impractical for
several reasons. With vxfs 4 TB is a practical limit, when files are
Could you please give some hints on these reasons? I only know that
FreeBSD's UFS2 is not
Speaking of backup software: I heard that Legato supports zfs, but is
anyone using zfs with Legato or some other backup software that can
handle multi-TB filesystems (in production)?
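As a zfs-native point of comparison (a sketch only; host and dataset
names are made up), snapshot replication can move multi-TB filesystems
without a traditional backup agent:
zfs snapshot tank/data@backup-2007-06-01
zfs send tank/data@backup-2007-06-01 | ssh backuphost zfs receive backup/data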
--
regards
Claus
Hi.
I'm managing a HDS storage system which is slightly larger than 100 TB,
of which we have used approx. 3/4. We use vxfs. The storage system is
attached to a Solaris 9 host on sparc via a fibre switch. The storage is
shared via nfs to our webservers.
If I were to replace vxfs with zfs I could utilize
I'm all set for doing a performance comparison between Solaris/ZFS and
FreeBSD/ZFS. I spent the last few weeks on FreeBSD/ZFS optimizations and I
think I'm ready. The machine is 1xQuad-core DELL PowerEdge 1950, 2GB
RAM, 15 x 74GB-FC-10K disks accessed via 2x2Gbit FC links. Unfortunately the
links to disks
I have just (re)installed FreeBSD amd64 current with gcc 4.2, with src
from May 21st, on a dual Dell PE 2850. Does the post-gcc-4.2 current
include all your zfs optimizations?
I have commented out INVARIANTS, INVARIANTS_SUPPORT, WITNESS and
WITNESS_SKIPSPIN in my kernel and recompiled with
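The usual rebuild sequence for such a benchmark kernel looks something
like this (a sketch; the kernel config name is hypothetical, and the
stock debug option is spelled INVARIANT_SUPPORT in the config file):
# in the kernel config file, with the debug options commented out:
#   options INVARIANTS
#   options INVARIANT_SUPPORT
#   options WITNESS
#   options WITNESS_SKIPSPIN
cd /usr/src
make buildkernel KERNCONF=BENCH && make installkernel KERNCONF=BENCH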
Won't disabling the ZIL minimize the chance of a consistent zfs
filesystem if - for some reason - the server did an unplanned reboot?
The ZIL in ZFS is only used to speed up various workloads; it has
nothing to do with file system consistency. ZFS is always consistent on
disk no matter if you
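For completeness, the tunable being discussed was, in builds of that era,
a single line in /etc/system (a sketch; disabling the ZIL does not
corrupt the pool, but synchronous-write guarantees to NFS clients are lost):
set zfs:zil_disable = 1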
Perhaps Jonathan Schwartz really didn't want ZFS in OS X - Solaris competition
- and he knew that if he did pre-announce ZFS in OS X that Steve Jobs would
drop it just to get back at him. Maybe this was intentionally done by Schwartz
to keep ZFS out of a competing OS. Just a thought.
Whatever
I would suggest that this thread be moved to an apple-related
list, since it has nothing to do with zfs anymore.
Hmm, I don't know how you figure this has nothing to do with zfs. This is all
about zfs, and it seems to me zfs-discuss is the perfect list for it.
Because the discussion
Hi.
Only indirectly related to zfs. I need to test disk usage/performance
on zfs shared via nfs. I have installed nevada b64a. Historically the
uid/gid for user www has been 16/16, but when I try to add user/group www
via smc with the value 16 I'm not allowed to do so.
I'm coming from a FreeBSD
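If smc refuses ids below its reserved range, the command-line tools will
usually accept them; a sketch, assuming the user and group don't already exist:
groupadd -g 16 www
useradd -u 16 -g 16 www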
Hi.
I have many small - mostly jpg - files where the original file is
approx. 1 MB and the generated thumbnail is approx. 4 KB. The files
are currently on vxfs. I have copied all files from one partition onto
a zfs counterpart. The vxfs partition occupies 401 GB and the zfs one 449 GB. Most
files uploaded are
Will a different volblocksize (during creation of the partition) make
better use of the available diskspace? Will (meta)data require less
space if compression is enabled?
Just re-read the evil-tuning-guide: metadata is already compressed
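For a filesystem (as opposed to a zvol) the per-dataset knobs are
recordsize and compression; roughly (the dataset name is hypothetical,
and recordsize only affects files written after the change):
zfs set recordsize=16K tank/images
zfs set compression=on tank/images
zfs get recordsize,compression,compressratio tank/images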
So the 1 MB files are stored as ~8 x 128K recordsize.
Because of
5003563 use smaller tail block for last block of object
The last block of your file is partially used. It will depend
on your filesize distribution, but without that info we can
only guess that we're wasting an avg of
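Rough arithmetic, assuming file sizes are spread evenly within the last
record: the expected tail waste is half of a 128K record, i.e. about
64 KB per ~1 MB file, on the order of 6% of the data; the 4 KB thumbnails
are written as single small blocks and waste comparatively little.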
The files are approx. 1 MB with a thumbnail of approx. 4 KB.
I tried to search for this but only saw references to this and similar
threads. Is there a
Hi.
I read the zfs getting started guide at
http://www.opensolaris.org/os/community/zfs/intro/;jsessionid=A64DABB3DF86B8FDBF8A3E281C30B8B2.
I created zpool disk1 and created disk1/home and assigned /export/home
to disk1/home as the mountpoint. Then I created a user with 'zfs create
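The sequence from the getting-started guide looks roughly like this (a
sketch; the disk device and user name are placeholders):
zpool create disk1 c1t0d0
zfs create disk1/home
zfs set mountpoint=/export/home disk1/home
zfs create disk1/home/username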
Hi.
Just migrated to zfs on opensolaris. I copied data to the server using
rsync and got this message:
Oct 10 17:24:04 zetta ^Mpanic[cpu1]/thread=ff0007f1bc80:
Oct 10 17:24:04 zetta genunix: [ID 683410 kern.notice] BAD TRAP:
type=e (#pf Page fault) rp=ff0007f1b640 addr=fffecd873000
I can neither confirm nor deny that I can confirm or deny what somebody else
said.
http://www.techworld.com/storage/features/index.cfm?featureID=3728pagtype=samecatsamechan
-- richard
No problem, just say yes or no! :-)
--
regards
Claus
When lenity and cruelty play for a kingdom,
the gentlest gamester is the soonest winner.
Shakespeare
Hi.
I have created some zfs-partitions. First I create the
home/user-partitions. Beneath that I create additional partitions.
Then I do a chown -R for that user. These partitions are shared
using sharenfs=on. The owner- and group-id is 1009.
These partitions are visible as the user
Is the mount using NFSv4? If so, there is likely a misguided
mapping of the user/groups between the client and server.
While not including BSD info, there is a little bit on
NFSv4 user/group mappings at this blog:
http://blogs.sun.com/nfsv4
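A quick way to see which NFS version a mount negotiated is to check the
mount options on a Solaris client (a sketch):
nfsstat -m
# look for vers=3 or vers=4 in the options line for the filesystem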
It defaults to nfs ver. 3. As a side note, samba is
Did you mount both the parent and all the children on the client?
No, I just assumed that the sub-partitions would inherit the same
uid/gid as the parent. I have done a chown -R.
Ahhh, the issue is not permissions, but how the NFS server
sees the various directories to
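With NFSv3, each child filesystem is a separate share and has to be
mounted separately on the client; roughly (server name and paths are
placeholders, and a Solaris client would use mount -F nfs instead):
mount server:/export/home/user /mnt/user
mount server:/export/home/user/sub /mnt/user/sub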
Hi.
I've ordered an areca ARC-1680 sas adapter and a supermicro 3U storage
cabinet which can hold 16 sata disks. The disks are WD RAID Edition 2
GP - 1000GB (server edition). This will be jbod storage. My plan is
to create one zpool and add disks in chunks of 10 disks in raidz2 and
allocate one
zpool add -f external c12t0d0p0
zpool add -f external c13t0d0p0 (it wouldn't work without -f, and I believe
that's because the fs was online)
No, it had nothing to do with the pool being online. It was because a
single disk was being added to a pool with raidz2. The error message that
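For what it's worth, adding another raidz2 top-level vdev (rather than a
lone disk) would look roughly like this, with hypothetical device names:
zpool add external raidz2 c12t0d0p0 c12t1d0p0 c12t2d0p0 c12t3d0p0 c12t4d0p0 \
    c12t5d0p0 c12t6d0p0 c12t7d0p0 c12t8d0p0 c12t9d0p0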
How is your setup working? Just curious!
Thx.
NV
Waiting for some parts. :-/ But as soon as I have some information
I'll post it. :-)
--
regards
Claus
When lenity and cruelty play for a kingdom,
the gentlest gamester is the soonest winner.
Shakespeare
This is the first time I have tried nfs with zfs. I shared the zfs filesystem
with nfs, but I can't write to the files even though I mount it as read-write.
This is on Solaris 10 update 4. I wonder if there is a bug?
---server (sdw2-2)
#zfs create -o sharenfs=on data/nfstest
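A minimal client-side check, assuming the dataset keeps its default
mountpoint /data/nfstest, the client is another Solaris box, and /mnt is
free (a sketch only):
mount -F nfs sdw2-2:/data/nfstest /mnt
touch /mnt/testfile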
Hi.
I installed solaris express developer edition (b79) on a supermicro
quad-core harpertown E5405 with 8 GB ram and two internal sata-drives.
I installed solaris onto one of the internal drives. I added an areca
arc-1680 sas-controller and configured it in jbod-mode. I attached an
external
I'm running the version that was supplied on the CD, this is
1.20.00.15 from 2007-04-04. The firmware is V1.45 from 2008-3-27.
Check the version at the Areca website. They may have a more recent driver
there. The dates are later for the 1.20.00.15 and there is a -71010
extension.
| How would you describe the difference between the data recovery
| utility and ZFS's normal data recovery process?
The data recovery utility should not panic my entire system if it runs
into some situation that it utterly cannot handle. Solaris 10 U5 kernel
ZFS code does not have this
See, originally when I read about zfs it said it could expand to petabytes or
something. But really, that's not as a single filesystem? That could only
be accomplished through combinations of pools?
I don't really want to have to even think about managing two separate
partitions - I'd
Are there any benchmarks or numbers showing the performance difference using
a 15 disk raidz2 zpool? I am fine sacrificing some performance but obviously
don't want to make the machine crawl.
It sounds like I could go with 15 disks evenly and have to sacrifice 3, but I
would have 1 parity
Has anyone had issues with creating ZFS pools greater than 1 terabyte (TB)?
I've created 11 LUNs from a Sun 2540 disk array (approx 1 TB each). The host
system (a Sun Enterprise 5220) recognizes the disks as each having 931 GB of
space. So that should be 10+ TB in size total. However when I
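For what it's worth, and assuming each LUN really is 10^12 bytes, the
931 GB figure is just the same size in binary units: 10^12 / 2^30 ≈ 931 GiB,
so 11 such LUNs come to roughly 10.0 TiB, i.e. about 11 TB decimal.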