There are *lots* of options for configuring a Thumper; what you choose really
depends on the kind of performance you want. I found these sites incredibly
helpful in working out what was best for us:
http://blogs.sun.com/relling/entry/zfs_raid_recommendations_space_performance
Gaaah, no idea what happened to that. It looked ok in preview, but it seems
the message board is adding odd characters to my text. Trying again:
RAID-Z2
Disks per set | Sets | Write performance | Read performance | Read IOPS
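The table compares RAID-Z2 layouts by disks per set. As a rough sketch of the space side of that trade-off (the performance columns depend on your hardware, and the disk counts below are illustrative, not from the original table):

```shell
# RAID-Z2 dedicates two disks' worth of each set to parity, so an
# N-disk set yields N-2 disks of usable space.
for n in 5 6 8 11; do
  data=$((n - 2))
  pct=$((data * 100 / n))
  echo "$n disks/set -> $data data disks (${pct}% usable)"
done
```

Wider sets waste less space on parity but take longer to resilver; that is the knob the table was exploring.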
Hey folks,
I can see there's been a fair bit of discussion on this topic in the past;
does anybody have any feedback on the best way to do this?
We're looking to use ZFS and Samba to serve files to our Windows clients, which
means we'll be using NFSv4 permissions. While the data will
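For what it's worth, working with NFSv4 ACLs on a ZFS filesystem from Solaris looks roughly like this; the dataset, file, and user names are made up for illustration:

```shell
# Inspect the NFSv4 ACL on a file (Solaris ls supports -v to list ACEs).
ls -v /tank/share/report.doc

# Grant a user read/write via an NFSv4 ACE; `chmod A+` prepends an entry.
chmod A+user:alice:read_data/write_data/append_data:allow /tank/share/report.doc

# Control how ZFS reconciles chmod(2) mode changes with existing ACEs.
zfs set aclmode=passthrough tank/share
zfs set aclinherit=passthrough tank/share
```

The aclinherit/aclmode settings matter a lot when Windows clients manage permissions through Samba, since a stray chmod can otherwise strip the ACEs.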
Hi,
I'm testing ZFS+CIFS server using nexenta core rc4, everything seems fine and
speed is also ok, but DOS programs don't see sub-dirs (command.com sees them,
though).
I've set casesensitivity=insensitive in the ZFS filesystem that I'm sharing.
I've made this test using Windows2000,
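On the casesensitivity point: it is a create-time-only property, so a sketch of setting up a share for Windows clients would be as follows (pool and dataset names assumed):

```shell
# casesensitivity can only be set when the filesystem is created;
# it cannot be changed later with `zfs set`.
zfs create -o casesensitivity=insensitive tank/winshare

# With the in-kernel CIFS server you would then share it like this;
# with Samba, point the smb.conf share path at the mountpoint instead.
zfs set sharesmb=on tank/winshare
```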
Just a reminder that the ZFS Crypto project is going for PSARC
commitment review this week.
Meeting Details are here:
http://opensolaris.org/os/community/arc/announcements/#2008-01-31_OpenSolaris_ARC_Agenda_for_February_6__2008
Submitted review materials are here:
Paul B. Henson wrote:
On Mon, 4 Feb 2008, Darren J Moffat wrote:
> At this time the libzfs C interfaces are not stable, public, documented
> interfaces, so there are no Perl bindings for them either.
> The commands are the only stable and documented interfaces to ZFS at this
> time.
Perhaps not
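Since the commands are the stable interface, the usual workaround is to drive them from a script using the parseable output flags: -H drops headers and tab-separates fields, and -p emits exact numbers. A sketch, with canned output standing in for a live `zfs list` since there may be no pool here (dataset names and sizes are made up):

```shell
# In real use this would be: zfs list -H -p -o name,used,avail
# Canned tab-separated sample output:
sample=$(printf 'tank/home\t1024\t4096\ntank/media\t2048\t8192')

# Split each line on tabs, exactly as a Perl or shell wrapper would.
printf '%s\n' "$sample" | while IFS="$(printf '\t')" read -r name used avail; do
  echo "$name: used=$used avail=$avail"
done
```

The same pattern works for `zpool list -H` and `zfs get -H -p`, which is as close to a scripting API as ZFS offered at the time.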
Hi all,
I posted to osol-discuss with this but got no resolution - I'm hoping a
more focused mailing list will yield better results.
I have a ZFS filesystem (zpool version 8, on SXDE 9/07) on which there
was a directory that contained a large number of files (~2 million).
After deleting these
Maurilio Longo wrote:
> Hi,
> I'm testing ZFS+CIFS server using nexenta core rc4, everything seems fine and
> speed is also ok, but DOS programs don't see sub-dirs (command.com sees them,
> though).
> I've set casesensitivity=insensitive in the ZFS filesystem that I'm sharing.
> I've made this
We're currently evaluating ZFS prior to (hopefully) rolling it out across our
server room, and have managed to lock up a server after connecting to an iSCSI
target, and then changing the IP address of the target.
Basically we have two test Solaris servers running, and I followed the
I don't think this is so much a ZFS problem as an iSCSI initiator
problem. Are you using static configs or SendTargets discovery? There
are many reports of SendTargets discovery misbehavior in the
storage-discuss forum.
To recover:
1. Boot into single user from CD
2. Mount the root slice on /a
3.
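On the static-config option: switching the initiator away from SendTargets discovery looks roughly like this (the target IQN and portal address are placeholders):

```shell
# Disable SendTargets discovery and pin a static target instead.
iscsiadm modify discovery --sendtargets disable
iscsiadm add static-config iqn.1986-03.com.sun:02:example-target,192.168.1.10:3260
iscsiadm modify discovery --static enable

# Verify what the initiator now sees.
iscsiadm list target
```

With a static config, a change to the target's IP means updating the entry explicitly rather than the initiator re-discovering (or failing to re-discover) it on its own.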
Mark Shellenbaum wrote:
> Maurilio Longo wrote:
>> Hi,
>> I'm testing ZFS+CIFS server using nexenta core rc4, everything seems fine
>> and speed is also ok, but DOS programs don't see sub-dirs (command.com sees
>> them, though).
>> I've set casesensitivity=insensitive in the ZFS filesystem that I'm
Sam,
This sounds like something for the zfs-discuss list. Cross-posting...
Ed
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On 2/4/2008 5:40 PM, Jeremy Kister wrote:
1. What do I have to do (short of replacing the seemingly good disk) to
get c3t8d0 back online?
I did find a related thread:
http://mail.opensolaris.org/pipermail/zfs-discuss/2006-June/032179.html
but the thread ended without a resolution.
also,
This may not be a ZFS issue, so please bear with me!
I have 4 internal drives that I have striped/mirrored with ZFS, and an
application server which is reading/writing to hundreds of thousands of files
on it, thousands of files at a time.
If 1 client uses the app server, the transaction
Some more information about the system. NOTE: CPU utilization never goes above
10%.
Sun Fire v40z
4 x 2.4 GHz proc
8 GB memory
3 x 146 GB Seagate Drives (10k RPM)
1 x 146 GB Fujitsu Drive (10k RPM)
Ed Pate wrote:
> Sam,
> This sounds like something for the zfs-discuss list. Cross-posting...
What's the question?
Ian
On 2/5/08, Sam [EMAIL PROTECTED] wrote:
> Hi,
> I posted in the Solaris install forum as well about the fileserver I'm
> building for media files but wanted to ask more specific questions about zfs
> here. The setup is 8x500GB SATAII drives to start and down the road another
> 4x750 SATAII drives,
Cross-posting again... with the original this time :-)
sumgshiz (http://www.opensolaris.org/jive/profile.jspa?userID=92042) wrote:
> Hi All,
> I stumbled on Solaris after talking to my CEO about
> the server I was setting up and his eyes got all big
> and he started going on about Solaris and ZFS.
Ed Pate wrote:
> Hi All,
> I stumbled on Solaris after talking to my CEO about
> the server I was setting up and his eyes got all big
> and he started going on about Solaris and ZFS. I'm
> experienced with Unix and Linux so I have some idea
> of what is going to go into this but I've already
> tried
For what it's worth, I configured a T5220 this week with a six-disk,
three-mirror zpool (three top-level mirror vdevs).
Used only internal disks.
When pushing to disk, I was seeing bursts of 70-odd MB/s per spindle,
with all 6 spindles sustaining that, so around 350 MB/s in aggregate.
Read performance
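The layout described (three top-level mirror vdevs) would be created along these lines; the device names are placeholders for whatever format(1M) shows on your box:

```shell
# Three two-way mirrors striped together. ZFS stripes writes across
# the top-level vdevs, which is where the aggregate bandwidth comes from.
zpool create tank \
  mirror c1t0d0 c1t1d0 \
  mirror c1t2d0 c1t3d0 \
  mirror c1t4d0 c1t5d0

zpool status tank
```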
Hi Al,
Thanks for the tips. I've maxed out the memory on the board now (up to 8 GB
from 4 GB), and you are dead right about it being cheap to do so. I'd upgraded
the power supply because I thought that was an issue, since the original
couldn't provide enough start-up current, but that didn't make much
> So, Solaris, this post is essentially asking for any
> obvious suggestions or pointing out pitfalls I may be
> getting myself into, just a nudge in the right
> direction. Later I'm sure there will be more direct
> questions :)
As Ian suggested, definitely search the archives because this question
Haik Aftandilian wrote:
> If this was for my home server, I would go with Solaris Express or another
> OpenSolaris distribution just because I like to be closer to the cutting
> edge. For example, the new beta/experimental CIFS server was recently
> integrated into OpenSolaris. Typically, it would
I was curious about how many filesystems one server could practically serve
via NFS, so I did a little empirical testing.
Using an x4100M2 server running S10U4 x86, I created a pool from a slice of
the hardware RAID array built from the two internal disks, and set
sharenfs=on for the
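A test along those lines can be scripted by creating filesystems in a loop, with sharenfs inherited from the parent (the pool name and filesystem count here are made up):

```shell
# Children inherit sharenfs=on from the parent, so each new
# filesystem is exported over NFS as it is created.
zfs create -o sharenfs=on tank/export
i=1
while [ "$i" -le 1000 ]; do
  zfs create "tank/export/fs$i"
  i=$((i + 1))
done
```

Timing how long each `zfs create` (and the corresponding share) takes as the count grows is the interesting measurement.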
Thanks, guys.
Tim answered the general questions, and Haik, thanks for pointing out Solaris
Express and CIFS; I'm downloading that now instead, much closer to what I need.
Sam
> -I'm under the impression that ZFS+(ZFS2) is similar
> to RAID6, so for the initial 8x500GB two drives would
> be sucked into parity so I'd have a 3TB volume with
> the ability to lose two discs, no?
RAIDZ2 is the term you're looking for; and yes, you'd wind up with 3 TB of
usable space.
> -I
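The arithmetic behind that answer, for anyone following along:

```shell
# 8 x 500 GB in RAID-Z2: two disks' worth goes to parity,
# leaving 6 data disks.
disks=8; size_gb=500; parity=2
usable_gb=$(( (disks - parity) * size_gb ))
echo "${usable_gb} GB usable"   # 6 * 500 = 3000 GB, i.e. the 3 TB figure
```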
William Fretts-Saxton william.fretts.saxton at sun.com writes:
> Some more information about the system. NOTE: CPU utilization never
> goes above 10%.
> Sun Fire v40z
> 4 x 2.4 GHz proc
> 8 GB memory
> 3 x 146 GB Seagate Drives (10k RPM)
> 1 x 146 GB Fujitsu Drive (10k RPM)
And what version of