On 26 April, 2009 - Gary Mills sent me these 1,3K bytes:
On Sun, Apr 26, 2009 at 05:02:38PM -0500, Tim wrote:
I have to ask though... why not just serve NFS off the filer to the
Solaris box? ZFS on a LUN served off a filer seems to make about as
much sense as sticking a ZFS based
ZFS blocksize is dynamic, power of 2, with a max size == recordsize.
Minor clarification: recordsize is restricted to powers of 2, but
blocksize is not -- it can be any multiple of the sector size (512 bytes).
For small files, this matters: a 37k file is stored in a 37k block.
For larger,
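As a sketch of that rounding rule (the sizes here are made-up example values, and the recordsize is the 128K default), a small file's block is the file size rounded up to the next 512-byte sector multiple, capped at recordsize:

```shell
# Round a small file's size up to the next 512-byte sector boundary,
# capped at the dataset's recordsize. All values are example numbers.
size=37888         # a "37k" file (37 * 1024 = 37888, already sector-aligned)
sector=512
recordsize=131072  # 128K default
block=$(( (size + sector - 1) / sector * sector ))
if [ "$block" -gt "$recordsize" ]; then
  block=$recordsize
fi
echo "$block"
```

For this 37k file the computed block is 37888 bytes, i.e. the file is stored in a single 37k block rather than a full 128K record.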
Create the zpool with:
zpool create <name> <data vdevs> log <dev(s)>   - for the ZIL
zpool create <name> <data vdevs> cache <dev(s)> - for the L2ARC
On Sat, Apr 25, 2009 at 11:13 PM, Richard Elling
richard.ell...@gmail.com wrote:
Gary Mills wrote:
On Fri, Apr 24, 2009 at 09:08:52PM -0700, Richard Elling wrote:
Gary Mills
On Mon, April 27, 2009 02:13, Tomas Ögren wrote:
On 26 April, 2009 - Gary Mills sent me these 1,3K bytes:
I prefer NFS too, but the IMAP server requires POSIX semantics.
I believe that NFS doesn't support that, at least NFS version 3.
What non-POSIXness are you referring to, or is it just
On Tue, Apr 21, 2009 at 12:34 PM, Alastair Neil ajn...@gmail.com wrote:
A very basic question. I have in recent releases of opensolaris found that
a script I use to create a large number of account home directories has been
failing because the script attempts to create and modify the
On Tue, Mar 31, 2009 at 8:47 PM, River Tarnell
ri...@loreley.flyingparchment.org.uk wrote:
Matthew Ahrens:
does this mean that without an account on the NFS server, a user cannot
see his current disk use / quota?
That's correct.
in this
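For anyone who does have an account on the server, the per-user numbers can be read there with the ZFS user-quota commands; a sketch with hypothetical dataset and user names:

```shell
# On the NFS server: list space used and quota for every user of a dataset
zfs userspace tank/home

# Query a single user's quota property directly
zfs get userquota@alice tank/home
```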
Hello Alastair,
Monday, April 27, 2009, 7:17:51 PM, you wrote:
On Tue, Apr 21, 2009 at 12:34 PM, Alastair Neil ajn...@gmail.com wrote:
A very basic question. I have in recent releases of opensolaris found that a script I use to create a large number of account home directories has
Will this work with Linux rquota clients, too?
Olga
On 4/1/09, Matthew Ahrens matthew.ahr...@sun.com wrote:
Mike Gerdts wrote:
On Tue, Mar 31, 2009 at 7:12 PM, Matthew Ahrens matthew.ahr...@sun.com
wrote:
River Tarnell wrote:
Matthew Ahrens:
ZFS user quotas (like other
Yes, generally the filesystem gets created; it's just that the mount seems
not to take place.
On Mon, Apr 27, 2009 at 3:41 PM, Robert Milkowski mi...@task.gda.pl wrote:
Hello Alastair,
Monday, April 27, 2009, 7:17:51 PM, you wrote:
On Tue, Apr 21, 2009 at 12:34 PM, Alastair Neil
On Sat, Apr 18, 2009 at 04:27:55PM -0500, Gary Mills wrote:
We have an IMAP server with ZFS for mailbox storage that has recently
become extremely slow on most weekday mornings and afternoons. When
one of these incidents happens, the number of processes increases, the
load average increases,
Hello Jeff,
Monday, April 27, 2009, 9:12:26 AM, you wrote:
ZFS blocksize is dynamic, power of 2, with a max size == recordsize.
JB Minor clarification: recordsize is restricted to powers of 2, but
JB blocksize is not -- it can be any multiple of sector size (512 bytes).
JB For small files,
Hi,
I'm new to the list, so please bear with me. This isn't an OpenSolaris-related
problem, but I hope it's still the right list to post to.
I'm on the way to moving a backup server to ZFS-based storage, but I
don't want to spend too many drives on parity (the 16 drives are attached
to a 3ware
Leon,
RAIDZ2 is roughly equivalent to RAID6: ~2 disks of parity data, allowing a
double drive failure while still having the pool available.
If possible, though, you would be best to let the 3ware controller expose
the 16 disks as a JBOD to ZFS and create a RAIDZ2 within Solaris, as you
will then gain
On Mon, Apr 27, 2009 at 4:51 PM, Scott Lawson
scott.law...@manukau.ac.nz wrote:
If possible, though, you would be best to let the 3ware controller expose
the 16 disks as a JBOD to ZFS and create a RAIDZ2 within Solaris, as you
will then gain the full benefits of ZFS: block self-healing, etc.
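As a sketch with hypothetical device names, a single RAIDZ2 vdev over JBOD-exposed disks looks like this (the 16 drives could also be split into two 8-disk raidz2 vdevs, which keeps vdev width moderate):

```shell
# Hypothetical: eight JBOD-exposed disks in one RAIDZ2 vdev;
# any two of the eight can fail without losing the pool
zpool create backup raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
                           c1t4d0 c1t5d0 c1t6d0 c1t7d0
```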
Michael Shadle wrote:
On Mon, Apr 27, 2009 at 5:32 PM, Scott Lawson
scott.law...@manukau.ac.nz wrote:
One thing you haven't mentioned is the drive type and size that you are
planning to use, as this greatly influences what people here would
recommend. RAIDZ2 is built for big, slow SATA
On Mon, 27 Apr 2009, Michael Shadle wrote:
I was still operating under the impression that vdevs larger than 7-8
disks typically make baby Jesus nervous.
Baby Jesus might not be particularly nervous but if your drives don't
perform consistently, then there will be more chance of performance
Greetings,
We have a small Oracle project on ZFS (Solaris-10), using a SAN-connected
array which is in need of replacement. I'm weighing whether to recommend
a Sun 2540 array or a Sun J4200 JBOD as the replacement. The old array
and the new ones all have 7200RPM SATA drives.
I've been watching
On Mon, 27 Apr 2009, Marion Hakanson wrote:
I guess one question I'd add is: The ops numbers seem pretty small.
Is it possible to give enough spindles to a pool to handle that many
IOPS without needing an NVRAM cache? I know latency comes into play
at some point, but are we at that point?
I have now downloaded zilstat.ksh and this is the sort of loading it
reports with my StorageTek 2540 while running the initial writer part
of the benchmark:
% ./zilstat.ksh -p Sun_2540 -l 30 10
   N-Bytes  N-Bytes/s  N-Max-Rate   B-Bytes  B-Bytes/s  B-Max-Rate    ops  <=4kB  4-32kB  >=32kB
Richard Elling wrote:
Some history below...
Scott Lawson wrote:
Michael Shadle wrote:
On Mon, Apr 27, 2009 at 4:51 PM, Scott Lawson
scott.law...@manukau.ac.nz wrote:
If possible, though, you would be best to let the 3ware controller
expose the 16 disks as a JBOD to ZFS and create a
Hi there,
juli...@rainforest:~$ cat /etc/issue
Ubuntu 9.04 \n \l
juli...@rainforest:~$ dpkg -l | grep -i zfs-fuse
ii zfs-fuse 0.5.1-1ubuntu5
I have two 320gb sata disks connected to a PCI raid controller:
juli...@rainforest:~$ lspci | grep -i sata
00:08.0 RAID
On Tue, Apr 28, 2009 at 11:49 AM, Julius Roberts
hooliowobb...@gmail.com wrote:
Hi there,
juli...@rainforest:~$ cat /etc/issue
Ubuntu 9.04 \n \l
juli...@rainforest:~$ dpkg -l | grep -i zfs-fuse
ii zfs-fuse 0.5.1-1ubuntu5
First of all this question might be