From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Sigbjorn Lie
What about mirroring? Do I need mirrored ZIL devices in case of a power
outage?
You don't need mirroring for the sake of a *power outage*, but you *do* need
mirroring for the
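The answer above is truncated, but the distinction it draws is that the ZIL already survives power loss by design; mirroring protects against the *failure* of a log device while it still holds committed-but-unreplayed synchronous writes. A sketch of attaching a mirrored log, with a hypothetical pool name and device names:

```sh
# Sketch only -- pool and device names (tank, c1t0d0, c1t1d0) are hypothetical.
# A mirrored log vdev protects in-flight synchronous writes in the ZIL
# against the failure of one log device:
zpool add tank log mirror c1t0d0 c1t1d0

# Confirm the pool now shows a "logs" section with a mirror underneath:
zpool status tank
```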
Hello, first time posting. I've been working with ZFS on and off, with limited
*nix experience, for a year or so now, and have read a lot written by many of
you. There's still a ton I don't know or understand.
We've been having awful I/O latencies on our 7210 running about 40 VMs
Hi all,
I'm running out of space on my OpenSolaris file server and can't afford to buy
any new storage for a short while. Seeing as the machine has a dual-core CPU
at 2.2GHz and 4GB of RAM, I was thinking compression might be the way to go...
I've read a small amount about compression, enough to
On 25 Jul 2010, at 14:12, Ben ben.lav...@gmail.com wrote:
I've read a small amount about compression, enough to find that it'll affect
performance (not a problem for me) and that once you enable compression it
only affects new files written to the file system.
Yes, that's true.
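Because compression applies only at write time, blocks written before the property was set stay uncompressed until they are rewritten. A sketch of this, assuming a hypothetical dataset named tank/data:

```sh
# Hypothetical dataset name. With compression=on, ZFS uses the default
# algorithm (lzjb):
zfs set compression=on tank/data

# Blocks written before this point remain uncompressed. Rewriting a file
# (e.g. copying it and moving the copy back) stores it compressed:
cp /tank/data/big.log /tank/data/big.log.tmp
mv /tank/data/big.log.tmp /tank/data/big.log

# Inspect the overall ratio achieved so far:
zfs get compressratio tank/data
```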
I have a semi-theoretical question about the following code in arc.c,
arc_reclaim_needed() function:
/*
* take 'desfree' extra pages, so we reclaim sooner, rather than later
*/
extra = desfree;
/*
* check that we're out of range of the pageout scanner. It starts to
* schedule paging if
Hello Mark,
I assume you have a read-intensive workload without many synchronous writes, so
leave out the ZIL. Please try:
* configure the controller to show individual disks, no RAID
* create one large striped pool (zpool create tank c0t0d{1,2,3,4,5})
* if your SSD is c0t0d6, use it as an L2ARC
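Spelled out as commands, the suggestion above might look like the following sketch, using the device names from the message. Note the pool has no redundancy, so a single failed disk loses the whole pool:

```sh
# Stripe the five pass-through disks into one pool (no redundancy):
zpool create tank c0t0d1 c0t0d2 c0t0d3 c0t0d4 c0t0d5

# Add the SSD as an L2ARC read cache:
zpool add tank cache c0t0d6

# Check the layout: the SSD should appear under a "cache" section:
zpool status tank
```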
Thanks Alex,
I've set compression on and have transferred data from the OpenSolaris machine
to my Mac, deleted any snapshots and am now transferring them back.
It seems to be working, but there's lots to transfer!
I didn't know that MacZFS was still going; it's great to hear that people are
On Jul 23, 2010, at 10:14 PM, Edward Ned Harvey sh...@nedharvey.com wrote:
From: Arne Jansen [mailto:sensi...@gmx.net]
Can anyone else confirm or deny the correctness of this statement?
As I understand it, that's the whole point of raidz. Each block is its own
stripe.
Nope, that
On Fri, 2010-07-23 at 22:20 -0400, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Linder, Doug
On a related note - all other things being equal, is
there any reason
to choose NFS over iSCSI, or
On 2010-Jul-25 21:12:08 +0800, Ben ben.lav...@gmail.com wrote:
I've read a small amount about compression, enough to find that it'll affect
performance (not a problem for me) and that once you enable compression it
only affects new files written to the file system.
Is this still true of b134?
I've read a small amount about compression, enough to find that it'll affect
performance (not a problem for me) and that once you enable compression it
only affects new files written to the file system.
Yes, that's true. Setting compression=on defaults to lzjb, which is fast; but
gzip-9 can be
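The trade-off being hinted at: lzjb costs little CPU for a modest ratio, while gzip-9 compresses harder but can bottleneck writes on CPU. A sketch of setting each per dataset, with hypothetical dataset names:

```sh
# Hypothetical dataset names. Fast, low-overhead compression for hot data:
zfs set compression=lzjb tank/vm

# Heavier gzip-9 for rarely written archival data, trading CPU for ratio:
zfs set compression=gzip-9 tank/archive

# Compression is a per-dataset property; check it with:
zfs get compression tank/vm tank/archive
```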
OK...decided to do a fresh install of Fedora (FC12)...install completed to the
iSCSI target...now trying to boot it.
Fedora is finding the target, then throwing an I/O error. When I snoop
on the ZFS server I see the following:
1) Initiator connects and logs into the target OK
2) Initiator
On Sun, 2010-07-25 at 17:53 -0400, Saxon, Will wrote:
I think there may be very good reason to use iSCSI, if you're limited
to gigabit but need to be able to handle higher throughput for a
single client. I may be wrong, but I believe iSCSI to/from a single
initiator can take advantage of
On Sun, Jul 25, 2010 at 8:50 PM, Garrett D'Amore garr...@nexenta.com wrote:
On Sun, 2010-07-25 at 17:53 -0400, Saxon, Will wrote:
I think there may be very good reason to use iSCSI, if you're limited
to gigabit but need to be able to handle higher throughput for a
single client. I may be
Grr...I finally figured out I was specifying the wrong LUN (I was using 1 in
earlier testing, but the current targets are LUN 0).
I also misspoke...this is actually Etherboot gPXE's iSCSI logic, NOT Fedora's.
Here it is working now:
iSCSI (SCSI Data In)
Opcode: SCSI Data In (0x25)
Flags:
On Jul 24, 2010, at 2:20 PM, Edward Ned Harvey wrote:
I remember asking about this a long time ago, and everybody seemed to think
it was a non-issue: the vague and unclearly reported rumor that ZFS behaves
poorly when it's 100% full. Well, now I have one really solid data point to