[zfs-discuss] one ZIL SLOG per zpool?

2010-08-12 Thread Chris Twa
I have three zpools on a server and want to add a mirrored pair of SSDs for 
the ZIL.  Can the same pair of SSDs be used for the ZIL of all three zpools, or 
is it one ZIL slog device per zpool?


[zfs-discuss] File system ownership details of ZFS file system.

2010-08-12 Thread Ramesh Babu
Hi,

I am looking for file system ownership information for a ZFS file system. I
would like to know the amount of space used and the number of files owned by
each user. I could get the space usage with the 'zfs userspace' command;
however, I didn't find any option that reports the number of files owned by
each user. The quot command displays this information for UFS but not for ZFS.
Please let me know how to get the number of files owned by each user on a ZFS
file system.

Also, I would like to know how to get the hard and soft quota limits on disk
space and inodes for each user.

Thanks,
Ramesh


Re: [zfs-discuss] How to obtain vdev information for a zpool?

2010-08-12 Thread Peter Taps
Hi James,

Appreciate your help.

Regards,
Peter


[zfs-discuss] Optimizing performance on a ZFS-based NAS

2010-08-12 Thread valrh...@gmail.com
Thanks to the help from many people on this board, I finally got my 
OpenSolaris-based NAS box up and running.

I have a Dell T410 with a Xeon E5504 2.0 GHz (Nehalem) quad-core processor and 
8 GB of RAM. I have six 2 TB Hitachi Deskstar (HD32000IDK/7K) SATA drives, set 
up as a stripe of three mirrored pairs, plus an OCZ Vertex 2 (NOT Pro) 60 GB 
SSD (SandForce-based) for the L2ARC. All seven drives are attached to a Dell 
SAS 6i/R controller, an 8-channel SAS controller based on an LSI chipset. I've 
enabled dedup and compression on all filesystems of the single zpool.

Everything is working pretty well, and over NFS I can get a solid 80 MB/sec 
when copying big files. This is adequate, but I am wondering if I can do any 
better. I'm only using this box to share between two or three other machines 
on a private (home or lab) network. I think I've followed all of the 
suggestions I've been given; in particular, running 8 GB of RAM with the 60 GB 
SSD for the L2ARC should allow full caching of the dedup table. I ran 
zilstat.ksh, but it always came up with zeros, which suggests there's no point 
in a slog SSD.

Is there anything left to tune? If so, how do I go about figuring out how to 
increase performance? Right now, I'm just copying large files and looking at 
the transfer rate as calculated by Nautilus, or with iostat -x. What's the next 
thing to do, as far as diagnostics go? I'd like to learn a bit more about the 
process of optimizing, since I have other such boxes to set up and tune, but 
with different hardware.


[zfs-discuss] Degraded Pool, Spontaneous Reboots

2010-08-12 Thread Michael Anderson
Hello,

I've been getting warnings that my zfs pool is degraded. At first it was 
complaining about a few corrupt files, which were listed as hex numbers instead 
of filenames, i.e.

VOL1:0x0

After a scrub, a couple of the filenames appeared. It turns out they were in 
snapshots I don't really need, so I destroyed those snapshots and started a new 
scrub. Subsequently, I typed "zpool status -v VOL1" and the machine rebooted. 
When I could log on again, I looked at /var/log/messages but found nothing 
interesting prior to the reboot. I typed "zpool status -v VOL1" again, 
whereupon the machine rebooted. When the machine was back up, I stopped the 
scrub, waited a while, then typed "zpool status -v VOL1" again, and this time 
got:


r...@nexenta1:~# zpool status -v VOL1
pool: VOL1
state: DEGRADED
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
see: http://www.sun.com/msg/ZFS-8000-9P
scan: scrub canceled on Wed Aug 11 11:03:15 2010
config:

        NAME        STATE     READ WRITE CKSUM
        VOL1        DEGRADED     0     0     0
          raidz1-0  DEGRADED     0     0     0
            c2d0    DEGRADED     0     0     0  too many errors
            c3d0    DEGRADED     0     0     0  too many errors
            c4d0    DEGRADED     0     0     0  too many errors
            c5d0    DEGRADED     0     0     0  too many errors

So, I have the following questions:

1) How do I find out which file is corrupt, when I only get something like 
VOL1:0x0
2) What could be causing these reboots?
3) How can I fix my pool?

Thanks!


Re: [zfs-discuss] one ZIL SLOG per zpool?

2010-08-12 Thread Darren J Moffat

On 12/08/2010 07:27, Chris Twa wrote:

I have three zpools on a server and want to add a mirrored pair of ssd's for 
the ZIL.  Can the same pair of SSDs be used for the ZIL of all three zpools or 
is it one ZIL SLOG device per zpool?


Only if you partition them up and give slices to the pools; however, I 
personally don't like giving parts of the same device to multiple pools 
if I can help it.


The only vdev type that can be shared between pools is a spare; all 
others need to be per pool, or the physical devices have to be partitioned up.
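
If you do go the slice route, it would look roughly like this (a sketch 
only; the pool and device names below are made up). Slice the two SSDs 
with format(1M), then hand one mirrored pair of slices to each pool:

  zpool add pool1 log mirror c6t0d0s0 c6t1d0s0
  zpool add pool2 log mirror c6t0d0s1 c6t1d0s1
  zpool add pool3 log mirror c6t0d0s2 c6t1d0s2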


--
Darren J Moffat


[zfs-discuss] Unable to import ZFS pool after disk issues

2010-08-12 Thread Peter Van Buren
I have an 8-disk raidz2 pool (ZP02) that I am having some issues with. 
The pool uses WD20EADS 2 TB drives connected to an Intel SASUC8I 
controller (LSI 1068E chip). The pool was originally created when the 
machine was running SXDE 1/08. I later installed OpenSolaris 2009.06 and 
imported the pool. I did not upgrade the pool itself, so it is still 
using the older format.

Last week I noticed that any commands that accessed the pool or a filesystem 
within it would hang. I noticed unrecovered SCSI read errors for one drive 
in the messages log. I was unable to gracefully reboot the machine and 
had to power cycle it. After the reboot it would just hang when trying 
to process the ZFS config. I booted off the 2009.06 Live CD and was able 
to log in via maintenance mode. I ran zpool import and it showed 
the pool as being online, but in use by another system. It showed all 
the drives as being online. At this point I ran zpool import -f ZP02. 
So far it has been almost 90 hours and there has been no output 
to the terminal (just a blinking cursor). At this point I cannot tell if 
the import command is doing anything.


Are there different steps I can try to recover this pool? Is there a way 
to open another terminal when booted into system maintenance mode from 
the Live CD? I'd like to see what the command is doing.

The machine has 4 GB of RAM; I will increase it to 8 GB next week to see 
if that helps. I understand that these WD drives may not be ideal for 
ZFS. The pool has some important database backups on it, and I'd like to 
recover them if at all possible.

Thanks,

Peter.



Re: [zfs-discuss] Raidz - what is stored in parity?

2010-08-12 Thread Arne Schwabe
 On 11.08.10 00:40, Peter Taps wrote:
 Hi,

 I am going through understanding the fundamentals of raidz. From the man 
 pages, a raidz configuration of P disks and N parity provides (P-N)*X storage 
 space where X is the size of the disk. For example, if I have 3 disks of 10G 
 each and I configure it with raidz1, I will have 20G of usable storage. In 
 addition, I continue to work even if 1 disk fails.

 First, I don't understand why parity takes so much space. From what I know 
 about parity, there is typically one parity bit per byte. Therefore, the 
 parity should be taking 1/8 of storage, not 1/3 of storage. What am I missing?

 Second, if one disk fails, how is my lost data reconstructed? There is no 
 duplicate data as this is not a mirrored configuration. Somehow, there should 
 be enough information in the parity disk to reconstruct the lost data. How is 
 this possible?

 Thank you in advance for your help.

Nah, it is more like: disk3 = disk1 XOR disk2. You can read about how this 
works for RAID-5 (raidz is more complicated, but the basic idea is the same). 
The per-byte parity you describe is only for error checking, more like a ZFS 
checksum, which also takes very little additional space.
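
A quick sketch of the idea with made-up 8-bit values (any shell with 
arithmetic expansion will do):

  d1=180              # 10110100 on data disk 1
  d2=105              # 01101001 on data disk 2
  p=$(( d1 ^ d2 ))    # 11011101 stored on the parity disk
  echo $(( d1 ^ p ))  # disk 2 lost: XOR the survivors, prints 105 again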

Arne





Re: [zfs-discuss] zfs replace problems please please help

2010-08-12 Thread Seth Keith


 -Original Message-
 From: Mark J Musante [mailto:mark.musa...@oracle.com]
 Sent: Wednesday, August 11, 2010 5:03 AM
 To: Seth Keith
 Cc: zfs-discuss@opensolaris.org
 Subject: Re: [zfs-discuss] zfs replace problems please please help
 
 On Tue, 10 Aug 2010, seth keith wrote:
 
  # zpool status
   pool: brick
  state: UNAVAIL
  status: One or more devices could not be used because the label is missing
 or invalid.  There are insufficient replicas for the pool to continue
 functioning.
  action: Destroy and re-create the pool from a backup source.
see: http://www.sun.com/msg/ZFS-8000-5E
  scrub: none requested
  config:
 
 NAME   STATE READ WRITE CKSUM
 brick  UNAVAIL  0 0 0  insufficient replicas
   raidz1   UNAVAIL  0 0 0  insufficient replicas
 c13d0  ONLINE   0 0 0
 c4d0   ONLINE   0 0 0
 c7d0   ONLINE   0 0 0
 c4d1   ONLINE   0 0 0
 replacing  UNAVAIL  0 0 0  insufficient replicas
   c15t0d0  UNAVAIL  0 0 0  cannot open
   c11t0d0  UNAVAIL  0 0 0  cannot open
 c12d0  FAULTED  0 0 0  corrupted data
 c6d0   ONLINE   0 0 0
 
  What I want is to remove c15t0d0 and c11t0d0 and replace with the original 
  c6d1.
 Suggestions?
 
 Do the labels still exist on c6d1?  e.g. what do you get from zdb -l
 /dev/rdsk/c6d1s0?
 
 If the label still exists, and the pool guid is the same as the labels on
 the other disks, you could try doing a zpool detach brick c15t0d0 (or
  c11t0d0), then export and try re-importing.  ZFS may find c6d1 at that
 point.  There's no way to guarantee that'll work.

When I do a zdb -l /dev/rdsk/<any device>, I get the same output for all the 
drives in the pool, but I don't think it looks right:

# zdb -l /dev/rdsk/c4d0

LABEL 0

failed to unpack label 0

LABEL 1

failed to unpack label 1

LABEL 2

failed to unpack label 2

LABEL 3

failed to unpack label 3


If I try this zpool detach action, can it be reversed if there is a problem?


Re: [zfs-discuss] Raidz - what is stored in parity?

2010-08-12 Thread Adam Leventhal
 In my case, it gives an error that I need at least 11 disks (which I don't) 
 but the point is that raidz parity does not seem to be limited to 3. Is this 
 not true?

RAID-Z is limited to 3 parity disks. The error message is giving you false hope 
and that's a bug. If you had plugged in 11 disks or more in the example you 
provided you would have simply gotten a different error.

- ahl


Re: [zfs-discuss] Backup zpool

2010-08-12 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Simone Caldana
 
 I would like to backup my main zpool (originally called data) inside
 an equally originally named backupzpool, which will also holds other
 
 Basically I'd like to end up with
 backup/data
 
 this is quite simply doable by using zfs send / zfs receive.
 
 the problem is with compression. I have default compression enabled on
 my data pool, but I'd like to use gzip-2 on backup/data.

zfs create -o compression=gzip-2 backup/data
zfs send d...@snap | zfs receive -F backup/data
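
For later backups you could send incrementals the same way (a sketch; 
the snapshot names here are just examples):

zfs send -i data@snap1 data@snap2 | zfs receive -F backup/data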



Re: [zfs-discuss] ZFS and VMware

2010-08-12 Thread Paul Kraus
On Wed, Aug 11, 2010 at 6:15 PM, Saxon, Will <will.sa...@sage.com> wrote:


 It really depends on your VM system, what you plan on doing with VMs and how 
 you plan to do it.

 I have the vSphere Enterprise product and I am using the DRS feature, so VMs 
 are vmotioned around
 my cluster all throughout the day. All of my VM users are able to create and 
 manage their own VMs
 through the vSphere client. None of them care to know anything about VM 
 storage as long as it's
 fast, and most of them don't want to have to make choices about which 
 datastore to put their new
 VM on. Only 30-40% of the total number of VMs registered in the cluster are 
 powered on at any given time.

We have three production VMware vSphere 4 clusters, each with
four hosts. The number of guests varies, ranging from a low of 40
on one cluster to 80 on another. We do not generally have many guests
being created or destroyed, but rather a slow, steady growth in their numbers.

The guests are both production and test/development, and the vast
majority of them are Windows, mostly Server 2008. The rule is to roll
out Windows servers as VMs, with the notable exception of the Exchange
servers, which are physical. The VMs are used for everything, including
domain controllers, file servers, print servers, DHCP servers, DNS
servers, workstations (my physical desktop runs Linux, but I need a
Windows system for Outlook and a few other applications, and that runs
as a VM), SharePoint servers, MS-SQL servers, and other assorted
application servers.

We are using DRS and VMs do migrate around a bit
(transparently). We take advantage of maintenance mode for exactly
what the name says.

We have had a fairly constant but low rate of FC issues with
VMware, from when we first rolled out VMware (version 3.0) through
today (4.1). The multipathing seems to occasionally either lose one
or more paths to a given LUN or completely lose access to a given
LUN. These problems do not happen often, but when they do, they have
caused downtime on production VMs. Part of the reason we started
looking at NFS/iSCSI was to get around the VMware (Linux) FC drivers.
We also like the low-overhead snapshot feature of ZFS (and are
leveraging it extensively for other data).

Now we are getting serious about using ZFS + NFS/iSCSI and are
looking to learn from others' experience as well as our own. For
example, is anyone using NFS with Oracle Cluster for HA storage for
VMs, or are sites trusting a single NFS server?

-- 
Paul Kraus
- Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
- Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
- Technical Advisor, RPI Players


Re: [zfs-discuss] Degraded Pool, Spontaneous Reboots

2010-08-12 Thread Edward Ned Harvey
I am guessing you're experiencing a CPU or memory failure. Or a motherboard or
disk controller problem.



 -Original Message-
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Michael Anderson
 Sent: Thursday, August 12, 2010 3:46 AM
 To: zfs-discuss@opensolaris.org
 Subject: [zfs-discuss] Degraded Pool, Spontaneous Reboots
 
 Hello,
 
 I've been getting warnings that my zfs pool is degraded. At first it
 was complaining about a few corrupt files, which were listed as hex
 numbers instead of filenames, i.e.
 
 VOL1:0x0
 
 After a scrub, a couple of the filenames appeared - turns out they were
 in snapshots I don't really need, so I destroyed those snapshots and
 started a new scrub. Subsequently, I typed  zpool status -v VOL1 ...
 and the machine rebooted. When I could log on again, I looked at
 /var/log/messages, but found nothing interesting prior to the reboot. I
 typed  zpool status -v VOL1 again, whereupon the machine rebooted.
 When the machine was back up, I stopped the scrub, waited a while, then
 typed zpool status -v VOL1 again, and this time got:
 
 
 r...@nexenta1:~# zpool status -v VOL1
 pool: VOL1
 state: DEGRADED
 status: One or more devices has experienced an unrecoverable error. An
 attempt was made to correct the error. Applications are unaffected.
 action: Determine if the device needs to be replaced, and clear the
 errors
 using 'zpool clear' or replace the device with 'zpool replace'.
 see: http://www.sun.com/msg/ZFS-8000-9P
 scan: scrub canceled on Wed Aug 11 11:03:15 2010
 config:
 
          NAME        STATE     READ WRITE CKSUM
          VOL1        DEGRADED     0     0     0
            raidz1-0  DEGRADED     0     0     0
              c2d0    DEGRADED     0     0     0  too many errors
              c3d0    DEGRADED     0     0     0  too many errors
              c4d0    DEGRADED     0     0     0  too many errors
              c5d0    DEGRADED     0     0     0  too many errors
 
 So, I have the following questions:
 
 1) How do I find out which file is corrupt, when I only get something
 like VOL1:0x0
 2) What could be causing these reboots?
 3) How can I fix my pool?
 
 Thanks!



Re: [zfs-discuss] ZFS and VMware

2010-08-12 Thread Eff Norwood
We are doing NFS in VMware 4.0U2 production, 50K users, using OpenSolaris 
snv_134 on SuperMicro boxes with SATA drives. Yes, I am crazy. Our experience 
has been that iSCSI for ESXi 4.x is fast and works well with minimal fussing 
until there is a problem. When that problem happens, getting to the data on 
VMFS LUNs, even with the free Java VMFS utility, is problematic at best and 
game over at worst.

With NFS, data access in problem situations is a non-event. Snapshots happen 
and everyone is happy. The problem with it is the VMware NFS client, which 
issues every write as a synchronous (FILE_SYNC) write. That kills NFS 
performance dead. To get around that, we're using DDRdrive X1s for our ZIL, 
and the problem is solved. I have not looked at the NFS client changes in 4.1; 
perhaps it's better, or at least tunable, now.

I would recommend NFS as the overall strategy, but you must get a good ZIL 
device to make that happen. Do not disable the ZIL. Do make sure you set your 
I/O queue depths correctly.


Re: [zfs-discuss] one ZIL SLOG per zpool?

2010-08-12 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Chris Twa
 
 I have three zpools on a server and want to add a mirrored pair of
 ssd's for the ZIL.  Can the same pair of SSDs be used for the ZIL of
 all three zpools or is it one ZIL SLOG device per zpool?

If you format, fdisk, and partition the disks, you can use the slices for
slogs.  (You can also implement it in some other ways.)  However, the point
of the slog device is *performance*, so you're defeating your purpose by
doing this.

People are always tempted to put more than one log onto an SSD because "Hey,
the system could never use more than 8G, but I've got a 32G drive! What a
waste of money!"  Which has some truth in it.  But the line of thought you
should have is "Hey, the system will do its best to max out the 3Gbit/sec or
6Gbit/sec bus to the drive, so that disk is already fully utilized!"



Re: [zfs-discuss] Backup zpool

2010-08-12 Thread Marty Scholes
 Hello,
 
 I would like to backup my main zpool (originally
 called data) inside an equally originally named
 backupzpool, which will also holds other kinds of
 backups.
 
 Basically I'd like to end up with 
 backup/data
 backup/data/dataset1
 backup/data/dataset2
 backup/otherthings/dataset1
 backup/otherthings/dataset2
 
 this is quite simply doable by using zfs send / zfs
 receive.
 
 the problem is with compression. I have default
 compression enabled on my data pool, but I'd like to
 use gzip-2 on backup/data.
 I am using b134 with zpool version 22, which I read
 had some new features regarding this use case
 (http://arc.opensolaris.org/caselog/PSARC/2009/510/20090924_tom.erickson). 
 The problem is, I don't understand how to do this. I don't really care 
 about maintaining former properties, but of course that would be a plus.

I have a similar situation where dedup is enabled on the backup pool, but not 
the main pool, for performance reasons.  Once the pools are set up, I have a 
script which does exactly what you are looking for using the time-slider 
snaps.  It finds the latest snap common to the main and backup pools, rolls 
the backup back to that snap, then sends the incrementals in between.  It also 
handles the case of no destination file system by trying to send the first snap.
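
The core of it is roughly the following (a simplified sketch with made-up 
dataset names and no error handling):

  SRC=data/dataset1
  DST=backup/data/dataset1

  # newest snapshot name that exists on both sides
  COMMON=$(zfs list -H -t snapshot -o name -s creation -r "$DST" |
           sed 's,.*@,,' |
           while read s; do
             zfs list -H "$SRC@$s" >/dev/null 2>&1 && echo "$s"
           done | tail -1)

  # newest snapshot on the source
  LATEST=$(zfs list -H -t snapshot -o name -s creation -r "$SRC" | tail -1)

  zfs rollback -r "$DST@$COMMON"       # drop anything newer on the backup
  zfs send -i "@$COMMON" "$LATEST" | zfs receive -F "$DST"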

At least in b128a, the auto-snapshot service seems to delete the old snaps from 
both pools, even though it is not configured to snap the backup pool, which 
keeps the snap count sane on the backup pool.

I would never claim the script is world-class, but I run it hourly from cron 
and it keeps the stuff in sync without me having to do anything.  Say the word 
and I'll send you a copy.

Good luck,
Marty


Re: [zfs-discuss] Backup zpool

2010-08-12 Thread Simone Caldana
On 12 Aug 2010, at 15:10, Marty Scholes wrote:

 Say the word and I'll send you a copy.

pretty please :)

thanks

(Meanwhile, I created the top dataset on the backup pool, set compression to 
gzip-2, removed any local compression setting on the source dataset's children, 
and I am sending one child at a time. This way what is sent has no property 
set and thus inherits the backup dataset's setting.)
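
In other words, roughly (dataset and snapshot names are just examples):

zfs create -o compression=gzip-2 backup/data
zfs inherit compression data/dataset1        # drop the local setting on the source
zfs snapshot data/dataset1@backup1
zfs send data/dataset1@backup1 | zfs receive backup/data/dataset1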


-- 
Simone Caldana
Senior Consultant
Critical Path
via Cuniberti 58, 10100 Torino, Italia
+39 011 4513811 (Direct)
+39 011 4513825 (Fax)
simone.cald...@criticalpath.net
http://www.cp.net/

Critical Path
A global leader in digital communications




[zfs-discuss] ZFS data from snv_b134 to fbsd

2010-08-12 Thread Dick Hoogendijk
I want to transfer a lot of ZFS data from an old OpenSolaris ZFS mirror 
(v22) to a new FreeBSD 8.1 ZFS mirror (v14).
If I boot off the OpenSolaris boot CD and import both mirrors, will 
copying from v22 ZFS to v14 ZFS be harmless?
I'm not sure if this is the right mailing list for this question. Let me 
know.



Re: [zfs-discuss] ZFS and VMware

2010-08-12 Thread Richard Jahnel
We are using ZFS-backed fibre targets for ESXi 4.1, and previously 4.0, and 
have had good performance with no issues. The fibre LUNs were formatted with 
VMFS by the ESXi boxes.

SQLIO benchmarks from a guest system running on a fibre-attached ESXi host:

File Size MB  Threads  R/W  Duration  Sector KB  Pattern  IOs outstanding  IO/Sec  MB/Sec  Lat.Min  Lat.Avg  Lat.Max
24576         8        R    30        8          random   64               37645   294     0        1        141
24576         8        W    30        8          random   64               17304   135     0        3        303
24576         8        R    30        64         random   64               6250    391     1        9        176
24576         8        W    30        64         random   64               5742    359     1        10       203

The array is a raidz2 with 14 x 256 GB Patriot Torqx drives and a cache of 
4 x 32 GB Intel G1 SSDs.

When I get around to doing the next series of boxes, I'll probably use C300s 
in place of the Indilinx-based drives.

iSCSI was disappointing and seemed to be CPU bound, possibly by a stupid number 
of interrupts coming from the less-than-stellar NIC on the test box.

NFS we have only used as an ISO store, but it has worked OK and without issues.


Re: [zfs-discuss] one ZIL SLOG per zpool?

2010-08-12 Thread Roy Sigurd Karlsbakk
 People are always tempted to put more than one log onto an SSD because
 "Hey, the system could never use more than 8G, but I've got a 32G drive!
 What a waste of money!" Which has some truth in it. But the line of
 thought you should have is "Hey, the system will do its best to max out
 the 3Gbit/sec or 6Gbit/sec bus to the drive, so that disk is already
 fully utilized!"

That depends on your workload. I would guess most pools aren't utilising their 
slogs 100%, since that would require pretty high sync-write usage, and 
typically about 90% of the I/O on most servers is reads.
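
An easy way to check on a running pool (sketch; "tank" is just an example 
pool name) is to watch the log vdev alongside the data vdevs:

zpool iostat -v tank 10      # per-vdev throughput, including the log device
./zilstat.ksh 10             # sync-write/ZIL activity, if you have the script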

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. 
It is an elementary imperative for all pedagogues to avoid excessive use of 
idioms of foreign origin. In most cases, adequate and relevant synonyms exist 
in Norwegian.


Re: [zfs-discuss] one ZIL SLOG per zpool?

2010-08-12 Thread Chris Twa
Thank you everyone for your answers.

Cost is a factor, but the main obstacle is that the chassis will only support 
four SSDs (and that's with using the spare 5.25" bay for a 4x2.5" hot-swap bay).

My plan now is to buy the SSDs and do extensive testing.  I want to focus my 
performance efforts on two zpools (7x146GB 15K U320 + 7x73GB 10K U320).  I'd 
really like two SSDs for L2ARC (one SSD per zpool) and then slice the other 
two SSDs and mirror the slices for the slog (one mirrored slice pair per 
zpool).  I'm worried that the slogs won't be significantly faster than writing 
to disk, but I guess that's what testing is for.  If the ZIL in this 
arrangement isn't beneficial, then I can have four disks for L2ARC instead of 
two (or my wife and I get SSDs for our laptops).

Thank you again everyone for your quick responses


[zfs-discuss] User level transactional API

2010-08-12 Thread Jason
Has any thought been given to exposing some sort of transactional API
for ZFS at the user level (even if just consolidation private)?

Just recently, it would seem a poorly timed unscheduled poweroff while
NWAM was attempting to update nsswitch.conf left me with a 0-byte
nsswitch.conf (which, when the system came back up, had rather
unpleasant results).

While not on opensolaris (though I suspect the problem exists there),
I've also encountered instances where a shutdown while the ldap_client
process is in the middle of updating /var/ldap/ldap_client_file has
left the file empty.

In both instances, additional outages were incurred while things had
to be recovered manually.

And in both instances, code could be added to each utility to try to
recover from such a situation.  An easier (and, it would seem, more
elegant) solution would be for both utilities to simply be able to
mark the 'truncate; write new data' sequence as atomic, which is
possible with ZFS.  It's possible other utilities could benefit as
well (and it would prevent them all from having to implement recovery
mechanisms when transitioning data files from one state to another).


Re: [zfs-discuss] User level transactional API

2010-08-12 Thread Norm Jacobs


For single-file updates, this is commonly solved by writing the data to a 
temp file and using rename(2) to move it into place when it's ready.
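
Roughly (a sketch; the config file and its new contents are just an example):

tmp=$(mktemp /etc/nsswitch.conf.XXXXXX)     # temp file on the same filesystem
cat nsswitch.conf.new > "$tmp" &&
    chmod 644 "$tmp" &&
    mv "$tmp" /etc/nsswitch.conf            # same-fs mv is rename(2), so atomic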


-Norm

On 08/12/10 04:51 PM, Jason wrote:

Has any thought been given to exposing some sort of transactional API
for ZFS at the user level (even if just consolidation private)?

Just recently, it would seem a poorly timed unscheduled poweroff while
NWAM was attempting to update nsswitch.conf left me with a 0 byte
nsswitch.conf (which when the system came back up, had rather
unpleasant results).
While not on opensolaris (though I suspect the problem exists there),
I've also encountered instances where a shutdown while the ldap_client
process is in the middle of updating /var/ldap/ldap_client_file has
left the file empty.
In both instances, additional outages were incurred while things had
to be recovered manually.

And in both instances, code could be added to each utility to try to
recover from such a situation.  An easier (and would seem more
elegant) solution would be for both utilities to simply be able to
mark the 'truncate; write new data' sequence as atomic, which is
possible with zfs.  It's possible other utilities could benefit as
well (and would prevent them from all having to implement recovery
mechanisms when transitioning data files from one state to another).


Re: [zfs-discuss] User level transactional API

2010-08-12 Thread Nicolas Williams
On Thu, Aug 12, 2010 at 07:48:10PM -0500, Norm Jacobs wrote:
 For single file updates, this is commonly solved by writing data to
 a temp file and using rename(2) to move it in place when it's ready.

For anything more complicated you need... a more complicated approach.

Note that a transactional API means, among other things, rollback --
easy at the whole-dataset level, hard in more granular form.  Dataset-
level rollback is nowhere near granular enough for applications.

Application transactions consisting of more than one atomic filesystem
operation require application-level recovery code.  SQLite3 is a good
(though maybe extreme?) example of such an application; there are many
others.

Nico


Re: [zfs-discuss] Backup zpool

2010-08-12 Thread Marty Scholes
Script attached.

Cheers,
Marty

zfs_sync
Description: Binary data


Re: [zfs-discuss] New Supermicro SAS/SATA controller: AOC-USAS2-L8e in SOHO NAS and HD HT

2010-08-12 Thread valrh...@gmail.com
Has anyone bought one of these cards recently? It lists for around $170 at 
various places, which seems like quite a decent deal. But no well-known 
reputable vendor I know of seems to sell these, and I want someone backing the 
sale if something isn't perfect. Where do you all recommend buying this card 
from?


Re: [zfs-discuss] New Supermicro SAS/SATA controller: AOC-USAS2-L8e in SOHO NAS and HD HT

2010-08-12 Thread TheJay
Not myself yet - but here is some really interesting reading on it:

http://hardforum.com/showthread.php?p=1035820555


On Aug 12, 2010, at 7:03 PM, valrh...@gmail.com wrote:

 Has anyone bought one of these cards recently? It seems to list for around 
 $170 at various places, which seems like quite a decent deal. But no 
 well-known reputable vendor I know seems to sell these, and I want to be able 
 to have someone backing the sale if something isn't perfect. Where do you all 
 recommend buying this card from?