Re: [zfs-discuss] Homegrown Hybrid Storage

2010-06-07 Thread Erik Trimble
Comments in-line. On 6/6/2010 9:16 PM, Ken wrote: I'm looking at VMWare, ESXi 4, but I'll take any advice offered. On Sun, Jun 6, 2010 at 19:40, Erik Trimble erik.trim...@oracle.com wrote: On 6/6/2010 6:22 PM, Ken wrote: Hi, I'm looking to build

Re: [zfs-discuss] Homegrown Hybrid Storage

2010-06-07 Thread Jens Elkner
On Sun, Jun 06, 2010 at 09:16:56PM -0700, Ken wrote: I'm looking at VMWare, ESXi 4, but I'll take any advice offered. ... I'm looking to build a virtualized web hosting server environment accessing files on a hybrid storage SAN. I was looking at using the Sun Fire X4540 with the

Re: [zfs-discuss] Homegrown Hybrid Storage

2010-06-07 Thread Roy Sigurd Karlsbakk
Which Virtual Machine technology are you going to use? VirtualBox VMWare Xen Solaris Zones Something else... It will make a difference as to my recommendation (or, do you want me to recommend a VM type, too?) This is somewhat off-topic for zfs-discuss, but still. After trying to fight a

Re: [zfs-discuss] Deduplication and ISO files

2010-06-07 Thread Roy Sigurd Karlsbakk
- Brandon High bh...@freaks.com wrote: On Sun, Jun 6, 2010 at 10:46 AM, Brandon High bh...@freaks.com wrote: No, that's the number that stuck in my head though. Here's a reference from Richard Elling: (http://mail.opensolaris.org/pipermail/zfs-discuss/2010-March/038018.html) Around

Re: [zfs-discuss] ssd pool + ssd cache ?

2010-06-07 Thread David Magda
On Jun 7, 2010, at 00:15, Richard Jahnel wrote: I use 4 intel 32gb ssds as read cache for each pool of 10 Patriot Torx drives which are running in a raidz2 configuration. No Slogs as I haven't seen a compliant SSD drive yet. Besides STEC's Zeus drives you mean? (Which aren't available in

[zfs-discuss] ZFS Component Naming Requirements

2010-06-07 Thread eXeC001er
Hi All! Can I create a pool or dataset with a name that contains non-Latin letters (Russian letters, German-specific letters, etc.)? I tried to create a pool with non-Latin letters, but could not. In the ZFS User Guide I see the following information: Each ZFS component must be named according to the

Re: [zfs-discuss] ssd pool + ssd cache ?

2010-06-07 Thread Richard Jahnel
I'll have to take your word on the Zeus drives. I don't see anything in their literature that explicitly states that cache flushes are obeyed or otherwise protected against power loss. As for OCZ, they cancelled the Vertex 2 Pro, which was to be the one with the supercap. For the moment they

[zfs-discuss] zfs send/receive as backup tool

2010-06-07 Thread Toyama Shunji
Can I extract one or more specific files from a zfs snapshot stream, without restoring the full file system? Like the UFS-based 'restore' tool.

Re: [zfs-discuss] ssd pool + ssd cache ?

2010-06-07 Thread David Magda
On Mon, June 7, 2010 09:21, Richard Jahnel wrote: I'll have to take your word on the Zeus drives. I don't see anything in their literature that explicitly states that cache flushes are obeyed or otherwise protected against power loss. The STEC units are what Oracle/Sun use in their 7000

Re: [zfs-discuss] zfs send/receive as backup tool

2010-06-07 Thread David Magda
On Mon, June 7, 2010 10:34, Toyama Shunji wrote: Can I extract one or more specific files from zfs snapshot stream? Without restoring full file system. Like ufs based 'restore' tool. No. (Check the archives of zfs-discuss for more details. Send/recv has been discussed at length many times.)
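[For reference, the usual workaround — a sketch, not from this thread; the pool, dataset, and file names are hypothetical — is to receive the stream into a scratch dataset and copy the file out:

   # Receive the saved stream into a scratch dataset...
   zfs receive scratch/restore < /backup/tank-home@monday.stream
   # ...then copy out just the file you need.
   cp /scratch/restore/reallyimportantfile /tank/home/
]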

Re: [zfs-discuss] ssd pool + ssd cache ?

2010-06-07 Thread Christopher George
No Slogs as I haven't seen a compliant SSD drive yet. As the architect of the DDRdrive X1, I can state categorically that the X1 correctly implements the SCSI Synchronize Cache (flush cache) command. Christopher George Founder/CTO www.ddrdrive.com

Re: [zfs-discuss] ssd pool + ssd cache ?

2010-06-07 Thread Richard Jahnel
And a very nice device it is indeed. However, for my purposes it doesn't work, as it doesn't fit into a 2.5" slot and use SATA/SAS connections. Unfortunately all my PCI Express slots are in use: 2 RAID controllers, 1 Fibre Channel HBA, 1 10 GbE card.

Re: [zfs-discuss] ssd pool + ssd cache ?

2010-06-07 Thread Garrett D'Amore
On Mon, 2010-06-07 at 07:51 -0700, Christopher George wrote: No Slogs as I haven't seen a compliant SSD drive yet. As the architect of the DDRdrive X1, I can state categorically that the X1 correctly implements the SCSI Synchronize Cache (flush cache) command. Christopher George Founder/CTO

Re: [zfs-discuss] ZFS Component Naming Requirements

2010-06-07 Thread Cindy Swearingen
Hi-- Pool names must contain alphanumeric characters, as described here: http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/common/zfs/zfs_namecheck.c The problem you're having is probably with special characters, such as umlauts or accents. Pool names only allow 4
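[As an illustration — hypothetical names; besides alphanumerics, the four permitted special characters are underscore, hyphen, colon, and period:

   zpool create tank_01 c1t0d0        # OK: alphanumerics plus underscore
   zfs create tank_01/web-data.2010   # OK: hyphen and period allowed
   zpool create straße c1t0d0         # fails: non-ASCII character rejected
]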

Re: [zfs-discuss] ssd pool + ssd cache ?

2010-06-07 Thread Tim Cook
On Mon, Jun 7, 2010 at 9:45 AM, David Magda dma...@ee.ryerson.ca wrote: On Mon, June 7, 2010 09:21, Richard Jahnel wrote: I'll have to take your word on the Zeus drives. I don't see anything in their literature that explicitly states that cache flushes are obeyed or otherwise protected

Re: [zfs-discuss] zfs send/receive as backup tool

2010-06-07 Thread Cindy Swearingen
Hi Toyama, You cannot restore an individual file from a snapshot stream the way the ufsrestore command can. If you have snapshots stored on your system, you might be able to access them from the .zfs/snapshot directory. See below. Thanks, Cindy % rm reallyimportantfile % cd .zfs/snapshot % cd
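[A complete version of that flow might look like this — snapshot and path names are hypothetical:

   % rm reallyimportantfile
   % cd .zfs/snapshot
   % ls                       # list the available snapshots
   % cd monday                # pick one that predates the deletion
   % cp reallyimportantfile /tank/home/user/
]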

Re: [zfs-discuss] ssd pool + ssd cache ?

2010-06-07 Thread Christopher George
Thanks Garrett! 2) it is dependent on an external power source (a little wall wart provides low voltage power to the card... I don't recall the voltage offhand) 9V DC. 3) the contents of the card's DDR RAM are never flushed to non-volatile storage automatically, but require an explicit

Re: [zfs-discuss] ssd pool + ssd cache ?

2010-06-07 Thread David Magda
On Mon, June 7, 2010 12:56, Tim Cook wrote: The STEC units are what Oracle/Sun use in their 7000 series appliances, and I believe EMC and many others use them as well. When did that start? Every 7000 I've seen uses Intel drives. According to the Sun System Handbook for the 7310, the 18 GB

Re: [zfs-discuss] Deduplication and ISO files

2010-06-07 Thread Ray Van Dolson
On Fri, Jun 04, 2010 at 01:10:44PM -0700, Ray Van Dolson wrote: On Fri, Jun 04, 2010 at 01:03:32PM -0700, Brandon High wrote: On Fri, Jun 4, 2010 at 12:37 PM, Ray Van Dolson rvandol...@esri.com wrote: Makes sense. So, as someone else suggested, decreasing my block size may improve the

Re: [zfs-discuss] Homegrown Hybrid Storage

2010-06-07 Thread Miles Nordin
et == Erik Trimble erik.trim...@oracle.com writes: et With NFS-hosted VM disks, do the same thing: create a single et filesystem on the X4540 for each VM. previous posters pointed out there are unreasonable hard limits in vmware to the number of NFS mounts or iSCSI connections or

Re: [zfs-discuss] Deduplication and ISO files

2010-06-07 Thread Roy Sigurd Karlsbakk
- Ray Van Dolson rvandol...@esri.com wrote: FYI; With 4K recordsize, I am seeing 1.26x dedupe ratio between the RHEL 5.4 ISO and the RHEL 5.5 ISO file. However, it took about 33 minutes to copy the 2.9GB ISO file onto the filesystem. :) Definitely would need more RAM in this
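[The experiment described can be reproduced along these lines — a sketch with hypothetical pool, dataset, and ISO names; recordsize only affects blocks written after it is set:

   zfs create -o recordsize=4k -o dedup=on tank/isos
   cp rhel-server-5.4-x86_64-dvd.iso /tank/isos/
   cp rhel-server-5.5-x86_64-dvd.iso /tank/isos/
   zpool get dedupratio tank      # reported ratio, e.g. 1.26x
]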

Re: [zfs-discuss] ssd pool + ssd cache ?

2010-06-07 Thread Richard Jahnel
Do you lose the data if you lose that 9V feed at the same time the computer loses power?

Re: [zfs-discuss] ssd pool + ssd cache ?

2010-06-07 Thread Garrett D'Amore
On Mon, 2010-06-07 at 11:49 -0700, Richard Jahnel wrote: Do you lose the data if you lose that 9V feed at the same time the computer loses power? Yes. Hence the need for a separate UPS. - Garrett

Re: [zfs-discuss] Homegrown Hybrid Storage

2010-06-07 Thread Bob Friesenhahn
On Mon, 7 Jun 2010, Miles Nordin wrote: FC has different QoS properties than Ethernet because of the buffer credit mechanism---it can exert back-pressure all the way through the fabric. same with IB, which is HOL-blocking. This is a big deal with storage, with its large blocks of bursty

[zfs-discuss] Native ZFS for Linux

2010-06-07 Thread Brandon High
http://www.osnews.com/story/23416/Native_ZFS_Port_for_Linux Native ZFS Port for Linux posted by Thom Holwerda on Mon 7th Jun 2010 10:15 UTC, submitted by kragil Employees of Lawrence Livermore National Laboratory have ported Sun's/Oracle's ZFS natively to Linux. Linux already had a ZFS port in

Re: [zfs-discuss] Homegrown Hybrid Storage

2010-06-07 Thread Richard Elling
On Jun 7, 2010, at 11:06 AM, Miles Nordin wrote: the other difference is in the latest comstar which runs in sync-everything mode by default, AIUI. Or it does use that mode only when zvol-backed? Or something. It depends on your definition of latest. The latest OpenSolaris release is

Re: [zfs-discuss] Native ZFS for Linux

2010-06-07 Thread Fredrich Maney
Thanks for posting this, but these two sentences seem to contradict each other: Employees of Lawrence Livermore National Laboratory have ported Sun's/Oracle's ZFS natively to Linux. The ZFS Posix Layer has not been implemented yet, therefore mounting file systems is not yet possible Not to be

Re: [zfs-discuss] Homegrown Hybrid Storage

2010-06-07 Thread Garrett D'Amore
On Mon, 2010-06-07 at 13:32 -0700, Richard Elling wrote: On Jun 7, 2010, at 11:06 AM, Miles Nordin wrote: the other difference is in the latest comstar which runs in sync-everything mode by default, AIUI. Or it does use that mode only when zvol-backed? Or something. It depends on

Re: [zfs-discuss] Native ZFS for Linux

2010-06-07 Thread Brandon High
On Mon, Jun 7, 2010 at 1:47 PM, Fredrich Maney fredrichma...@gmail.com wrote: Not to be too harsh, but as long as you can't mount filesystems, it seems to just be hype/vaporware to me. It's a big step in the right direction. You can still use zvols to create ext3 filesystems, and use the zpool
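[Roughly, that zvol usage would look like this — a sketch; device paths and names are hypothetical, and the zvol device path may differ in the LLNL port:

   zpool create tank sdb
   zfs create -V 10G tank/vol0         # create a 10 GB zvol
   mkfs.ext3 /dev/zvol/tank/vol0       # put ext3 on the block device
   mount /dev/zvol/tank/vol0 /mnt
]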

Re: [zfs-discuss] Native ZFS for Linux

2010-06-07 Thread Hillel Lubman
Native ZFS for Linux Very good to see that there is such an effort in progress.

Re: [zfs-discuss] Homegrown Hybrid Storage

2010-06-07 Thread Ross Walker
On Jun 7, 2010, at 2:10 AM, Erik Trimble erik.trim...@oracle.com wrote: Comments in-line. On 6/6/2010 9:16 PM, Ken wrote: I'm looking at VMWare, ESXi 4, but I'll take any advice offered. On Sun, Jun 6, 2010 at 19:40, Erik Trimble erik.trim...@oracle.com wrote: On 6/6/2010 6:22 PM, Ken

Re: [zfs-discuss] ZFS ARC cache issue

2010-06-07 Thread Nicolas Dorfsman
When I looked for references on the ARC freeing algorithm, I did find some lines of code talking about freeing ARC when memory is under pressure. Nice... but what counts as memory under pressure in kernel terms? Jumping from C code to blogs to docs, I went back
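[For anyone following along, the ARC's current and target sizes can be watched from userland — a sketch using Solaris kstat, instance 0 assumed:

   # Current ARC size and its target/maximum, in bytes
   kstat -p zfs:0:arcstats:size zfs:0:arcstats:c zfs:0:arcstats:c_max
]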

[zfs-discuss] NOTICE: spa_import_rootpool: error 5

2010-06-07 Thread Mark S Durney
IHAC (I have a customer) who has an x4500 (x86 box) with a ZFS root filesystem. They installed patches today, the latest Solaris 10 x86 recommended patch cluster, and the patching seemed to complete successfully. Then, when they tried to reboot the box, the machine would not boot. They get the following error

Re: [zfs-discuss] NOTICE: spa_import_rootpool: error 5

2010-06-07 Thread Pablo Méndez Hernández
Hi Mark: On Mon, Jun 7, 2010 at 23:21, Mark S Durney mark.dur...@oracle.com wrote: IHAC who has an x4500 (x86 box) with a ZFS root filesystem. They installed patches today, the latest Solaris 10 x86 recommended patch cluster, and the patching seemed to complete successfully. Then when

[zfs-discuss] Drive showing as removed

2010-06-07 Thread besson3c
Hello, I have a drive that was part of the pool showing up as removed. I made no changes to the machine, and there are no errors being displayed, which is rather weird:

# zpool status nm
  pool: nm
 state: DEGRADED
 scrub: none requested
config:

        NAME        STATE     READ WRITE
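[Typical next steps in this situation — not from the thread; the device name is hypothetical:

   zpool status -v nm        # identify which device shows REMOVED
   zpool online nm c2t5d0    # attempt to bring the device back online
   zpool clear nm            # clear any lingering error counters
]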

[zfs-discuss] General help with understanding ZFS performance bottlenecks

2010-06-07 Thread besson3c
Hello, I'm wondering if somebody can kindly direct me to a sort of newbie way of assessing whether my ZFS pool performance is a bottleneck that can be improved upon, and/or whether I ought to invest in an SSD ZIL mirrored pair? I'm a little confused by what the output of iostat, fsstat, the
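[Common starting points with the tools mentioned — a sketch; the pool name is hypothetical:

   zpool iostat -v tank 5    # per-vdev bandwidth and IOPS, 5 s intervals
   fsstat zfs 5              # VFS-level ZFS operation rates
   iostat -xn 5              # per-device service times and %busy
]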

Re: [zfs-discuss] ssd pool + ssd cache ?

2010-06-07 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Garrett D'Amore On Mon, 2010-06-07 at 11:49 -0700, Richard Jahnel wrote: Do you lose the data if you lose that 9V feed at the same time the computer loses power? Yes. Hence the need

Re: [zfs-discuss] Homegrown Hybrid Storage

2010-06-07 Thread Ken
Everyone, thank you for the comments, you've given me lots of great info to research further. On Mon, Jun 7, 2010 at 15:57, Ross Walker rswwal...@gmail.com wrote: On Jun 7, 2010, at 2:10 AM, Erik Trimble erik.trim...@oracle.com wrote: Comments in-line. On 6/6/2010 9:16 PM, Ken wrote: I'm

[zfs-discuss] ZFS disks hitting 100% busy

2010-06-07 Thread Gary Mills
Our e-mail server started to slow down today. One of the disk devices is frequently at 100% usage. The heavy writes seem to cause reads to run quite slowly. In the statistics below, `c0t0d0' is UFS, containing the / and /var slices. `c0t1d0' is ZFS, containing /var/log/syslog, a couple of
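[If the heavy writes turn out to be synchronous (mail spools often are), one possible remedy is a dedicated log device, along these lines — a sketch; the pool and device names are hypothetical:

   zpool add mailpool log c0t2d0   # move the ZIL onto a separate device
]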

Re: [zfs-discuss] NOTICE: spa_import_rootpool: error 5

2010-06-07 Thread Bob Friesenhahn
On Mon, 7 Jun 2010, Mark S Durney wrote: The customer states that he backed out the kernel patch 142901-12 and then the x4500 boots successfully. Has anyone seen this? It almost seems like the ZFS root pool is not being seen upon reboot. You should find out from your customer what kernel

Re: [zfs-discuss] Homegrown Hybrid Storage

2010-06-07 Thread David Magda
On Jun 7, 2010, at 16:32, Richard Elling wrote: Please don't confuse Ethernet with IP. Ethernet has no routing and no back-off other than that required for the link. Not entirely accurate going forward. IEEE 802.1Qau defines an end-to-end congestion notification management system:

Re: [zfs-discuss] zfs send/receive as backup tool

2010-06-07 Thread Toyama Shunji
Thank you David, thank you Cindy. Certainly I feel it is difficult, but is it logically impossible to write a filter program to do that, with reasonable memory use?

Re: [zfs-discuss] Drive showing as removed

2010-06-07 Thread Richard Elling
On Jun 7, 2010, at 4:50 PM, besson3c wrote: Hello, I have a drive that was part of the pool showing up as removed. I made no changes to the machine, and there are no errors being displayed, which is rather weird: # zpool status nm pool: nm state: DEGRADED scrub: none requested

Re: [zfs-discuss] zfs send/receive as backup tool

2010-06-07 Thread Khyron
To answer the question you asked here... the answer is no. There have been MANY discussions of this in the past. Here's the long thread I started back in May about backup strategies for ZFS pools and file systems: http://mail.opensolaris.org/pipermail/zfs-discuss/2010-March/038678.html But

Re: [zfs-discuss] Snapshots, txgs and performance

2010-06-07 Thread Arne Jansen
thomas wrote: Very interesting. This could be useful for a number of us. Would you be willing to share your work? No problem. I'll contact you off-list.