On Dec 29, 2011, at 1:29 PM, Nico Williams wrote:
> On Thu, Dec 29, 2011 at 2:06 PM, sol wrote:
>> Richard Elling wrote:
>>> many of the former Sun ZFS team
>>> regularly contribute to ZFS through the illumos developer community.
>>
>> Does this mean tha
On Dec 27, 2011, at 7:46 PM, Tim Cook wrote:
> On Tue, Dec 27, 2011 at 9:34 PM, Nico Williams wrote:
> On Tue, Dec 27, 2011 at 8:44 PM, Frank Cusack wrote:
> > So with a de facto fork (illumos) now in place, is it possible that two
> > zpools will report the same version yet be incompatible acros
On Dec 21, 2011, at 11:45 AM, Gareth de Vaux wrote:
> Hi guys, after a scrub my raidz array status showed:
>
> # zpool status
> pool: pool
> state: ONLINE
> status: One or more devices has experienced an unrecoverable error. An
> attempt was made to correct the error. Applications are u
comments below…
On Dec 18, 2011, at 6:53 AM, Jan-Aage Frydenbø-Bruvoll wrote:
> Dear List,
>
> I have a storage server running OpenIndiana with a number of storage
> pools on it. All the pools' disks come off the same controller, and
> all pools are backed by SSD-based l2arc and ZIL. Performance
On Dec 11, 2011, at 2:59 PM, Mertol Ozyoney wrote:
> Not exactly. What is dedup'ed is the stream only, which is in fact not very
> efficient. Real dedup aware replication is taking the necessary steps to
> avoid sending a block that exists on the other storage system.
These exist outside of ZFS (e
On Dec 4, 2011, at 8:50 AM, Ryan Wehler wrote:
>>
>> A certification does not mean that any specific implementation operates
>> without errors. A failed part,
>> noisy environment, or other influences will affect any specific
>> implementation.
>
> Would it not be more prudent to re-run the tes
On Dec 3, 2011, at 9:32 PM, Ryan Wehler wrote:
> On Dec 3, 2011, at 11:18 PM, Richard Elling wrote:
>
>> On Dec 3, 2011, at 9:02 PM, Ryan Wehler wrote:
>>>
>>> On Dec 3, 2011, at 10:31 PM, Richard Elling wrote:
>>>
>>>> On Dec 3, 2011, at
On Dec 3, 2011, at 9:02 PM, Ryan Wehler wrote:
>
> On Dec 3, 2011, at 10:31 PM, Richard Elling wrote:
>
>> On Dec 3, 2011, at 7:36 PM, Ryan Wehler wrote:
>>
>>> Hi Richard,
>>> Thanks for getting back to me.
>>>
>>>
>>> On
On Dec 3, 2011, at 7:36 PM, Ryan Wehler wrote:
> Hi Richard,
> Thanks for getting back to me.
>
>
> On Dec 3, 2011, at 9:03 PM, Richard Elling wrote:
>
>> On Dec 1, 2011, at 5:08 PM, Ryan Wehler wrote:
>>
>>> During the diagnostics of my SAN fai
more below…
On Dec 1, 2011, at 8:21 PM, Erik Trimble wrote:
> On 12/1/2011 6:44 PM, Ragnar Sundblad wrote:
>> Thanks for your answers!
>>
>> On 2 dec 2011, at 02:54, Erik Trimble wrote:
>>
>>> On 12/1/2011 4:59 PM, Ragnar Sundblad wrote:
I am sorry if these are dumb questions. If there are
On Dec 1, 2011, at 5:08 PM, Ryan Wehler wrote:
> During the diagnostics of my SAN failure last week we thought we had seen a
> backplane failure due to high error counts with 'lsiutil'. However, even
> with a new backplane and ruling out failed cards (MPXIO or singular) or bad
> cables I'm sti
On Nov 30, 2011, at 6:06 AM, Sašo Kiselkov wrote:
> On 11/30/2011 02:40 PM, Edmund White wrote:
>> Absolutely.
>>
>> I'm using a fully-populated D2700 with an HP ProLiant DL380 G7 server
>> running NexentaStor.
>>
>> On the HBA side, I used the LSI 9211-8i 6G controllers for the server's
>> inte
Hi Matt,
On Nov 22, 2011, at 7:39 PM, Matt Breitbach wrote:
> So I'm looking at files on my ZFS volume that are compressed, and I'm
> wondering to myself, "self, are the values shown here the size on disk, or
> are they the pre-compressed values". Google gives me no great results on
> the first
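One way to check, assuming compression is enabled on the dataset: on ZFS, ls -l reports the file's logical (pre-compression) length while du reports the blocks actually allocated on disk, so comparing the two shows the savings. The path below is only an example:

   # logical (uncompressed) length in bytes
   ls -l /tank/data/example.log
   # space actually allocated after compression
   du -h /tank/data/example.log
   # dataset-wide ratio
   zfs get compressratio tank/data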
On Nov 16, 2011, at 7:35 AM, David Dyer-Bennet wrote:
>
> On Tue, November 15, 2011 17:05, Anatoly wrote:
>> Good day,
>>
>> The speed of send/recv is around 30-60 MBytes/s for initial send and
>> 17-25 MBytes/s for incremental. I have seen lots of setups with 1 disk
>> to 100+ disks in pool. Bu
tip below…
On Nov 13, 2011, at 3:24 AM, Pasi Kärkkäinen wrote:
> On Sat, Nov 12, 2011 at 10:08:04AM -0800, Richard Elling wrote:
>>
>> On Nov 12, 2011, at 8:31 AM, Pasi Kärkkäinen wrote:
>>
>>> On Sat, Nov 12, 2011 at 08:15:31AM -0500, David Magda wrote:
>>&
On Nov 12, 2011, at 8:31 AM, Pasi Kärkkäinen wrote:
> On Sat, Nov 12, 2011 at 08:15:31AM -0500, David Magda wrote:
>> On Nov 12, 2011, at 00:55, Richard Elling wrote:
>>
>>> Better than ?
>>> If the disks advertise 512 bytes, the only way around it is with
On Nov 10, 2011, at 7:47 PM, David Magda wrote:
> On Nov 10, 2011, at 18:41, Daniel Carosone wrote:
>
>> On Tue, Oct 11, 2011 at 08:17:55PM -0400, John D Groenveld wrote:
>>> Under both Solaris 10 and Solaris 11x, I receive the evil message:
>>> | I/O request is not aligned with 4096 disk sector
FWIW, we recommend disabling C-states in the BIOS for NexentaStor systems.
C-states are evil.
-- richard
On Oct 31, 2011, at 9:46 PM, Lachlan Mulcahy wrote:
> Hi All,
>
>
> We did not have the latest firmware on the HBA - through a lot of pain I
> managed to boot into an MS-DOS disk and run t
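Related to the C-state recommendation above: before (or after) changing the BIOS setting, powertop gives a quick view of how much time the CPUs spend in each C-state. The cpu_info kstat statistic names below are an assumption and may not exist on every build:

   # interactive view of C-state residency
   powertop
   # assumed kstat statistic names -- verify on your release
   kstat -m cpu_info -s supported_max_cstates
   kstat -m cpu_info -s current_cstate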
On Oct 26, 2011, at 7:56 PM, weiliam.hong wrote:
>
> Questions:
> 1. Why do the SG SAS drives degrade to <10 MB/s while the WD RE4 remain consistent
> at >100MB/s after 10-15 min?
> 2. Why does the SG SAS drive show only 70+ MB/s when the published figures
> are > 100MB/s? (refer here)
Are the SAS driv
On Oct 27, 2011, at 11:04 PM, Mark Wolek wrote:
> Still kicking around this idea and didn’t see it addressed in any of the
> threads before the forum closed.
>
> If one made an all ssd pool, would a log/cache drive just slow you down?
> Would zil slow you down?
In general, a slog makes sens
On Oct 18, 2011, at 6:35 PM, David Magda wrote:
> If we've found one bad disk, what are our options?
Live with it or replace it :-)
-- richard
--
ZFS and performance consulting
http://www.RichardElling.com
VMworld Copenhagen, October 17-20
OpenStorage Summit, San Jose, CA, October 24-27
LISA
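For the "replace it" case, the usual sequence is a zpool replace followed by watching the resilver; device names here are placeholders:

   # swap in a different disk
   zpool replace tank c1t3d0 c4t0d0
   # or, after physically replacing the disk in the same slot
   zpool replace tank c1t3d0
   # watch the resilver progress
   zpool status tank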
On Oct 18, 2011, at 5:21 PM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Tim Cook
>>
>> I had and have redundant storage, it has *NEVER* automatically fixed
>> it. You're the first person I've heard that has
On Oct 15, 2011, at 12:31 PM, Toby Thain wrote:
> On 15/10/11 2:43 PM, Richard Elling wrote:
>> On Oct 15, 2011, at 6:14 AM, Edward Ned Harvey wrote:
>>
>>>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>>>> boun...@opensolaris.org] On Beha
On Oct 16, 2011, at 10:22 AM, Jesus Cea wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> On 16/10/11 18:49, Jesus Cea wrote:
>>> These are special on disk blocks for storing file system metadata
>>> attributes when there isn't enough space in the bonus buffer
>>> area of the on disk v
On Oct 16, 2011, at 3:56 AM, Jim Klimov wrote:
> 2011-09-29 17:15, Zaeem Arshad wrote:
>>
>>
>> On Thu, Sep 29, 2011 at 11:33 AM, Garrett D'Amore
>> wrote:
>>
>>
>> I think he means, resilver faster.
>>
>> SSDs can be driven harder, and have more IOPs so we can hit them harder with
>> less
On Oct 15, 2011, at 6:14 AM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Tim Cook
>>
>> In my example - probably not a completely clustered FS.
>> A clustered ZFS pool with datasets individually owned by
>> sp
On Oct 14, 2011, at 7:02 PM, John D Groenveld wrote:
> As a sanity check, I connected the drive to a Windows 7 installation.
> I was able to partition, create an NTFS volume on it, eject and
> remount it.
>
> I also tried creating the zpool on my Solaris 10 system, exporting
> and trying to impor
On Oct 9, 2011, at 10:28 AM, Jim Klimov wrote:
> Hello all,
>
> ZFS developers have for a long time stated that ZFS is not intended,
> at least not in near term, for clustered environments (that is, having
> a pool safely imported by several nodes simultaneously). However,
> many people on forums
On Oct 11, 2011, at 2:03 PM, Frank Van Damme wrote:
> 2011/10/11 Richard Elling :
>>> ZFS Tunables (/etc/system):
>>> set zfs:zfs_arc_min = 0x20
>>> set zfs:zfs_arc_meta_limit=0x1
>>
>> It is not uncommon to tune arc meta limit
On Oct 6, 2011, at 5:19 AM, Frank Van Damme wrote:
> Hello,
>
> quick and stupid question: I'm breaking my head over how to tune
> zfs_arc_min on a running system. There must be some magic word to pipe
> into mdb -kw but I forgot it. I tried /etc/system but it's still at the
> old value after re
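For the read side, the ::arc dcmd in mdb prints the live ARC parameters; the persistent route is still /etc/system. Writing the live value with mdb -kw is possible but the variable to poke is release-dependent, so only the safe parts are sketched here (the size is an example):

   # show current ARC sizing (c, c_min, c_max, size, ...)
   echo "::arc" | mdb -k
   # persistent setting, takes effect at next boot -- in /etc/system:
   #   set zfs:zfs_arc_min = 0x100000000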
On Oct 11, 2011, at 2:25 AM, KES wrote:
> Hi
>
> I have the next configuration: 3 disk 1Gb in raid0
> all disks in zfs pool
we recommend protecting the data. Friends don't let friends use raid-0.
nit: We tend to refer to disk size in bytes (B), not bits (b)
> freespace on so raid is 1.5Gb and
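For three disks of that size, a protected layout is a one-liner; device names below are illustrative:

   # single-parity raidz instead of raid-0 (usable space of roughly two disks)
   zpool create tank raidz c0t0d0 c0t1d0 c0t2d0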
[exposed organs below…]
On Oct 7, 2011, at 8:25 PM, Daniel Carosone wrote:
> On Tue, Oct 04, 2011 at 09:28:36PM -0700, Richard Elling wrote:
>> On Oct 4, 2011, at 4:14 PM, Daniel Carosone wrote:
>>
>>> I sent it twice, because something strange happened on the first se
On Oct 4, 2011, at 4:14 PM, Daniel Carosone wrote:
> I sent a zvol from host a, to host b, twice. Host b has two pools,
> one ashift=9, one ashift=12. I sent the zvol to each of the pools on
> b. The original source pool is ashift=9, and an old revision (2009_06
> because it's still running xen
On Sep 27, 2011, at 6:30 PM, Fajar A. Nugraha wrote:
> On Wed, Sep 28, 2011 at 8:21 AM, Edward Ned Harvey
>> So again: Not a problem if you're making your pool out of SSD's.
>
> Big problem if your system is already using most of the available IOPSduring
> normal operation.
Resilvers are thrott
On Sep 20, 2011, at 12:21 AM, Markus Kovero wrote:
> Hi, I was wondering do you guys have any recommendations as replacement for
> Intel X25-E as it is being EOL’d? Mainly as for log device.
Can you rank your priorities:
+ cost/IOPS
+ cost
+ latency
+ predictable l
more below…
On Sep 19, 2011, at 9:51 AM, Fred Liu wrote:
>>
>> No, but your pool is not imported.
>>
>
> YES. I see.
>> and look to see which disk is missing"?
>>
>> The label, as displayed by "zdb -l" contains the hierarchy of the
>> expected pool config.
>> The contents are used to build th
On Sep 19, 2011, at 9:16 AM, Fred Liu wrote:
>>
>> For each disk, look at the output of "zdb -l /dev/rdsk/DISKNAMEs0".
>> 1. Confirm that each disk provides 4 labels.
>> 2. Build the vdev tree by hand and look to see which disk is missing
>>
>> This can be tedious and time consuming.
>
> Do I ne
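The per-disk check described above looks roughly like this; the device name is only an example:

   # dump the four ZFS labels from one disk, repeat for each device
   zdb -l /dev/rdsk/c0t0d0s0

Each of LABEL 0 through LABEL 3 should be present and show the same pool GUID and vdev tree; a disk with missing or stale labels is the likely culprit.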
rd
>
> Thanks.
>
> Fred
>
>> -Original Message-
>> From: Fred Liu
>> Sent: Monday, September 19, 2011 22:28
>> To: 'Richard Elling'
>> Cc: zfs-discuss@opensolaris.org
>> Subject: RE: [zfs-discuss] remove wrongly added device from zpool
On Sep 19, 2011, at 12:10 AM, Fred Liu wrote:
> Hi,
>
> For my carelessness, I added two disks into a raid-z2 zpool as normal data
> disk, but in fact
> I want to make them as zil devices.
You don't mention which OS you are using, but for the past 5 years of [Open]Solaris releases, the system
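For reference, attaching disks as intent-log devices (rather than data vdevs) uses the log keyword on zpool add, and a mirrored slog is the common recommendation; device names are placeholders:

   # add the two disks as a mirrored separate log (slog)
   zpool add tank log mirror c2t0d0 c2t1d0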
Question below…
On Sep 14, 2011, at 12:07 PM, Paul Kraus wrote:
> On Wed, Sep 14, 2011 at 2:30 PM, Richard Elling
> wrote:
>
>> I don't recall a bug with that description. However, there are several bugs
>> that
>> relate to how the internals work that were
On Sep 14, 2011, at 9:50 AM, Paul Kraus wrote:
>I know there was (is ?) a bug where a zfs destroy of a large
> snapshot would run a system out of kernel memory, but searching the
> list archives and on defects.opensolaris.org I cannot find it. Could
> someone here explain the failure mechanism
On Sep 11, 2011, at 3:41 AM, Matt Harrison wrote:
> Hi list,
>
> I've got a system with 3 WD and 3 seagate drives. Today I got an email that
> zpool status indicated one of the seagate drives as REMOVED.
The removed state can be the result of a transport issue. If this is a Solaris-based OS,
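A few commands that help separate a transport problem from a failing drive on a Solaris-based system; none of this is specific to the poster's setup:

   # per-device error counters (transport vs. media errors)
   iostat -En
   # anything the fault manager has diagnosed or logged
   fmadm faulty
   fmdump -eV | tail
   # state of the controller/slot attachment points
   cfgadm -al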
On Sep 7, 2011, at 2:05 AM, Roy Sigurd Karlsbakk wrote:
>> The common use for desktop drives is having a single disk without
>> redundancy.. If a sector is feeling bad, it's better if it tries a bit
>> harder to recover it than just say "blah, there was a bit of dirt in
>> the corner.. I don't fee
On Sep 6, 2011, at 9:01 PM, Freddie Cash wrote:
> Just curious if anyone has looked into the relationship between zpool dedupe,
> zfs zend dedupe, memory use, and network throughput.
>
Yes.
> For example, does 'zfs send -D' use the same DDT as the pool?
>
No.
> Or does it require more memory
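For context, stream deduplication is requested per send with the -D flag and builds its own table for that stream, independent of the pool's DDT; a minimal sketch with example names:

   # deduplicated replication stream; the receiver just runs zfs recv
   zfs send -D tank/data@snap1 | ssh backuphost zfs recv -d backup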
On Aug 29, 2011, at 2:07 PM, Roy Sigurd Karlsbakk wrote:
> Hi all
>
> It seems recent WD drives that aren't "Raid edition" can cause rather a lot
> of problems on RAID systems. We have a few machines with LSI controllers
> (6801/6081/9201) and we're seeing massive errors occurring. The usual pat
Hi Gary,
We use this method to implement NexentaStor HA-Cluster and, IIRC,
Solaris Cluster uses shared cachefiles, too. More below...
On Aug 29, 2011, at 11:13 AM, Gary Mills wrote:
> I have a system with ZFS root that imports another zpool from a start
> method. It uses a separate cache file f
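The shared-cachefile approach generally amounts to keeping the pool out of the default /etc/zfs/zpool.cache and importing it against a cachefile the cluster framework controls; the path below is illustrative:

   # point the pool at a non-default cachefile
   zpool set cachefile=/etc/cluster/zpool.cache tank
   # on the node taking over, import using that cachefile
   zpool import -c /etc/cluster/zpool.cache tank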
On Aug 28, 2011, at 5:55 AM, Edward Ned Harvey wrote:
> What do you expect to happen if you're in progress doing a zfs send, and then
> simultaneously do a zfs destroy of the snapshot you're sending?
It depends on the release. For modern implementations, a hold is placed on the snapshot and it
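The holds themselves can be listed and managed with the zfs hold family; whether the temporary hold from a send shows up in the listing depends on the release. Names below are examples:

   # list holds on a snapshot
   zfs holds tank/data@snap1
   # place and release an explicit hold
   zfs hold keep tank/data@snap1
   zfs release keep tank/data@snap1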
On Aug 26, 2011, at 4:02 PM, Brandon High wrote:
> On Fri, Aug 12, 2011 at 6:34 PM, Tom Tang wrote:
>> Suppose I want to build a 100-drive storage system, wondering if there is
>> any disadvantages for me to setup 20 arrays of HW RAID0 (5 drives each),
>> then setup ZFS file system on these 20
On Aug 15, 2011, at 11:17 PM, Ding Honghui wrote:
> My solaris storage hangs. I login to the console and there is messages[1]
> display on the console.
> I can't login into the console and seems the IO is totally blocked.
>
> The system is solaris 10u8 on Dell R710 with disk array Dell MD3000. 2
On Aug 11, 2011, at 1:16 PM, Ray Van Dolson wrote:
> On Thu, Aug 11, 2011 at 01:10:07PM -0700, Ian Collins wrote:
>> On 08/12/11 08:00 AM, Ray Van Dolson wrote:
>>> Are any of you using the Intel 320 as ZIL? It's MLC based, but I
>>> understand its wear and performance characteristics can be bum
On Aug 8, 2011, at 9:01 AM, John Martin wrote:
> Is there a list of zpool versions for development builds?
>
> I found:
>
> http://blogs.oracle.com/stw/entry/zfs_zpool_and_file_system
Since Oracle no longer shares that info, you might look inside the firewall :-)
>
> where it says Solaris 11
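Outside the firewall, the versions a given build supports (with one-line descriptions) can still be listed from the software itself:

   # every pool version this zpool binary understands
   zpool upgrade -v
   # the on-disk version of an existing pool
   zpool get version tank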
On Aug 8, 2011, at 4:01 PM, Peter Jeremy wrote:
> On 2011-Aug-08 17:12:15 +0800, Andrew Gabriel
> wrote:
>> periodic scrubs to cater for this case. I do a scrub via cron once a
>> week on my home system. Having almost completely filled the pool, this
>> was taking about 24 hours. However, now
On Aug 6, 2011, at 9:56 AM, Roy Sigurd Karlsbakk wrote:
>> In my experience, SATA drives behind SAS expanders just don't work.
>> They "fail" in the manner you
>> describe, sooner or later. Use SAS and be happy.
>
> Funny thing is Hitachi and Seagate drives work stably, whereas WD drives tend
>
On Aug 6, 2011, at 9:45 AM, Roy Sigurd Karlsbakk wrote:
> Hi all
>
> We have a few servers with WD Black (and some green) drives on Super Micro
> systems. We've seen both drives work well with direct attach, but with LSI
> controllers and Super Micro's SAS expanders, well, that's another story.
On Aug 5, 2011, at 6:14 AM, Darren J Moffat wrote:
> On 08/05/11 13:11, Edward Ned Harvey wrote:
>> After a certain rev, I know you can set the "sync" property, and it
>> takes effect immediately, and it's persistent across reboots. But that
>> doesn't apply to Solaris 10.
>>
>> My question: Is
On Aug 1, 2011, at 2:16 PM, Neil Perrin wrote:
> In general the blogs conclusion is correct . When file systems get full there
> is
> fragmentation (happens to all file systems) and for ZFS the pool uses gang
> blocks of smaller blocks when there are insufficient large blocks.
> However, the ZIL
On Jul 31, 2011, at 8:20 AM, Eugen Leitl wrote:
> On Sun, Jul 31, 2011 at 05:19:07AM -0700, Erik Trimble wrote:
>
>>
>> Yes. You can attach a ZIL or L2ARC device anytime after the pool is created.
>
> Excellent.
:-)
>
>> Also, I think you want an Intel 320, NOT the 311, for use as a ZIL. T
Thanks Jens,
I have a vdbench profile and script that will run the new SNIA Solid State Storage (SSS) Performance Test Suite (PTS). I'd be happy to share if anyone is interested.
-- richard
On Jul 28, 2011, at 7:10 AM, Jens Elkner wrote:
> Hi,
>
> Roy Sigurd Karlsbakk wrote:
>> Crucial RealSSD
On Jul 28, 2011, at 4:55 AM, Koopmann, Jan-Peter wrote:
> Hi,
>
> my system is running oi148 on a super micro X8SIL-F board. I have two pools
> (2 disc mirror, 4 disc RAIDZ) with RAID level SATA drives. (Hitachi HUA72205
> and SAMSUNG HE103UJ). The system runs as expected however every few day
On Jul 21, 2011, at 4:08 PM, Gordon Ross wrote:
> I'm looking to upgrade the disk in a high-end laptop (so called
> "desktop replacement" type). I use it for development work,
> runing OpenIndiana (native) with lots of ZFS data sets.
>
> These "hybrid" drives look kind of interesting, i.e. for a
On Jul 7, 2011, at 3:33 PM, nathan wrote:
> On 7/07/2011 3:12 PM, X4 User wrote:
>> I am bumping this thread because I too have the same question ... can I put
>> modern 3TB disks (hitachi deskstars) into an old x4500 ?
X4500 uses the LSI 1068e. AFAIK, that HBA does not support disks > 2TB for a
Thomas,
On Jul 4, 2011, at 9:53 AM, Thomas Nau wrote:
> Richard
>
>
> On 07/04/2011 03:58 PM, Richard Elling wrote:
>> On Jul 4, 2011, at 6:42 AM, Lanky Doodle wrote:
>>
>>> Hiya,
>>>
>>> I've been doing a lot of research surround
On Jul 4, 2011, at 6:42 AM, Lanky Doodle wrote:
> Hiya,
>
> I've been doing a lot of research surrounding this and ZFS, including some
> posts on here, though I am still left scratching my head.
>
> I am planning on using slow RPM drives for a home media server, and it's
> these that seem to
On Jul 2, 2011, at 6:39 AM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>>
>> Conclusion: Yes it matters to enable the write_cache.
>
> Now the question of whether or not it matters to us
On Jun 24, 2011, at 5:29 AM, Sašo Kiselkov wrote:
> Hi All,
>
> I'd like to ask about whether there is a method to enforce a certain txg
> commit frequency on ZFS. I'm doing a large amount of video streaming
> from a storage pool while also slowly continuously writing a constant
> volume of data
On Jun 23, 2011, at 1:13 PM, Kitty Tam wrote:
> I wonder if there is a limit on the size of disk to mount for Solaris.
> I was able to run "format" on a WD 1TB disk several months ago.
> The diff is that it's a 2.5TB one this time.
>
2TB limit for 32-bit Solaris. If you hit this, then you'll fin
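Checking whether the running kernel is 32-bit or 64-bit takes one command:

   # prints e.g. "64-bit amd64 kernel modules" or "32-bit i386 kernel modules"
   isainfo -kv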
On Jun 21, 2011, at 8:18 AM, Garrett D'Amore wrote:
>>
>> Does that also go through disksort? Disksort doesn't seem to have any
>> concept of priorities (but I haven't looked in detail where it plugs in to
>> the whole framework).
>>
>>> So it might make better sense for ZFS to keep the disk qu
On Jun 15, 2011, at 1:33 PM, Nomen Nescio wrote:
> Has there been any change to the server hardware with respect to number of
> drives since ZFS has come out? Many of the servers around still have an even
> number of drives (2, 4) etc. and it seems far from optimal from a ZFS
> standpoint. All you
On Jun 20, 2011, at 6:31 AM, Gary Mills wrote:
> On Sun, Jun 19, 2011 at 08:03:25AM -0700, Richard Elling wrote:
>> On Jun 19, 2011, at 6:28 AM, Edward Ned Harvey wrote:
>>>> From: Richard Elling [mailto:richard.ell...@gmail.com]
>>>> Sent: Saturday, June 18, 2011
On Jun 19, 2011, at 6:04 AM, Andrew Gabriel wrote:
> Richard Elling wrote:
>> Actually, all of the data I've gathered recently shows that the number of
>> IOPS does not significantly increase for HDDs running random workloads.
>> However the response time does :-( My
On Jun 19, 2011, at 6:28 AM, Edward Ned Harvey wrote:
>> From: Richard Elling [mailto:richard.ell...@gmail.com]
>> Sent: Saturday, June 18, 2011 7:47 PM
>>
>> Actually, all of the data I've gathered recently shows that the number of
>> IOPS does not significant
On Jun 16, 2011, at 8:05 PM, Daniel Carosone wrote:
> On Thu, Jun 16, 2011 at 10:40:25PM -0400, Edward Ned Harvey wrote:
>>> From: Daniel Carosone [mailto:d...@geek.com.au]
>>> Sent: Thursday, June 16, 2011 10:27 PM
>>>
>>> Is it still the case, as it once was, that allocating anything other
>>>
On Jun 17, 2011, at 4:07 PM, MasterCATZ wrote:
>
>>
> ok what is the Point of the RESERVE
>
> When we cannot even delete a file when there is no space left !!!
>
> if they are going to have a RESERVE they should make it a little smarter and
> maybe have the FS use some of that free space so w
On Jun 16, 2011, at 3:36 PM, Sven C. Merckens wrote:
> Hi roy, Hi Dan,
>
> many thanks for Your responses.
>
> I am using napp-it to control the OpenSolaris-Systems
> The napp-it-interface shows a dedup factor of 1.18x on System 1 and 1.16x on
> System 2.
You're better off disabling dedup for
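Turning dedup off is a one-line change; note it only affects newly written blocks, so existing deduplicated data stays in the DDT until it is rewritten or freed:

   zfs set dedup=off tank/data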
more below...
On Jun 16, 2011, at 2:27 AM, Fred Liu wrote:
> Fixing a typo in my last thread...
>
>> -Original Message-
>> From: Fred Liu
>> Sent: Thursday, June 16, 2011 17:22
>> To: 'Richard Elling'
>> Cc: Jim Klimov; zfs-discuss@opensolaris.or
On Jun 17, 2011, at 12:55 AM, Lanky Doodle wrote:
> Thanks Richard.
>
> How does ZFS enumerate the disks? In terms of listing them does it do them
> logically, i.e;
>
> controller #1 (motherboard)
>|
>|--- disk1
>|--- disk2
> controller #3
>|--- disk3
>|--- disk4
>|--- d
On Jun 16, 2011, at 12:09 AM, Simon Walter wrote:
> On 06/16/2011 09:09 AM, Erik Trimble wrote:
>> We had a similar discussion a couple of years ago here, under the title "A
>> Versioning FS". Look through the archives for the full discussion.
>>
>> The jist is that application-level versioning
On Jun 16, 2011, at 2:07 AM, Lanky Doodle wrote:
> Thanks guys.
>
> I have decided to bite the bullet and change to 2TB disks now rather than go
> through all the effort using 1TB disks and then maybe changing in 6-12 months
> time or whatever. The price difference between 1TB and 2TB disks is
my point exactly, more below...
On Jun 15, 2011, at 8:20 PM, Fred Liu wrote:
>> This is only true if the pool is not protected. Please protect your
>> pool with mirroring or raidz*.
>> -- richard
>>
>
> Yes. We use a raidz2 without any spares. In theory, with one disk broken,
> there should be
On Jun 15, 2011, at 4:45 AM, Darren J Moffat wrote:
> On 06/15/11 12:29, Edward Ned Harvey wrote:
>>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>>> boun...@opensolaris.org] On Behalf Of Richard Elling
>>>
>>> That would suck worse.
&
On Jun 15, 2011, at 4:22 AM, Pawel Jakub Dawidek wrote:
> On Tue, Jun 14, 2011 at 11:49:56AM -0700, Bill Sommerfeld wrote:
>> On 06/14/11 04:15, Rasmus Fauske wrote:
>>> I want to replace some slow consumer drives with new edc re4 ones but
>>> when I do a replace it needs to scan the full pool and
On Jun 15, 2011, at 2:44 AM, Fred Liu wrote:
>> -Original Message-
>> From: Richard Elling [mailto:richard.ell...@gmail.com]
>> Sent: Wednesday, June 15, 2011 14:25
>> To: Fred Liu
>> Cc: Jim Klimov; zfs-discuss@opensolaris.org
>> Subject: Re: [zfs-discuss] zfs
On Jun 14, 2011, at 10:31 PM, Fred Liu wrote:
>
>> -Original Message-
>> From: Richard Elling [mailto:richard.ell...@gmail.com]
>> Sent: Wednesday, June 15, 2011 11:59
>> To: Fred Liu
>> Cc: Jim Klimov; zfs-discuss@opensolaris.org
>> Subject: Re: [zfs-discus
On Jun 14, 2011, at 2:36 PM, Fred Liu wrote:
> What is the difference between warm spares and hot spares?
Warm spares are connected and powered. Hot spares are connected,
powered, and automatically brought online to replace a "failed" disk.
The reason I'm leaning towards warm spares is because I
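Designating a hot spare is a single zpool add; a warm spare is just a connected, powered disk that is not assigned to any pool and is swapped in by hand. Device names are placeholders:

   # make c3t0d0 a hot spare for the pool
   zpool add tank spare c3t0d0
   # the warm-spare equivalent, run manually after a failure
   zpool replace tank c1t2d0 c3t0d0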
On Jun 14, 2011, at 10:25 AM, Simon Walter wrote:
> I'm looking to create a NAS with versioning for non-technical users (Windows
> and Mac). I want the users to be able to simply save a file, and a
> revision/snapshot is created. I could use a revision control software like
> SVN (it has autove
On Jun 14, 2011, at 10:38 AM, Jim Klimov wrote:
> 2011-06-14 19:23, Richard Elling wrote:
>> On Jun 14, 2011, at 5:18 AM, Jim Klimov wrote:
>>
>>> Hello all,
>>>
>>> Is there any sort of a "Global Hot Spare" feature in ZFS,
>>> i
On Jun 14, 2011, at 5:18 AM, Jim Klimov wrote:
> Hello all,
>
> Is there any sort of a "Global Hot Spare" feature in ZFS,
> i.e. that one sufficiently-sized spare HDD would automatically
> be pulled into any faulted pool on the system?
Yes. See the ZFS Admin Guide section on Designating Hot Spa
On Jun 12, 2011, at 1:53 PM, James Sutherland wrote:
> A reboot and then another scrub fixed this. Reboot made no difference. So
> after the reboot I started another scrub and now the pool shows clean.
>
> So the sequence was like this:
> 1. zpool reported ioerrors after a scrub with an erro
On Jun 12, 2011, at 5:04 PM, Edmund White wrote:
> On 6/12/11 6:18 PM, "Jim Klimov" wrote:
>> 2011-06-12 23:57, Richard Elling wrote:
>>>
>>> How long should it wait? Before you answer, read through the thread:
>>> http://lists.illumos.org/piper
On Jun 12, 2011, at 4:18 PM, Jim Klimov wrote:
> 2011-06-12 23:57, Richard Elling wrote:
>>
>> How long should it wait? Before you answer, read through the thread:
>> http://lists.illumos.org/pipermail/developer/2011-April/001996.html
>> Then add your
On Jun 11, 2011, at 9:26 AM, Jim Klimov wrote:
> 2011-06-11 19:15, Pasi Kärkkäinen wrote:
>> On Sat, Jun 11, 2011 at 08:35:19AM -0500, Edmund White wrote:
>>>I've had two incidents where performance tanked suddenly, leaving the VM
>>>guests and Nexenta SSH/Web consoles inaccessible and req
On Jun 11, 2011, at 6:35 AM, Edmund White wrote:
> Posted in greater detail at Server Fault -
> http://serverfault.com/q/277966/13325
>
Replied in greater detail at same.
> I have an HP ProLiant DL380 G7 system running NexentaStor. The server has
> 36GB RAM, 2 LSI 9211-8i SAS controllers (no S
On Jun 11, 2011, at 5:46 AM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Jim Klimov
>>
>> See FEC suggestion from another poster ;)
>
> Well, of course, all storage mediums have built-in hardware FEC. At lea
On May 10, 2011, at 9:18 AM, Ray Van Dolson wrote:
> We recently had a disk fail on one of our whitebox (SuperMicro) ZFS
> arrays (Solaris 10 U9).
>
> The disk began throwing errors like this:
>
> May 5 04:33:44 dev-zfs4 scsi: [ID 243001 kern.warning] WARNING:
> /pci@0,0/pci8086,3410@9/pci15d9
On Jun 10, 2011, at 8:59 AM, David Magda wrote:
> On Fri, June 10, 2011 07:47, Edward Ned Harvey wrote:
>
>> #1 A single bit error causes checksum mismatch and then the whole data
>> stream is not receivable.
>
> I wonder if it would be worth adding a (toggleable?) forward error
> correction (F
On Jun 7, 2011, at 9:12 AM, Phil Harman wrote:
> Ok here's the thing ...
>
> A customer has some big tier 1 storage, and has presented 24 LUNs (from four
> RAID6 groups) to an OI148 box which is acting as a kind of iSCSI/FC bridge
> (using some of the cool features of ZFS along the way). The OI
Beautiful, ship it
-- richard
On Jun 6, 2011, at 6:56 PM, Eric Schrock wrote:
> Good catch. For consistency, I updated the property description to match
> "compressratio" exactly.
>
> - Eric
>
> On Mon, Jun 6, 2011 at 9:39 PM, Mark Musante wrote:
>
> minor quibble: compressratio uses a low
n also be used as the property name. So maybe the
> full name should be "refcompressratio" as the long name and "refratio" as the
> short name would make sense, as that matches "compressratio". Matt?
>
> - Eric
>
>
> On Mon, Jun 6, 2011 at 7:0
On Jun 6, 2011, at 2:54 PM, Yuri Pankov wrote:
> On Mon, Jun 06, 2011 at 02:19:50PM -0700, Matthew Ahrens wrote:
>> I have implemented a new property for ZFS, "refratio", which is the
>> compression ratio for referenced space (the "compressratio" is the ratio for
>> used space). We are using this
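Assuming a release that includes the property (it eventually shipped as refcompressratio), it reads alongside compressratio; the dataset name is an example:

   zfs get refcompressratio,compressratio tank/data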
On Jun 3, 2011, at 6:25 AM, Roch wrote:
>
> Edward Ned Harvey writes:
>> Based on observed behavior measuring performance of dedup, I would say, some
>> chunk of data and its associated metadata seem have approximately the same
>> "warmness" in the cache. So when the data gets evicted, the associ