On Sat, 24 Oct 2009, Albert Chin wrote:
5-disk RAIDZ2 - doesn't that equate to only 3 data disks?
Seems pointless - they'd be much better off using mirrors,
which is a better choice for random IO...
Is it really pointless? Maybe they want the insurance RAIDZ2
provides. Given the choice betwee
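As a rough back-of-the-envelope (assuming 1TB disks and the usual rule of thumb that each top-level vdev delivers about one disk's worth of random IOPS):

  5-disk raidz2:      usable = (5-2) x 1TB = 3TB, ~1 disk of random IOPS, survives any 2 failures
  2 x 2-way mirrors:  usable = 2 x 1TB = 2TB, ~2 disks of random IOPS, survives 1 failure per mirror

So raidz2 trades random-IO throughput for stronger fault tolerance; it isn't pointless if that insurance is what you're after.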
The controller connects to two disk shelves (expanders), one per port on the
card. If you look back in the thread, you'll see our zpool config has one vdev
per shelf. All of the disks are Western Digital (model WD1002FBYS-18A6B0) 1TB
7.2K, firmware rev. 03.00C06. Without actually matching up the
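For reference, the layout described (one raidz2 vdev per shelf) looks roughly like this in zpool status; the pool and device names below are made up:

  pool: tank
  config:
        NAME          STATE
        tank          ONLINE
          raidz2-0    ONLINE    <- shelf 1
            c1t0d0    ONLINE
            ...
          raidz2-1    ONLINE    <- shelf 2
            c2t0d0    ONLINE
            ...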
> Apple can currently just take the ZFS CDDL code and incorporate it
> (like they did with DTrace), but it may be that they wanted a "private
> license" from Sun (with appropriate technical support and
> indemnification), and the two entities couldn't come to mutually
> agreeable terms.
I
On Sat, Oct 24, 2009 at 03:31:25PM -0400, Jim Mauro wrote:
> Posting to zfs-discuss. There's no reason this needs to be
> kept confidential.
>
> 5-disk RAIDZ2 - doesn't that equate to only 3 data disks?
> Seems pointless - they'd be much better off using mirrors,
> which is a better choice for rand
Posting to zfs-discuss. There's no reason this needs to be
kept confidential.
5-disk RAIDZ2 - doesn't that equate to only 3 data disks?
Seems pointless - they'd be much better off using mirrors,
which is a better choice for random IO...
Looking at this now...
/jim
Jeff Savit wrote:
Hi all,
On Sat, Oct 24, 2009 at 12:30 PM, Carson Gaspar wrote:
>
> I saw this with my WD 500GB SATA disks (HDS725050KLA360) and LSI firmware
> 1.28.02.00 in IT mode, but I (almost?) always had exactly 1 "stuck" I/O.
> Note that my disks were one per channel, no expanders. I have _not_ seen it
> since rep
On 10/24/09 9:43 AM, Richard Elling wrote:
OK, here we see 4 I/Os pending outside of the host. The host has
sent them on and is waiting for them to return. This means they are
getting dropped either at the disk or somewhere between the disk
and the controller.
When this happens, the sd driver w
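For anyone reproducing this, the pending count Richard is reading comes from the actv column of iostat; a sketch of the invocation (not output from this system):

  iostat -xnz 1
  # actv = commands accepted by the device but not yet completed.
  # If actv sits at a constant nonzero value while nothing completes,
  # those commands are stuck at or below the HBA.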
On 10/24/09 8:37 AM, Richard Elling wrote:
At LISA09 in Baltimore next week, Darren is scheduled to give an update
on the ZFS crypto project. We should grab him, take him to our secret
rendition site at Inner Harbor, force him into a comfy chair, and
beer-board him until he confesses.
I can su
more below...
On Oct 24, 2009, at 2:49 AM, Adam Cheal wrote:
The iostat I posted previously was from a system we had already
tuned the zfs:zfs_vdev_max_pending depth down to 10 (as visible by
the max of about 10 in actv per disk).
I reset this value in /etc/system to 7, rebooted, and start
On Sat, 24 Oct 2009, Bob Friesenhahn wrote:
Is Solaris incapable of issuing a SATA FLUSH CACHE EXT command?
It issues one for each update to the intent log.
I should mention that flash SSDs without a capacitor/battery-backed
cache (like the X25-E) are likely to get burned out pretty
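The knob usually brought up in this context is zfs_nocacheflush, which stops ZFS from issuing those flushes; it is only safe when every device in the pool has a nonvolatile (battery- or capacitor-backed) write cache:

  set zfs:zfs_nocacheflush = 1     (in /etc/system; needs a reboot)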
On Sat, Oct 24, 2009 at 11:20 AM, Tim Cook wrote:
>
>
> On Sat, Oct 24, 2009 at 4:49 AM, Adam Cheal wrote:
>
>> The iostat I posted previously was from a system we had already tuned the
>> zfs:zfs_vdev_max_pending depth down to 10 (as visible by the max of about 10
>> in actv per disk).
>>
>> I
On Sat, Oct 24, 2009 at 4:49 AM, Adam Cheal wrote:
> The iostat I posted previously was from a system we had already tuned the
> zfs:zfs_vdev_max_pending depth down to 10 (as visible by the max of about 10
> in actv per disk).
>
> I reset this value in /etc/system to 7, rebooted, and started a sc
On Fri, 23 Oct 2009, Eric D. Mudama wrote:
I don't believe the above statement is correct.
According to AnandTech, who asked Intel:
http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3403&p=10
the DRAM doesn't hold user data. The article claims that data goes
through an internal 256KB
On Oct 24, 2009, at 7:18 AM, David Magda wrote:
On Oct 23, 2009, at 19:27, BJ Quinn wrote:
Anyone know if this means that this will actually show up in SNV
soon, or whether it will make 2010.02? (on disk dedup specifically)
It will go in when it goes in. If you have a support contract call
On Oct 23, 2009, at 19:27, BJ Quinn wrote:
Anyone know if this means that this will actually show up in SNV
soon, or whether it will make 2010.02? (on disk dedup specifically)
It will go in when it goes in. If you have a support contract call up
Sun and ask for details; if you're using a f
On Oct 24, 2009, at 08:53, Joerg Schilling wrote:
The article that was mentioned a few hours ago did mention
licensing problems without giving any kind of evidence for
this claim. If there is evidence, I would be interested in
knowing the background, otherwise it looks to me like FUD.
I'm gue
> I have a functional OpenSolaris x64 system on which I need to physically
> move the boot disk, meaning its physical device path will change and
> probably its cXdX name.
>
> When I do this the system fails to boot
...
> How do I inform ZFS of the new path?
...
> Do I need to boot from the Li
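One commonly suggested recovery sequence (a sketch, untested on your exact build; /a is an arbitrary mount point) is to boot from the LiveCD and re-import the root pool so the labels and zpool.cache pick up the new device paths:

  zpool import -f -R /a rpool     # import under an alternate root; rewrites cached device paths
  bootadm update-archive -R /a    # refresh the boot archive, just in case
  reboot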
Alex Blewitt wrote:
> Apple has finally canned [1] the ZFS port [2]. To try and keep momentum up
> and continue to use the best filing system available, a group of fans have
> set up a continuation project and mailing list [3,4].
The article that was mentioned a few hours ago did mention
licen
Apple is known to strong-arm in licensing negotiations. I'd really like to
hear the straight talk about what transpired.
That's OK; it just means that I won't be using a Mac as a server.
Would this be possible to implement on top of ZFS? Maybe it is a dumb idea, I don't
know. What do you think, and how could this be improved?
Assume all files are put in the zpool, helter-skelter. And then you can create
arbitrary filters that show you the files you want to see.
As of now, you h
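A cheap approximation of such filter views, with no ZFS changes at all (a sketch; the paths and the predicate are made up):

  # build a "view" of every JPEG in the pool as a directory of symlinks
  mkdir -p /views/photos
  find /tank -type f -name '*.jpg' -exec ln -s {} /views/photos/ \;

Doing this natively, with live-updating views, would presumably need support inside the filesystem rather than on top of it.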
On 23/10/2009, at 9:39 AM, Travis Tabbal wrote:
I have a new array of 4x1.5TB drives running fine. I also have the
old array of 4x400GB drives in the box on a separate pool for
testing. I was planning to have the old drives just be a backup file
store, so I could keep snapshots and such ove
We actually hit similar issues with LSI, but within a normal workload rather than
a scrub; the result is the same, but it seems to choke on writes rather than reads,
with suboptimal performance.
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6891413
Anyway, we haven't experienced this _at all_ with RE3-version o
Apple has finally canned [1] the ZFS port [2]. To try and keep momentum up and
continue to use the best filing system available, a group of fans have set up a
continuation project and mailing list [3,4].
If anyone's interested in joining in to help, please join in the mailing list.
[1] http://a
Hi,
I'm Karim from Solaris software support; I need your help
regarding this issue:
Why is the ZFS filesystem full while its zpool has 3.11 GB available?
zfs list -t filesystem | egrep "db-smp|NAME"
NAME                      USED  AVAIL  REFER  MOUNTPOINT
db-smp.zpool              196G      0     1K  legacy
db-smp.zpool/db-smp.zfs   180G
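The usual suspects when a dataset shows AVAIL 0 while the pool still has space are snapshots and reservations on sibling datasets. Something like this should show where the space went (the exact property list varies by release):

  zfs list -r -t snapshot db-smp.zpool
  zfs get -r reservation,refreservation,quota,refquota db-smp.zpool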
The iostat I posted previously was from a system we had already tuned the
zfs:zfs_vdev_max_pending depth down to 10 (as visible by the max of about 10 in
actv per disk).
I reset this value in /etc/system to 7, rebooted, and started a scrub. iostat
output showed busier disks (%b is higher, which
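For reference, the persistent form of that tuning, plus the live equivalent (the mdb write takes effect immediately, without a reboot):

  # /etc/system
  set zfs:zfs_vdev_max_pending = 7

  # or on a running system:
  echo zfs_vdev_max_pending/W0t7 | mdb -kw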
Gruber (http://daringfireball.net/linked/2009/10/23/zfs) is normally
well-informed and has some feedback; it seems possible that legal canned it.
--Craig
On 23 Oct 2009, at 20:42, Tim Cook wrote:
On Fri, Oct 23, 2009 at 2:38 PM, Richard Elling wrote:
FYI,
The ZFS project on MacOS forge (zfs.
How do you estimate the needed queue depth if one has, say, 64 to 128 disks sitting
behind an LSI controller?
Is it a bad idea to have a queue depth of 1?
Yours
Markus Kovero
From: zfs-discuss-boun...@opensolaris.org
[zfs-discuss-boun...@opensolaris.org] on behalf of Richard Ellin
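A crude estimate: divide whatever the HBA can actually keep in flight by the number of disks. If, hypothetically, the controller caps at ~1024 outstanding commands (check your firmware's real limit):

  1024 / 128 disks = 8 per disk
  1024 /  64 disks = 16 per disk

A queue depth of 1 shouldn't be dangerous; it just serializes each disk, so you lose the benefit of the drive reordering requests and throughput usually drops.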
Also, ZFS likes 64-bit CPUs. I had a 32-bit P4 and 1GB RAM. It worked fine, but I
only got 20-30MB/sec. A 64-bit CPU and 2-3GB of RAM gives you over 100MB/sec.
You don't need a HW RAID card with ZFS; ZFS prefers to work alone. The best
solution is to ditch the HW RAID card.
I strongly advise you to use raidz2 (RAID-6), because if you use raidz1
(RAID-5) and a drive fails, you have to swap that disk and repair your ZFS
raid. That will cause lot
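For example, creating such a raidz2 pool from scratch would look like this (hypothetical pool and device names):

  zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0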
You could add these new drives to your zpool: create a new raidz1 or raidz2
vdev from them, and then add it to the zpool. I suggest raidz2,
because that gives you greater reliability.
However, you cannot remove a vdev. In the future, say that you have swapped
your original d
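The add itself would look something like this (hypothetical pool and device names); since a vdev cannot be removed later, it's worth double-checking the command before hitting return:

  zpool add tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0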