roland [EMAIL PROTECTED] wrote:
There is also no filesystem-based approach to compressing/decompressing a
whole filesystem. You can have 499 GB of data on a 500 GB partition - and if
you need some more space you would think turning on compression on that fs
would solve your problem. But
I retracted that statement in the above edit.
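(For context: setting the compression property only affects blocks written after the change; existing data stays uncompressed, which is why the nearly-full case can't help itself. A minimal sketch, with hypothetical pool and dataset names:)

    zfs set compression=on tank/data   # existing blocks remain uncompressed
    zfs get compressratio tank/data    # ratio reflects only data written after the switch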
Dave Johnson wrote:
roland [EMAIL PROTECTED] wrote:
There is also no filesystem-based approach to
compressing/decompressing a whole filesystem. You can have 499 GB of data
on a 500 GB partition - and if you need some more space you would think
turning on
Hi all,
I'm new here and to ZFS but I've been lurking for quite some time... My
question is simple: which is better, 8+2 or 8+1+spare? Both follow the
(N+P), N={2,4,8}, P={1,2} rule, but 8+2 results in a total of 10 disks,
which is one disk more than the 3 <= num-disks <= 9 rule allows. But 8+2 has much
On Mon, Jul 09, 2007 at 11:14:58AM -0400, Kent Watsen wrote:
Hi all,
I'm new here and to ZFS but I've been lurking for quite some time... My
question is simple: which is better, 8+2 or 8+1+spare? Both follow the
(N+P), N={2,4,8}, P={1,2} rule, but 8+2 results in a total of 10 disks,
which is better 8+2 or 8+1+spare?
8+2 is safer for the same speed.
8+2 requires a little more math, so it's slower in theory (unlikely to be seen in practice).
(4+1)*2 is 2x faster, and in theory is less likely to have wasted space
in a transaction group (unlikely to be seen in practice).
(4+1)*2 is cheaper to upgrade in
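For concreteness, the layouts being compared would be created roughly like this (device names are hypothetical):

    # 8+2: one ten-disk raidz2 vdev
    zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 \
        c0t5d0 c0t6d0 c0t7d0 c1t0d0 c1t1d0

    # 8+1+spare: one nine-disk raidz vdev plus a hot spare
    zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 \
        c0t5d0 c0t6d0 c0t7d0 c1t0d0 spare c1t1d0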
Does anyone have a customer
using IBM Tivoli Storage Manager (TSM) with ZFS? I see that IBM has a
client for Solaris 10, but does it work with ZFS?
--
Dan Christensen
System Engineer
Sun Microsystems, Inc.
Des Moines, IA 50266 US
877-263-2204
On 09 July, 2007 - Dan [EMAIL PROTECTED] sent me these 4.2K bytes:
Does anyone have a customer using IBM Tivoli Storage Manager (TSM) with
ZFS? I see that IBM has a client for Solaris 10, but does it work with ZFS?
You can back up ZFS filesystems, but it doesn't understand the ACLs right
now.
Rob Logan wrote:
which is better 8+2 or 8+1+spare?
8+2 is safer for the same speed.
8+2 requires a little more math, so it's slower in theory (unlikely to be seen in practice).
(4+1)*2 is 2x faster, and in theory is less likely to have wasted space
in a transaction group (unlikely to be seen in practice).
(4+1)*2 is
Orvar Korvar wrote:
When I copy that file from ZFS to /dev/null I get this output:
real    0m0.025s
user    0m0.002s
sys     0m0.007s
which can't be correct. Is it wrong of me to use 'time cp fil fil2' when
measuring disk performance?
replying to just this part of your message for now
cp
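The usual explanation for a near-zero time like this is that the data was served from cache, or, in the cp-to-/dev/null case, possibly never read at all. A sketch of a less misleading measurement, assuming the pool can be exported and using hypothetical paths:

    # drop the cached copy, then time an actual read from disk
    zpool export tank && zpool import tank
    time dd if=/tank/fs/fil of=/dev/null bs=128k
    # alternatively, use a test file larger than RAM so the cache can't hold it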
However, I've one more question - do you guys think NCQ with short
stroked zones helps or hurts performance? I have this feeling (my
gut, that is) that at a low queue depth it's a Great Win, whereas
at a deeper queue it would degrade performance more so than without
it. Any
On Tue, Jul 03, 2007 at 11:02:24AM -0700, Bryan Cantrill wrote:
On Tue, Jul 03, 2007 at 10:26:20AM -0500, Albert Chin wrote:
It would also be nice for extra hardware (PCI-X, PCIe card) that
added NVRAM storage to various sun low/mid-range servers that are
currently acting as ZFS/NFS
Cyril Plisko wrote:
On 7/7/07, Neil Perrin [EMAIL PROTECTED] wrote:
Cyril,
I wrote this case and implemented the project. My problem was
that I didn't know what policy (if any) Sun has about publishing
ARC cases, and a mail log with a gazillion email addresses.
I did receive an answer to
You, sir, are a gentleman and a scholar! Seriously, this is exactly the
information I was looking for, thank you very much!
Would you happen to know if this has improved since build 63, or if the
chipset has any effect one way or the other?
yu larry liu wrote:
> If you are using the whole LUN as your vdev in the zpool and using EFI
Does it have to be EFI? Will this work on a system with an
SMI label too?
> label, you can export the zpool, relabel the LUNs (using the new capacity)
> and import that zpool. You should be able to see the
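If I understand the suggestion, the sequence would be something like this (hypothetical pool and LUN names; on these builds there is no automatic expansion, so the relabel is manual):

    zpool export tank
    format -e          # select the LUN (e.g. c2t0d0) and write a new EFI label covering the new capacity
    zpool import tank  # the pool should now see the added space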
On Jul 9, 2007, at 11:21 AM, Scott Lovenberg wrote:
You, sir, are a gentleman and a scholar! Seriously, this is exactly
the information I was looking for, thank you very much!
Would you happen to know if this has improved since build 63, or if
the chipset has any effect one way or the other?
Thank you very much, this answers all my questions! Much appreciated!
On Jul 8, 2007, at 8:05 PM, Peter C. Norton wrote:
List,
Sorry if this has been done before - I'm sure I'm not the only person
interested in this, but I haven't found anything with the searches
I've done.
I'm looking to compare NFS performance between NFS on ZFS and a
lower-end NetApp
It seems to me that the URL above refers to the publishing
materials of *historical* cases. Do you think the case in hand
should be considered historical?
In this context, historical means any case that was not originally
open, and so cannot be presumed to be clear of any proprietary info.
Er, with attachment this time.
So I've attached the accepted proposal. There was (as expected) not
much discussion of this case, as it was considered an obvious extension.
The actual PSARC case materials, when opened, will not have much more info
than this.
PSARC CASE: 2007/171 ZFS Separate Intent Log
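(For the curious: the separate intent log ends up being managed through zpool(1M); a minimal hypothetical example of dedicating a fast device, such as NVRAM or a fast disk slice, to the ZIL:)

    zpool add tank log c3t0d0    # use c3t0d0 as a separate intent log (slog) device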
I think that the 3 <= num-disks <= 9 rule only applies to RAIDZ and it was
changed to 4 <= num-disks <= 10 for RAIDZ2, but I might be remembering wrong.
Can anybody confirm that the 3 <= num-disks <= 9 rule only applies to RAIDZ
and that 4 <= num-disks <= 10 applies to RAIDZ2?
Thanks,
Kent
On Mon, Jul 09, 2007 at 03:57:30PM -0400, Kent Watsen wrote:
(4+1)*2 is 2x faster, and in theory is less likely to have wasted space
in a transaction group (unlikely to be seen in practice).
(4+1)*2 is cheaper to upgrade in place because of its fewer elements.
I'm aware of these benefits, but I feel
Don't confuse vdevs with pools. If you add two 4+1 vdevs to a single pool it
still appears to be one place to put things. ;)
Newbie oversight - thanks!
Kent
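To make the vdev-vs-pool point concrete, a sketch with hypothetical device names: both raidz sets live under one pool name, and the pool stripes writes across them:

    zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0
    zpool add tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0
    zfs list tank    # still a single pool - one place to put things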
Another reason to recommend spares is when you have multiple top-level vdevs
and want to amortize the spare cost over multiple sets. For example, if you
have 19 disks, then 2x 8+1 raidz + spare amortizes the cost of the spare
across two raidz sets.
-- richard
Interesting - I hadn't
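Richard's 19-disk example would look something like this (device names are hypothetical):

    zpool create tank \
        raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 c0t8d0 \
        raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c1t8d0 \
        spare c2t0d0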
Kent Watsen wrote:
Rob Logan wrote:
which is better 8+2 or 8+1+spare?
8+2 is safer for the same speed.
8+2 requires a little more math, so it's slower in theory (unlikely to be seen in practice).
(4+1)*2 is 2x faster, and in theory is less likely to have wasted space
in a transaction group
Your data gets striped across the two sets, so what you get is a raidz stripe
giving you the 2x speedup.
tank
  raidz
    devices
  raidz
    devices
Sorry for the diagram.
So you get your zpool tank with a raidz stripe.
Rob Logan wrote:
which is better 8+2 or 8+1+spare?
8+2 is safer for the same speed.
8+2 requires a little more math, so it's slower in theory (unlikely to be seen in practice).
(4+1)*2 is 2x faster, and in theory is less likely to have wasted space
in a transaction group (unlikely to be seen in practice).
I keep reading
John-Paul Drawneek wrote:
Your data gets striped across the two sets, so what you get is a raidz stripe
giving you the 2x speedup.
tank
  raidz
    devices
  raidz
    devices
Sorry for the diagram.
So you get your zpool tank with a raidz stripe.
Thanks - I think you all have
Hi,
It might be interesting to focus on compression algorithms which are
optimized for particular workloads and data types, an Oracle database for
example.
Yes, I agree. That is what I meant when I said "The study might be
extended to the analysis of data in specific applications (e.g. web
Hi,
Why not start with LZO first? It's already in zfs-fuse on Linux, and it
looks like it's just in between lzjb and gzip in terms of performance and
compression ratio.
It remains to be demonstrated that it behaves similarly on Solaris.
Good question and I'm afraid I don't have a
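Until LZO is available, one can at least compare the existing algorithms empirically. A sketch, assuming a build with gzip compression support and hypothetical dataset names:

    zfs create -o compression=lzjb tank/sample_lzjb
    zfs create -o compression=gzip tank/sample_gzip
    # copy the same representative data into both, then compare:
    zfs get compressratio tank/sample_lzjb tank/sample_gzip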
On Jul 9 2007, Domingos Soares wrote:
Hi,
It might be interesting to focus on compression algorithms which are
optimized for particular workloads and data types, an Oracle database for
example.
Yes, I agree. That is what I meant when I said "The study might be
extended to the analysis of data
Wouldn't ZFS's being an integrated filesystem make it easier for it to
identify the file types vs. a standard block device with a filesystem
overlaid upon it?
I'm not sure. I would think that most applications are going to use the
POSIX layer, where there's no separate API for file types.
On Mon, Jul 09, 2007 at 05:27:44PM -0500, Haudy Kazemi wrote:
Wouldn't ZFS's being an integrated filesystem make it easier for it to
identify the file types vs. a standard block device with a filesystem
overlaid upon it?
How? The API to ZFS that most everything uses is the POSIX API.
On Mon, Jul 09, 2007 at 03:42:03PM -0700, Darren Dunham wrote:
Wouldn't ZFS's being an integrated filesystem make it easier for it to
identify the file types vs. a standard block device with a filesystem
overlaid upon it?
I'm not sure. I would think that most applications are going to
We just had an article published on SDN about how different changes to the way
shares are handled affect the boot-up time for large numbers of ZFS
filesystems.
For me, one of the neat things about it was that it came up at several points
on the OpenSolaris discussion boards.
You can
Richard Elling [EMAIL PROTECTED] wrote:
Dave Johnson wrote:
roland [EMAIL PROTECTED] wrote:
There is also no filesystem-based approach to
compressing/decompressing a whole filesystem.
One could kludge this by setting the compression parameters desired on
the
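Presumably the kludge continues along these lines (a sketch, with hypothetical names); note that it needs enough free space for a second copy, which is exactly what's missing on a nearly-full pool:

    zfs set compression=on tank/data
    # rewrite existing files so they are stored compressed
    cp -rp /tank/data/files /tank/data/files.new
    rm -r /tank/data/files && mv /tank/data/files.new /tank/data/files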
Lori Alt wrote:
yu larry liu wrote:
>> If you are using the whole LUN as your vdev in the zpool and using EFI
> Does it have to be EFI? Will this work on a system with an
> SMI label too?
If the whole LUN is used as a vdev, EFI is the default and best choice.
If you are using a