Re: [zfs-discuss] Using L2ARC on an AdHoc basis.

2012-10-13 Thread Michael Armstrong
OK, so it is possible to remove. Good to know, thanks. I move the pool maybe 
once a month for a few days; otherwise it sits in a fixed location and is used 
daily, so I thought the warm-up allowance might be worth it. I guess I just 
wanted to know whether adding a cache device was a one-way operation and 
whether or not it risked integrity.
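
For anyone skimming the archive later: adding and removing an L2ARC device is
just a pair of pool operations and does not touch the data vdevs. A minimal
sketch, assuming a pool named "tank" and a hypothetical SSD at c2t0d0:

    zpool add tank cache c2t0d0     # attach the SSD as L2ARC
    zpool remove tank c2t0d0        # detach it again before exporting the pool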


Sent from my iPhone

On 13 Oct 2012, at 23:02, Ian Collins  wrote:

> On 10/14/12 10:02, Michael Armstrong wrote:
>> Hi Guys,
>> 
>> I have a "portable pool", i.e. one that I carry around in an enclosure. 
>> However, any SSD I add for L2ARC will not be carried around... meaning the 
>> cache drive will become unavailable from time to time.
>> 
>> My question is: will random removal of the cache drive put the pool into 
>> a "degraded" state or affect the integrity of the pool at all? Additionally, 
>> how adversely will this affect "warm-up"... Or will moving the enclosure 
>> between machines with and without cache just automatically work, offering 
>> benefits when cache is available and fewer benefits when it isn't?
> 
> Why bother with cache devices at all if you are moving the pool around?  As 
> you hinted above, the cache can take a while to warm up and become useful.
> 
> You should zpool remove the cache device before exporting the pool.
> 
> -- 
> Ian.
> 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Using L2ARC on an AdHoc basis.

2012-10-13 Thread Michael Armstrong
Hi Guys,

I have a "portable pool", i.e. one that I carry around in an enclosure. 
However, any SSD I add for L2ARC will not be carried around... meaning the 
cache drive will become unavailable from time to time.

My question is: will random removal of the cache drive put the pool into a 
"degraded" state or affect the integrity of the pool at all? Additionally, how 
adversely will this affect "warm-up"... Or will moving the enclosure between 
machines with and without cache just automatically work, offering benefits 
when cache is available and fewer benefits when it isn't?
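
In case it helps anyone searching the archive: a cache device is not required
for import, so the pool should still come up ONLINE without the SSD attached.
A hedged sketch, assuming a pool named "tank":

    zpool import tank               # import with the cache SSD absent
    zpool status tank               # pool stays ONLINE; the missing cache device is merely reported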

I hope this question isn't too much of a ramble :) thanks.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Drive upgrades

2012-04-13 Thread Michael Armstrong
Yes, this is another thing I'm wary of... I should have slightly 
under-provisioned at the start or mixed manufacturers... Now I may have to 
replace failed 2TB drives with 2.5TB ones for the sake of a block.

Sent from my iPhone

On 13 Apr 2012, at 17:30, Tim Cook  wrote:

> 
> 
> On Fri, Apr 13, 2012 at 9:35 AM, Edward Ned Harvey 
>  wrote:
> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > boun...@opensolaris.org] On Behalf Of Michael Armstrong
> >
> > Is there a way to quickly ascertain if my seagate/hitachi drives are as
> large as
> > the 2.0tb samsungs? I'd like to avoid the situation of replacing all
> drives and
> > then not being able to grow the pool...
> 
> It doesn't matter.  If you have a bunch of drives that are all approx the
> same size but vary slightly, and you make (for example) a raidz out of them,
> then the raidz will only be limited by the size of the smallest one.  So you
> will only be wasting 1% of the drives that are slightly larger.
> 
> Also, given that you have a pool currently made up of 13x2T and 5x1T ... I
> presume these are separate vdev's.  You don't have one huge 18-disk raidz3,
> do you?  that would be bad.  And it would also mean that you're currently
> wasting 13x1T.  I assume the 5x1T are a single raidzN.  You can increase the
> size of these disks, without any cares about the size of the other 13.
> 
> Just make sure you have the autoexpand property set.
> 
> But most of all, make sure you do a scrub first, and make sure you complete
> the resilver in between each disk swap.  Do not pull out more than one disk
> (or whatever your redundancy level is) while it's still resilvering from the
> previously replaced disk.  If you're very thorough, you would also do a
> scrub in between each disk swap, but if it's just a bunch of home movies
> that are replaceable, you will probably skip that step.
> 
> 
> You will however have an issue replacing them if one should fail.  You need 
> to have the same block count to replace a device, which is why I asked for a 
> "right-sizing" years ago.  Deaf ears :/
> 
> --Tim
>  
>  
> 
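
To put the quoted advice into commands, a sketch of one disk swap (assuming a
pool named "tank" and hypothetical device names, c0t4d0 outgoing and c0t9d0
incoming):

    zpool set autoexpand=on tank        # let the pool grow once all members are larger
    zpool scrub tank                    # confirm the pool is healthy before pulling anything
    zpool replace tank c0t4d0 c0t9d0    # swap one disk
    zpool status tank                   # wait for the resilver to finish before the next swap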
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Drive upgrades

2012-04-13 Thread Michael Armstrong
Hi Guys,

I currently have an 18-drive system built from 13x 2.0TB Samsungs and 5x 1TB 
WDs... I'm about to swap out all of my 1TB drives with 2TB ones to grow the 
pool a bit... My question is... The replacement 2TB drives are from various 
manufacturers (Seagate/Hitachi/Samsung), and I know from previous experience 
that the geometry/boundaries of each manufacturer's 2TB offerings are 
different. Is there a way to quickly ascertain if my Seagate/Hitachi drives 
are as large as the 2.0TB Samsungs? I'd like to avoid the situation of 
replacing all drives and then not being able to grow the pool...

Thanks,
Michael
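
One quick way to compare exact capacities before committing to the swap, as a
sketch (c0t0d0 is a hypothetical device name; substitute your own):

    iostat -En                          # lists each drive's model, serial number and exact size in bytes
    prtvtoc /dev/rdsk/c0t0d0s0          # sector counts for an individual disk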
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs-discuss Digest, Vol 64, Issue 21

2011-02-07 Thread Michael Armstrong
I obtained smartmontools (which includes smartctl) from the standard apt 
repository (I'm using Nexenta, however). In addition, it's necessary to use 
the device type sat,12 with smartctl to get it to read attributes correctly on 
OS, AFAIK. Also, regarding device IDs on the system: from what I've seen they 
are assigned to ports and therefore do not change; however, they will most 
likely change if you swap controllers, unless the new one is the same chipset 
with exactly the same port configuration. Hope this helps.
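
For reference, an invocation of the kind described above, assuming the disk
shows up as /dev/rdsk/c9t3d0s0 (a hypothetical device name):

    smartctl -a -d sat,12 /dev/rdsk/c9t3d0s0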

On 7 Feb 2011, at 18:04, zfs-discuss-requ...@opensolaris.org wrote:

> Having managed to muddle through this weekend without loss (though with a
> certain amount of angst and duplication of efforts), I'm in the mood to
> label things a bit more clearly on my system :-).
> 
> smartctl doesn't seem to be on my system, though.  I'm running
> snv_134.  I'm still pretty badly lost in the whole repository /
> package thing with Solaris, most of my brain cells were already
> occupied with Red Hat, Debian, and Perl package information :-( .
> Where do I look?
> 
> Are the controller port IDs, the "C9T3D0" things that ZFS likes,
> reasonably stable?  They won't change just because I add or remove
> drives, right; only maybe if I change controller cards?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs-discuss Digest, Vol 64, Issue 13

2011-02-06 Thread Michael Armstrong
Additionally, the way I do it is to draw a diagram of the drives in the system, 
labelled with the drive serial numbers. Then when a drive fails, I can find out 
from smartctl which drive it is and remove/replace without trial and error.
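
A hedged alternative for collecting the serial numbers in the first place, on
Solaris-derived systems, is to read them from the kernel's device error stats
rather than from each drive's SMART page:

    iostat -En | egrep 'Errors|Serial No'    # pairs each cXtYdZ name with its serial number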

On 5 Feb 2011, at 21:54, zfs-discuss-requ...@opensolaris.org wrote:

> 
> Message: 7
> Date: Sat, 5 Feb 2011 15:42:45 -0500
> From: rwali...@washdcmail.com
> To: David Dyer-Bennet 
> Cc: zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] Identifying drives (SATA)
> Message-ID: <58b53790-323b-4ae4-98cd-575f93b66...@washdcmail.com>
> Content-Type: text/plain; charset=us-ascii
> 
> 
> On Feb 5, 2011, at 2:43 PM, David Dyer-Bennet wrote:
> 
>> Is there a clever way to figure out which drive is which?  And if I have to 
>> fall back on removing a drive I think is right, and seeing if that's true, 
>> what admin actions will I have to perform to get the pool back to safety?  
>> (I've got backups, but it's a pain to restore of course.) (Hmmm; in 
>> single-user mode, use dd to read huge chunks of one disk, and see which 
>> lights come on?  Do I even need to be in single-user mode to do that?)
> 
> Obviously this depends on your lights working to some extent (the right light 
> doing something when the right disk is accessed), but I've used:
> 
> dd if=/dev/rdsk/c8t3d0s0 of=/dev/null bs=4k count=10
> 
> which someone mentioned on this list.  Assuming you can actually read from 
> the disk (it isn't completely dead), it should allow you to direct traffic to 
> each drive individually.
> 
> Good luck,
> Ware

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is my bottleneck RAM?

2011-01-18 Thread Michael Armstrong
Ah OK, I won't be using dedup anyway, I just wanted to try it. I'll be adding 
more RAM though; I guess you can't have too much. Thanks
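
If you want to see how much memory the ARC (and any future L2ARC headers) is
actually consuming before and after the upgrade, a quick check on
Solaris-derived systems (statistic names can vary slightly by release):

    kstat -p zfs:0:arcstats:size        # current ARC size in bytes
    kstat -p zfs:0:arcstats:c_max       # ceiling the ARC is allowed to grow to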

Erik Trimble  wrote:

>You can't really do that.
>
>Adding an SSD for L2ARC will help a bit, but L2ARC storage also consumes
>RAM to maintain a cache table of what's in the L2ARC.  Using 2GB of RAM
>with an SSD-based L2ARC (even without Dedup) likely won't help you too
>much vs not having the SSD. 
>
>If you're going to turn on Dedup, you need at least 8GB of RAM to go
>with the SSD.
>
>-Erik
>
>
>On Tue, 2011-01-18 at 18:35 +, Michael Armstrong wrote:
>> Thanks everyone, I think overtime I'm gonna update the system to include an 
>> ssd for sure. Memory may come later though. Thanks for everyone's responses
>> 
>> Erik Trimble  wrote:
>> 
>> >On Tue, 2011-01-18 at 15:11 +, Michael Armstrong wrote:
>> >> I've since turned off dedup, added another 3 drives and results have 
>> >> improved to around 148388K/sec on average, would turning on compression 
>> >> make things more CPU bound and improve performance further?
>> >> 
>> >> On 18 Jan 2011, at 15:07, Richard Elling wrote:
>> >> 
>> >> > On Jan 15, 2011, at 4:21 PM, Michael Armstrong wrote:
>> >> > 
>> >> >> Hi guys, sorry in advance if this is somewhat a lowly question, I've 
>> >> >> recently built a zfs test box based on nexentastor with 4x samsung 2tb 
>> >> >> drives connected via SATA-II in a raidz1 configuration with dedup 
>> >> >> enabled compression off and pool version 23. From running bonnie++ I 
>> >> >> get the following results:
>> >> >> 
>> >> >> Version 1.03b   --Sequential Output-- --Sequential Input- 
>> >> >> --Random-
>> >> >>   -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- 
>> >> >> --Seeks--
>> >> >> MachineSize K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  
>> >> >> /sec %CP
>> >> >> nexentastor  4G 60582  54 20502   4 12385   3 53901  57 105290  10 
>> >> >> 429.8   1
>> >> >>   --Sequential Create-- Random 
>> >> >> Create
>> >> >>   -Create-- --Read--- -Delete-- -Create-- --Read--- 
>> >> >> -Delete--
>> >> >> files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  
>> >> >> /sec %CP
>> >> >>16  7181  29 + +++ + +++ 21477  97 + +++ 
>> >> >> + +++
>> >> >> nexentastor,4G,60582,54,20502,4,12385,3,53901,57,105290,10,429.8,1,16,7181,29,+,+++,+,+++,21477,97,+,+++,+,+++
>> >> >> 
>> >> >> 
>> >> >> I'd expect more than 105290K/s on a sequential read as a peak for a 
>> >> >> single drive, let alone a striped set. The system has a relatively 
>> >> >> decent CPU, however only 2GB memory, do you think increasing this to 
>> >> >> 4GB would noticeably affect performance of my zpool? The memory is 
>> >> >> only DDR1.
>> >> > 
>> >> > 2GB or 4GB of RAM + dedup is a recipe for pain. Do yourself a favor, 
>> >> > turn off dedup
>> >> > and enable compression.
>> >> > -- richard
>> >> > 
>> >> 
>> >> ___
>> >> zfs-discuss mailing list
>> >> zfs-discuss@opensolaris.org
>> >> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>> >
>> >
>> >Compression will help speed things up (I/O, that is), presuming that
>> >you're not already CPU-bound, which it doesn't seem you are.
>> >
>> >If you want Dedup, you pretty much are required to buy an SSD for L2ARC,
>> >*and* get more RAM.
>> >
>> >
>> >These days, I really don't recommend running ZFS as a fileserver without
>> >a bare minimum of 4GB of RAM (8GB for anything other than light use),
>> >even with Dedup turned off. 
>> >
>> >
>> >-- 
>> >Erik Trimble
>> >Java System Support
>> >Mailstop:  usca22-317
>> >Phone:  x67195
>> >Santa Clara, CA
>> >Timezone: US/Pacific (GMT-0800)
>> >
>-- 
>Erik Trimble
>Java System Support
>Mailstop:  usca22-317
>Phone:  x67195
>Santa Clara, CA
>Timezone: US/Pacific (GMT-0800)
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is my bottleneck RAM?

2011-01-18 Thread Michael Armstrong
Thanks everyone, I think over time I'm gonna update the system to include an 
SSD for sure. Memory may come later though. Thanks for everyone's responses.

Erik Trimble  wrote:

>On Tue, 2011-01-18 at 15:11 +0000, Michael Armstrong wrote:
>> I've since turned off dedup, added another 3 drives and results have 
>> improved to around 148388K/sec on average, would turning on compression make 
>> things more CPU bound and improve performance further?
>> 
>> On 18 Jan 2011, at 15:07, Richard Elling wrote:
>> 
>> > On Jan 15, 2011, at 4:21 PM, Michael Armstrong wrote:
>> > 
>> >> Hi guys, sorry in advance if this is somewhat a lowly question, I've 
>> >> recently built a zfs test box based on nexentastor with 4x samsung 2tb 
>> >> drives connected via SATA-II in a raidz1 configuration with dedup enabled 
>> >> compression off and pool version 23. From running bonnie++ I get the 
>> >> following results:
>> >> 
>> >> Version 1.03b   --Sequential Output-- --Sequential Input- 
>> >> --Random-
>> >>   -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- 
>> >> --Seeks--
>> >> MachineSize K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  
>> >> /sec %CP
>> >> nexentastor  4G 60582  54 20502   4 12385   3 53901  57 105290  10 
>> >> 429.8   1
>> >>   --Sequential Create-- Random 
>> >> Create
>> >>   -Create-- --Read--- -Delete-- -Create-- --Read--- 
>> >> -Delete--
>> >> files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec 
>> >> %CP
>> >>16  7181  29 + +++ + +++ 21477  97 + +++ + 
>> >> +++
>> >> nexentastor,4G,60582,54,20502,4,12385,3,53901,57,105290,10,429.8,1,16,7181,29,+,+++,+,+++,21477,97,+,+++,+,+++
>> >> 
>> >> 
>> >> I'd expect more than 105290K/s on a sequential read as a peak for a 
>> >> single drive, let alone a striped set. The system has a relatively decent 
>> >> CPU, however only 2GB memory, do you think increasing this to 4GB would 
>> >> noticeably affect performance of my zpool? The memory is only DDR1.
>> > 
>> > 2GB or 4GB of RAM + dedup is a recipe for pain. Do yourself a favor, turn 
>> > off dedup
>> > and enable compression.
>> > -- richard
>> > 
>> 
>> ___
>> zfs-discuss mailing list
>> zfs-discuss@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
>
>Compression will help speed things up (I/O, that is), presuming that
>you're not already CPU-bound, which it doesn't seem you are.
>
>If you want Dedup, you pretty much are required to buy an SSD for L2ARC,
>*and* get more RAM.
>
>
>These days, I really don't recommend running ZFS as a fileserver without
>a bare minimum of 4GB of RAM (8GB for anything other than light use),
>even with Dedup turned off. 
>
>
>-- 
>Erik Trimble
>Java System Support
>Mailstop:  usca22-317
>Phone:  x67195
>Santa Clara, CA
>Timezone: US/Pacific (GMT-0800)
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is my bottleneck RAM?

2011-01-18 Thread Michael Armstrong
I've since turned off dedup and added another 3 drives, and results have 
improved to around 148388K/sec on average. Would turning on compression make 
things more CPU-bound and improve performance further?
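
For anyone trying the same experiment, the property changes themselves are
one-liners (assuming the pool/dataset is called "tank"; adjust to taste):

    zfs set dedup=off tank
    zfs set compression=on tank         # affects new writes only; existing blocks stay as-is
    zfs get compressratio tank          # shows how much compression is actually saving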

On 18 Jan 2011, at 15:07, Richard Elling wrote:

> On Jan 15, 2011, at 4:21 PM, Michael Armstrong wrote:
> 
>> Hi guys, sorry in advance if this is somewhat a lowly question, I've 
>> recently built a zfs test box based on nexentastor with 4x samsung 2tb 
>> drives connected via SATA-II in a raidz1 configuration with dedup enabled 
>> compression off and pool version 23. From running bonnie++ I get the 
>> following results:
>> 
>> Version 1.03b   --Sequential Output-- --Sequential Input- 
>> --Random-
>>   -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
>> MachineSize K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec 
>> %CP
>> nexentastor  4G 60582  54 20502   4 12385   3 53901  57 105290  10 429.8 
>>   1
>>   --Sequential Create-- Random Create
>>   -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>> files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>>16  7181  29 + +++ + +++ 21477  97 + +++ + +++
>> nexentastor,4G,60582,54,20502,4,12385,3,53901,57,105290,10,429.8,1,16,7181,29,+,+++,+,+++,21477,97,+,+++,+,+++
>> 
>> 
>> I'd expect more than 105290K/s on a sequential read as a peak for a single 
>> drive, let alone a striped set. The system has a relatively decent CPU, 
>> however only 2GB memory, do you think increasing this to 4GB would 
>> noticeably affect performance of my zpool? The memory is only DDR1.
> 
> 2GB or 4GB of RAM + dedup is a recipe for pain. Do yourself a favor, turn off 
> dedup
> and enable compression.
> -- richard
> 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Is my bottleneck RAM?

2011-01-18 Thread Michael Armstrong
Hi guys, sorry in advance if this is somewhat a lowly question. I've recently 
built a ZFS test box based on NexentaStor with 4x Samsung 2TB drives connected 
via SATA-II in a raidz1 configuration, with dedup enabled, compression off, 
and pool version 23. From running bonnie++ I get the following results:

Version 1.03b   --Sequential Output-- --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
MachineSize K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
nexentastor  4G 60582  54 20502   4 12385   3 53901  57 105290  10 429.8   1
--Sequential Create-- Random Create
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
 16  7181  29 + +++ + +++ 21477  97 + +++ + +++
nexentastor,4G,60582,54,20502,4,12385,3,53901,57,105290,10,429.8,1,16,7181,29,+,+++,+,+++,21477,97,+,+++,+,+++


I'd expect more than 105290K/s on a sequential read as a peak for a single 
drive, let alone a striped set. The system has a relatively decent CPU but 
only 2GB of memory; do you think increasing this to 4GB would noticeably 
affect the performance of my zpool? The memory is only DDR1.
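
For anyone wanting to reproduce a run like this, a bonnie++ invocation of the
general shape below would do it (a sketch: /tank/bench is a hypothetical
dataset mount point, and the file size should be at least twice RAM so the ARC
can't absorb the whole test):

    bonnie++ -d /tank/bench -s 4g -u nobody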

Thanks in advance.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] adding extra drives without creating a second parity set

2009-12-27 Thread Michael Armstrong
Hi, I currently have 4x 1TB drives in a raidz configuration. I want to add 
another 2x 1TB drives; however, if I simply zpool add, I will only gain an 
extra 1TB of space, as it will create a second raidz set inside the existing 
tank/pool. Is there a way to add my new drives into the existing raidz without 
losing even more space and without rebuilding the entire pool from the 
beginning? If not, is this something being worked on currently? Thanks and 
merry Xmas!



On 25 Dec 2009, at 20:00, zfs-discuss-requ...@opensolaris.org wrote:


Send zfs-discuss mailing list submissions to
zfs-discuss@opensolaris.org

To subscribe or unsubscribe via the World Wide Web, visit
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
or, via email, send a message with subject or body 'help' to
zfs-discuss-requ...@opensolaris.org

You can reach the person managing the list at
zfs-discuss-ow...@opensolaris.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of zfs-discuss digest..."


Today's Topics:

  1. Re: Benchmarks results for ZFS + NFS, using SSD's as  slog
 devices (ZIL) (Freddie Cash)
  2. Re: Benchmarks results for ZFS + NFS,  using SSD's as  slog
 devices (ZIL) (Richard Elling)
  3. Re: Troubleshooting dedup performance (Michael Herf)
  4. ZFS write bursts cause short app stalls (Saso Kiselkov)


--

Message: 1
Date: Thu, 24 Dec 2009 17:34:32 PST
From: Freddie Cash 
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Benchmarks results for ZFS + NFS, using
SSD's as  slog devices (ZIL)
Message-ID: <2086438805.291261704902840.javamail.tweb...@sf-app1>
Content-Type: text/plain; charset=UTF-8


Mattias Pantzare wrote:
That  would leave us with three options;

1) Deal with it and accept performance as it is.
2) Find a way to speed things up further for this
workload
3) Stop trying to use ZFS for this workload


Option 4 is to re-do your pool, using fewer disks per raidz2 vdev,  
giving more vdevs to the pool, and thus increasing the IOps for the  
whole pool.


14 disks in a single raidz2 vdev is going to give horrible IO,  
regardless of how fast the individual disks are.


Redoing it with 6-disk raidz2 vdevs, or even 8-drive raidz2 vdevs  
will give you much better throughput.
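
As a concrete illustration of that layout, a 12-disk pool built as two 6-disk
raidz2 vdevs (hypothetical device names) would be created as:

    zpool create tank \
        raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
        raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0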


Freddie
--
This message posted from opensolaris.org


--

Message: 2
Date: Thu, 24 Dec 2009 17:39:11 -0800
From: Richard Elling 
To: Freddie Cash 
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Benchmarks results for ZFS + NFS,using
SSD's as  slog devices (ZIL)
Message-ID: 
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes

On Dec 24, 2009, at 5:34 PM, Freddie Cash wrote:


Mattias Pantzare wrote:
That  would leave us with three options;

1) Deal with it and accept performance as it is.
2) Find a way to speed things up further for this
workload
3) Stop trying to use ZFS for this workload


Option 4 is to re-do your pool, using fewer disks per raidz2 vdev,
giving more vdevs to the pool, and thus increasing the IOps for the
whole pool.

14 disks in a single raidz2 vdev is going to give horrible IO,
regardless of how fast the individual disks are.

Redoing it with 6-disk raidz2 vdevs, or even 8-drive raidz2 vdevs
will give you much better throughput.


At this point it is useful to know that if you do not have a
separate log, then the ZIL uses the pool and its data protection
scheme.  In other words, each ZIL write will be a raidz2 stripe
with its associated performance.
 -- richard
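
For completeness, moving the ZIL off the main pool just means adding a
dedicated log device, e.g. (hypothetical SSD device names):

    zpool add tank log c4t0d0                     # single slog
    zpool add tank log mirror c4t0d0 c4t1d0       # or a mirrored slog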



--

Message: 3
Date: Thu, 24 Dec 2009 21:22:28 -0800
From: Michael Herf 
To: Richard Elling 
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Troubleshooting dedup performance
Message-ID:

Content-Type: text/plain; charset=ISO-8859-1

FWIW, I just disabled prefetch, and my dedup + zfs recv seems to be
running visibly faster (somewhere around 3-5x faster).

echo zfs_prefetch_disable/W0t1 | mdb -kw

Anyone else see a result like this?

I'm using the "read" bandwidth from the sending pool from "zpool
iostat -x 5" to estimate transfer rate, since I assume the write rate
would be lower when dedup is working.

mike

p.s. Note to set it back to the default behavior:
echo zfs_prefetch_disable/W0t0 | mdb -kw
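
If the tweak proves worthwhile, the usual way to make it persist across
reboots is an /etc/system entry (a sketch; double-check the tunable name on
your particular release before relying on it):

    set zfs:zfs_prefetch_disable = 1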


--

Message: 4
Date: Fri, 25 Dec 2009 18:57:32 +0100
From: Saso Kiselkov 
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] ZFS write bursts cause short app stalls
Message-ID: <4b34fd0c.8090...@gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

I've started porting a video streaming application to opensolaris on
ZFS, and am hitting some pretty weird performance issues. The thing I'm
trying to do is run 77 concurrent video capture processes (roughly
430Mbit/s in total) all writing into separate 

[zfs-discuss] Zpool problems

2009-12-07 Thread Michael Armstrong
Hi, I'm using ZFS version 6 on Mac OS X 10.5 using the old MacOSForge pkg. 
When I'm writing files to the fs they are appearing as 1KB files, and if I do 
zpool status or scrub or anything, the command just hangs. However, I can 
still read the zpool OK; it's just writes and any diagnostics that are having 
problems. Any ideas how I can get more information, or what my symptoms 
resemble? I'm considering using the FreeBSD PPC port (as I have a PowerMac) 
for better ZFS support. Any thoughts would be great on why I'm having these 
problems.


Thanks
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] upgrading to the latest zfs version

2009-11-18 Thread Michael Armstrong
Hi guys, after reading the mailings yesterday I noticed someone was after 
upgrading to ZFS v21 (deduplication); I'm after the same. I installed 
osol-dev-127 earlier, which comes with v19, and then followed the instructions 
on http://pkg.opensolaris.org/dev/en/index.shtml to bring my system up to 
date. However, the system reports that no updates are available and stays at 
ZFS v19. Any ideas?
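
A quick sanity check along those lines (a sketch assuming the default
opensolaris.org publisher and a pool named rpool; adjust to your setup):

    pkg publisher                            # confirm which repository you're pulling from
    pkg set-publisher -O http://pkg.opensolaris.org/dev opensolaris.org
    pfexec pkg image-update                  # pull the newer build into a new boot environment
    zpool upgrade -v                         # list the pool versions the running bits support
    zpool upgrade rpool                      # only once the new build boots cleanly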


On 17 Nov 2009, at 10:28, zfs-discuss-requ...@opensolaris.org wrote:

> Send zfs-discuss mailing list submissions to
>   zfs-discuss@opensolaris.org
> 
> To subscribe or unsubscribe via the World Wide Web, visit
>   http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> or, via email, send a message with subject or body 'help' to
>   zfs-discuss-requ...@opensolaris.org
> 
> You can reach the person managing the list at
>   zfs-discuss-ow...@opensolaris.org
> 
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of zfs-discuss digest..."
> 
> 
> Today's Topics:
> 
>   1. Re: permanent files error, unable to access pool
>  (Cindy Swearingen)
>   2. Re: hung pool on iscsi (Tim Cook)
>   3. Re: hung pool on iscsi (Richard Elling)
>   4. Re: hung pool on iscsi (Jacob Ritorto)
>   5. Re: permanent files error, unable to access pool
>  (daniel.rodriguez.delg...@gmail.com)
>   6. building zpools on device aliases (sean walmsley)
>   7. Re: Best config for different sized disks (Erik Trimble)
>   8. [Fwd: [zfs-auto-snapshot] Heads up: SUNWzfs-auto-snapshot
>  obsoletion insnv 128] (Tim Foster)
> 
> 
> --
> 
> Message: 1
> Date: Mon, 16 Nov 2009 15:46:26 -0700
> From: Cindy Swearingen 
> To: "daniel.rodriguez.delg...@gmail.com"
>   
> Cc: zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] permanent files error, unable to access
>   pool
> Message-ID: <4b01d642.4010...@sun.com>
> Content-Type: text/plain; CHARSET=US-ASCII; format=flowed
> 
> Hi Daniel,
> 
> In some cases, when I/O is suspended, permanent errors are logged and
> you need to run a zpool scrub to clear the errors.
> 
> Are you saying that a zpool scrub cleared the errors that were
> displayed in the zpool status output? Or, did you also use zpool
> clear?
> 
> Metadata is duplicated even in a one-device pool but recovery
> must depend on the severity of metadata errors.
> 
> Thanks,
> 
> Cindy
> 
> On 11/16/09 13:18, daniel.rodriguez.delg...@gmail.com wrote:
>> Thanks Cindy,
>> 
>> In fact, after some research, I ran into the scrub suggestion and it worked 
>> perfectly. Now I think that the automated message in 
>> http://www.sun.com/msg/ZFS-8000-8A should mention something about scrub as a 
>> worthy attempt.
>> 
>> It was related to an external usb disk. I guess I am happy it happened now 
>> before I invested in getting a couple of other external disks as mirrors of 
>> the existing one. I guess I am better off installing an extra internal disk.
>> 
>> is this something common on usb disks? would it get improved in later 
>> versions of osol or it is somewhat of an incompatibility/unfriendliness of 
>> zfs with external usb disks?
> 
> 
> --
> 
> Message: 2
> Date: Mon, 16 Nov 2009 17:04:48 -0600
> From: Tim Cook 
> To: Martin Vool 
> Cc: zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] hung pool on iscsi
> Message-ID:
>   
> Content-Type: text/plain; charset="iso-8859-1"
> 
> On Mon, Nov 16, 2009 at 4:00 PM, Martin Vool  wrote:
> 
>> I already got my files back actually, and the disc already contains new
>> pools, so I have no idea how it was set.
>> 
>> I have to make a VirtualBox installation and test it.
>> Can you please tell me how to set the failmode?
>> 
>> 
>> 
> 
> http://prefetch.net/blog/index.php/2008/03/01/configuring-zfs-to-gracefully-deal-with-failures/
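
The short version of that link, for the archives (assuming a pool named
"tank"):

    zpool set failmode=continue tank    # accepted values are wait (the default), continue and panic
    zpool get failmode tank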
> 
> 
> --
> 
> Message: 3
> Date: Mon, 16 Nov 2009 15:13:49 -0800
> From: Richard Elling 
> To: Martin Vool 
> Cc: zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] hung pool on iscsi
> Message-ID: <7adfc8e2-3be0-48e6-8d5a-506d975a2...@gmail.com>
> Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
> 
> 
> On Nov 16, 2009, at 2:00 PM, Martin Vool wrote:
> 
>> I already got my files back actually, and the disc already contains new
>> pools, so I have no idea how it was set.
>> 
>> I have to make a VirtualBox installation and test it.
> 
> Don't forget to change VirtualBox's default cache flush setting.
> http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#OpenSolaris.2FZFS.2FVirtual_Box_Recommendations
>  -- richard
> 
> 
> 
> 
> --
> 
> Message: 4
> Date: Mon, 16 Nov 2009 18:22:17 -0500
> From: Jacob Ritorto 
> To: zfs