Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-03 Thread Tomas Ögren
On 03 February, 2010 - Frank Cusack sent me these 0,7K bytes:

> On February 3, 2010 12:04:07 PM +0200 Henu  wrote:
>> Is there a possibility to get a list of changed files between two
>> snapshots? Currently I do this manually, using basic file system
>> functions offered by OS. I scan every byte in every file manually and it
>   ^^^
>
> On February 3, 2010 10:11:01 AM -0500 Ross Walker   
> wrote:
>> Not a ZFS method, but you could use rsync with the dry run option to list
>> all changed files between two file systems.
>
> That's exactly what the OP is already doing ...

rsync by default compares metadata first, and only checks through every
byte if you add the -c (checksum) flag.

I would say rsync is the best tool here.

The "find -newer blah" suggested in other posts won't catch newer files
with an old timestamp (which could happen for various reasons, like
being copied with kept timestamps from somewhere else).
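
For illustration, a minimal sketch of the rsync dry-run comparison being
discussed, run against a snapshot through the .zfs directory (paths are
placeholders):

# Dry run: report files that differ between the live fs and the snapshot,
# including files deleted since the snapshot (--delete); nothing is written.
rsync -avn --delete /file/system/ /file/system/.zfs/snapshot/snapname/

# Add -c to force a byte-by-byte checksum instead of the default size+mtime
# comparison (much slower, but catches same-size/same-mtime changes).
rsync -avcn --delete /file/system/ /file/system/.zfs/snapshot/snapname/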

/Tomas
-- 
Tomas Ögren, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-03 Thread Jens Elkner
On Wed, Feb 03, 2010 at 06:46:57PM -0500, Ross Walker wrote:
> On Feb 3, 2010, at 12:35 PM, Frank Cusack  z...@linetwo.net> wrote:
> 
> >On February 3, 2010 12:19:50 PM -0500 Frank Cusack 
> > wrote:
> >>If you do need to know about deleted files, the find method still may
> >>be faster depending on how ddiff determines whether or not to do a
> >>file diff.  The docs don't explain the heuristics so I wouldn't want
> >>to guess on that.
> >
> >An improvement on finding deleted files with the find method would
> >be to not limit your find criteria to files.  Directories with
> >deleted files will be newer than in the snapshot so you only need
> >to look at those directories.  I think this would be faster than
> >ddiff in most cases.
> 
> So was there a final consensus on the best way to find the difference  
> between two snapshots (files/directories added, files/directories  
> deleted and file/directories changed)?
> 
> Find won't do it, ddiff won't do it,

ddiff does exactly this. However, it never looks at timestamps, since a
timestamp is the most unimportant/unreliable "tag" on a path component wrt.
"what has been changed", and it also does not take file permissions and xattrs
into account.  So ddiff is all about path names, types and content.
No more, but also no less ;-)

> I think the only real option is  
> rsync. Of course you can zfs send the snap to another system and do  
> the rsync there against a local previous version.

Probably the worst of all suggested alternatives ...

Have fun,
jel.
-- 
Otto-von-Guericke University http://www.cs.uni-magdeburg.de/
Department of Computer Science   Geb. 29 R 027, Universitaetsplatz 2
39106 Magdeburg, Germany Tel: +49 391 67 12768
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-03 Thread Ross Walker
On Feb 3, 2010, at 8:59 PM, Frank Cusack wrote:

> On February 3, 2010 6:46:57 PM -0500 Ross Walker wrote:
>
>> So was there a final consensus on the best way to find the difference
>> between two snapshots (files/directories added, files/directories
>> deleted and file/directories changed)?
>>
>> Find won't do it, ddiff won't do it, I think the only real option is
>> rsync.
>
> I think you misread the thread.  Either find or ddiff will do it and
> either will be better than rsync.


Find can find files that have been added or removed between two  
directory trees?


How?

-Ross

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-03 Thread Matthew Ahrens
This is RFE 6425091 "want 'zfs diff' to list files that have changed between 
snapshots", which covers both file & directory changes, and file 
removal/creation/renaming.  We actually have a prototype of zfs diff. 
Hopefully someday we will finish it up...
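
For reference, a hedged sketch of how such a 'zfs diff' is driven once it
exists (this mirrors the syntax the feature eventually shipped with in later
releases, not the prototype mentioned above; output format may differ):

# Changes between two snapshots of the same file system
zfs diff tank/home@monday tank/home@tuesday

# Typical output: one line per change, prefixed with
#   M (modified), + (created), - (removed), R (renamed)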


--matt

Henu wrote:

Hello

Is there a possibility to get a list of changed files between two 
snapshots? Currently I do this manually, using basic file system 
functions offered by OS. I scan every byte in every file manually and it 
is of course awfully slow.


If I have understood correctly, ZFS could use its own information about 
which files use which blocks, and thereby calculate the difference very 
quickly without having to scan every byte. Currently I haven't found any 
tools like this. My application uses libZFS to handle ZFS.


On the other hand, I have noticed that ZFS send generates difference 
very quickly, even if it needs to find the small difference between many 
unchanged files. From this, I have concluded that it may be using the 
ZFS information to quickly see if file has been modified or not. Do you 
have any idea how the send works? Maybe I could use its output to get 
the list of changed files...


Henrik Heino

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] unionfs help

2010-02-03 Thread Frank Cusack

Is it possible to emulate a unionfs with zfs and zones somehow?  My zones
are sparse zones and I want to make part of /usr writable within a zone.
(/usr/perl5/mumble to be exact)

I can't just mount a writable directory on top of /usr/perl5 because then
it hides all the stuff in the global zone.  I could repopulate it in the
local zone but ugh that is unattractive.  I'm hoping for a better way.
Creating a full zone is not an option for me.

I don't think this is possible but maybe someone else knows better.  I
was thinking something with snapshots and clones?

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-03 Thread Frank Cusack
On February 3, 2010 6:46:57 PM -0500 Ross Walker wrote:

> So was there a final consensus on the best way to find the difference
> between two snapshots (files/directories added, files/directories deleted
> and file/directories changed)?
>
> Find won't do it, ddiff won't do it, I think the only real option is
> rsync.


I think you misread the thread.  Either find or ddiff will do it and
either will be better than rsync.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-03 Thread Andrew Gabriel

Brandon High wrote:

> On Wed, Feb 3, 2010 at 3:13 PM, David Dyer-Bennet wrote:
>
>> Which is to say that 45 drives is really quite a lot for a HOME NAS.
>> Particularly when you then think about backing up that data.
>
> The origin of this thread was how to buy a J4500 (48 drive chassis).
>
> One thing that I enjoy about this list (and I'm sure the Sun guys get
> annoyed about) is the discussion of how to build various "small"
> systems for home use.

We don't get annoyed at all.
What do you think we build to run at home? ;-)

> After sitting on the sidelines for a while, I
> assembled an 8TB server for home.
>
> Yeah, 8TB is more than I can backup over my home DSL connection. But
> it's only got about 2.5TB in use, and most of that is our DVD
> collection that I've ripped to play on the Popcorn Hour in our living
> room, or CDs that I've ripped. I'd hate to have to re-rip it all, but
> I can get it back. The rest is photos and important documents which
> are copied to a VM instance and backed up offsite via Mozy.
>
> I'm considering doing a send/receive of a few volumes to a friend's
> system (as he will do to mine) to have offsite backups of the pools.
> It's mostly dependent on him buying more disk. ;-)
>
> And for what it's worth, my toying with ZFS and discussing it with
> coworkers has raised interest in Sun's storage line to replace NetApp
> at the office.

Absolutely.

My homebrew system has come up in many conversations about ZFS, which
has ended up with customers buying Thumper or Amber Road systems
from Sun. (but that's my job, I guess ;-)


--
Andrew
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-03 Thread Brandon High
On Wed, Feb 3, 2010 at 3:13 PM, David Dyer-Bennet  wrote:
> Which is to say that 45 drives is really quite a lot for a HOME NAS.
> Particularly when you then think about backing up that data.

The origin of this thread was how to buy a J4500 (48 drive chassis).

One thing that I enjoy about this list (and I'm sure the Sun guys get
annoyed about) is the discussion of how to build various "small"
systems for home use. After sitting on the sidelines for a while, I
assembled an 8TB server for home.

Yeah, 8TB is more than I can backup over my home DSL connection. But
it's only got about 2.5TB in use, and most of that is our DVD
collection that I've ripped to play on the Popcorn Hour in our living
room, or CDs that I've ripped. I'd hate to have to re-rip it all, but
I can get it back. The rest is photos and important documents which
are copied to a VM instance and backed up offsite via Mozy.

I'm considering doing a send/receive of a few volumes to a friend's
system (as he will do to mine) to have offsite backups of the pools.
It's mostly dependent on him buying more disk. ;-)

And for what it's worth, my toying with ZFS and discussing it with
coworkers has raised interest in Sun's storage line to replace NetApp
at the office.

-B

-- 
Brandon High : bh...@freaks.com
If violence doesn't solve your problem, you're not using enough of it.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] What happens when: file-corrupted and no-redundancy?

2010-02-03 Thread Robert Milkowski

On 03/02/2010 23:15, Aleksandr Levchuk wrote:

> We switched to OpenSolaris + ZFS. RAID6 + hot spare on LSI Engenio SAN
> hardware worked well for us. (I'm used to the SAN management GUI. Also,
> something that RAID-Z would not be able to do is: the SAN lights up the amber
> LEDs on the drives that fail, so I know which one to replace.)
>
> So, I wanted to try to stick to the hardware RAID for data protection. I
> understand that the end-to-end checks of ZFS make it better at detecting
> corruption.
>
> In my case, I can imagine that ZFS would FREEZE the whole volume when a single
> block or file is found to be corrupted.
>
> Ideally, I would not like this to happen and instead would like to get a log
> with names of corrupted files.
>
> What exactly happens when
> ZFS detects a corrupted block/file and does not have redundancy to correct it?
>
> Alex

Your wish is...
That's exactly what should happen - zpool status -v should provide you
with a list of affected files, which you should be able to delete. If the
corrupted block contained metadata, ZFS should actually be able to
fix it on the fly for you, as all metadata-related blocks are kept in at
least two copies even if no redundancy is configured at the pool level.


Let's test it:

mi...@r600:~# mkfile 128m file1
mi...@r600:~# zpool create test `pwd`/file1
mi...@r600:~# zpool status test
  pool: test
 state: ONLINE
 scrub: none requested
config:

        NAME                        STATE     READ WRITE CKSUM
        test                        ONLINE       0     0     0
          /export/home/milek/file1  ONLINE       0     0     0

errors: No known data errors
mi...@r600:~#
mi...@r600:~# cp /bin/bash /test/file1
mi...@r600:~# cp /bin/bash /test/file2
mi...@r600:~# cp /bin/bash /test/file3
mi...@r600:~# cp /bin/bash /test/file4
mi...@r600:~# cp /bin/bash /test/file5
mi...@r600:~# cp /bin/bash /test/file6
mi...@r600:~# cp /bin/bash /test/file7
mi...@r600:~# cp /bin/bash /test/file8
mi...@r600:~# cp /bin/bash /test/file9
mi...@r600:~# sync
mi...@r600:~# dd if=/dev/zero of=file1 seek=50 count=1 conv=notrunc
1+0 records in
1+0 records out
512 bytes (5.1 MB) copied, 0.179617 s, 28.5 MB/s
mi...@r600:~# sync
mi...@r600:~# zpool scrub test
mi...@r600:~# zpool status -v test
  pool: test
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: scrub completed after 0h0m with 7 errors on Thu Feb  4 00:18:40 
2010

config:

        NAME                        STATE     READ WRITE CKSUM
        test                        DEGRADED     0     0     7
          /export/home/milek/file1  DEGRADED     0     0    29  too many errors

errors: Permanent errors have been detected in the following files:

/test/file1
mi...@r600:~#
mi...@r600:~# rm /test/file1
mi...@r600:~# sync
mi...@r600:~# zpool scrub test
mi...@r600:~# zpool status -v test
  pool: test
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: scrub completed after 0h0m with 0 errors on Thu Feb  4 00:19:55 
2010

config:

        NAME                        STATE     READ WRITE CKSUM
        test                        DEGRADED     0     0     7
          /export/home/milek/file1  DEGRADED     0     0    29  too many errors

errors: No known data errors
mi...@r600:~# zpool clear test
mi...@r600:~# zpool scrub test
mi...@r600:~# zpool status -v test
  pool: test
 state: ONLINE
 scrub: scrub completed after 0h0m with 0 errors on Thu Feb  4 00:20:12 
2010

config:

        NAME                        STATE     READ WRITE CKSUM
        test                        ONLINE       0     0     0
          /export/home/milek/file1  ONLINE       0     0     0

errors: No known data errors
mi...@r600:~#
mi...@r600:~# ls -la /test/
total 7191
drwxr-xr-x  2 root root 10 2010-02-04 00:19 .
drwxr-xr-x 28 root root 30 2010-02-04 00:17 ..
-r-xr-xr-x  1 root root 799040 2010-02-04 00:17 file2
-r-xr-xr-x  1 root root 799040 2010-02-04 00:17 file3
-r-xr-xr-x  1 root root 799040 2010-02-04 00:17 file4
-r-xr-xr-x  1 root root 799040 2010-02-04 00:17 file5
-r-xr-xr-x  1 root root 799040 2010-02-04 00:17 file6
-r-xr-xr-x  1 root root 799040 2010-02-04 00:17 file7
-r-xr-xr-x  1 root root 799040 2010-02-04 00:18 file8
-r-xr-xr-x  1 root root 799040 2010-02-04 00:18 file9
mi...@r600:~#


--
Robert Milkowski
http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot receive incremental stream

2010-02-03 Thread Richard Elling
On Feb 3, 2010, at 3:52 PM, Jan Hlodan wrote:
> Hi,
> can anybody explain to me what this means?:

sure

> ips% pfexec zfs recv storage/ips/osol-dev/sync_repo < 
> /storage/snapshots_sync/osol-dev-incr-20100122-20100129
> cannot receive incremental stream: most recent snapshot of 
> storage/ips/osol-dev/sync_repo does not
> match incremental source
> ips%

Check the file system snapshots on each file system to see where the
discrepancy is.  If the receiving file system has a snapshot after the last
common snapshot, then you will get this message.

> I used to receive incremental snapshots to sync the ips repository, but now I 
> can't receive a new one.
> (option -F doesn't help)

Check the snapshots for each file system. Where I usually trip over this is when
my backup machine also has auto-snapshot enabled for the backup file systems.
Since this property is inherited, it can be set on the backup system 
accidentally.
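
For illustration, a minimal sketch of that check, using the dataset from the
original post (the snapshot name is a placeholder):

# Compare the snapshot lists on the sending and receiving machines
zfs list -t snapshot -o name,creation -s creation -r storage/ips/osol-dev/sync_repo

# If the receiving side has snapshots newer than the last common one,
# roll them back before re-running the receive
pfexec zfs rollback -r storage/ips/osol-dev/sync_repo@last-common-snapshot
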
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-02-03 Thread Simon Breden
Probably 6 in a RAID-Z2 vdev.

Cheers,
Simon
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-03 Thread Richard Elling
On Feb 3, 2010, at 3:46 PM, Ross Walker wrote:

> On Feb 3, 2010, at 12:35 PM, Frank Cusack  
> wrote:
> 
>> On February 3, 2010 12:19:50 PM -0500 Frank Cusack 
>>  wrote:
>>> If you do need to know about deleted files, the find method still may
>>> be faster depending on how ddiff determines whether or not to do a
>>> file diff.  The docs don't explain the heuristics so I wouldn't want
>>> to guess on that.
>> 
>> An improvement on finding deleted files with the find method would
>> be to not limit your find criteria to files.  Directories with
>> deleted files will be newer than in the snapshot so you only need
>> to look at those directories.  I think this would be faster than
>> ddiff in most cases.
> 
> So was there a final consensus on the best way to find the difference between 
> two snapshots (files/directories added, files/directories deleted and 
> file/directories changed)?
> 
> Find won't do it, ddiff won't do it, I think the only real option is rsync. 
> Of course you can zfs send the snap to another system and do the rsync there 
> against a local previous version.

bart(1m) is designed to do this.
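
A minimal sketch, with placeholder file system and snapshot names, of pointing
bart at two snapshots through the .zfs directory:

# Build a manifest of each snapshot's file tree
bart create -R /tank/fs/.zfs/snapshot/snap1 > /tmp/snap1.manifest
bart create -R /tank/fs/.zfs/snapshot/snap2 > /tmp/snap2.manifest

# Report files added, removed, or changed between the two
bart compare /tmp/snap1.manifest /tmp/snap2.manifest
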
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] What happens when: file-corrupted and no-redundancy?

2010-02-03 Thread Richard Elling
On Feb 3, 2010, at 3:15 PM, Aleksandr Levchuk wrote:

> We switched to OpenSolaris + ZFS. RAID6 + hot spare on LSI Engenio SAN 
> hardware worked well for us. (I'm used to the SAN management GUI. Also, 
> something that RAID-Z would not be able to do is: the SAN lights up the amber 
> LEDs on the drives that fail, so I know which one to replace.)

In Solaris, lighting service LEDs is part of the FMA framework agents.
When ZFS has repeated checksum failures from a device, it will notify
FMA, which is responsible for diagnosis and subsequent service alerts.

> So, I wanted to try to stick to the hardware RAID for data protection. I 
> understand that the end-to-end checks of ZFS make it better at detecting 
> corruption.
> 
> In my case, I can imagine that ZFS would FREEZE the whole volume when a single 
> block or file is found to be corrupted.

No.

> Ideally, I would not like this to happen and instead would like to get a log 
> with names of corrupted files.
> 
> What exactly happens when
> ZFS detects a corrupted block/file and does not have redundancy to correct it?

It depends on the failmode property setting and how your application reading
the file handles errors. In all cases, the "zpool status -xv" command can
display the file name or metadata that is corrupted.
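
For illustration, a minimal sketch of those two checks (the pool name is a
placeholder; failmode values are wait, continue or panic):

# How will the pool behave when it hits an unrecoverable error?
zpool get failmode tank

# Show only unhealthy pools, verbosely, including affected file names
zpool status -xv
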
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] cannot receive incremental stream

2010-02-03 Thread Jan Hlodan

Hi,
can anybody explain to me what this means?:

ips% pfexec zfs recv storage/ips/osol-dev/sync_repo < 
/storage/snapshots_sync/osol-dev-incr-20100122-20100129
cannot receive incremental stream: most recent snapshot of 
storage/ips/osol-dev/sync_repo does not

match incremental source
ips%

I used to receive incremental snapshots to sync the ips repository, but now I 
can't receive a new one.

(option -F doesn't help)
Thank you,
Regards,

Jan Hlodan
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-03 Thread Ross Walker
On Feb 3, 2010, at 12:35 PM, Frank Cusack <z...@linetwo.net> wrote:

> On February 3, 2010 12:19:50 PM -0500 Frank Cusack wrote:
>
>> If you do need to know about deleted files, the find method still may
>> be faster depending on how ddiff determines whether or not to do a
>> file diff.  The docs don't explain the heuristics so I wouldn't want
>> to guess on that.
>
> An improvement on finding deleted files with the find method would
> be to not limit your find criteria to files.  Directories with
> deleted files will be newer than in the snapshot so you only need
> to look at those directories.  I think this would be faster than
> ddiff in most cases.


So was there a final consensus on the best way to find the difference  
between two snapshots (files/directories added, files/directories  
deleted and file/directories changed)?


Find won't do it, ddiff won't do it, I think the only real option is  
rsync. Of course you can zfs send the snap to another system and do  
the rsync there against a local previous version.


-Ross

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-02-03 Thread Marc Nicholas
As I previously mentioned, I'm pretty happy with the 500GB Caviar
Blacks that I have :)

One word of caution: failure and rebuild times with 1TB+ drives can be
a concern. How many spindles were you planning?

-marc

On 2/3/10, Simon Breden  wrote:
> Sounds good.
>
> I was taking a look at the 1TB Caviar Black drives which are WD1001FALS I
> think.
> They seem to have superb user ratings and good reliability comments from
> many people.
>
> I consider these "full fat" drives as opposed to the LITE (green) drives, as
> they spin at 7200 rpm instead of 5400 rpm, have higher performance  and burn
> more juice than the Green models, but they have superb reviews from almost
> everyone regarding behaviour and reliability, and at the end of the day, we
> need good, reliable drives that work well in a RAID system.
>
> I can get them for around the same price as the cheapest 1.5TB green drives
> from Samsung.
> Somewhere I saw people saying that WDTLER.EXE works to allow reduction of
> the error reporting time like the enterprise RE versions (RAID Edition).
> However I then saw another user saying on the newer revisions WD have
> disabled this. I need to check a bit more to see what's really the case.
>
> Cheers,
> Simon
>
> http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
> --
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>

-- 
Sent from my mobile device
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] What happens when: file-corrupted and no-redundancy?

2010-02-03 Thread Aleksandr Levchuk
We switched to OpenSolaris + ZFS. RAID6 + hot spare on LSI Engenio SAN 
hardware worked well for us. (I'm used to the SAN management GUI. Also, 
something that RAID-Z would not be able to do is: the SAN lights up the amber 
LEDs on the drives that fail, so I know which one to replace.)

So, I wanted to try to stick to the hardware RAID for data protection. I 
understand that the end-to-end checks of ZFS make it better at detecting 
corruption.

In my case, I can imagine that ZFS would FREEZE the whole volume when a single 
block or file is found to be corrupted.

Ideally, I would not like this to happen and instead would like to get a log 
with names of corrupted files.

What exactly happens when
ZFS detects a corrupted block/file and does not have redundancy to correct it?

Alex
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-03 Thread David Dyer-Bennet

On Wed, February 3, 2010 17:02, Brandon High wrote:
> On Mon, Feb 1, 2010 at 8:58 PM, matthew patton  wrote:
>> what with the home NAS conversations, what's the trick to buy a J4500
>> without any drives? SUN like every other "enterprise" storage vendor
>> thinks it's ok to rape their customers and I for one, am not interested
>> in paying 10x for a silly SATA hard drive.
>
> Another solution, for a true DIY x4500: BackBlaze has schematics for
> the 45 drive chassis that they designed available on their website.
> http://blog.backblaze.com/2009/09/01/petabytes-on-a-budget-how-to-build-cheap-cloud-storage/

I'm just coming up on upgrading my 800GB pool to 1.2TB.  I know some home
NAS setups with 10 times my capacity, but I don't know how much data they
actually have on them.  And 10 times my capacity actually fits in my
current chassis with modern drives; I'm just still using the 400GB drives
I put in originally, and have two more sitting around for the upgrade I'm
heading towards.

Which is to say that 45 drives is really quite a lot for a HOME NAS. 
Particularly when you then think about backing up that data.
-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2010-02-03 Thread Simon Breden
Hi Tonmaus,

> they are the new revision. 

OK.

> I got the impression as well that the complaints you
> reported were mainly related to embedded Linux
> systems probably running LVM / mda. (thecus, Qnap,
> ) Other reports I had seen related to typical HW
> raids. I don't think the situation is comparable to
> ZFS. 

That could be the case, but maybe I'll have to create a specific thread along 
the lines of "Anyone having success / problems with WD Green drives?" in order 
to know a bit more details. There were Mac users also complaining -- see the 
WDC links in the "Best 1.5TB drives" thread.

> I have also followed some TLER related threads here.
> I am not sure if there was ever a clear assertion if
> consumer drive related Error correction will affect a
> ZFS pool or not. Statistically we should have a lot
> of "restrictive TLER settings helped me to solve my
> ZFS pool issues" success reports here, if it were.

IIRC, Richard said that he thought a troublesome non-RAID drive would affect 
MTTR rather than reliability. I.e. you'll have to manually intervene if a 
consumer drive causes the system to hang, and replace it, whereas the RAID 
edition drives will probably report the error quickly and then ZFS will rewrite 
the data elsewhere, and thus maybe not kick the drive.

So it sounds preferable to have TLER in operation, if one can find a 
consumer-priced drive that allows it, or just take the hit, go with whatever 
non-TLER drive you choose, and expect to have to intervene manually if a drive 
plays up. That's OK for a home user, who is not too badly affected, but not good 
for businesses which need to have things recovered quickly.

> That all rather points to singular issues with
> firmware bugs or similar than to a systematic issue,
> doesn't it?

I'm not sure.

Cheers,
Simon
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-03 Thread Brandon High
On Mon, Feb 1, 2010 at 8:58 PM, matthew patton  wrote:
> what with the home NAS conversations, what's the trick to buy a J4500 without 
> any drives? SUN like every other "enterprise" storage vendor thinks it's ok 
> to rape their customers and I for one, am not interested in paying 10x for a 
> silly SATA hard drive.

Another solution, for a true DIY x4500: BackBlaze has schematics for
the 45 drive chassis that they designed available on their website.
http://blog.backblaze.com/2009/09/01/petabytes-on-a-budget-how-to-build-cheap-cloud-storage/

Someone brought it up on the list a few months ago (which is how I
know about it) and there was some interesting discussion at that time.
There's no way I would use something like this for most installs, but
there is definitely some use. Now that opensolaris supports sata pmp,
you could use a similar chassis for a zfs pool.

-B

-- 
Brandon High : bh...@freaks.com
For sale: One moral compass, never used.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-02-03 Thread Simon Breden
Sounds good.

I was taking a look at the 1TB Caviar Black drives which are WD1001FALS I think.
They seem to have superb user ratings and good reliability comments from many 
people.

I consider these "full fat" drives as opposed to the LITE (green) drives, as 
they spin at 7200 rpm instead of 5400 rpm, have higher performance  and burn 
more juice than the Green models, but they have superb reviews from almost 
everyone regarding behaviour and reliability, and at the end of the day, we 
need good, reliable drives that work well in a RAID system.

I can get them for around the same price as the cheapest 1.5TB green drives 
from Samsung.
Somewhere I saw people saying that WDTLER.EXE works to allow reduction of the 
error reporting time like the enterprise RE versions (RAID Edition). However I 
then saw another user saying on the newer revisions WD have disabled this. I 
need to check a bit more to see what's really the case.

Cheers,
Simon

http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-02-03 Thread Simon Breden
That's a pity about smartmontools not working. Which controllers are you using?

Good news about no sleeping though, although perhaps not so economical.

I think I'd rather burn a bit more power and have drives that respond properly 
than have the weird timeout issues some people seem to be experiencing with some 
of the green low-power drives.

Cheers,
Simon
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-02-03 Thread Travis Tabbal
smartmontools doesn't work with my controllers. I can try it again when the 2 
new drives I've ordered arrive. I'll try connecting to the motherboard ports 
and see if that works with smartmontools. 

I haven't noticed any sleeping with the drives. I don't get any lag accessing 
the array or any error messages about them disappearing.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2010-02-03 Thread Tonmaus
Hi Simon,

they are the new revision. 
I got the impression as well that the complaints you reported were mainly 
related to embedded Linux systems probably running LVM / mda. (thecus, Qnap, 
) Other reports I had seen related to typical HW raids. I don't think the 
situation is comparable to ZFS. 
I have also followed some TLER related threads here. I am not sure if there was 
ever a clear assertion if consumer drive related Error correction will affect a 
ZFS pool or not. Statistically we should have a lot of "restrictive TLER 
settings helped me to solve my ZFS pool issues" success reports here, if it 
were. That all rather points to singular issues with firmware bugs or similar 
than to a systematic issue, doesn't it?

Cheers,

Tonmaus
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Version to upgrade to?

2010-02-03 Thread David Dyer-Bennet

On Wed, February 3, 2010 12:06, Richard Elling wrote:
> On Feb 2, 2010, at 7:58 PM, Tim Cook wrote:
>>
>> As an aside, is the stable branch being regularly patched now with
>> security and bug fixes?
>
> From the horse's mouth:
> http://sunsolve.sun.com/show.do?target=opensolaris

When I follow that through, it only offers me extras and HA Cluster;
nothing about ordinary OpenSolaris updates that I can find.

-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-03 Thread Jens Elkner
On Wed, Feb 03, 2010 at 12:19:50PM -0500, Frank Cusack wrote:
> On February 3, 2010 6:02:52 PM +0100 Jens Elkner 
>  wrote:
> >On Wed, Feb 03, 2010 at 10:29:18AM -0500, Frank Cusack wrote:
> >># newer files
> >>find /file/system -newer /file/system/.zfs/snapshot/snapname -type f
> >># deleted files
> >>cd /file/system/.zfs/snapshot/snapname
> >>find . -type f -exec "test -f /file/system/{} || echo {}" \;
> >>
> >>The above requires GNU find (for -newer), and obviously it only finds
> >>files.  If you need symlinks or directory names modify as appropriate.
> >>
> >>The above is also obviously to compare a snapshot to the current
> >>filesystem.  To compare two snapshots make the obvious modifications.
> >
> >Perhaps http://iws.cs.uni-magdeburg.de/~elkner/ddiff/ wrt. dir2dir cmp
> >may help as well (should be faster).
> 
> If you don't need to know about deleted files, it wouldn't be.  It's hard
> to be faster than walking through a single directory tree if ddiff has to
> walk through 2 directory trees.

Yepp, but I guess the 'test ...' invocation for each file alone is much
more time consuming, and IIRC the test -f path has to do several stats
as well, 'til it reaches its final target. So a lot of overhead again.

However, just finding newer files via 'find' is probably unbeatable ;-)
 
> If you do need to know about deleted files, the find method still may
> be faster depending on how ddiff determines whether or not to do a
> file diff.  The docs don't explain the heuristics so I wouldn't want
> to guess on that.

ddiff is a single process and basically travels recursively through
directories via a DirectoryStream (side by side) and stops at the
point where no more information is required to make the final decision
(depends on cmd line options). So for very deep dirs with a lot of
entries it needs [much] more memory than find, yes.

Not sure how DirectoryStream is implemented, but I guess it gets
mapped to readdir(3C) and friends ...

Regards,
jel.
-- 
Otto-von-Guericke University http://www.cs.uni-magdeburg.de/
Department of Computer Science   Geb. 29 R 027, Universitaetsplatz 2
39106 Magdeburg, Germany Tel: +49 391 67 12768
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Version to upgrade to?

2010-02-03 Thread Richard Elling
On Feb 2, 2010, at 7:58 PM, Tim Cook wrote:
> 
> As an aside, is the stable branch being regularly patched now with security 
> and bug fixes?

From the horse's mouth:
http://sunsolve.sun.com/show.do?target=opensolaris
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-03 Thread Frank Cusack
On February 3, 2010 12:19:50 PM -0500 Frank Cusack wrote:

> If you do need to know about deleted files, the find method still may
> be faster depending on how ddiff determines whether or not to do a
> file diff.  The docs don't explain the heuristics so I wouldn't want
> to guess on that.


An improvement on finding deleted files with the find method would
be to not limit your find criteria to files.  Directories with
deleted files will be newer than in the snapshot so you only need
to look at those directories.  I think this would be faster than
ddiff in most cases.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-03 Thread Frank Cusack
On February 3, 2010 6:02:52 PM +0100 Jens Elkner wrote:

> On Wed, Feb 03, 2010 at 10:29:18AM -0500, Frank Cusack wrote:
>
>> # newer files
>> find /file/system -newer /file/system/.zfs/snapshot/snapname -type f
>> # deleted files
>> cd /file/system/.zfs/snapshot/snapname
>> find . -type f -exec "test -f /file/system/{} || echo {}" \;
>>
>> The above requires GNU find (for -newer), and obviously it only finds
>> files.  If you need symlinks or directory names modify as appropriate.
>>
>> The above is also obviously to compare a snapshot to the current
>> filesystem.  To compare two snapshots make the obvious modifications.
>
> Perhaps http://iws.cs.uni-magdeburg.de/~elkner/ddiff/ wrt. dir2dir cmp
> may help as well (should be faster).


If you don't need to know about deleted files, it wouldn't be.  It's hard
to be faster than walking through a single directory tree if ddiff has to
walk through 2 directory trees.

If you do need to know about deleted files, the find method still may
be faster depending on how ddiff determines whether or not to do a
file diff.  The docs don't explain the heuristics so I wouldn't want
to guess on that.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Workaround for mpt timeouts in snv_127

2010-02-03 Thread Mark Nipper
Looks like I got the textbook response from Western Digital:
---
Western Digital technical support only provides jumper configuration and 
physical installation support for hard drives used in systems running the 
Linux/Unix operating systems. For setup questions beyond physical installation 
of your Western Digital hard drive, please contact the vendor of your 
Linux/Unix operating system.

Please install the drive in an XP or Vista system to test the drive following 
the information below.

---

etc.  So, doesn't look like I'll get any kind of reasonable response personally.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-03 Thread Toby Thain


On 2-Feb-10, at 10:11 PM, Marc Nicholas wrote:

> On Tue, Feb 2, 2010 at 9:52 PM, Toby Thain wrote:
>
>> On 2-Feb-10, at 1:54 PM, Orvar Korvar wrote:
>>
>>> 100% uptime for 20 years?
>>>
>>> So what makes OpenVMS so much more stable than Unix? What is the
>>> difference?
>>
>> The short answer is that uptimes like that are VMS *cluster*
>> uptimes. Individual hosts don't necessarily have that uptime, but
>> the cluster availability is maintained for extremely long periods.
>>
>> You can probably find more discussion of this in comp.os.vms.
>
> And the 15MB/sec of I/O throughput on that state-of-the-art cluster
> is something to write home about? ;)
>
> Seriously, as someone alluded to earlier, we're not comparing
> apples to apples. And a 9000 series VAX Cluster was one of the
> earlier multi-user systems I worked on for reference ;)
>
> Making that kind of stuff work with modern expectations and
> tolerances is a whole new kettle of fish...

OpenVMS runs on modern gear (Itanium).

--Toby

> -marc


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-03 Thread Jens Elkner
On Wed, Feb 03, 2010 at 10:29:18AM -0500, Frank Cusack wrote:
> On February 3, 2010 12:04:07 PM +0200 Henu  wrote:
> >Is there a possibility to get a list of changed files between two
> >snapshots?
> 
> Great timing as I just looked this up last night, I wanted to verify
> that an install program was only changing the files on disk that it
> claimed to be changing.  So I have to say, "come on".  It took me but
> one google search and the answer was one of the top 3 hits.
> 
> 
> 
> # newer files
> find /file/system -newer /file/system/.zfs/snapshot/snapname -type f
> # deleted files
> cd /file/system/.zfs/snapshot/snapname
> find . -type f -exec "test -f /file/system/{} || echo {}" \;
> 
> The above requires GNU find (for -newer), and obviously it only finds
> files.  If you need symlinks or directory names modify as appropriate.
> 
> The above is also obviously to compare a snapshot to the current
> filesystem.  To compare two snapshots make the obvious modifications.

Perhaps http://iws.cs.uni-magdeburg.de/~elkner/ddiff/ wrt. dir2dir cmp
may help as well (should be faster).

Have fun,
jel.
-- 
Otto-von-Guericke University http://www.cs.uni-magdeburg.de/
Department of Computer Science   Geb. 29 R 027, Universitaetsplatz 2
39106 Magdeburg, Germany Tel: +49 391 67 12768
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-03 Thread Andrey Kuzmin
On Wed, Feb 3, 2010 at 6:11 PM, Ross Walker  wrote:
> On Feb 3, 2010, at 9:53 AM, Henu  wrote:
>
>> Okay, so first of all, it's true that send is always fast and 100%
>> reliable because it uses blocks to see differences. Good, and thanks for
>> this information. If everything else fails, I can parse the information I
>> want from send stream :)
>>
>> But am I right, that there is no other methods to get the list of changed
>> files other than the send command?

At zfs_send level there are no files, just DMU objects (modified in
some txg which is the basis for changed/unchanged decision).

>>
>> And in my situation I do not need to create snapshots. They are already
>> created. The only thing that I need to do, is to get list of all the changed
>> files (and maybe the location of difference in them, but I can do this
>> manually if needed) between two already created snapshots.
>
> Not a ZFS method, but you could use rsync with the dry run option to list
> all changed files between two file systems.

That's painfully resource-intensive on both (sending and receiving)
ends, and it would IMHO be really beneficial to come up with an
interface that lets user-space (including off-the-shelf backup tools)
iterate over the objects changed between two given snapshots.
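
Purely as an illustration of what can be dug out today, a hedged sketch of
inspecting an incremental stream with zstreamdump and mapping a DMU object
number back to a path with zdb (dataset names, snapshot names and the object
number are placeholders; output formats vary between builds):

# Dump the records of an incremental send stream; WRITE records carry the
# object numbers that were modified between the two snapshots
zfs send -i tank/fs@snap1 tank/fs@snap2 | zstreamdump -v | grep -i write

# Map a DMU object number back to a file path within the dataset
zdb -ddddd tank/fs 12345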


Regards,
Andrey

>
> -Ross
>
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-03 Thread Frank Cusack

On February 3, 2010 12:04:07 PM +0200 Henu wrote:

> Is there a possibility to get a list of changed files between two
> snapshots? Currently I do this manually, using basic file system
> functions offered by OS. I scan every byte in every file manually and it
  ^^^

On February 3, 2010 10:11:01 AM -0500 Ross Walker wrote:

> Not a ZFS method, but you could use rsync with the dry run option to list
> all changed files between two file systems.


That's exactly what the OP is already doing ...

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-03 Thread Frank Cusack

On February 3, 2010 12:04:07 PM +0200 Henu wrote:

> Is there a possibility to get a list of changed files between two
> snapshots?


Great timing as I just looked this up last night, I wanted to verify
that an install program was only changing the files on disk that it
claimed to be changing.  So I have to say, "come on".  It took me but
one google search and the answer was one of the top 3 hits.



# newer files
find /file/system -newer /file/system/.zfs/snapshot/snapname -type f
# deleted files
cd /file/system/.zfs/snapshot/snapname
find . -type f -exec "test -f /file/system/{} || echo {}" \;

The above requires GNU find (for -newer), and obviously it only finds
files.  If you need symlinks or directory names modify as appropriate.

The above is also obviously to compare a snapshot to the current
filesystem.  To compare two snapshots make the obvious modifications.
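
As a side note, the -exec above hands the whole quoted string to find as a
single command name, so it will typically fail to execute; a hedged, corrected
sketch (same placeholder paths) that runs a small shell over batches of files:

# deleted files
cd /file/system/.zfs/snapshot/snapname
find . -type f -exec sh -c '
  for f in "$@"; do
    test -f "/file/system/$f" || printf "%s\n" "$f"
  done
' sh {} +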

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-03 Thread Ross Walker

On Feb 3, 2010, at 9:53 AM, Henu wrote:

> Okay, so first of all, it's true that send is always fast and 100%
> reliable because it uses blocks to see differences. Good, and thanks
> for this information. If everything else fails, I can parse the
> information I want from send stream :)
>
> But am I right, that there is no other methods to get the list of
> changed files other than the send command?
>
> And in my situation I do not need to create snapshots. They are
> already created. The only thing that I need to do, is to get list of
> all the changed files (and maybe the location of difference in them,
> but I can do this manually if needed) between two already created
> snapshots.


Not a ZFS method, but you could use rsync with the dry run option to  
list all changed files between two file systems.


-Ross

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-03 Thread Henu
Okay, so first of all, it's true that send is always fast and 100%  
reliable because it uses blocks to see differences. Good, and thanks  
for this information. If everything else fails, I can parse the  
information I want from send stream :)


But am I right, that there is no other methods to get the list of  
changed files other than the send command?


And in my situation I do not need to create snapshots. They are  
already created. The only thing that I need to do, is to get list of  
all the changed files (and maybe the location of difference in them,  
but I can do this manually if needed) between two already created  
snapshots.


Regards,
Henrik Heino

Quoting Andrey Kuzmin :


In the periodic snapshot/send diff scenario you presumably ask about,
zfs_send basically creates snapshot(n+1) and then performs pruned
tree-walk limited to blocks modified between snap(n) and snap(n+1).
See  
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/lib/libzfs/common/libzfs_sendrecv.c#1127

for details.

Regards,
Andrey




On Wed, Feb 3, 2010 at 1:04 PM, Henu  wrote:

Hello

Is there a possibility to get a list of changed files between two snapshots?
Currently I do this manually, using basic file system functions offered by
OS. I scan every byte in every file manually and it is of course awfully
slow.

If I have understood correctly, ZFS could use its own information about
which files use which blocks, and thereby calculate the difference very
quickly without having to scan every byte. Currently I haven't found any
tools like this. My application uses libZFS to handle ZFS.

On the other hand, I have noticed that ZFS send generates difference very
quickly, even if it needs to find the small difference between many
unchanged files. From this, I have concluded that it may be using the ZFS
information to quickly see if file has been modified or not. Do you have any
idea how the send works? Maybe I could use its output to get the list of
changed files...

Henrik Heino

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss







___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS snapshot issue.

2010-02-03 Thread Dan
Hello,

I've got similar problem now - I've got auto snapshot on up to 80% of the file 
system capacity. 

# zfs get quota rpool/export/home/danjagor
NAME PROPERTY  VALUE  SOURCE
rpool/export/home/danjagor  quota 60Glocal
# df -h /export/home/danjagor
FilesystemSize  Used Avail Use% Mounted on
rpool/export/home/danjagor
  8.0G  7.9G  106M  99% /export/home/danjagor
# zfs list |grep danjagor
rpool/export/home/danjagor  59.9G   106M  7.85G  /export/home/danjagor

there are apparently 46 snapshots -  10.3GB

So how is it possible that there is only 106M left?!?
I also know that once there isn't enough space on the device, it should start 
to delete the oldest snapshots, if I'm right. Therefore I should never really run 
out of space unless I really fill it up with files.
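
For reference, a minimal sketch of how to see where the space is going, using
the dataset name from above (the usedby* properties require a reasonably recent
pool/zfs version):

# How much space is held by the data itself vs. by snapshots?
zfs get used,usedbydataset,usedbysnapshots rpool/export/home/danjagor

# List the snapshots with the space each one holds
zfs list -t snapshot -r -o name,used,referenced rpool/export/home/danjagor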

Thanks,
Dan
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Help needed with zfs send/receive

2010-02-03 Thread Arnaud Brand

On 03/02/2010 09:25, Arnaud Brand wrote:

On 03/02/2010 04:44, Brent Jones wrote:

On Tue, Feb 2, 2010 at 7:41 PM, Brent Jones  wrote:

On Tue, Feb 2, 2010 at 12:05 PM, Arnaud Brand  wrote:

Hi folks,

I'm having (as the title suggests) a problem with zfs send/receive.
Command line is like this :
pfexec zfs send -Rp tank/t...@snapshot | ssh remotehost pfexec zfs 
recv -v -F

-d tank

This works like a charm as long as the snapshot is small enough.

When it gets too big (meaning somewhere between 17G and 900G), I 
get ssh

errors (can't read from remote host).

I tried various encryption options (the fastest being in my case 
arcfour)

with no better results.
I tried to setup a script to insert dd on the sending and receiving 
side to

buffer the flow, still read errors.
I tried with mbuffer (which gives better performance), it didn't 
get better.
Today I tried with netcat (and mbuffer) and I got better 
throughput, but it

failed at 269GB transferred.

The two machines are connected to the switch with 2x1GbE (Intel) 
joined

together with LACP.
The switch logs show no errors on the ports.
kstat -p | grep e1000g shows one recv error on the sending side.

I can't find anything in the logs which could give me a clue about 
what's

happening.

I'm running build 131.

If anyone has the slightest clue of where I could look or what I 
could do to
pinpoint/solve the problem, I'd be very gratefull if (s)he could 
share it

with me.

Thanks and have a nice evening.

Arnaud



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



This issue seems to have started after snv_129 for me. I get "connection
reset by peer", or transfers (of any kind) simply time out.

Smaller transfers succeed most of the time, while larger ones usually
fail. Rolling back to snv_127 (my last one) does not exhibit this
issue. I have not had time to narrow down any causes, but I did find
one bug report that noted some TCP test scenarios failed during one of
the builds; I have been unable to find that CR at this time.

--
Brent Jones
br...@servuhome.net


Ah, I found the CR that seemed to describe the situation (broken
pipe/connection reset by peer)

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6905510
This CR is marked as fixed in b131 and only relates to loopback or am 
I missing something ?


The transfer I started yesterday night finished with no errors :
/usr/bin/nc -l -p 8023 | /usr/local/bin/mbuffer -s1024k -m512M -P40 | 
dd of=/tank/repl.zfs bs=128k

summary: 1992 GByte in 7 h 10 min 78.9 MB/s, 8472x empty
2139334374612 bytes (2,1 TB) copied, 25860,5 s, 82,7 MB/s

So this seems to be linked in some way to high CPU load.
I'll change the network cables and, as Richard suggested, remove LACP.
Then I'll launch another transfer while at the same time zfs receiving
the file I transferred last night.
If the transfer fails I guess it will be related to e1000g problems 
under load, not zfs, so a better place to post would be 
opensolaris-discuss.


Thanks for your help,
Arnaud

Seems to be network-related: the transfer failed after 129GB even
without LACP and with different network cables.

I'll post to networking-discuss.
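
For anyone following along, a hedged sketch of the mbuffer-based pipeline being
tested in this thread (host name, port and dataset are placeholders, reusing the
names from earlier messages; not necessarily the exact command lines used):

# Receiving side: listen on a TCP port, buffer, then feed zfs recv
/usr/local/bin/mbuffer -s 1024k -m 512M -I 8023 | pfexec zfs recv -v -F -d tank

# Sending side: stream the snapshot into mbuffer, which ships it to the receiver
pfexec zfs send -Rp tank/t...@snapshot | /usr/local/bin/mbuffer -s 1024k -m 512M -O receiver:8023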

Arnaud
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-03 Thread Joerg Schilling
"David Dyer-Bennet"  wrote:


> When I was first in the industry, in 1969, it was fairly normal to only be
> able to connect DEC disks to a PDP-11; but even then there were
> third-party manufacturers making products and customers buying them.  Now,
> forty years down the road, computers are constructed from mostly generic
> components.  The disk drive is one of the ones that went generic first. 
> It's absurd that we can't handle small enterprise storage on
> standards-compliant drives at this point.

If a company advertises a product as being SCSI or SATA compliant, you need to be able to
connect every drive that is compliant with the standard. If this is not the case,
you can (in Europe) return the machine to the seller.

I remember, however, a case ~14 years ago where Seagate sold SCSI disks
that did not handle tagged command queuing and disconnect/reconnect correctly.
As a result, machines with these drives hung. Sun at the same time sold
drives with Sun-specific firmware that were OK.

Around 1990 we (H. Berthold AG - at that time the second biggest Sun OEM) also
sold disks with specific firmware that we ordered from the manufacturer after
we ran into problems with the standard firmware. We also sold disks to our
customers for more than the "street price", and the customers got extra value
for this.

If you buy an unapproved disk, things should work, but if you get into trouble
later, you may have to hire experts to help you prove _which_ company
gave you the defective hardware or system, as only that company would need
to provide warranty help.

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-03 Thread Henu

Hello

Is there a possibility to get a list of changed files between two  
snapshots? Currently I do this manually, using basic file system  
functions offered by OS. I scan every byte in every file manually and  
it is of course awfully slow.


If I have understood correctly, ZFS could use its own information  
about which files use which blocks, and thereby calculate the  
difference very quickly without having to scan every byte. Currently I  
haven't found any tools like this. My application uses libZFS to  
handle ZFS.


On the other hand, I have noticed that ZFS send generates difference  
very quickly, even if it needs to find the small difference between  
many unchanged files. From this, I have concluded that it may be using  
the ZFS information to quickly see if file has been modified or not.  
Do you have any idea how the send works? Maybe I could use its output  
to get the list of changed files...


Henrik Heino

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-03 Thread Thomas Burgess
>
> This seems to miss the point.  I presented an argument for why I think the
> qualified drives are a huge profit-center, not just making a reasonable
> profit on the work of qualification.
>
> In general, I'd much rather pay reasonable costs for each piece, rather
> than weird costs artificially shoved around to make things come out some
> strange way somebody favors.
>

That's easy for you to SAY but the fact of the matter is, in the REAL world
it doesn't hold water.  Almost all types of products have this exact same
model.  It's around because it works.  It's true for cheetos, cars, and hard
drives.  Saying you'd rather pay a "fair price" for things down the line and
actually DOING it are 2 different things.  I think if all companies followed
that mentality you'd see that you'd end up paying more, not less.



>
> It works great for me personally -- I'm using the software with other
> people's hardware, for free.
>
> But why should people who need a lot of storage pay proportionally more?
> I don't get that, that's grossly wrong.
> --
>

They don't HAVE to pay more.  It's about being realistic.  People who buy
sun hardware are doing so for more reasons than "to have a lot of storage".

The cost per GB isn't the only factor that should be thought of.  For raw
storage it's certainly cheaper to buy a cheap case, throw in 20-24 1-2TB
consumer grade drives in 2-3 raidz2's and have a reasonable level of fault
tolerance, but that TOTALLY misses the point.  You don't buy a sunfire
x4540 or j45xx just for raw storage.  If you want the cheapest raw storage
possible you can always go build one of those BackBlaze storage pods.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-02-03 Thread Arnaud Brand

On 02/02/2010 20:26, Marc Nicholas wrote:

> On Tue, Feb 2, 2010 at 1:38 PM, Brandon High wrote:
>
>> On Sat, Jan 16, 2010 at 9:47 AM, Simon Breden <sbre...@gmail.com> wrote:
>>> Which consumer-priced 1.5TB drives do people currently recommend?
>>
>> I happened to be looking at the Hitachi product information, and
>> noticed that the Deskstar 7K2000 appears to be supported in RAID
>> configurations. One of the applications listed is "Video editing
>> arrays".
>>
>> http://www.hitachigst.com/portal/site/en/products/deskstar/7K2000/
>
> I've been having good success with the Western Digital Caviar Black
> drives...which are cousins of their Enterprise RE3 platform. AFAIK,
> you're stuck at 1TB or 2TB capacities but I've managed to get some
> good deals on them...
>
> -marc

We've been running HDS722020ALA330 drives (revisions A20N and A28A - 8 of each) for 
5 weeks now without any problems.
We also have 14 WD20EADS (at least 3 different versions starting with 
00S, 00R, 32S - can't tell right now, the inquiry string is truncated and the disks 
are in the servers).

All of these disks were bought at the same time (about 5 weeks ago) and 
while we had no problems with the Hitachi drives, we have already had 4 
failures with the WD, which apparently cannot be linked to the server, 
since both servers are the same and the disks are mixed across the 
controllers (part of them WD, part of them Hitachi).

On the performance side, the Hitachi drives seem more responsive (disks are 
configured in raidz3 and the limiting factor in IOs is the WD drives).
Regarding sequential write performance, we made some tests with dd, writing 
at the beginning of the disk and at the end of the disk (512MB each time).
The Hitachi sustains 130MB/s at the beginning of the drive and 75MB/s at the 
end. For the WD, the figures are 110MB/s and 55MB/s.

These figures are from memory.
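
For reference, a hedged sketch of this kind of raw sequential throughput test
(device name and offset are placeholders; this version reads rather than
writes, since writing to a disk that is part of a pool would destroy data):

# ~512MB sequential read at the start of the disk
dd if=/dev/rdsk/c0t0d0s2 of=/dev/null bs=1024k count=512

# ~512MB sequential read near the end of a 2TB disk (iseek is in bs-sized blocks)
dd if=/dev/rdsk/c0t0d0s2 of=/dev/null bs=1024k count=512 iseek=1900000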

We decided that we won't buy any WD drives.
When the warranty's over they'll get replaced with Hitachis.

--arnaud





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Help needed with zfs send/receive

2010-02-03 Thread Arnaud Brand

On 03/02/2010 04:44, Brent Jones wrote:

On Tue, Feb 2, 2010 at 7:41 PM, Brent Jones  wrote:
   

On Tue, Feb 2, 2010 at 12:05 PM, Arnaud Brand  wrote:
 

Hi folks,

I'm having (as the title suggests) a problem with zfs send/receive.
Command line is like this :
pfexec zfs send -Rp tank/t...@snapshot | ssh remotehost pfexec zfs recv -v -F
-d tank

This works like a charm as long as the snapshot is small enough.

When it gets too big (meaning somewhere between 17G and 900G), I get ssh
errors (can't read from remote host).

I tried various encryption options (the fastest being in my case arcfour)
with no better results.
I tried to setup a script to insert dd on the sending and receiving side to
buffer the flow, still read errors.
I tried with mbuffer (which gives better performance), it didn't get better.
Today I tried with netcat (and mbuffer) and I got better throughput, but it
failed at 269GB transferred.

The two machines are connected to the switch with 2x1GbE (Intel) joined
together with LACP.
The switch logs show no errors on the ports.
kstat -p | grep e1000g shows one recv error on the sending side.

I can't find anything in the logs which could give me a clue about what's
happening.

I'm running build 131.

If anyone has the slightest clue of where I could look or what I could do to
pinpoint/solve the problem, I'd be very gratefull if (s)he could share it
with me.

Thanks and have a nice evening.

Arnaud



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


   

This issue seems to have started after snv_129 for me. I get "connection
reset by peer", or transfers (of any kind) simply time out.

Smaller transfers succeed most of the time, while larger ones usually
fail. Rolling back to snv_127 (my last one) does not exhibit this
issue. I have not had time to narrow down any causes, but I did find
one bug report that noted some TCP test scenarios failed during one of
the builds; I have been unable to find that CR at this time.

--
Brent Jones
br...@servuhome.net

 

Ah, I found the CR that seemed to describe the situation (broken
pipe/connection reset by peer)

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6905510
   
This CR is marked as fixed in b131 and only relates to loopback or am I 
missing something ?


The transfer I started yesterday night finished with no errors :
/usr/bin/nc -l -p 8023 | /usr/local/bin/mbuffer -s1024k -m512M -P40 | dd 
of=/tank/repl.zfs bs=128k

summary: 1992 GByte in 7 h 10 min 78.9 MB/s, 8472x empty
2139334374612 bytes (2,1 TB) copied, 25860,5 s, 82,7 MB/s

So this seems to be linked in some way to high CPU load.
I'll change the network cables and, as Richard suggested, remove LACP.
Then I'll launch another transfer while at the same time zfs receiving
the file I transferred last night.
If the transfer fails I guess it will be related to e1000g problems 
under load, not zfs, so a better place to post would be opensolaris-discuss.


Thanks for your help,
Arnaud

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss