Re: [zfs-discuss] (Fletcher+Verification) versus (Sha256+No Verification)

2011-01-12 Thread Enrico Maria Crisostomo
Edward, this is OT, but may I suggest you use something like Wolfram Alpha to 
perform your calculations a bit more comfortably?

-- 
Enrico M. Crisostomo

On Jan 12, 2011, at 4:24, Edward Ned Harvey 
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:

 For anyone who still cares:
 
 I'm calculating the odds of a sha256 collision in an extremely large zpool,
 containing 2^35 blocks of data, and no repetitions.
 
 The formula on wikipedia for the birthday problem is:
 p(n;d) ~= 1-( (d-1)/d )^( 0.5*n*(n-1) )
 
 In this case, 
 n=2^35
 d=2^256
 
 The problem is, this formula does not compute directly because n is so large.
 Fortunately x = e^ln(x), so we can use this identity to turn the huge
 exponent into a huge multiplication instead.
 
 (Using the bc mathlib notation, the l(x) function is the natural log of x,
 and e(x) is e raised to the power of x)
 p(n;d) ~= 1-e(   (  0.5*n*(n-1)*l((d-1)/d)  )   )
 
 Using bc to calculate the answer:
 bc -l
 
 n=2^35
 d=2^256
 scale=1024
 1-e(   (  0.5*n*(n-1)*l((d-1)/d)  )   )
 .00000...00050978941154   (leading zeros elided; there are 56 of them)
 I manually truncated here (precision goes out to 1024 places).  This is
 5.1*10^-57
 
 Note: I had to repeat the calculation many times, setting a larger and
 larger scale.  The default scale of 20, and even 64 and 70 and 80 were not
 precise enough to produce a convergent answer around the -57th decimal
 place.  So I just kept going larger, and in retrospect, anything over 100
 would have been fine.  I wrote 1024 above, so who cares.
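The same computation can be sketched in Python (an illustrative script, not part of the original thread); math.log1p and math.expm1 preserve precision around the tiny result, playing the same role as the x = e^ln(x) trick and bc's large scale, and the answer agrees with the small-n approximation p ~= n(n-1)/(2d):

```python
import math

# Birthday-collision probability, same formula as the bc session:
#   p(n; d) ~= 1 - ((d - 1) / d)^(n * (n - 1) / 2)
# log1p/expm1 keep the tiny probability from rounding to 0 or 1.
def collision_probability(n, d):
    exponent = 0.5 * n * (n - 1) * math.log1p(-1.0 / d)
    return -math.expm1(exponent)  # == 1 - e^exponent, accurate near 0

n = 2 ** 35   # blocks in the pool
d = 2 ** 256  # distinct sha256 values

p = collision_probability(n, d)
print(p)                          # ~5.1e-57, matching the bc result
print(n * (n - 1) / (2.0 * d))    # small-n approximation n(n-1)/(2d)
```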
 
 If you've been paying close attention, you'll recognize this is the same
 answer I originally calculated, but my original equation was in fact wrong.
 It just so happens that my original equation neglects the probability of
 collisions from previous events, so it is accurate whenever that cumulative
 probability is insignificant.  It is merely luck that, for the data size in
 question, my equation produced something that looked correct.  It would
 produce a wrong probability for a larger value of n.
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Storage server hardwae

2010-08-25 Thread Enrico Maria Crisostomo
I'm currently using SXCE with eSATA (with an LSI controller) and SAS disks in 
my home boxes, and they run just fine. The only glitch since live-upgrading 
to the latest release is that the eSATA disks no longer spin down when idle.

I export file systems over NFS to my Macs: beware that Mac OS X uses decomposed 
UTF-8 characters, so I sometimes have portability issues when file names 
contain, for example, accented characters. Still, it runs fine and noticeably 
better than CIFS, IMHO.
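The decomposed-UTF-8 issue is easy to reproduce (an illustrative Python snippet, not from the original message): Mac OS X stores file names in decomposed form (NFD), so a name written from a Mac and the "same" name typed in precomposed form (NFC) are different byte sequences until one side normalizes:

```python
import unicodedata

nfc = "caf\u00e9"                        # precomposed: 'é' is U+00E9
nfd = unicodedata.normalize("NFD", nfc)  # decomposed: 'e' + U+0301

# Same rendered name, different code points and different UTF-8 bytes,
# which is what breaks naive file-name matching across NFS clients.
print(nfc == nfd)                        # False
print(nfc.encode("utf-8"))               # b'caf\xc3\xa9'
print(nfd.encode("utf-8"))               # b'cafe\xcc\x81'

# Normalizing both sides to a common form restores equality.
print(unicodedata.normalize("NFC", nfd) == nfc)  # True
```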

In some cases I use an OS X iSCSI initiator and COMSTAR: it runs fine and it's 
the only solution I found if you need, for example, to run Time Machine on a 
ZFS volume.

Bye,
Enrico
-- 
Enrico M. Crisostomo

On Aug 25, 2010, at 21:29, Dr. Martin Mundschenk m.mundsch...@me.com wrote:

 Hi!
 
 I'm running an OSOL box for quite a while and I think ZFS is an amazing 
 filesystem. As a computer I use an Apple Mac Mini with USB and FireWire devices 
 attached. Unfortunately the USB, and sometimes the FireWire, devices just die, 
 causing the whole system to stall and forcing me to do a hard reboot.
 
 I had the worst experience with a USB-SATA bridge based on an Oxford chipset: 
 the four external devices stalled randomly within a day or so. 
 I switched to a four-slot RAID box, also with a USB bridge, but with better 
 reliability.
 
 Well, I wonder what the components are to build a stable system without 
 buying an enterprise solution: eSATA, USB, FireWire, Fibre Channel?
 
 Martin


Re: [zfs-discuss] Is it safe/possible to idle HD's in a ZFS Vdev to save wear/power?

2010-04-18 Thread Enrico Maria Crisostomo
Hi.

I'm using two SIIG eSATA II PCIe PRO adapters on a Sun Ultra 24
workstation, too. The adapters are connected to four external eSATA
drives that make up a zpool used for scheduled backups. I'm
now running SXCE b129, live-upgraded from b116. Before the live
upgrade the external disks would spin down when idle; now they
never do. /etc/power.conf was not modified. I'll probably wait for the
next OpenSolaris release and check whether it works. Nevertheless, in
the meanwhile, do you have any suggestions about how to debug this? I
don't like wasting energy this way, and I cannot shut down that machine
very often.

Thanks,
Enrico

On Sat, Apr 17, 2010 at 8:43 AM, Bill Sommerfeld
bill.sommerf...@oracle.com wrote:
 On 04/16/10 20:26, Joe wrote:

 I was just wondering if it is possible to spindown/idle/sleep hard disks
 that are part of a Vdev  pool SAFELY?

 it's possible.

 my ultra24 desktop has this enabled by default (because it's a known desktop
 type).  see the power.conf man page; I think you may need to add an autopm
 enable if the system isn't recognized as a known desktop.

 the disks spin down when the system is idle; there's a delay of a few
 seconds when they spin back up.

                                        - Bill
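For reference, the power.conf(4) entries involved look something like the fragment below (illustrative only: the threshold value and the physical device path are made up, and real paths are system-specific):

```
# /etc/power.conf fragment (illustrative)
# Enable automatic device power management even if the machine is
# not recognized as a known desktop type:
autopm enable

# Spin a specific disk down after 15 minutes idle; the device path
# here is a placeholder for your system's actual physical path:
device-thresholds /pci@0,0/pci-ide@1f,2/ide@0/disk@0,0 15m
```

After editing the file, running pmconfig makes the power daemon reread it.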




-- 
Ελευθερία ή θάνατος
Programming today is a race between software engineers striving to
build bigger and better idiot-proof programs, and the Universe trying
to produce bigger and better idiots. So far, the Universe is winning.
GPG key: 1024D/FD2229AF


Re: [zfs-discuss] automate zpool scrub

2009-11-01 Thread Enrico Maria Crisostomo
It's OK to use the full path to zpool. Nonetheless, I'd suggest you read the  
crontab man page to learn how you can set some options, such as paths,  
shell, and timezones, directly in your crontab files.
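A hypothetical crontab entry illustrating the idea (the schedule and pool name below are made up; cron runs each entry through /bin/sh, so a leading variable assignment in sh syntax also works):

```
# Run a scrub of pool "tank" every Sunday at 03:00 using the full path:
0 3 * * 0 /usr/sbin/zpool scrub tank

# Equivalent, setting PATH just for this command via sh syntax:
0 3 * * 0 PATH=/usr/bin:/usr/sbin zpool scrub tank
```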


On Nov 1, 2009, at 16:45, Vano Beridze vanua...@gmail.com wrote:

Now I've logged in and there was a mail saying that cron did not  
find zpool.


It's in my path:
which zpool
/usr/sbin/zpool

Does cron use a different PATH setting?

Is it OK to specify /usr/sbin/zpool in the crontab file?
--
This message posted from opensolaris.org


Re: [zfs-discuss] automate zpool scrub

2009-11-01 Thread Enrico Maria Crisostomo
Glad it helped you.

As far as your observation about the root user is concerned, please
take into account that Solaris Role-Based Access Control lets you
fine-tune the privileges you grant to users: your ZFS administrator
need not be root. Specifically, if you have a look at /etc/prof_attr
and /etc/exec_attr, you'll notice that there are two profiles, ZFS
Storage Management and ZFS File System Management:

exec_attr:ZFS File System Management:solaris:cmd:::/sbin/zfs:euid=0
exec_attr:ZFS Storage Management:solaris:cmd:::/sbin/zpool:uid=0

You can run the zfs and zpool commands from a mortal user account
with pfexec if that user is associated with the corresponding
profile.
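For example, associating a (hypothetical) user enrico with the profile is a one-line entry in /etc/user_attr, after which pfexec runs zpool with the profile's attributes; the pool name below is also illustrative:

```
# /etc/user_attr (illustrative entry; the fields are
# user:qualifier:res1:res2:attr):
enrico::::profiles=ZFS Storage Management

# Then, from enrico's shell:
#   $ pfexec zpool scrub tank
```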

Bye,
Enrico

On Sun, Nov 1, 2009 at 9:03 PM, Vano Beridze vanua...@gmail.com wrote:
 I've looked at man cron and found out that I can modify the /etc/default/cron 
 file to set PATH, which defaults to /usr/bin for mortal users and 
 /usr/bin:/usr/sbin for root.

 I did not change the /etc/default/cron file; instead I've put the full path in 
 my crontab file.

 Strictly speaking, I guess scrubbing the filesystem weekly is an administrative 
 task more applicable to the root user, so if I had created the crontab job 
 for root the whole PATH problem would not have arisen.

 Anyway, it's my desktop, so I'm the only user here, and it makes no big 
 difference which user's crontab does the job. :)




-- 
Ελευθερία ή θάνατος
Programming today is a race between software engineers striving to
build bigger and better idiot-proof programs, and the Universe trying
to produce bigger and better idiots. So far, the Universe is winning.
GPG key: 1024D/FD2229AF


Re: [zfs-discuss] White box server for OpenSolaris

2009-09-25 Thread Enrico Maria Crisostomo
On Fri, Sep 25, 2009 at 10:56 PM, Toby Thain t...@telegraphics.com.au wrote:

 On 25-Sep-09, at 2:58 PM, Frank Middleton wrote:

 On 09/25/09 11:08 AM, Travis Tabbal wrote:

 ... haven't heard if it's a known
 bug or if it will be fixed in the next version...

 Out of courtesy to our host, Sun makes some quite competitive
 X86 hardware. I have absolutely no idea how difficult it is
 to buy Sun machines retail,

 Not very difficult. And there is try and buy.
Indeed, at least in Spain and in Italy I had no problem buying
workstations. I have recently owned both a Sun Ultra 20 M2 and an Ultra
24. I was very happy with them, and the price seemed very competitive
compared with offers from other mainstream hardware vendors.


 People overestimate the cost of Sun, and underestimate the real value of
 fully integrated.
+1. People like full integration when it comes, for example, to
Apple, iPods, and iPhones. When it comes, to make another
example, to Solaris, ZFS, ECC memory and so forth (do you remember
those posts some time ago?), they quickly forget.


 --Toby

 but it seems they might be missing
 out on an interesting market - robust and scalable SOHO servers
 for the DIY gang ...

 Cheers -- Frank






-- 
Ελευθερία ή θάνατος
Programming today is a race between software engineers striving to
build bigger and better idiot-proof programs, and the Universe trying
to produce bigger and better idiots. So far, the Universe is winning.
GPG key: 1024D/FD2229AF


Re: [zfs-discuss] libzfs.h versioning

2009-09-23 Thread Enrico Maria Crisostomo
Richard,

I compared the libzfs_jni source code with ours, and they're pretty
different. libzfs_jni is essentially a JNI wrapper over
(yet?) another set of ZFS-related programs written in C. zfs for Java,
on the other hand, is a Java wrapper over the functionality of (and only
of) libzfs. I suppose that libzfs_jni's capabilities could be
implemented on top of zfs for Java, but the approach is quite
different. The main difference is the purpose of the exposed methods:
libzfs is the interface to ZFS, and its methods are low level, while
libzfs_jni exposes a set of operations that are coarse-grained and
targeted at management.

Nevertheless, the functionality provided by libzfs_jni is interesting,
and I'd like to build something similar using zfs for Java.
Personally, I'm doing this for two reasons: a libzfs wrapper
for Java seems like a good thing to have, and I'd like to use it to
build some management interfaces (web-based and otherwise) instead of
having to rely on shell scripting with the zfs and zpool commands. I'll
keep an eye on libzfs_jni.

Now, to return to the original question: I haven't found a way to
correlate libzfs.h versions (and dependencies) with Nevada releases. At
the moment, I plan to extract information from a sysinfo call
(any suggestion about a better way?), and the next step, whose logic
I'm missing, is how to correlate this information with a concrete
libzfs.h version from OpenGrok. Maybe it's trivial, but I can't
see it. Have you got any information to help me address this
problem?

Thanks,
Enrico

On Fri, Sep 11, 2009 at 12:53 AM, Enrico Maria Crisostomo
enrico.m.crisost...@gmail.com wrote:
 On Fri, Sep 11, 2009 at 12:26 AM, Richard Elling
 richard.ell...@gmail.com wrote:
 On Sep 10, 2009, at 1:03 PM, Peter Tribble wrote:

 On Thu, Sep 10, 2009 at 8:52 PM, Richard Elling
 richard.ell...@gmail.com wrote:

 Enrico,
 Could you compare and contrast your effort with the existing libzfs_jni?

 http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/lib/libzfs_jni/common/

 Where's the source for the java code that uses that library?

 Excellent question!  It is used for the BUI ZFS manager webconsole that
 comes with S10 and SXCE. So you might find the zfs.jar as
 /usr/share/webconsole/webapps/zfs/WEB-INF/lib/zfs.jar
 The jar only contains the class files, though.
 Yes, that's what I thought when I saw it. Furthermore, the last time I
 tried it, it was still unaligned with the new ZFS capabilities: it crashed
 because of an unknown gzip compression type...


 Someone from Sun could comment on the probability that they
 will finally get their act together and have a real BUI framework for
 systems management... they've tried dozens (perhaps hundreds)
 of times, with little to show for the effort :-(
 By the way, one of the goals I'd like to reach with this kind of
 library is exactly that: laying the groundwork for a Java-based
 management framework for ZFS. Unfortunately, wrapping libzfs alone will
 hardly fulfill this goal, and the more I dig into the code the more I
 realize that we will need to wrap (or reimplement) some of the logic
 of the zfs and zpool commands. I'm also confident that building a good
 library on top of this wrapper will give us a very powerful tool to
 play with from Java.

  -- richard





 --
 Ελευθερία ή θάνατος
 Programming today is a race between software engineers striving to
 build bigger and better idiot-proof programs, and the Universe trying
 to produce bigger and better idiots. So far, the Universe is winning.
 GPG key: 1024D/FD2229AF




-- 
Ελευθερία ή θάνατος
Programming today is a race between software engineers striving to
build bigger and better idiot-proof programs, and the Universe trying
to produce bigger and better idiots. So far, the Universe is winning.
GPG key: 1024D/FD2229AF


[zfs-discuss] libzfs.h versioning

2009-09-10 Thread Enrico Maria Crisostomo
Hi.

I'm going to maintain a project hosted on java.net
(https://zfs.dev.java.net/) that aims to provide a Java wrapper to
libzfs. I've already wrapped, although not committed yet, the latest
libzfs.h I found on OpenSolaris.org (v. 10342:108f0058f837), and the
first problem I want to address is library versioning. The existing
sources wrap an old version of libzfs.h and, as far as I can
see, there were changes in libzfs.h's history which would disrupt the
wrapper functionality, and I just wouldn't like to present the user
with linker errors. Rather, I'd like to keep track of libzfs.h's
history during the various Nevada builds and plug in the correct
wrapper at runtime, during the library bootstrap. Obviously, a user
could just pick a wrapper and use it directly, which would be the
equivalent of linking against libzfs, and do whatever they want with it.
Our idea, which the project founder has already pursued, is
wrapping much of the functionality (if not all...) behind a hierarchy
of Java classes which take care of the implementation details
and shield the user against library changes.

The first question, then, is: how can I determine which libzfs.h
version went into which Nevada build? Once I have this information,
how would you suggest plugging in the wrapper at runtime? I was
thinking about something like uname -rv and using that information to
load wrappers, but perhaps there are finer ways to do this.
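As a sketch of that bootstrap idea (purely hypothetical: the wrapper names and the build-to-wrapper table below are invented for illustration), one could parse the build number out of the `uname -v` string, which reads like snv_110 on Nevada/SXCE, and pick the newest wrapper generation the running build supports:

```python
import re

# Invented table: minimum Nevada build -> wrapper to load.
# A real table would be derived from libzfs.h's change history.
WRAPPER_GENERATIONS = [
    (77, "org.example.zfs.WrapperV1"),
    (100, "org.example.zfs.WrapperV2"),
    (110, "org.example.zfs.WrapperV3"),
]

def parse_build(uname_v):
    """Extract the build number from a string like 'snv_110'."""
    match = re.search(r"snv_(\d+)", uname_v)
    if match is None:
        raise ValueError("not a Nevada build string: %r" % uname_v)
    return int(match.group(1))

def select_wrapper(build):
    """Newest wrapper whose minimum build does not exceed `build`."""
    candidates = [name for min_build, name in WRAPPER_GENERATIONS
                  if min_build <= build]
    if not candidates:
        raise ValueError("no wrapper supports build %d" % build)
    return candidates[-1]

# e.g. select_wrapper(parse_build("snv_110")) -> "org.example.zfs.WrapperV3"
```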

Thanks in advance,
Enrico

-- 
Enrico M. Crisostomo


Re: [zfs-discuss] libzfs.h versioning

2009-09-10 Thread Enrico Maria Crisostomo
Thanks for pointing it out, Richard. I had missed libzfs_jni. I'll have a
look at it and see where we're overlapping.

From a quick glance, libzfs_jni includes functionality we'd like to
build upon the libzfs wrapper (that's why I was studying the zfs and
zpool commands). Maybe a convergence would be worthwhile: I'll study it
ASAP.

Thanks for the pointer, Richard!
Enrico

On Thu, Sep 10, 2009 at 9:52 PM, Richard Elling
richard.ell...@gmail.com wrote:
 Enrico,
 Could you compare and contrast your effort with the existing libzfs_jni?
 http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/lib/libzfs_jni/common/

 Perhaps it would be worthwhile to try and un-privatize libzfs_jni?
  -- richard

 On Sep 10, 2009, at 12:20 PM, Enrico Maria Crisostomo wrote:

 Hi.

 I'm willing to maintain a project hosted on java.net
 (https://zfs.dev.java.net/) that aims to provide a Java wrapper to
 libzfs. I've already wrapped, although not committed yet, the last
 libzfs.h I found on OpenSolaris.org (v. 10342:108f0058f837) and the
 first problem I want to address is library versioning. The existing
 sources are wrapping an old version of libzfs.h and, as far as I can
 see, there were changes in libzfs.h history which would disrupt the
 wrapper functionality and I just wouldn't like to present the user
 with linker errors. Rather, I'd like to keep track of libzfs.h
 history during the various Nevada builds and plug the correct
 wrapper at runtime, during the library bootstrap. Obviously, an user
 could just choose and use directly the wrapper, which will be the
 equivalent of linking against libzfs, and do what it wants to with it.
 Our idea, which the project founder has already brought on, is
 wrapping much of the functionality (if not all...) behind a hierarchy
 of Java classes which would take care of the implementation details
 and shield the user against library changes.

 The first question is, then: how can I determine which libzfs.h
 version has gone in which Nevada build? Once I have this information,
 how would you suggest me to plug the wrapper at runtime? I was
 thinking about something like uname -rv and use that information to
 load wrappers, but perhaps there are finest ways to do this.

 Thanks in advance,
 Enrico

 --
 Enrico M. Crisostomo





-- 
Ελευθερία ή θάνατος
Programming today is a race between software engineers striving to
build bigger and better idiot-proof programs, and the Universe trying
to produce bigger and better idiots. So far, the Universe is winning.
GPG key: 1024D/FD2229AF


Re: [zfs-discuss] libzfs.h versioning

2009-09-10 Thread Enrico Maria Crisostomo
On Fri, Sep 11, 2009 at 12:26 AM, Richard Elling
richard.ell...@gmail.com wrote:
 On Sep 10, 2009, at 1:03 PM, Peter Tribble wrote:

 On Thu, Sep 10, 2009 at 8:52 PM, Richard Elling
 richard.ell...@gmail.com wrote:

 Enrico,
 Could you compare and contrast your effort with the existing libzfs_jni?

 http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/lib/libzfs_jni/common/

 Where's the source for the java code that uses that library?

 Excellent question!  It is used for the BUI ZFS manager webconsole that
 comes with S10 and SXCE. So you might find the zfs.jar as
 /usr/share/webconsole/webapps/zfs/WEB-INF/lib/zfs.jar
 The jar only contains the class files, though.
Yes, that's what I thought when I saw it. Furthermore, the last time I
tried it, it was still unaligned with the new ZFS capabilities: it crashed
because of an unknown gzip compression type...


 Someone from Sun could comment on the probability that they
 will finally get their act together and have a real BUI framework for
 systems management... they've tried dozens (perhaps hundreds)
 of times, with little to show for the effort :-(
By the way, one of the goals I'd like to reach with this kind of
library is exactly that: laying the groundwork for a Java-based
management framework for ZFS. Unfortunately, wrapping libzfs alone will
hardly fulfill this goal, and the more I dig into the code the more I
realize that we will need to wrap (or reimplement) some of the logic
of the zfs and zpool commands. I'm also confident that building a good
library on top of this wrapper will give us a very powerful tool to
play with from Java.

  -- richard





-- 
Ελευθερία ή θάνατος
Programming today is a race between software engineers striving to
build bigger and better idiot-proof programs, and the Universe trying
to produce bigger and better idiots. So far, the Universe is winning.
GPG key: 1024D/FD2229AF


Re: [zfs-discuss] Failure of Quicktime *.mov files after move to zfs disk

2009-08-23 Thread Enrico Maria Crisostomo
Hi Harry.

Glad to hear you solved the problem. As soon as I saw that Quicktime
error I thought about permissions. Unfortunately, I've sort of got used
to it. To follow up on this discussion, I think there's something
strange here. It might simply be a Quicktime idiosyncrasy: it may be
choking on those permissions because it's expecting something more.

Although this is a CIFS-related discussion rather than a ZFS one, I want
to make my point clear. My reasoning is: I start with what I think is a
reasonable (and intuitive) choice: mode 600 for my files (no other
ACL). If I examine the permissions on the Windows side, they are pretty
odd: as I said, there's no read there. Yet if you examine the special
permissions, my user is indeed allowed List folders/Read data
(amongst others).

Nevertheless, almost everything works as expected. I can indeed read
the files' contents and copy them locally, although I cannot open
files with Quicktime directly on the share. That's why I think
Quicktime is choking somewhere.

If I give my user the Windows' read permission, on the Solaris side it
materializes like this:
$ ls -dV IMG_0003.JPG
--+  1 enrico   staff1082821 Jul 17 19:39 IMG_0003.JPG
  everyone@:rwxp---A-W-Co-:---:deny
user:enrico:--x---:---:deny
group:staff:rwxp--:---:deny
  everyone@:--a-R-c--s:---:allow
user:enrico:rw-p--aARWc--s:---:allow
user:enrico:rw-p---A-W-Cos:---:allow
group:staff:-s:---:allow

I'd like to find some documentation about how Windows permissions are
supposed to map to Solaris ACLs. It doesn't seem very intuitive to me
so far. By the way, when I set up CIFS shares and then manage them from
Windows, I don't usually encounter any problem. The first time I saw
the strange Quicktime behaviour was due to a cron-scheduled script
which was resetting permissions on the shared files to Solaris-friendly
values such as 600 or 644 for files and 700 or 755 for directories. I
thought that was sufficient, after mapping users and groups with idmap,
but it seems to fall short sometimes.

Thank you for any pointer,
Enrico

On Sat, Aug 22, 2009 at 9:00 PM, Harry Putnamrea...@newsguy.com wrote:
 Scott Laird sc...@sigkill.org writes:

 Checksum all of the files using something like md5sum and see if
 they're actually identical.  Then test each step of the copy and see
 which one is corrupting your files.

 On Fri, Aug 21, 2009 at 1:43 PM, Harry Putnamrea...@newsguy.com wrote:

 [...]

 I didn't do that since I've found that opening the file from vista
 with a file browser started as `administator' worked.. so apparently
 the files are indenticle enough to play in quicktime player started as
 administrator .

 Enrico Maria Crisostomo enrico.m.crisost...@gmail.com writes:

 [...]

 Thanks for the input... I've found now that the directory on zfs
 server that I was scping the files to had not gotten included in a
 previous chmod cmd run on the zfs server.

   chmod -R A=everyone@:full_set:fd:allow

 That particular directory where the transferred files were landing,
 was created after having run the chmod cmd above on the server.

 Here something's missing to me and documentation hasn't helped me
 (yet)... There's no read here. Just set it (on the Vista side it's
 just one click) and Quicktime will work. I've got a script which
 resets my files' permissions something like:

 find /yourdir -type f -exec chmod A- {} +
 find /yourdir -type f -exec chmod 644 {} +

 The chmod command I mentioned above appears many times on the
 cifs-server list, as a way to avoid permissions problems and as it
 turns out it works in my case too. ... although it appears to only be
 of use when called after the files are transferred.

 The commands you show also appear to make things work... however, at
 first I thought the executable bits that get set when the files are
 created on Windows would be a problem, but they don't really seem to
 prevent running the files from a remote Vista laptop.  It appears that
 the permission bits work as they occur, and so does chmoding to 644.

 [...]

 Chris Wrote:

 It might be worth checking if they've got funny Unicode chars in the
 names. What normalization's happening on both servers, what version of
 NFS is being used? How big are the files?

 Apparently not the problem in my case... thanks for the input.

 Thomas Burgess wonsl...@gmail.com writes:

 i had something similar happen to me when i switched to ZFS but it turned
 out to be an error with cpio and the mkv format...i'm not sure exactly why
 but whenever i tried to backup mkv files with cpio onto ZFS it would give me
 corrupted files.

 This was also apparently not the problem in my case... thanks for the input.





-- 
Ελευθερία ή θάνατος
Programming today is a race between software engineers striving to
build bigger and better idiot-proof programs, and the Universe trying
to produce bigger and better idiots. So far, the Universe is winning.
GPG key: 1024D/FD2229AF

Re: [zfs-discuss] Failure of Quicktime *.mov files after move to zfs disk

2009-08-21 Thread Enrico Maria Crisostomo
Check with Vista whether you have permission to read the file. I
experienced the same problem (that's why I posted another question to
the CIFS mailing list about mapping users with idmap). It always
happens when I copy these files from the iPhone. The files end up
with permissions like these:

$ ls -dV IMG_0004.MOV
-rw---   1 enrico   staff12949182 Jul 17 19:39 IMG_0004.MOV
 owner@:--x---:---:deny
 owner@:rw-p---A-W-Co-:---:allow
 group@:rwxp--:---:deny
 group@:--:---:allow
  everyone@:rwxp---A-W-Co-:---:deny
  everyone@:--a-R-c--s:---:allow

On the Vista side, having mapped with idmap even the staff group to
the corresponding Windows group of the enrico user, the file shows
the following special permissions for the user enrico:
List folder/read data
Create files/write data
Create folders/append data
Write attributes
Write extended attributes

Something's missing here, and the documentation hasn't helped me
(yet): there's no read permission in that list. Just set it (on the
Vista side it's just one click) and Quicktime will work. I've got a
script which resets my files' permissions with something like:

find /yourdir -type f -exec chmod A- {} +
find /yourdir -type f -exec chmod 644 {} +

Hope this helps,
Enrico


On Fri, Aug 21, 2009 at 11:35 PM, Scott Lairdsc...@sigkill.org wrote:
 Checksum all of the files using something like md5sum and see if
 they're actually identical.  Then test each step of the copy and see
 which one is corrupting your files.

 On Fri, Aug 21, 2009 at 1:43 PM, Harry Putnamrea...@newsguy.com wrote:
 During the course of backup I had occassion to copy a number of
 quicktime video (*.mov) files to zfs server disk.

 Once there... navigating to them with Quicktime player and opening them
 results in a failure that (from a Windows Vista laptop) says:
    error --43: A file could not be found (Welcome.mov)

 I would have attributed it to some problem from scping it to the zfs
 server had it not been for finding that if I scp it to a linux server
 the problem does not occur.

 Both the zfs and linux (Gentoo) servers are on a home lan.. but using
 the same router/switch[s] over gigabit network adaptors.

 On both occasions the files were copied using cygwin/ssh on a Vista
 laptop.

 Anyone have an idea what might cause this?

 Any more details I can add that would make diagnostics easier?





-- 
Ελευθερία ή θάνατος
Programming today is a race between software engineers striving to
build bigger and better idiot-proof programs, and the Universe trying
to produce bigger and better idiots. So far, the Universe is winning.
GPG key: 1024D/FD2229AF


Re: [zfs-discuss] zfs send -R core dumps on SXCE 110

2009-04-03 Thread Enrico Maria Crisostomo
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Thanks Matthew, I was not sure about it. I'm looking forward to the next SXCE 
build.

Cheers,
Enrico

Matthew Ahrens wrote:
 Enrico Maria Crisostomo wrote:
 # zfs send -R -I @20090329 mypool/m...@20090330 | zfs recv -F -d
 anotherpool/anotherfs

 I experienced core dumps and the error message was:

 internal error: Arg list too long
 Abort (core dumped)
 
 This is 6801979, fixed in build 111.
 
 --matt
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (SunOS)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iEYEARECAAYFAknWJTsACgkQW8+x8v0iKa9WIACdGkMA8ccKox8E1GkNtmvIfuJ5
3IUAnRtAm/Nb+EGavpPV0BVpJUCY2JCu
=WBH8
-END PGP SIGNATURE-


[zfs-discuss] zfs send -R core dumps on SXCE 110

2009-04-02 Thread Enrico Maria Crisostomo
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi.

I'm running a live-upgraded (from b104) SXCE build 110 and I'm experiencing
core dumps when sending and receiving ZFS snapshots.

I sent a replication stream of a snapshot with

# zfs send -R mypool/m...@20090329 | zfs recv -d anotherpool/anotherfs

and it went fine. Back then I didn't pay attention to the fact that @20090329
was the only snapshot in existence at the time.

When I tried to send incremental replication streams with:

# zfs send -R -I @20090329 mypool/m...@20090330 | zfs recv -F -d
anotherpool/anotherfs

I experienced core dumps and the error message was:

internal error: Arg list too long
Abort (core dumped)

Running pstack on the core file says:
$ pstack core
core 'core' of 11853:   zfs recv -F -d tank/backup/solaris/filesystems
 fef022a5 _lwp_kill (1, 6, 8042e78, feeaab7e) + 15
 feeaab8a raise(6, 0, 8042ec8, fee81ffa) + 22
 fee8201a abort(8042ef8, feb65000, 8042ef8, 8088670, 8088a70, 400) + f2
 feaf2d59 zfs_verror (8088648, 80f, feb4d934, 8042f2c) + d5
 feaf30be zfs_standard_error_fmt (8088648, 7, feb4d934, 8043fb0, 80457a0) + 1ea
 feaf2ec4 zfs_standard_error (8088648, 7, 8043fb0, feb0ba1d) + 28
 feb0bc2b zfs_receive_one (8088648, 0, 8046e6b, a, 80457a0, 8045d00) + 15cf
 feb0c8e6 zfs_receive_impl (8088648, 8046e6b, a, 0, 8084788, 8046c2c) + 6b2
 feb0a275 zfs_receive_package (8088648, 0, 8046e6b, a, 8046540, 8046680) + 485
 feb0c8bd zfs_receive_impl (8088648, 8046e6b, a, 0, 0, 8046c2c) + 689
 feb0ca49 zfs_receive (8088648, 8046e6b, a, 0, 0, 8046d00) + 35
 0805847a zfs_do_receive (4, 8046d00, 8046cfc, 807187c) + 172
 0805bbcf main (5, 8046cfc, 8046d14, feffb7b4) + 2af
 08053f2d _start   (5, 8046e5c, 8046e60, 8046e65, 8046e68, 8046e6b) + 7d

I now realize that even the first command fails if I try to send a snapshot
which depends on previous ones:

# zfs send -R mypool/m...@20090402 | zfs receive -d anotherpool/anotherfs
internal error: Arg list too long
Abort (core dumped)

pstack on the core file says:

$ pstack core
core 'core' of 5450:zfs receive -d tank/backup/solaris/filesystems
 fef022a5 _lwp_kill (1, 6, 80432c8, feeaab7e) + 15
 feeaab8a raise(6, 0, 8043318, fee81ffa) + 22
 fee8201a abort(8043348, feb65000, 8043348, 8088670, 8088a70, 400) + f2
 feaf2d59 zfs_verror (8088648, 80f, feb4d934, 804337c) + d5
 feaf30be zfs_standard_error_fmt (8088648, 7, feb4d934, 8044400, 8045bf0) + 1ea
 feaf2ec4 zfs_standard_error (8088648, 7, 8044400, feb0ba1d) + 28
 feb0bc2b zfs_receive_one (8088648, 0, 80472ab, 2, 8045bf0, 8046150) + 15cf
 feb0c8e6 zfs_receive_impl (8088648, 80472ab, 2, 0, 8084428, 804707c) + 6b2
 feb0a275 zfs_receive_package (8088648, 0, 80472ab, 2, 8046990, 8046ad0) + 485
 feb0c8bd zfs_receive_impl (8088648, 80472ab, 2, 0, 0, 804707c) + 689
 feb0ca49 zfs_receive (8088648, 80472ab, 2, 0, 0, 8047150) + 35
 0805847a zfs_do_receive (3, 8047150, 804714c, 807187c) + 172
 0805bbcf main (4, 804714c, 8047160, feffb7b4) + 2af
 08053f2d _start   (4, 804729c, 80472a0, 80472a8, 80472ab, 0) + 7d

Any hint about diagnosing what might be going on?
Thanks in advance,
Enrico


-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (SunOS)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iEYEARECAAYFAknVUEIACgkQW8+x8v0iKa8sPACfRbhkf8hY6LQFWgvQhWOyEN1/
TNAAn0QqDTTD0NfZJvy/BbEgF5MITdl/
=A2Vy
-END PGP SIGNATURE-