Re: [OmniOS-discuss] zfs send/receive corruption?

2015-10-05 Thread Michael Rasmussen
On Mon, 5 Oct 2015 11:30:04 -0600
Aaron Curry  wrote:

> # zfs get sync pool/fs
> NAMEPROPERTY  VALUE SOURCE
> pool/fs  sync  standard  default
> 
> Is that what you mean?
> 
Yes. Default means honor sync requests.
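
For reference, the property can be checked or overridden roughly like this
(pool/fs is just the placeholder dataset name from your output):

  zfs get sync pool/fs
  zfs set sync=always pool/fs     (treat every write as synchronous)
  zfs set sync=standard pool/fs   (honor application sync requests; the default)
  zfs set sync=disabled pool/fs   (ignore sync requests; not recommended)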

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael  rasmussen  cc
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
mir  datanom  net
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
mir  miras  org
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
--
/usr/games/fortune -es says:
Love isn't only blind, it's also deaf, dumb, and stupid.




Re: [OmniOS-discuss] zfs send/receive corruption?

2015-10-05 Thread Schweiss, Chip
This smells like a problem that was reported as fixed on FreeBSD and ZoL:
http://permalink.gmane.org/gmane.comp.file-systems.openzfs.devel/1545

On the illumos ZFS list the question was posed as to whether those fixes have
been incorporated, but it went unanswered:
http://www.listbox.com/member/archive/182191/2015/09/sort/time_rev/page/1/entry/23:71/20150916025648:1487D326-5C40-11E5-A45A-20B0EF10038B/

I'd be curious to confirm whether or not this has been fixed in illumos, as I
now have systems with lots of CIFS and ACLs that are potentially vulnerable to
the same sort of problem.  Thus far I cannot find any reference to it, but I
could be looking in the wrong place, or for the wrong keywords.
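
In case anyone else wants to dig, this is roughly how I have been searching the
illumos-gate history so far; the keywords are only guesses on my part, not
confirmed commit messages:

  git clone https://github.com/illumos/illumos-gate.git
  cd illumos-gate
  git log --oneline -i --grep=xattr --grep=acl -- usr/src/uts/common/fs/zfs/zfs_acl.c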

-Chip

On Mon, Oct 5, 2015 at 12:45 PM, Michael Rasmussen  wrote:

> On Mon, 5 Oct 2015 11:30:04 -0600
> Aaron Curry  wrote:
>
> > # zfs get sync pool/fs
> > NAMEPROPERTY  VALUE SOURCE
> > pool/fs  sync  standard  default
> >
> > Is that what you mean?
> >
> Yes. Default means honor sync requests.
>
> --
> Hilsen/Regards
> Michael Rasmussen
>
> Get my public GnuPG keys:
> michael  rasmussen  cc
> http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
> mir  datanom  net
> http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
> mir  miras  org
> http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
> --
> /usr/games/fortune -es says:
> Love isn't only blind, it's also deaf, dumb, and stupid.
>


Re: [OmniOS-discuss] zfs send/receive corruption?

2015-10-05 Thread Michael Rasmussen
On Mon, 5 Oct 2015 10:45:37 -0600
Aaron Curry  wrote:

> 
> While I have been able to reliably pin down a particular file or
> file/directory combination that is causing the problem and can easily
> reproduce the panic, I am at a loss as to where to go from here. Are there any
> known issues with zfs send/receive? Any help would be appreciated.
> 
What is the sync setting on the receiving pool?

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael  rasmussen  cc
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
mir  datanom  net
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
mir  miras  org
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
--
/usr/games/fortune -es says:
There are no emotional victims, only volunteers.




[OmniOS-discuss] zfs send/receive corruption?

2015-10-05 Thread Aaron Curry
We have a file server implementing CIFS to serve files to our users.
Periodic snapshots are replicated to a secondary system via zfs
send/receive. I recently moved services (shares, IP addresses, etc.) to the
secondary system while we performed some maintenance on the primary server.
Shortly after everything was up and running on the secondary system, that
server panicked. Here's the stack trace:

panicstr = assertion failed: 0 == zfs_acl_node_read(dzp, B_TRUE, ,
B_FALSE), file: ../../common/fs/zfs/zfs_acl.c, line: 1717
panicstack = fba8b1a8 () | zfs:zfs_acl_ids_create+4d2 () |
zfs:zfs_make_xattrdir+96 () | zfs:zfs_get_xattrdir+103 () |
zfs:zfs_lookup+1b6 () | genunix:fop_lookup+a2 () |
genunix:xattr_dir_realdir+b3 () | genunix:xattr_lookup_cb+65 () |
genunix:gfs_dir_lookup_dynamic+7c () | genunix:gfs_dir_lookup+18c () |
genunix:gfs_vop_lookup+35 () | genunix:fop_lookup+a2 () |
smbsrv:smb_vop_lookup+ea () | smbsrv:smb_vop_stream_lookup+e5 () |
smbsrv:smb_fsop_lookup_name+158 () | smbsrv:smb_open_subr+1b8 () |
smbsrv:smb_common_open+54 () | smbsrv:smb_com_nt_create_andx+ac () |
smbsrv:smb_dispatch_request+687 () | smbsrv:smb_session_worker+a0 () |
genunix:taskq_d_thread+b7 () | unix:thread_start+8 () |

Luckily, it wasn't hard to identify the steps to reproduce the problem.
Accessing a particular directory from a Mac OS X system causes this panic
every time, but only on the secondary (zfs send/receive target) system.
Accessing the same directory on the primary system does not cause a panic.

I have tested this on other systems and have been able to reproduce the
panic on the zfs send/receive target every time. Also, I have been able to
reproduce it with OmniOS versions 151010, 151012 and the latest 151014.
Replicating between two separate systems and replicating to the local system
both exhibit the same behavior.
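
For reference, the replication in these tests amounts to something like the
following (pool and dataset names here are placeholders, not the real ones):

  zfs snapshot -r tank/fs@snap1
  zfs send -R tank/fs@snap1 | ssh backuphost zfs receive -F backup/fs
  zfs snapshot -r tank/fs@snap2
  zfs send -R -i @snap1 tank/fs@snap2 | ssh backuphost zfs receive -F backup/fs

For the local case the stream is simply piped to zfs receive on the same host.
Accessing the problematic directory on the received copy over CIFS from an OS X
client is what triggers the panic.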

While I have been able to reliably pin down a particular file or
file/directory combination that is causing the problem and can easily
reproduce the panic, I am at a loss as to where to go from here. Are there any
known issues with zfs send/receive? Any help would be appreciated.

Aaron