Re: [OpenIndiana-discuss] NFS Automount from FreeNAS Server

2019-12-12 Thread Jonathan Adams
Hi Tim,

thanks, I'll look into that ... weirdly, I get messages to the log file only
the first time I try to access the drive, which is why I didn't get more
information by turning debug on ... however, I made a mistake and will
now need to wait for it to time out (hopefully!) before checking again.

Jon

On Tue, 10 Dec 2019 at 22:55, Tim Mooney  wrote:

> In regard to: Re: [OpenIndiana-discuss] NFS Automount from FreeNAS
> Server,...:
>
> > same issue, but with the latest versions of both OpenIndiana and FreeNAS
> ...
> > new hardware, on both counts:
> >
> > Dec 10 16:03:19 ascamrouter automountd[978]: [ID 784820 daemon.error]
> > server ascamnfs01 not responding
> >
> > I can mount the filesystem manually, I can mount on Solaris 10, but I
> can't
> > use autofs.
> >
> > It doesn't appear to be any different if I'm using LDAP or not (I turned
> > off LDAP for testing) ...
> >
> > does anyone have any idea where to start looking?
>
> I haven't used the automounter in ages, but this Oracle developer blog
> post has a very interesting debug trick in it:
>
> https://blogs.oracle.com/cwb/debugging-automounter-problems
>
> Tim
>
> > On Mon, 23 Feb 2015 at 11:14, Jonathan Adams 
> wrote:
> >
> >> Hi, thanks for keeping on trying ...
> >>
> >> I was optimistic, till I discovered that our Infrastructure guy had put
> >> comments on almost all the shares.
> >>
> >> Just for giggles I've added the snoop output from failed (snoopy.out)
> and
> >> working (snoopy2.out) in case that helps anyone.
> >>
> >> Jon
> >>
> >> On 20 February 2015 at 19:14, Till Wegmüller 
> wrote:
> >>
> >>> Hmm ok i've run out of ideas.
> >>>
> >>> It looks like a bug or a problematic Setting in FreeNAS.
> >>> My Automount Works and can mount shares very reliably with /dev and
> >>> /Hipster
> >>> (newest 2015)
> >>> Yours seems to work as well, atleast with solaris.
> >>>
> >>> Just for fun I had a little look around google to see if there are some
> >>> fun
> >>> FreeNAS Bugs. Was not disapointed http://vtricks.com/?p=2031
> >>> It Looks like FreeNas wants its shares to be commented :)
> >>>
> >>> Greetings Till
> >>>
> >>>
> >>> On Friday 20 February 2015 16.15:05 Jonathan Adams wrote:
> >>>> On 20 February 2015 at 15:56, Till Wegmüller 
> >>> wrote:
> >>>>> no problem :)
> >>>>>
> >>>>> ...
> >>>>>
> >>>>> The No such file or directory Error Happens when automount can't find
> >>> a
> >>>>> directory on the Server and thus does not create and mount the
> >>> Directory.
> >>>>
> >>>> jadams@jadlaptop:~$ dfshares mansalnfs01
> >>>> RESOURCE                              SERVER       ACCESS  TRANSPORT
> >>>> mansalnfs01:/mnt/datapool             mansalnfs01  -       -
> >>>> mansalnfs01:/mnt/datapool/accounts    mansalnfs01  -       -
> >>>> mansalnfs01:/mnt/datapool/analysts    mansalnfs01  -       -
> >>>> mansalnfs01:/mnt/datapool/inorganics  mansalnfs01  -       -
> >>>> mansalnfs01:/mnt/datapool/organics    mansalnfs01  -       -
> >>>> mansalnfs01:/mnt/datapool/technician  mansalnfs01  -       -
> >>>> mansalnfs01:/mnt/datapool1/PM         mansalnfs01  -       -
> >>>> mansalnfs01:/mnt/datapool1/bdm        mansalnfs01  -       -
> >>>> mansalnfs01:/mnt/datapool1/quality    mansalnfs01  -       -
> >>>> mansalnfs01:/mnt/datapool1/reception  mansalnfs01  -       -
> >>>> mansalnfs01:/mnt/datapool2/IT         mansalnfs01  -       -
> >>>> mansalnfs01:/mnt/datapool2/air        mansalnfs01  -       -
> >>>> mansalnfs01:/mnt/datapool2/health     mansalnfs01  -       -
> >>>> mansalnfs01:/mnt/datapool2/metals     mansalnfs01  -       -
> >>>> mansalnfs01:/mnt/datapool2/sr         mansalnfs01  -       -
> >>>>
> >>>> in this case all the directories I'm trying to access are on
> >>> datapool2/IT
> >>>> ... it's a ZFS share.
> >>>>
> >>>>> My Question is from where does the truss output of the successful
> >>>>> automount come from?
> >>>>
> >>>> root@jadlaptop:~# ps -ef | grep automount

Re: [OpenIndiana-discuss] NFS Automount from FreeNAS Server

2019-12-11 Thread Judah Richardson
On Wed, Dec 11, 2019, 15:54 Tim Mooney  wrote:

> In regard to: Re: [OpenIndiana-discuss] NFS Automount from FreeNAS
> Server,...:
>
> > As an OI newcomer, I'm somewhat confused. I thought mounting was done via
> > /etc/vfstab entries? Or is something else/more being attempted here?
>
> There's a bit more going on here.
>
> vfstab entries for traditional filesystems are closer to the "static" end
> of the filesystem mount "spectrum".  Long before the days of an OS
> automatically mounting a USB stick when you inserted it, the
> autofs/automountd stack could do something kind of like that, but
> generally aimed at network filesystems (NFS).
>
> The classic example that Sun used to champion was a cluster of diskless
> workstations that auto-mounted NFS shares (like your home directory)
> automatically when needed.  automountd typically played a part in that.
>
> You can read more about it in
>
> https://en.wikipedia.org/wiki/Automounter
>
> and the OI/Solaris-ancestry bits in automount(1M), automountd(1M),
> autofs(4).
>
> Tim
> --
> Tim Mooney tim.moo...@ndsu.edu
> Enterprise Computing & Infrastructure /
> Division of Information Technology/701-231-1076 (Voice)
> North Dakota State University, Fargo, ND 58105-5164
>
> ___
> openindiana-discuss mailing list
> openindiana-discuss@openindiana.org
> https://openindiana.org/mailman/listinfo/openindiana-discuss

Oh, I see. Thanks!

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS Automount from FreeNAS Server

2019-12-11 Thread Tim Mooney

In regard to: Re: [OpenIndiana-discuss] NFS Automount from FreeNAS Server,...:


As an OI newcomer, I'm somewhat confused. I thought mounting was done via
/etc/vfstab entries? Or is something else/more being attempted here?


There's a bit more going on here.

vfstab entries for traditional filesystems are closer to the "static" end
of the filesystem mount "spectrum".  Long before the days of an OS
automatically mounting a USB stick when you inserted it, the
autofs/automountd stack could do something kind of like that, but
generally aimed at network filesystems (NFS).

The classic example that Sun used to champion was a cluster of diskless
workstations that mounted NFS shares (like your home directory)
automatically when needed.  automountd typically played a part in that.

You can read more about it in

https://en.wikipedia.org/wiki/Automounter

and the OI/Solaris-ancestry bits in automount(1M), automountd(1M), autofs(4).
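
To make the contrast concrete, here is a minimal sketch; the server name
nas01 and the path /export/data are made up for illustration.  A static
mount is a single /etc/vfstab line, mounted once at boot:

nas01:/export/data  -  /data  nfs  -  yes  rw

The rough automounter equivalent is a direct map entry (referenced from
/etc/auto_master as "/-  auto_direct"), mounted on first access and
unmounted again when idle:

# /etc/auto_direct (hypothetical example)
/data  -rw  nas01:/export/data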

Tim
--
Tim Mooney tim.moo...@ndsu.edu
Enterprise Computing & Infrastructure /
Division of Information Technology/701-231-1076 (Voice)
North Dakota State University, Fargo, ND 58105-5164

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS Automount from FreeNAS Server

2019-12-10 Thread Judah Richardson
On Tue, Dec 10, 2019 at 4:55 PM Tim Mooney  wrote:

> In regard to: Re: [OpenIndiana-discuss] NFS Automount from FreeNAS
> Server,...:
>
> > same issue, but with the latest versions of both OpenIndiana and FreeNAS
> ...
> > new hardware, on both counts:
> >
> > Dec 10 16:03:19 ascamrouter automountd[978]: [ID 784820 daemon.error]
> > server ascamnfs01 not responding
> >
> > I can mount the filesystem manually, I can mount on Solaris 10, but I
> can't
> > use autofs.
> >
> > It doesn't appear to be any different if I'm using LDAP or not (I turned
> > off LDAP for testing) ...
> >
> > does anyone have any idea where to start looking?
>
> I haven't used the automounter in ages, but this Oracle developer blog
> post has a very interesting debug trick in it:
>
> https://blogs.oracle.com/cwb/debugging-automounter-problems
>
> Tim
>
> > On Mon, 23 Feb 2015 at 11:14, Jonathan Adams 
> wrote:
> >
> >> Hi, thanks for keeping on trying ...
> >>
> >> I was optimistic, till I discovered that our Infrastructure guy had put
> >> comments on almost all the shares.
> >>
> >> Just for giggles I've added the snoop output from failed (snoopy.out)
> and
> >> working (snoopy2.out) in case that helps anyone.
> >>
> >> Jon
> >>
> >> On 20 February 2015 at 19:14, Till Wegmüller 
> wrote:
> >>
> >>> Hmm ok i've run out of ideas.
> >>>
> >>> It looks like a bug or a problematic Setting in FreeNAS.
> >>> My Automount Works and can mount shares very reliably with /dev and
> >>> /Hipster
> >>> (newest 2015)
> >>> Yours seems to work as well, atleast with solaris.
> >>>
> >>> Just for fun I had a little look around google to see if there are some
> >>> fun
> >>> FreeNAS Bugs. Was not disapointed http://vtricks.com/?p=2031
> >>> It Looks like FreeNas wants its shares to be commented :)
> >>>
> >>> Greetings Till
> >>>
> >>>
> >>> On Friday 20 February 2015 16.15:05 Jonathan Adams wrote:
> >>>> On 20 February 2015 at 15:56, Till Wegmüller 
> >>> wrote:
> >>>>> no problem :)
> >>>>>
> >>>>> ...
> >>>>>
> >>>>> The No such file or directory Error Happens when automount can't find
> >>> a
> >>>>> directory on the Server and thus does not create and mount the
> >>> Directory.
> >>>>
> >>>> jadams@jadlaptop:~$ dfshares mansalnfs01
> >>>> RESOURCE                              SERVER       ACCESS  TRANSPORT
> >>>> mansalnfs01:/mnt/datapool             mansalnfs01  -       -
> >>>> mansalnfs01:/mnt/datapool/accounts    mansalnfs01  -       -
> >>>> mansalnfs01:/mnt/datapool/analysts    mansalnfs01  -       -
> >>>> mansalnfs01:/mnt/datapool/inorganics  mansalnfs01  -       -
> >>>> mansalnfs01:/mnt/datapool/organics    mansalnfs01  -       -
> >>>> mansalnfs01:/mnt/datapool/technician  mansalnfs01  -       -
> >>>> mansalnfs01:/mnt/datapool1/PM         mansalnfs01  -       -
> >>>> mansalnfs01:/mnt/datapool1/bdm        mansalnfs01  -       -
> >>>> mansalnfs01:/mnt/datapool1/quality    mansalnfs01  -       -
> >>>> mansalnfs01:/mnt/datapool1/reception  mansalnfs01  -       -
> >>>> mansalnfs01:/mnt/datapool2/IT         mansalnfs01  -       -
> >>>> mansalnfs01:/mnt/datapool2/air        mansalnfs01  -       -
> >>>> mansalnfs01:/mnt/datapool2/health     mansalnfs01  -       -
> >>>> mansalnfs01:/mnt/datapool2/metals     mansalnfs01  -       -
> >>>> mansalnfs01:/mnt/datapool2/sr         mansalnfs01  -       -
> >>>>
> >>>> in this case all the directories I'm trying to access are on
> >>> datapool2/IT
> >>>> ... it's a ZFS share.
> >>>>
> >>>>> My Question is from where does the truss output of the successful
> >>>>> automount come from?
> >>>>
> >>>> root@jadlaptop:~# ps -ef | grep automount
> >>>> root  3599  3597   0 10:00:09 ?   0:00
> >>>> /usr/lib/autofs/automountd
> >>>> root  3597 1   0 10:00:09 ?   0:00
> >>>> /usr/lib/autofs/automountd
> >>>> root  4441  4387   0 16:14:15 p

Re: [OpenIndiana-discuss] NFS Automount from FreeNAS Server

2019-12-10 Thread Tim Mooney

In regard to: Re: [OpenIndiana-discuss] NFS Automount from FreeNAS Server,...:


same issue, but with the latest versions of both OpenIndiana and FreeNAS ...
new hardware, on both counts:

Dec 10 16:03:19 ascamrouter automountd[978]: [ID 784820 daemon.error]
server ascamnfs01 not responding

I can mount the filesystem manually, I can mount on Solaris 10, but I can't
use autofs.

It doesn't appear to be any different if I'm using LDAP or not (I turned
off LDAP for testing) ...

does anyone have any idea where to start looking?


I haven't used the automounter in ages, but this Oracle developer blog
post has a very interesting debug trick in it:

https://blogs.oracle.com/cwb/debugging-automounter-problems
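
A quick first step alongside the blog's trick is raising automountd's
verbosity and watching the autofs service log.  A sketch, using the
sharectl properties and log path that appear elsewhere in this thread:

sharectl set -p automountd_verbose=true autofs
svcadm restart svc:/system/filesystem/autofs:default
tail -f /var/svc/log/system-filesystem-autofs:default.log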

Tim


On Mon, 23 Feb 2015 at 11:14, Jonathan Adams  wrote:


Hi, thanks for keeping on trying ...

I was optimistic, till I discovered that our Infrastructure guy had put
comments on almost all the shares.

Just for giggles I've added the snoop output from failed (snoopy.out) and
working (snoopy2.out) in case that helps anyone.

Jon

On 20 February 2015 at 19:14, Till Wegmüller  wrote:


Hmm ok i've run out of ideas.

It looks like a bug or a problematic Setting in FreeNAS.
My Automount Works and can mount shares very reliably with /dev and
/Hipster
(newest 2015)
Yours seems to work as well, atleast with solaris.

Just for fun I had a little look around google to see if there are some
fun
FreeNAS Bugs. Was not disapointed http://vtricks.com/?p=2031
It Looks like FreeNas wants its shares to be commented :)

Greetings Till


On Friday 20 February 2015 16.15:05 Jonathan Adams wrote:

On 20 February 2015 at 15:56, Till Wegmüller wrote:

no problem :)

...

The No such file or directory Error Happens when automount can't find a
directory on the Server and thus does not create and mount the Directory.


jadams@jadlaptop:~$ dfshares mansalnfs01
RESOURCE                              SERVER       ACCESS  TRANSPORT
mansalnfs01:/mnt/datapool             mansalnfs01  -       -
mansalnfs01:/mnt/datapool/accounts    mansalnfs01  -       -
mansalnfs01:/mnt/datapool/analysts    mansalnfs01  -       -
mansalnfs01:/mnt/datapool/inorganics  mansalnfs01  -       -
mansalnfs01:/mnt/datapool/organics    mansalnfs01  -       -
mansalnfs01:/mnt/datapool/technician  mansalnfs01  -       -
mansalnfs01:/mnt/datapool1/PM         mansalnfs01  -       -
mansalnfs01:/mnt/datapool1/bdm        mansalnfs01  -       -
mansalnfs01:/mnt/datapool1/quality    mansalnfs01  -       -
mansalnfs01:/mnt/datapool1/reception  mansalnfs01  -       -
mansalnfs01:/mnt/datapool2/IT         mansalnfs01  -       -
mansalnfs01:/mnt/datapool2/air        mansalnfs01  -       -
mansalnfs01:/mnt/datapool2/health     mansalnfs01  -       -
mansalnfs01:/mnt/datapool2/metals     mansalnfs01  -       -
mansalnfs01:/mnt/datapool2/sr         mansalnfs01  -       -

in this case all the directories I'm trying to access are on datapool2/IT
... it's a ZFS share.


My Question is from where does the truss output of the successful
automount come from?


root@jadlaptop:~# ps -ef | grep automount
root  3599  3597   0 10:00:09 ?   0:00
/usr/lib/autofs/automountd
root  3597 1   0 10:00:09 ?   0:00
/usr/lib/autofs/automountd
root  4441  4387   0 16:14:15 pts/16  0:00 grep automount
root@jadlaptop:~# truss -f -p 3599

Is it from the same Computer as the failed one? Also are you running
/hipster or /dev on this Computer?

Hipster last updated 2015-01-15 (problems with Intel driver if I update at
the moment)


Other Tests you could do.
What happens if you manually create the home?

?

What happens if you replace the * and & in auto_home with some real
usernames?

no difference.


What gets logged in syslog when Automount gets its config from LDAP?

nothing.

As I said, I can still access all the LDAP automounted shares from the
Solaris 10 servers, and the Solaris 10 machines can access all the shares
automounted from the FreeNAS box
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss



___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss





___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss



--
Tim Mooney tim.moo...@ndsu.edu
Enterprise Computing & Infrastructure /
Division of Information Technology/701-231-1076 (Voice)
North Dakota State University, Fargo, ND 58105-5164
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS Automount from FreeNAS Server

2019-12-10 Thread Jonathan Adams
Hi People,

same issue, but with the latest versions of both OpenIndiana and FreeNAS ...

new hardware, on both counts:

Dec 10 16:03:19 ascamrouter automountd[978]: [ID 784820 daemon.error]
server ascamnfs01 not responding

I can mount the filesystem manually, I can mount on Solaris 10, but I can't
use autofs.

It doesn't appear to be any different if I'm using LDAP or not (I turned
off LDAP for testing) ...

does anyone have any idea where to start looking?

Jon

On Mon, 23 Feb 2015 at 11:14, Jonathan Adams  wrote:

> Hi, thanks for keeping on trying ...
>
> I was optimistic, till I discovered that our Infrastructure guy had put
> comments on almost all the shares.
>
> Just for giggles I've added the snoop output from failed (snoopy.out) and
> working (snoopy2.out) in case that helps anyone.
>
> Jon
>
> On 20 February 2015 at 19:14, Till Wegmüller  wrote:
>
>> Hmm ok i've run out of ideas.
>>
>> It looks like a bug or a problematic Setting in FreeNAS.
>> My Automount Works and can mount shares very reliably with /dev and
>> /Hipster
>> (newest 2015)
>> Yours seems to work as well, atleast with solaris.
>>
>> Just for fun I had a little look around google to see if there are some
>> fun
>> FreeNAS Bugs. Was not disapointed http://vtricks.com/?p=2031
>> It Looks like FreeNas wants its shares to be commented :)
>>
>> Greetings Till
>>
>>
>> On Friday 20 February 2015 16.15:05 Jonathan Adams wrote:
>> > On 20 February 2015 at 15:56, Till Wegmüller 
>> wrote:
>> > > no problem :)
>> > >
>> > > ...
>> > >
>> > > The No such file or directory Error Happens when automount can't find
>> a
>> > > directory on the Server and thus does not create and mount the
>> Directory.
>> >
>> > jadams@jadlaptop:~$ dfshares mansalnfs01
>> > RESOURCE                              SERVER       ACCESS  TRANSPORT
>> > mansalnfs01:/mnt/datapool             mansalnfs01  -       -
>> > mansalnfs01:/mnt/datapool/accounts    mansalnfs01  -       -
>> > mansalnfs01:/mnt/datapool/analysts    mansalnfs01  -       -
>> > mansalnfs01:/mnt/datapool/inorganics  mansalnfs01  -       -
>> > mansalnfs01:/mnt/datapool/organics    mansalnfs01  -       -
>> > mansalnfs01:/mnt/datapool/technician  mansalnfs01  -       -
>> > mansalnfs01:/mnt/datapool1/PM         mansalnfs01  -       -
>> > mansalnfs01:/mnt/datapool1/bdm        mansalnfs01  -       -
>> > mansalnfs01:/mnt/datapool1/quality    mansalnfs01  -       -
>> > mansalnfs01:/mnt/datapool1/reception  mansalnfs01  -       -
>> > mansalnfs01:/mnt/datapool2/IT         mansalnfs01  -       -
>> > mansalnfs01:/mnt/datapool2/air        mansalnfs01  -       -
>> > mansalnfs01:/mnt/datapool2/health     mansalnfs01  -       -
>> > mansalnfs01:/mnt/datapool2/metals     mansalnfs01  -       -
>> > mansalnfs01:/mnt/datapool2/sr         mansalnfs01  -       -
>> >
>> > in this case all the directories I'm trying to access are on
>> datapool2/IT
>> > ... it's a ZFS share.
>> >
>> > > My Question is from where does the truss output of the successful
>> > > automount come from?
>> >
>> > root@jadlaptop:~# ps -ef | grep automount
>> > root  3599  3597   0 10:00:09 ?   0:00
>> > /usr/lib/autofs/automountd
>> > root  3597 1   0 10:00:09 ?   0:00
>> > /usr/lib/autofs/automountd
>> > root  4441  4387   0 16:14:15 pts/16  0:00 grep automount
>> > root@jadlaptop:~# truss -f -p 3599
>> >
>> > Is it from the same Computer as the failed one? Also are you running
>> >
>> > > /hipster or /dev on this Computer?
>> >
>> > Hipster last updated  2015-01-15 (problems with Intel driver if I
>> update at
>> > the moment)
>> >
>> > > Other Tests you could do.
>> > > What happens if you manually create the home?
>> >
>> > ?
>> >
>> > What happens if you replace the * and & in auto_home with some real
>> >
>> > > usernames?
>> >
>> > no difference.
>> >
>> > > What gets logged in syslog when Automount gets its config from LDAP?
>> >
>> > nothing.
>> >
>> > As I said, I can still access all the LDAP automounted shares from the
>> > Solaris 10 servers, and the Solaris 10 machines can access all the
>> shares
>> > automounted from the FreeNAS box
>> > ___
>> > openindiana-discuss mailing list
>> > openindiana-discuss@openindiana.org
>> > http://openindiana.org/mailman/listinfo/openindiana-discuss
>>
>>
>> ___
>> openindiana-discuss mailing list
>> openindiana-discuss@openindiana.org
>> http://openindiana.org/mailman/listinfo/openindiana-discuss
>>
>
>
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS Automount from FreeNAS Server

2015-02-23 Thread Jonathan Adams
Hi, thanks for keeping on trying ...

I was optimistic, till I discovered that our Infrastructure guy had put
comments on almost all the shares.

Just for giggles I've added the snoop output from failed (snoopy.out) and
working (snoopy2.out) in case that helps anyone.

Jon

On 20 February 2015 at 19:14, Till Wegmüller toaster...@gmail.com wrote:

 Hmm ok i've run out of ideas.

 It looks like a bug or a problematic Setting in FreeNAS.
 My Automount Works and can mount shares very reliably with /dev and
 /Hipster
 (newest 2015)
 Yours seems to work as well, atleast with solaris.

 Just for fun I had a little look around google to see if there are some fun
 FreeNAS Bugs. Was not disapointed http://vtricks.com/?p=2031
 It Looks like FreeNas wants its shares to be commented :)

 Greetings Till


 On Friday 20 February 2015 16.15:05 Jonathan Adams wrote:
  On 20 February 2015 at 15:56, Till Wegmüller toaster...@gmail.com
 wrote:
   no problem :)
  
   ...
  
   The No such file or directory Error Happens when automount can't find a
   directory on the Server and thus does not create and mount the
 Directory.
 
  jadams@jadlaptop:~$ dfshares mansalnfs01
  RESOURCE                              SERVER       ACCESS  TRANSPORT
  mansalnfs01:/mnt/datapool             mansalnfs01  -       -
  mansalnfs01:/mnt/datapool/accounts    mansalnfs01  -       -
  mansalnfs01:/mnt/datapool/analysts    mansalnfs01  -       -
  mansalnfs01:/mnt/datapool/inorganics  mansalnfs01  -       -
  mansalnfs01:/mnt/datapool/organics    mansalnfs01  -       -
  mansalnfs01:/mnt/datapool/technician  mansalnfs01  -       -
  mansalnfs01:/mnt/datapool1/PM         mansalnfs01  -       -
  mansalnfs01:/mnt/datapool1/bdm        mansalnfs01  -       -
  mansalnfs01:/mnt/datapool1/quality    mansalnfs01  -       -
  mansalnfs01:/mnt/datapool1/reception  mansalnfs01  -       -
  mansalnfs01:/mnt/datapool2/IT         mansalnfs01  -       -
  mansalnfs01:/mnt/datapool2/air        mansalnfs01  -       -
  mansalnfs01:/mnt/datapool2/health     mansalnfs01  -       -
  mansalnfs01:/mnt/datapool2/metals     mansalnfs01  -       -
  mansalnfs01:/mnt/datapool2/sr         mansalnfs01  -       -
 
  in this case all the directories I'm trying to access are on datapool2/IT
  ... it's a ZFS share.
 
   My Question is from where does the truss output of the successful
   automount come from?
 
  root@jadlaptop:~# ps -ef | grep automount
  root  3599  3597   0 10:00:09 ?   0:00
  /usr/lib/autofs/automountd
  root  3597 1   0 10:00:09 ?   0:00
  /usr/lib/autofs/automountd
  root  4441  4387   0 16:14:15 pts/16  0:00 grep automount
  root@jadlaptop:~# truss -f -p 3599
 
  Is it from the same Computer as the failed one? Also are you running
 
   /hipster or /dev on this Computer?
 
  Hipster last updated  2015-01-15 (problems with Intel driver if I update
 at
  the moment)
 
   Other Tests you could do.
   What happens if you manually create the home?
 
  ?
 
  What happens if you replace the * and  in auto_home with some real
 
   usernames?
 
  no difference.
 
   What gets logged in syslog when Automount gets its config from LDAP?
 
  nothing.
 
  As I said, I can still access all the LDAP automounted shares from the
  Solaris 10 servers, and the Solaris 10 machines can access all the shares
  automounted from the FreeNAS box
  ___
  openindiana-discuss mailing list
  openindiana-discuss@openindiana.org
  http://openindiana.org/mailman/listinfo/openindiana-discuss


 ___
 openindiana-discuss mailing list
 openindiana-discuss@openindiana.org
 http://openindiana.org/mailman/listinfo/openindiana-discuss

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS Automount from FreeNAS Server

2015-02-20 Thread Jonathan Adams
thanks for getting back to me :)

...

now it says "no such file or directory" instead of "permission denied" ...

setting the verbose logging in sharectl doesn't seem to produce much extra
output, nothing in syslog, and only a little extra in
/var/svc/log/system-filesystem-autofs:default.log (no error messages)

Jon

On 19 February 2015 at 20:03, Till Wegmüller toaster...@gmail.com wrote:

 Looking at that truss output it seems that automount fails shortly after
 stat
 on the home directory without even contacting nfs. Could it be that
 automount
 has problems connecting to LDAP?

 I would like to see what happens if we disable LDAP for automounts.

 To do so Coment out all Lines starting with a plus '+' in /etc/auto_home
 and
 /etc/auto_master.

 Then Insert the following little Magic Line and reload automount
 *   -fstype=nfs myserver:/myhomes/
 Replace myserver and myhomes acordingly.

 This will instruct automount to search all the subdirectories of /home
 under
 myserver:/myhomes and mount hem if needed.

 If for example we have a user test and would do cd /home/test automount
 would
 map myserver:/myuser/test to /home/test automagically replacing  and *
 with
 test.

 If you now get a Permission denied error try to look in syslog, automount
 is
 very silent about failing to mount.
 To Increase automounts volume use:
 sharectl set -p automountd_verbose=true autofs
 sharectl set -p automount_verbose=true autofs

 Greetings Till

 On Thursday 19 February 2015 09.57:57 Jonathan Adams wrote:
  via automount, it never gets mounted at all.
 
  hard mounted:
 
  root@jadlaptop:~# mount mansalnfs01:/mnt/datapool2/IT /mnt
  root@jadlaptop:~# mount | grep mansalnfs01
  /mnt on mansalnfs01:/mnt/datapool2/IT
  remote/read/write/setuid/devices/xattr/dev=9080001 on Thu Feb 19 09:49:24
  2015
 
  not sure if it helps, but attached is a section of via truss of the
  automountd when accessing 2 different mount points from /home
 
  Jon
 

 ___
 openindiana-discuss mailing list
 openindiana-discuss@openindiana.org
 http://openindiana.org/mailman/listinfo/openindiana-discuss

3313/4: door_return(0x08087970, 24, 0x, 0xFEA3EE00, 1007360) = 0
3313/4: time()  = 1424426158
3313/4: open64(/etc/auto_home, O_RDONLY)  = 10
3313/4: stat64(/etc/auto_home, 0xFEA3A298)= 0
3313/4: fstat64(10, 0xFEA3A034) = 0
3313/4: fstat64(10, 0xFEA39F34) = 0
3313/4: ioctl(10, TCGETA, 0xFEA39FF2)   Err#25 ENOTTY
3313/4: read(10,  #\n #   C o p y r i g h.., 1536)= 
3313/4: llseek(10, 0xFFF4, SEEK_CUR)= 1099
3313/4: close(10)   = 0
3313/4: door_return(0x08087970, 24, 0x, 0xFEA3EE00, 1007360) = 0
3313/4: time()  = 1424426158
3313/4: open64(/etc/auto_home, O_RDONLY)  = 10
3313/4: stat64(/etc/auto_home, 0xFEA39298)= 0
3313/4: fstat64(10, 0xFEA39034) = 0
3313/4: fstat64(10, 0xFEA38F34) = 0
3313/4: ioctl(10, TCGETA, 0xFEA38FF2)   Err#25 ENOTTY
3313/4: read(10,  #\n #   C o p y r i g h.., 1536)= 
3313/4: llseek(10, 0xFFF4, SEEK_CUR)= 1099
3313/4: close(10)   = 0
3313/4: open(/etc/svc/volatile/repository_door, O_RDONLY) = 10
3313/4: getpid()= 3313 [3311]
3313/4: door_call(10, 0xFEA3B1BC)   = 0
3313/4: close(10)   = 0
3313/4: fcntl(11, F_SETFD, 0x0001)  = 0
3313/4: door_info(11, 0xFEA3B244)   = 0
3313/4: getpid()= 3313 [3311]
3313/4: getpid()= 3313 [3311]
3313/4: door_call(11, 0xFEA3B17C)   = 0
3313/4: getpid()= 3313 [3311]
3313/4: getpid()= 3313 [3311]
3313/4: door_call(11, 0xFEA3B17C)   = 0
3313/4: getpid()= 3313 [3311]
3313/4: getpid()= 3313 [3311]
3313/4: door_call(11, 0xFEA3B17C)   = 0
3313/4: getpid()= 3313 [3311]
3313/4: getpid()= 3313 [3311]
3313/4: door_call(11, 0xFEA3B17C)   = 0
3313/4: getpid()= 3313 [3311]
3313/4: getpid()

Re: [OpenIndiana-discuss] NFS Automount from FreeNAS Server

2015-02-20 Thread Till Wegmüller
no problem :)

...

The "No such file or directory" error happens when automount can't find a
directory on the server and thus does not create and mount the directory.

My question is: where does the truss output of the successful automount
come from?
Is it from the same computer as the failed one? Also, are you running /hipster
or /dev on this computer?

Other tests you could do:
What happens if you manually create the home?
What happens if you replace the * and & in auto_home with some real usernames?
What gets logged in syslog when automount gets its config from LDAP?
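
As a sketch of the second test: assuming a user jadams and a hypothetical
export mansalnfs01:/mnt/homes, the wildcard line in /etc/auto_home

*       -fstype=nfs     mansalnfs01:/mnt/homes/&

would be replaced by an explicit entry such as

jadams  -fstype=nfs     mansalnfs01:/mnt/homes/jadams

followed by "automount -v" to reload the maps.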

Greetings Till

Jonathan Adams schrieb am Friday 20 February 2015 10.05:42:
 thanks for getting back to me :)
 
 ...
 
 now it says no such file or directory instead of permission denied ...
 
 setting the verbose logging in sharectl doesn't seem to produce much extra
 output, nothing in syslog, and only a little extra in
 /var/svc/log/system-filesystem-autofs:default.log (no error messages)
 
 Jon
 
 On 19 February 2015 at 20:03, Till Wegmüller toaster...@gmail.com wrote:
 
  Looking at that truss output it seems that automount fails shortly after
  stat
  on the home directory without even contacting nfs. Could it be that
  automount
  has problems connecting to LDAP?
 
  I would like to see what happens if we disable LDAP for automounts.
 
  To do so Coment out all Lines starting with a plus '+' in /etc/auto_home
  and
  /etc/auto_master.
 
  Then Insert the following little Magic Line and reload automount
  *   -fstype=nfs myserver:/myhomes/
  Replace myserver and myhomes acordingly.
 
  This will instruct automount to search all the subdirectories of /home
  under
  myserver:/myhomes and mount hem if needed.
 
  If for example we have a user test and would do cd /home/test automount
  would
  map myserver:/myuser/test to /home/test automagically replacing  and *
  with
  test.
 
  If you now get a Permission denied error try to look in syslog, automount
  is
  very silent about failing to mount.
  To Increase automounts volume use:
  sharectl set -p automountd_verbose=true autofs
  sharectl set -p automount_verbose=true autofs
 
  Greetings Till
 
  On Thursday 19 February 2015 09.57:57 Jonathan Adams wrote:
   via automount, it never gets mounted at all.
  
   hard mounted:
  
   root@jadlaptop:~# mount mansalnfs01:/mnt/datapool2/IT /mnt
   root@jadlaptop:~# mount | grep mansalnfs01
   /mnt on mansalnfs01:/mnt/datapool2/IT
   remote/read/write/setuid/devices/xattr/dev=9080001 on Thu Feb 19 09:49:24
   2015
  
   not sure if it helps, but attached is a section of via truss of the
   automountd when accessing 2 different mount points from /home
  
   Jon
  
 
  ___
  openindiana-discuss mailing list
  openindiana-discuss@openindiana.org
  http://openindiana.org/mailman/listinfo/openindiana-discuss
 

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS Automount from FreeNAS Server

2015-02-20 Thread Jonathan Adams
On 20 February 2015 at 15:56, Till Wegmüller toaster...@gmail.com wrote:

 no problem :)

 ...

 The No such file or directory Error Happens when automount can't find a
 directory on the Server and thus does not create and mount the Directory.


jadams@jadlaptop:~$ dfshares mansalnfs01
RESOURCE                              SERVER       ACCESS  TRANSPORT
mansalnfs01:/mnt/datapool             mansalnfs01  -       -
mansalnfs01:/mnt/datapool/accounts    mansalnfs01  -       -
mansalnfs01:/mnt/datapool/analysts    mansalnfs01  -       -
mansalnfs01:/mnt/datapool/inorganics  mansalnfs01  -       -
mansalnfs01:/mnt/datapool/organics    mansalnfs01  -       -
mansalnfs01:/mnt/datapool/technician  mansalnfs01  -       -
mansalnfs01:/mnt/datapool1/PM         mansalnfs01  -       -
mansalnfs01:/mnt/datapool1/bdm        mansalnfs01  -       -
mansalnfs01:/mnt/datapool1/quality    mansalnfs01  -       -
mansalnfs01:/mnt/datapool1/reception  mansalnfs01  -       -
mansalnfs01:/mnt/datapool2/IT         mansalnfs01  -       -
mansalnfs01:/mnt/datapool2/air        mansalnfs01  -       -
mansalnfs01:/mnt/datapool2/health     mansalnfs01  -       -
mansalnfs01:/mnt/datapool2/metals     mansalnfs01  -       -
mansalnfs01:/mnt/datapool2/sr         mansalnfs01  -       -

in this case all the directories I'm trying to access are on datapool2/IT
... it's a ZFS share.



 My Question is from where does the truss output of the successful
 automount come from?

root@jadlaptop:~# ps -ef | grep automount
root  3599  3597   0 10:00:09 ?   0:00
/usr/lib/autofs/automountd
root  3597 1   0 10:00:09 ?   0:00
/usr/lib/autofs/automountd
root  4441  4387   0 16:14:15 pts/16  0:00 grep automount
root@jadlaptop:~# truss -f -p 3599

 Is it from the same Computer as the failed one? Also are you running
 /hipster or /dev on this Computer?


Hipster last updated  2015-01-15 (problems with Intel driver if I update at
the moment)


 Other Tests you could do.
 What happens if you manually create the home?

?

 What happens if you replace the * and & in auto_home with some real
 usernames?


no difference.


 What gets logged in syslog when Automount gets its config from LDAP?

nothing.

As I said, I can still access all the LDAP automounted shares from the
Solaris 10 servers, and the Solaris 10 machines can access all the shares
automounted from the FreeNAS box
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS Automount from FreeNAS Server

2015-02-20 Thread Till Wegmüller
Hmm, OK, I've run out of ideas.

It looks like a bug or a problematic setting in FreeNAS.
My automount works and can mount shares very reliably with /dev and /Hipster
(newest 2015).
Yours seems to work as well, at least with Solaris.

Just for fun I had a little look around Google to see if there are some fun
FreeNAS bugs. Was not disappointed: http://vtricks.com/?p=2031
It looks like FreeNAS wants its shares to be commented :)

Greetings Till


On Friday 20 February 2015 16.15:05 Jonathan Adams wrote:
 On 20 February 2015 at 15:56, Till Wegmüller toaster...@gmail.com wrote:
  no problem :)
  
  ...
  
  The No such file or directory Error Happens when automount can't find a
  directory on the Server and thus does not create and mount the Directory.
 
 jadams@jadlaptop:~$ dfshares mansalnfs01
 RESOURCE                              SERVER       ACCESS  TRANSPORT
 mansalnfs01:/mnt/datapool             mansalnfs01  -       -
 mansalnfs01:/mnt/datapool/accounts    mansalnfs01  -       -
 mansalnfs01:/mnt/datapool/analysts    mansalnfs01  -       -
 mansalnfs01:/mnt/datapool/inorganics  mansalnfs01  -       -
 mansalnfs01:/mnt/datapool/organics    mansalnfs01  -       -
 mansalnfs01:/mnt/datapool/technician  mansalnfs01  -       -
 mansalnfs01:/mnt/datapool1/PM         mansalnfs01  -       -
 mansalnfs01:/mnt/datapool1/bdm        mansalnfs01  -       -
 mansalnfs01:/mnt/datapool1/quality    mansalnfs01  -       -
 mansalnfs01:/mnt/datapool1/reception  mansalnfs01  -       -
 mansalnfs01:/mnt/datapool2/IT         mansalnfs01  -       -
 mansalnfs01:/mnt/datapool2/air        mansalnfs01  -       -
 mansalnfs01:/mnt/datapool2/health     mansalnfs01  -       -
 mansalnfs01:/mnt/datapool2/metals     mansalnfs01  -       -
 mansalnfs01:/mnt/datapool2/sr         mansalnfs01  -       -
 
 in this case all the directories I'm trying to access are on datapool2/IT
 ... it's a ZFS share.
 
  My Question is from where does the truss output of the successful
  automount come from?
 
 root@jadlaptop:~# ps -ef | grep automount
 root  3599  3597   0 10:00:09 ?   0:00
 /usr/lib/autofs/automountd
 root  3597 1   0 10:00:09 ?   0:00
 /usr/lib/autofs/automountd
 root  4441  4387   0 16:14:15 pts/16  0:00 grep automount
 root@jadlaptop:~# truss -f -p 3599
 
 Is it from the same Computer as the failed one? Also are you running
 
  /hipster or /dev on this Computer?
 
 Hipster last updated  2015-01-15 (problems with Intel driver if I update at
 the moment)
 
  Other Tests you could do.
  What happens if you manually create the home?
 
 ?
 
 What happens if you replace the * and  in auto_home with some real
 
  usernames?
 
 no difference.
 
  What gets logged in syslog when Automount gets its config from LDAP?
 
 nothing.
 
 As I said, I can still access all the LDAP automounted shares from the
 Solaris 10 servers, and the Solaris 10 machines can access all the shares
 automounted from the FreeNAS box
 ___
 openindiana-discuss mailing list
 openindiana-discuss@openindiana.org
 http://openindiana.org/mailman/listinfo/openindiana-discuss


___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS Automount from FreeNAS Server

2015-02-19 Thread Till Wegmüller
Looking at that truss output, it seems that automount fails shortly after stat
on the home directory, without even contacting NFS. Could it be that automount
has problems connecting to LDAP?

I would like to see what happens if we disable LDAP for automounts.

To do so, comment out all lines starting with a plus '+' in /etc/auto_home and
/etc/auto_master.

Then insert the following little magic line and reload automount:
*   -fstype=nfs myserver:/myhomes/&
Replace myserver and myhomes accordingly.

This will instruct automount to search all the subdirectories of /home under
myserver:/myhomes and mount them if needed.

If for example we have a user test and would do cd /home/test, automount would
map myserver:/myhomes/test to /home/test, automagically replacing & and * with
test.

If you now get a Permission denied error, try to look in syslog; automount is
very silent about failing to mount.
To increase automount's volume use:
sharectl set -p automountd_verbose=true autofs
sharectl set -p automount_verbose=true autofs
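
For concreteness, a minimal sketch of /etc/auto_home after this change
(myserver and myhomes are placeholders, as above), plus the reload:

# /etc/auto_home -- LDAP '+' entries commented out for testing
#+auto_home
*   -fstype=nfs myserver:/myhomes/&

# re-read the maps
automount -v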

Greetings Till

On Thursday 19 February 2015 09.57:57 Jonathan Adams wrote:
 via automount, it never gets mounted at all.
 
 hard mounted:
 
 root@jadlaptop:~# mount mansalnfs01:/mnt/datapool2/IT /mnt
 root@jadlaptop:~# mount | grep mansalnfs01
 /mnt on mansalnfs01:/mnt/datapool2/IT
 remote/read/write/setuid/devices/xattr/dev=9080001 on Thu Feb 19 09:49:24
 2015
 
 not sure if it helps, but attached is a section of via truss of the
 automountd when accessing 2 different mount points from /home
 
 Jon
 

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS Automount from FreeNAS Server

2015-02-19 Thread Jonathan Adams
via automount, it never gets mounted at all.

hard mounted:

root@jadlaptop:~# mount mansalnfs01:/mnt/datapool2/IT /mnt
root@jadlaptop:~# mount | grep mansalnfs01
/mnt on mansalnfs01:/mnt/datapool2/IT
remote/read/write/setuid/devices/xattr/dev=9080001 on Thu Feb 19 09:49:24
2015

not sure if it helps, but attached is a section of truss output of the
automountd when accessing 2 different mount points from /home

Jon


On 18 February 2015 at 19:11, Till Wegmüller toaster...@gmail.com wrote:

 In That case OI will use its defaults.
 OI Mounts per default as nfs4 so if you didnt specify -o vers=3 the shares
 got
 mounted as nfs4.

 Have a look at the output of the command mount in OI when the home is
 automounted and when the home is mounted manually.

 mount shows you the options the nfsclient used to mount the share.
 You can then try the options through to find the one that causes the error.

 Greetings Till

 On Wednesday 18 February 2015 14.00:17 Jonathan Adams wrote:
  no real options specified ...
 
  jadams@jadlaptop:~$ grep -v '^#' /etc/auto_master
  +auto_master
  /net     -hosts      -nosuid,nobrowse
  /home    auto_home   -nobrowse
 
  jadams@jadlaptop:~$ more auto_master.ldif
  version: 1
  DN: nisMapName=auto_master,dc=domain,dc=com
  objectClass: top
  objectClass: nisMap
  nisMapName: auto_master
 
  DN: cn=/net,nisMapName=auto_master,dc=domain,dc=com
  objectClass: nisObject
  cn: /net
  nismapentry: -hosts -nosuid,nobrowse
  nisMapName: auto_master
 
  DN: cn=/home,nisMapName=auto_master,dc=domain,dc=com
  objectClass: nisObject
  cn: /home
  nismapentry: auto_home -nobrowse
  nisMapName: auto_master
 
  (slightly sanitised, yes I know that nisMap is the deprecated version
 of
  the automount stuff)
 
  the issue is with the automount, not with mounting in general ... I can
  hard mount it as root and access as my user with no trouble, I just can't
  access it via automount.
 
  The Linux laptops are usually running nfs v3, but only because we beat
 them
  with sticks till they behave ... our Riverbeds cannot optimise nfs v4,
 and
  permissions/ownership can go haywire when talking between Linux and
 Solaris
  10 as nfs v4.
 
  Jon
 
 
  On 18 February 2015 at 13:13, Predrag Zecevic [Unix Systems
 Administrator] 
  predrag.zece...@2e-systems.com wrote:
   Hi,
  
   not sure if it is relevant, but in Solaris user home dirs were on
   /export/home while /home was reserved for automounter.
  
   Not knowing FreeNAS (but some Linux systems), /home is location of user
   home dirs.
   Maybe problem solution can be looked in that direction?
  
   Regards.
  
   On 02/18/15 01:12 PM, Jonathan Adams wrote:
   I have an OpenIndiana laptop, it's running a copy of OpenLDAP as a
   replica
   of an OpenLDAP on our work server (syncs whenever it connects) ...
 and it
   uses this OpenLDAP database to power it's automount. (not sure if
 this is
   of any relevance, but included for completeness)
  
   I've been using this system for years in this configuration, but
 recently
   some of our users had their home directories moved from a Solaris 10
   server
   to a FreeNAS server.  I noticed that I couldn't access their home
   directories using the automounted /home directory, it comes up
   permission denied.  I can however access the users home directory
 if I
   hard mount as root.
  
   Our Solaris 10 servers have no trouble accessing the automounted
   directories, our Linux laptops have no issue either ...
  
   Does anyone else have issues like this, or know of any checks that I
 can
   perform to see if I can find the fault?
  
   Ta
  
   Jon
   ___
   openindiana-discuss mailing list
   openindiana-discuss@openindiana.org
   http://openindiana.org/mailman/listinfo/openindiana-discuss
  
   --
   Predrag Zečević
   Technical Support Analyst
   2e Systems GmbH
  
   Telephone: +49 6196 9505 815, Facsimile: +49 6196 9505 894
   Mobile:+49 174 3109 288, Skype: predrag.zecevic
   E-mail:predrag.zece...@2e-systems.com
  
   Headquarter:  2e Systems GmbH, Königsteiner Str. 87,
  
 65812 Bad Soden am Taunus, Germany
  
   Company registration: Amtsgericht Königstein (Germany), HRB 7303
   Managing director:Phil Douglas
  
   http://www.2e-systems.com/ - Making your business fly!
  
  
   ___
   openindiana-discuss mailing list
   openindiana-discuss@openindiana.org
   http://openindiana.org/mailman/listinfo/openindiana-discuss
 
  ___
  openindiana-discuss mailing list
  openindiana-discuss@openindiana.org
  http://openindiana.org/mailman/listinfo/openindiana-discuss


 ___
 openindiana-discuss mailing list
 openindiana-discuss@openindiana.org
 http://openindiana.org/mailman/listinfo/openindiana-discuss

1112/4: door_return(0x08087970, 24, 0x, 0xFEA3EE00, 

Re: [OpenIndiana-discuss] NFS Automount from FreeNAS Server

2015-02-18 Thread Till Wegmüller
In that case OI will use its defaults.
OI mounts per default as NFSv4, so if you didn't specify -o vers=3 the shares
got mounted as NFSv4.

Have a look at the output of the command mount in OI when the home is
automounted and when the home is mounted manually.

mount shows you the options the NFS client used to mount the share.
You can then try the options one by one to find the one that causes the error.
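
As a sketch of that comparison, using the share named earlier in the
thread:

mount | grep mansalnfs01                            # options of the automounted share
mount -o vers=3 mansalnfs01:/mnt/datapool2/IT /mnt  # manual mount, forcing NFSv3
mount | grep '^/mnt on'                             # options of the manual mount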

Greetings Till

On Wednesday 18 February 2015 14.00:17 Jonathan Adams wrote:
 no real options specified ...
 
 jadams@jadlaptop:~$ grep -v '^#' /etc/auto_master
 +auto_master
 /net     -hosts      -nosuid,nobrowse
 /home    auto_home   -nobrowse
 
 jadams@jadlaptop:~$ more auto_master.ldif
 version: 1
 DN: nisMapName=auto_master,dc=domain,dc=com
 objectClass: top
 objectClass: nisMap
 nisMapName: auto_master
 
 DN: cn=/net,nisMapName=auto_master,dc=domain,dc=com
 objectClass: nisObject
 cn: /net
 nismapentry: -hosts -nosuid,nobrowse
 nisMapName: auto_master
 
 DN: cn=/home,nisMapName=auto_master,dc=domain,dc=com
 objectClass: nisObject
 cn: /home
 nismapentry: auto_home -nobrowse
 nisMapName: auto_master
 
 (slightly sanitised, yes I know that nisMap is the deprecated version of
 the automount stuff)
 
 the issue is with the automount, not with mounting in general ... I can
 hard mount it as root and access as my user with no trouble, I just can't
 access it via automount.
 
 The Linux laptops are usually running nfs v3, but only because we beat them
 with sticks till they behave ... our Riverbeds cannot optimise nfs v4, and
 permissions/ownership can go haywire when talking between Linux and Solaris
 10 as nfs v4.
 
 Jon
 
 
 On 18 February 2015 at 13:13, Predrag Zecevic [Unix Systems Administrator] 
 predrag.zece...@2e-systems.com wrote:
  Hi,
  
  not sure if it is relevant, but in Solaris user home dirs were on
  /export/home while /home was reserved for automounter.
  
  Not knowing FreeNAS (but some Linux systems), /home is location of user
  home dirs.
  Maybe problem solution can be looked in that direction?
  
  Regards.
  
  On 02/18/15 01:12 PM, Jonathan Adams wrote:
  I have an OpenIndiana laptop, it's running a copy of OpenLDAP as a
  replica
  of an OpenLDAP on our work server (syncs whenever it connects) ... and it
  uses this OpenLDAP database to power it's automount. (not sure if this is
  of any relevance, but included for completeness)
  
  I've been using this system for years in this configuration, but recently
  some of our users had their home directories moved from a Solaris 10
  server
  to a FreeNAS server.  I noticed that I couldn't access their home
  directories using the automounted /home directory, it comes up
  permission denied.  I can however access the users home directory if I
  hard mount as root.
  
  Our Solaris 10 servers have no trouble accessing the automounted
  directories, our Linux laptops have no issue either ...
  
  Does anyone else have issues like this, or know of any checks that I can
  perform to see if I can find the fault?
  
  Ta
  
  Jon
  ___
  openindiana-discuss mailing list
  openindiana-discuss@openindiana.org
  http://openindiana.org/mailman/listinfo/openindiana-discuss
  
  --
  Predrag Zečević
  Technical Support Analyst
  2e Systems GmbH
  
  Telephone: +49 6196 9505 815, Facsimile: +49 6196 9505 894
  Mobile:+49 174 3109 288, Skype: predrag.zecevic
  E-mail:predrag.zece...@2e-systems.com
  
  Headquarter:  2e Systems GmbH, Königsteiner Str. 87,
  
65812 Bad Soden am Taunus, Germany
  
  Company registration: Amtsgericht Königstein (Germany), HRB 7303
  Managing director:Phil Douglas
  
  http://www.2e-systems.com/ - Making your business fly!
  
  
  ___
  openindiana-discuss mailing list
  openindiana-discuss@openindiana.org
  http://openindiana.org/mailman/listinfo/openindiana-discuss
 
 ___
 openindiana-discuss mailing list
 openindiana-discuss@openindiana.org
 http://openindiana.org/mailman/listinfo/openindiana-discuss


___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS Automount from FreeNAS Server

2015-02-18 Thread Till Wegmüller
This sounds like the NFS settings in the LDAP differ from those you used to
mount the share manually.
NFS version 4's LDAP access and NFS version 3's rootsquash option can prevent
root from having access to other users' homes.
FreeNAS has some security settings in place to forbid root from accessing
other users' homes.
It may be that the Solaris 10 servers and the Linux laptops still use NFSv3
and thus don't have those security measures in place,
or don't have rootsquash in their default configs.

If you need more help, I would need to know the options the NFS client uses
to mount the shares.
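
One way to see which version and options the client actually negotiated,
as a sketch (the username is hypothetical): trigger the automount, then
ask nfsstat, which prints the mount flags (including vers=) for each NFS
mount:

ls /home/someuser   # trigger the automount
nfsstat -m          # per-mount flags, e.g. vers=4,proto=tcp,...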

Greetings Till

Jonathan Adams schrieb am Wednesday 18 February 2015 12.12:23:
 I have an OpenIndiana laptop, it's running a copy of OpenLDAP as a replica
 of an OpenLDAP on our work server (syncs whenever it connects) ... and it
 uses this OpenLDAP database to power it's automount. (not sure if this is
 of any relevance, but included for completeness)
 
 I've been using this system for years in this configuration, but recently
 some of our users had their home directories moved from a Solaris 10 server
 to a FreeNAS server.  I noticed that I couldn't access their home
 directories using the automounted /home directory, it comes up
 permission denied.  I can however access the users home directory if I
 hard mount as root.
 
 Our Solaris 10 servers have no trouble accessing the automounted
 directories, our Linux laptops have no issue either ...
 
 Does anyone else have issues like this, or know of any checks that I can
 perform to see if I can find the fault?
 
 Ta
 
 Jon
 ___
 openindiana-discuss mailing list
 openindiana-discuss@openindiana.org
 http://openindiana.org/mailman/listinfo/openindiana-discuss

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


[OpenIndiana-discuss] NFS Automount from FreeNAS Server

2015-02-18 Thread Jonathan Adams
I have an OpenIndiana laptop; it's running a copy of OpenLDAP as a replica
of an OpenLDAP on our work server (syncs whenever it connects) ... and it
uses this OpenLDAP database to power its automount. (Not sure if this is
of any relevance, but included for completeness.)

I've been using this system for years in this configuration, but recently
some of our users had their home directories moved from a Solaris 10 server
to a FreeNAS server.  I noticed that I couldn't access their home
directories using the automounted /home directory; it comes up
"permission denied".  I can however access the users' home directories if I
hard mount as root.

Our Solaris 10 servers have no trouble accessing the automounted
directories, our Linux laptops have no issue either ...

Does anyone else have issues like this, or know of any checks that I can
perform to see if I can find the fault?

Ta

Jon
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS Automount from FreeNAS Server

2015-02-18 Thread Jonathan Adams
no real options specified ...

jadams@jadlaptop:~$ grep -v '^#' /etc/auto_master
+auto_master
/net     -hosts      -nosuid,nobrowse
/home    auto_home   -nobrowse

jadams@jadlaptop:~$ more auto_master.ldif
version: 1
DN: nisMapName=auto_master,dc=domain,dc=com
objectClass: top
objectClass: nisMap
nisMapName: auto_master

DN: cn=/net,nisMapName=auto_master,dc=domain,dc=com
objectClass: nisObject
cn: /net
nismapentry: -hosts -nosuid,nobrowse
nisMapName: auto_master

DN: cn=/home,nisMapName=auto_master,dc=domain,dc=com
objectClass: nisObject
cn: /home
nismapentry: auto_home -nobrowse
nisMapName: auto_master

(slightly sanitised, yes I know that nisMap is the deprecated version of
the automount stuff)

the issue is with the automount, not with mounting in general ... I can
hard mount it as root and access as my user with no trouble, I just can't
access it via automount.

The Linux laptops are usually running nfs v3, but only because we beat them
with sticks till they behave ... our Riverbeds cannot optimise nfs v4, and
permissions/ownership can go haywire when talking between Linux and Solaris
10 as nfs v4.

Jon


On 18 February 2015 at 13:13, Predrag Zecevic [Unix Systems Administrator] 
predrag.zece...@2e-systems.com wrote:

 Hi,

 not sure if it is relevant, but in Solaris user home dirs were on
 /export/home while /home was reserved for automounter.

 Not knowing FreeNAS (but some Linux systems), /home is location of user
 home dirs.
 Maybe problem solution can be looked in that direction?

 Regards.


 On 02/18/15 01:12 PM, Jonathan Adams wrote:

 I have an OpenIndiana laptop, it's running a copy of OpenLDAP as a replica
 of an OpenLDAP on our work server (syncs whenever it connects) ... and it
 uses this OpenLDAP database to power it's automount. (not sure if this is
 of any relevance, but included for completeness)

 I've been using this system for years in this configuration, but recently
 some of our users had their home directories moved from a Solaris 10
 server
 to a FreeNAS server.  I noticed that I couldn't access their home
 directories using the automounted /home directory, it comes up
 permission denied.  I can however access the users home directory if I
 hard mount as root.

 Our Solaris 10 servers have no trouble accessing the automounted
 directories, our Linux laptops have no issue either ...

 Does anyone else have issues like this, or know of any checks that I can
 perform to see if I can find the fault?

 Ta

 Jon
 ___
 openindiana-discuss mailing list
 openindiana-discuss@openindiana.org
 http://openindiana.org/mailman/listinfo/openindiana-discuss


 --
 Predrag Zečević
 Technical Support Analyst
 2e Systems GmbH

 Telephone: +49 6196 9505 815, Facsimile: +49 6196 9505 894
 Mobile:+49 174 3109 288, Skype: predrag.zecevic
 E-mail:predrag.zece...@2e-systems.com

 Headquarter:  2e Systems GmbH, Königsteiner Str. 87,
   65812 Bad Soden am Taunus, Germany
 Company registration: Amtsgericht Königstein (Germany), HRB 7303
 Managing director:Phil Douglas

 http://www.2e-systems.com/ - Making your business fly!


 ___
 openindiana-discuss mailing list
 openindiana-discuss@openindiana.org
 http://openindiana.org/mailman/listinfo/openindiana-discuss

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS Automount from FreeNAS Server

2015-02-18 Thread Predrag Zecevic [Unix Systems Administrator]

Hi,

not sure if it is relevant, but in Solaris user home dirs were under
/export/home, while /home was reserved for the automounter.

I don't know FreeNAS, but on some Linux systems /home is the location of
user home dirs.
Maybe the solution can be found in that direction?

Regards.

On 02/18/15 01:12 PM, Jonathan Adams wrote:

I have an OpenIndiana laptop, it's running a copy of OpenLDAP as a replica
of an OpenLDAP on our work server (syncs whenever it connects) ... and it
uses this OpenLDAP database to power it's automount. (not sure if this is
of any relevance, but included for completeness)

I've been using this system for years in this configuration, but recently
some of our users had their home directories moved from a Solaris 10 server
to a FreeNAS server.  I noticed that I couldn't access their home
directories using the automounted /home directory, it comes up
permission denied.  I can however access the users home directory if I
hard mount as root.

Our Solaris 10 servers have no trouble accessing the automounted
directories, our Linux laptops have no issue either ...

Does anyone else have issues like this, or know of any checks that I can
perform to see if I can find the fault?

Ta

Jon
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss



--
Predrag Zečević
Technical Support Analyst
2e Systems GmbH

Telephone: +49 6196 9505 815, Facsimile: +49 6196 9505 894
Mobile:+49  174 3109 288, Skype: predrag.zecevic
E-mail:predrag.zece...@2e-systems.com

Headquarter:  2e Systems GmbH, Königsteiner Str. 87,
  65812 Bad Soden am Taunus, Germany
Company registration: Amtsgericht Königstein (Germany), HRB 7303
Managing director:Phil Douglas

http://www.2e-systems.com/ - Making your business fly!

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] nfs oi server - linux clients

2014-09-18 Thread Mark



I've used Linux (CentOS) NFSv4 clients against OI servers for a few years,
with better results than NFSv3.

You do need to set the Linux clients up carefully.

These are my notes:


NFS 4 setup


Linux

/etc/idmapd.conf

[General]

Verbosity = 0
Pipefs-Directory = /var/lib/nfs/rpc_pipefs
Domain = mydomain.local

[Mapping]

Nobody-User = nobody
Nobody-Group = nobody

[Translation]
Method = nsswitch
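
After editing idmapd.conf, restart the mapping daemon so it picks up the
change; on CentOS of that era the service name is rpcidmapd (an
assumption -- adjust to your distro):

service rpcidmapd restart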

==
OpenIndiana


svcadm enable nfs/status
svcadm enable nfs/server
svcadm enable nfs/nlockmgr
svcadm enable nfs/mapid


sharectl set -p nfsmapid_domain=mydomain.local nfs


Then set the appropriate nfs permissions on the zfs file system.
(I'll need to find the details on this bit if you need it)


Mark.


On 17/09/2014 1:39 a.m., Gregory Youngblood wrote:

I don't know if things have changed, but a couple of years ago I had to
force Linux to v3 or things would hang or otherwise not work reliably.
Sadly I don't recall the details, only the lesson to force v3 on Linux
clients. Something about the Linux v4 implementation at the time only
worked Linux to Linux, but not Linux client to Solaris server.
Hopefully I am out of date and that's been fixed by now.
Greg



-- Original message --
From: Hugh McIntyre
Date: Tue, Sep 16, 2014 1:29 AM
To: openindiana-discuss@openindiana.org
Subject: Re: [OpenIndiana-discuss] nfs oi server - linux clients

You don't even need NIS or LDAP.  Plain /etc/passwd works fine, either by
making sure the necessary user/uid mappings and passwd files are the same on
all systems (if using NFS v2/v3) or not even bothering with the uid's
matching if using NFSv4.  (Non-matching uid's is kind of the point of the
NFSv4 idmap complexity.)

Hugh.

On 09/16/2014 12:11 AM, Alex Smith (K4RNT) wrote:
 I used NIS when I was doing this, while I was beta testing Solaris 9 and
 had a Linux client to work with, and that managed to work pretty well,
 given I didn't have any connectivity issues between the hosts. I know that
 solution is kinda deprecated, but it's pretty complicated to set up LDAP
 comparatively.

 'With the first link, the chain is forged. The first speech censured, the
 first thought forbidden, the first freedom denied, chains us all
 irrevocably.' Those words were uttered by Judge Aaron Satie as wisdom and
 warning... The first time any man's freedom is trodden on, we're all
 damaged. - Jean-Luc Picard, quoting Judge Aaron Satie, Star Trek: TNG
 episode "The Drumhead"

 - Alex Smith - Huntsville, Alabama metropolitan area USA

 On Tue, Sep 16, 2014 at 1:07 AM, Hugh McIntyre wrote:
  Hi Harry,

  It's possible you have somehow mounted the filesystem locally with noexec
  (unlikely, but you can check with "mount | grep /projects/dv" and make
  sure noexec is not in the options). But at a guess, it's more likely you
  may have the wrong username mapping since NFSv4 may need configuration
  for this.

  The easiest way to check the user mapping is:
  1. Create a directory on the server with permissions 777 (wide open)
  2. On the client, touch somefile
  3. Then check on both the server and client which user/group the file is
     created as.  Do not proceed until the usernames match.

  If you have a user/group mismatch, then the fix depends on which version
  of NFS you are running.  In traditional NFS (v3 or v2), the client just
  sends the numeric uid/gid over the wire, and the assumption is that the
  server has the same passwd/group file mapping.  In your case though you
  seem to be using nfs4, which works differently.

  In NFS v4 (the new default) the configuration is more complex.  NFSv4
  uses names instead of numbers (so you don't need the same UID on all
  boxes), but the complexity is that there's a nfsmapid service on Solaris
  that translates from NFS username to local uid/names.  This relies on a
  nfsmapid_domain and if this is misconfigured, you get access problems.
  Similarly, rpc.idmapd.conf on Linux.

  For the Solaris/Illumos end, Oracle has some info at
  http://docs.oracle.com/cd/E23824_01/html/821-1462/nfsmapid-1m.html
  but the summary for the Solaris end is:
  1. You can specify the NFSMAPID_DOMAIN parameter in nfs(4) or using
     sharectl.
  2. Or specify the _nfsv4idmapdomain DNS resource record.  This is what I
     do since I have a local DNS server and then it works for all hosts.
  3. Or if neither of these, Solaris will attempt to auto-determine the
     domain based on the domainname command, which may or may not give
     correct results.

  Meanwhile on Linux, the daemon (for client and server) is rpc.idmapd.
  In particular, you can configure the domain in /etc/idmapd.conf:
   # default is FQDN minus hostname
   Domain = local.lan
  You just have to make sure the client and server use the same domain.
  If not, requests will probably be treated as user=nobody and you may get
  permission errors.

  As you can see, in theory if everyone uses a default of FQDN minus
  hostname you may get correct operation by default.  But this does not
  always work, so might

Re: [OpenIndiana-discuss] nfs oi server - linux clients

2014-09-16 Thread Hugh McIntyre


Hi Harry,

It's possible you have somehow mounted the filesystem locally with 
noexec (unlikely, but you can check with mount | grep /projects/dv and 
make sure noexec is not in the options).


But at a guess, it's more likely you may have the wrong username mapping 
since NFSv4 may need configuration for this.


The easiest way to check the user mapping is:

1. Create a directory on the server with permissions 777 (wide open)
2. On the client, touch somefile
3. Then check on both the server and client which user/group the file is 
created as.  Do not proceed until the usernames match.
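
In shell terms the check is roughly this (paths and mount points are
examples):

  # on the server
  mkdir /export/scratch && chmod 777 /export/scratch
  # on the client, wherever the share is mounted
  touch /mnt/scratch/somefile
  ls -l /mnt/scratch/somefile
  # then compare with:  ls -l /export/scratch/somefile   (on the server)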


If you have a user/group mismatch, then the fix depends on which version 
of NFS you are running.  In traditional NFS (v3 or v2), the client 
just sends the numeric uid/gid over the wire, and the assumption is that 
the server has the same passwd/group file mapping.  In your case though 
you seem to be using nfs4, which works differently.


In NFS v4 (the new default) the configuration is more complex.  NFSv4 
uses names instead of numbers (so you don't need the same UID on all 
boxes), but the complexity is that there's a nfsmapid service on 
Solaris that translates from NFS username to local uid/names.  This 
relies on a nfsmapid_domain and if this is misconfigured, you get access 
problems.  Similarly, rpc.idmapd.conf on Linux.


For the Solaris/Illumos end, Oracle has some info at 
http://docs.oracle.com/cd/E23824_01/html/821-1462/nfsmapid-1m.html but 
the summary for the Solaris end is:


1. You can specify the NFSMAPID_DOMAIN parameter in nfs(4) or using 
sharectl.
2. Or specify the _nfsv4idmapdomain DNS resource record.  This is what I 
do since I have a local DNS server and then it works for all hosts.
3. Or if neither of these, Solaris will attempt to auto-determine the 
domain based on the domainname command, which may or may not give 
correct results.
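
In sharectl terms, option 1 looks like this (the domain value is an example):

  sharectl set -p nfsmapid_domain=local.lan nfs
  sharectl get -p nfsmapid_domain nfs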


Meanwhile on Linux, the daemon (for client and server) is rpc.idmapd. 
In particular, you can configure the domain in /etc/idmapd.conf:


# default is FQDN minus hostname
Domain = local.lan

You just have to make sure the client and server use the same domain. 
If not, requests will probably be treated as user=nobody and you may get 
permission errors.


As you can see, in theory if everyone uses a default of FQDN minus 
hostname you may get correct operation by default.  But this does not 
always work, so might be your issue.


Hugh.


On 09/15/2014 03:46 PM, Harry Putnam wrote:

I need a current, modern walk-through of what it takes to set up NFS
serving, so that the clients' users are able to access shares read/write
and run scripts or binaries.

First a quick description of the situation:
First and foremost, I am a terrible greenhorn.  I've rarely used NFS,
and never with Solaris as the server.

   oi server on home lan of a few linux hosts and a few windows hosts.
   Very small network with really only single user usage of the nfs
   shares.

   Home lan is in 10.0.0.0/24 network (All hosts concerned
   here).

Googling can lead to major confusion since the directions are a bit
different (as I understand it) between earlier versions of Solaris/OI and
now.

As I understand it, it should take no more than zfs set sharenfs=on.
I guess the old export list etc isn't necessary.

I have set ' zfs set sharenfs=on ' on this share.

But using that theory, what I'm ending up with on the client side
is a mounted share that will not allow my user to execute anything,
whereas read/write and delete work fine.

When I mount an NFS share on Linux (Debian jessie) the mount is set up
in /etc/fstab like so:

# [HP 140814_122407  NFS MOUNTS
   # 2x.local.lan:/rpub/dv_pub /nfs/rpub-dv-pub  nfs4   rw 0 0
   2x.local.lan:/projects/dv /nfs/projects/dv  nfs4   rw,user,exec 0 0
# ]

You can see the two versions I've tried on different shares.

I've used both of the syntaxes shown above on the current uncommented
share, with the same result as stated above... no execution.

I can read/write/delete but cannot execute a script.

a simple perl script:

cat p.pl:
   #!/usr/local/bin/perl

   use strict;
   use warnings;

   print "hello world\n";

set with chmod 755

And yes... the perl binary referenced is in place and working.

Gets this error output:

./p.pl
bash: ./p.pl: Permission denied

And the permissions + owner:group look weird.

OK, where to from here?


___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss





Re: [OpenIndiana-discuss] nfs oi server - linux clients

2014-09-16 Thread Alex Smith (K4RNT)
I used NIS when I was doing this, while I was beta testing Solaris 9 and
had a Linux client to work with, and that managed to work pretty well,
given I didn't have any connectivity issues between the hosts.

I know that solution is kinda deprecated, but it's pretty complicated to
set up LDAP comparatively.

 'With the first link, the chain is forged. The first speech censured, the
first thought forbidden, the first freedom denied, chains us all
irrevocably.' Those words were uttered by Judge Aaron Satie as wisdom and
warning... The first time any man's freedom is trodden on, we’re all
damaged. - Jean-Luc Picard, quoting Judge Aaron Satie, Star Trek: TNG
episode The Drumhead
- Alex Smith
- Huntsville, Alabama metropolitan area USA

On Tue, Sep 16, 2014 at 1:07 AM, Hugh McIntyre li...@mcintyreweb.com
wrote:


 [Hugh's reply, including Harry's original post, was quoted here in full;
 trimmed as a duplicate of Hugh's message above.]

Re: [OpenIndiana-discuss] nfs oi server - linux clients

2014-09-16 Thread Hugh McIntyre


You don't even need NIS or LDAP.  Plain /etc/passwd works fine, either
by making sure the necessary user/uid mappings and passwd files are the
same on all systems (if using NFS v2/v3) or not even bothering with the
uids matching if using NFSv4.


(Non-matching uids are kind of the point of the NFSv4 idmap complexity).

Hugh.


On 09/16/2014 12:11 AM, Alex Smith (K4RNT) wrote:

I used NIS when I was doing this, while I was beta testing Solaris 9 and
had a Linux client to work with, and that managed to work pretty well,
given I didn't have any connectivity issues between the hosts.

I know that solution is kinda deprecated, but it's pretty complicated to
set up LDAP comparatively.


On Tue, Sep 16, 2014 at 1:07 AM, Hugh McIntyre li...@mcintyreweb.com
wrote:



[Hugh's earlier reply, again including Harry's original post, was quoted
here in full; trimmed as a duplicate.]

Re: [OpenIndiana-discuss] nfs oi server - linux clients

2014-09-16 Thread Gregory Youngblood






I don't know if things have changed, but a couple of years ago I had to
force Linux to v3 or things would hang or otherwise not work reliably. Sadly I
don't recall the details, only the lesson: force v3 on Linux clients.
Something about the Linux v4 implementation at the time only worked Linux to
Linux, but not Linux client to Solaris server.
Hopefully I am out of date and that's been fixed by now.
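
(For reference, forcing v3 from a Linux client is normally just a mount
option; a sketch, with example server/path names:

  mount -t nfs -o vers=3 server:/export /mnt

or vers=3 in the fstab options field.)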
Greg



-- Original message --
From: Hugh McIntyre
Date: Tue, Sep 16, 2014 1:29 AM
To: openindiana-discuss@openindiana.org
Subject: Re: [OpenIndiana-discuss] nfs oi server - linux clients

[Hugh's reply, quoting Harry's original post, was included here in full;
trimmed as a duplicate of Hugh's message above.]

Re: [OpenIndiana-discuss] nfs oi server - linux clients

2014-09-16 Thread Harry Putnam
Hugh McIntyre li...@mcintyreweb.com writes:

 Hi Harry,

 It's possible you have somehow mounted the filesystem locally with
 noexec (unlikely, but you can check with mount | grep /projects/dv
 and make sure noexec is not in the options).

Host naming convention, for clarity:
solsrv is my oi server
lincli is the linux client

Before going on with all the rest... there IS a NOEXEC option set.
Where would that setting be coming from?

Oh, and you mentioned testing with `touch file':
you may notice that my OP reports that r/w is working (both ways).

mount |grep dv

solsrv.local.lan:/projects/dv on /nfs/projects/dv type nfs4 
(rw,nosuid,nodev,noexec,relatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=10.0.0.27,local_lock=none,addr=10.0.0.8)

Where is that getting set?  As far as I recall, I have not made any
settings that would do that, not on lincli or solsrv.

Additional info:

During my preparations for nfs I did ensure that my users shared uid
and gid.

I haven't investigated the rest of your suggestions, so back to work


___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] nfs oi server - linux clients

2014-09-16 Thread Harry Putnam
Harry Putnam rea...@newsguy.com writes:

 Hugh McIntyre li...@mcintyreweb.com writes:

 Hi Harry,

 It's possible you have somehow mounted the filesystem locally with
 noexec (unlikely, but you can check with mount | grep /projects/dv
 and make sure noexec is not in the options).

 Host naming convention for clarity
 solsrv is my oi server
 lincli is the linux client

 Before going on with all the rest... there IS a NOEXEC option set.
 Where would that setting be coming from?

[...]

Problem is solved, to a degree.  Still lots of room for improving my
understanding of why it does what it does.

I was mounting thru /etc/fstab and posted the line:

  srvHOST:/share  /mount/point  nfs4   rw,user,exec 0 0

With the line above, execution did not work
---   ---   ---=---   ---   --- 

  srvHOST:/share  /mount/point  nfs4   rw 0 0

Execution also failed with the above line

---   ---   ---=---   ---   --- 

  srvHOST:/share  /mount/point  nfs4   defaults 0 0

However, with the line above in fstab... execution now works

==

Anyone with more to say about this please feel free to comment.  I can
use all the input I can get.
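
For what it's worth, mount(8) documents that the user option implies
noexec, nosuid and nodev; whether a later exec really overrides that seems
to depend on the mount helper, which would explain why only the defaults
line came out without noexec.  The line that worked, with the real names
from this thread, would be:

  solsrv.local.lan:/projects/dv  /nfs/projects/dv  nfs4  defaults  0 0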


___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


[OpenIndiana-discuss] nfs oi server - linux clients

2014-09-15 Thread Harry Putnam
I need a current, modern walk-through of what it takes to set up NFS
serving, so that the clients' users are able to access shares read/write
and run scripts or binaries.

First a quick description of the situation:
First and foremost, I am a terrible greenhorn.  I've rarely used NFS,
and never with Solaris as the server.

  oi server on home lan of a few linux hosts and a few windows hosts.
  Very small network with really only single user usage of the nfs
  shares.

  Home lan is in 10.0.0.0/24 network (All hosts concerned
  here).

Googling can lead to major confusion since the directions are a bit
different (as I understand it) between earlier versions of Solaris/OI and
now.

As I understand it, it should take no more than zfs set sharenfs=on.
I guess the old export list etc isn't necessary.

I have set ' zfs set sharenfs=on ' on this share.

But using that theory, what I'm ending up with on the client side
is a mounted share that will not allow my user to execute anything,
whereas read/write and delete work fine.

When I mount an NFS share on Linux (Debian jessie) the mount is set up
in /etc/fstab like so:

# [HP 140814_122407  NFS MOUNTS 
  # 2x.local.lan:/rpub/dv_pub /nfs/rpub-dv-pub  nfs4   rw 0 0
  2x.local.lan:/projects/dv /nfs/projects/dv  nfs4   rw,user,exec 0 0
# ]

You can see the two versions I've tried on different shares.

I've used both of the syntaxes shown above on the current uncommented
share, with the same result as stated above... no execution.

I can read/write/delete but cannot execute a script.

a simple perl script:

cat p.pl:
  #!/usr/local/bin/perl
  
  use strict;
  use warnings;
  
  print "hello world\n";

set with chmod 755

And yes... the perl binary referenced is in place and working.  

Gets this error output:

./p.pl
bash: ./p.pl: Permission denied

And the permissions + owner:group look weird.

OK, where to from here?


___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


[OpenIndiana-discuss] NFS fix in OI ?

2014-06-04 Thread Udo Grabowski (IMK)

Hi,

does anyone know if commit aea676fd5203d93bed37515d4f405d6a31c21509
Feb 1, 2013,
https://github.com/illumos/illumos-gate/commit/aea676fd5203d93bed37515d4f405d6a31c21509

is in /dev oi_151a9 ?

How can one find out which fixes have been included in an OI /dev
version?
--
Dr.Udo GrabowskiInst.f.Meteorology a.Climate Research IMK-ASF-SAT
http://www.imk-asf.kit.edu/english/sat.php
KIT - Karlsruhe Institute of Technologyhttp://www.kit.edu
Postfach 3640,76021 Karlsruhe,Germany  T:(+49)721 608-26026 F:-926026

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS fix in OI ?

2014-06-04 Thread Alexander Pyhalov

On 06/04/2014 16:27, Udo Grabowski (IMK) wrote:

Hi,

does anyone know if commit aea676fd5203d93bed37515d4f405d6a31c21509
Feb 1, 2013,
https://github.com/illumos/illumos-gate/commit/aea676fd5203d93bed37515d4f405d6a31c21509


is in /dev oi_151a9 ?

How can one find out which fixes have been included in an OI /dev
version?


Hi, you can look at 
https://hg.openindiana.org/sustaining/oi_151a/illumos-gate/


--
Best regards,
Alexander Pyhalov,
system administrator of Computer Center of Southern Federal University

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS fix in OI ?

2014-06-04 Thread Udo Grabowski (IMK)

On 04/06/2014 14:34, Alexander Pyhalov wrote:

On 06/04/2014 16:27, Udo Grabowski (IMK) wrote:

Hi,

does anyone know if commit aea676fd5203d93bed37515d4f405d6a31c21509
Feb 1, 2013,
https://github.com/illumos/illumos-gate/commit/aea676fd5203d93bed37515d4f405d6a31c21509


is in /dev oi_151a9 ?

How can one find out which fixes have been included in an OI /dev
version?


Hi, you can look at
https://hg.openindiana.org/sustaining/oi_151a/illumos-gate/


Yep, thanks, that's the page I was looking for!
So the fix is indeed included.
--
Dr.Udo GrabowskiInst.f.Meteorology a.Climate Research IMK-ASF-SAT
http://www.imk-asf.kit.edu/english/sat.php
KIT - Karlsruhe Institute of Technologyhttp://www.kit.edu
Postfach 3640,76021 Karlsruhe,Germany  T:(+49)721 608-26026 F:-926026


___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


[OpenIndiana-discuss] NFS server parameters refresh

2014-02-15 Thread Alberto Picón Couselo
Hello everybody.

We would like to change several NFS server parameters in a production
server using the sharectl tool:

sharectl set -p servers=1024 nfs

Is it necessary to restart the NFS server using the svcadm tool to refresh
these values, or are they loaded in real time?

Best regards and thank you for your support.



-- 
Alberto Picón Couselo
Director General
Teléfono: 914990049 - Fax: 914990358 - Móvil: 615227755
email: albertopi...@picon-networks.com
web: www.picon-networks.com
___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS server parameters refresh

2014-02-15 Thread Alberto Picón Couselo
Thank you very much for your help, Marcel.

Do you think that executing sharectl set -p servers=1024 nfs will
disconnect already-connected NFS clients or produce any NFS service
disruption?

El 15/02/2014 17:53, Marcel Telka escribió:

On Sat, Feb 15, 2014 at 03:48:53PM +0100, Alberto Picón Couselo wrote:

We would like to change several NFS server parameters in a production
server using the sharectl tool:

sharectl set -p servers=1024 nfs

Is it necessary to restart the NFS server using the svcadm tool to refresh
these values, or are they loaded in real time?

No. sharectl will restart all required services for you automatically.
___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS server parameters refresh

2014-02-15 Thread Alberto Picón Couselo
However, I have read the following information from an OpenSolaris book:

http://books.google.es/books?id=y8qaxiZNvqAC&pg=PT341&lpg=PT341&ots=7y3KM-T-Hp&focus=viewport&dq=sharectl+servers+nfs&hl=es&output=html_text

where it states that, for NFS version selection:

sharectl set -p version_max=3 nfs

it is required to restart the service using svcadm restart nfs/server.

Do you think we will achieve the NFS server thread modification (sharectl
set -p servers=1024 nfs) without restarting the server?

Thank you very much for your help once again.

El 15/02/2014 17:53, Marcel Telka escribió:

On Sat, Feb 15, 2014 at 03:48:53PM +0100, Alberto Picón Couselo wrote:

We would like to change several NFS server parameters in a production
server using sharectl tool:

sharectl set -p servers=1024 nfs

Is it necessary to restart the NFS server using svcadm tool to refresh
these values por área they loaded on realtime?

No. sharectl will restart all required services for you automatically.
___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS server parameters refresh

2014-02-15 Thread Marcel Telka
On Sat, Feb 15, 2014 at 06:55:21PM +0100, Alberto Picón Couselo wrote:
 Do you think that executing sharectl set -p servers=1024 nfs will
 disconnect already-connected NFS clients or produce any NFS service
 disruption?

Yes, the clients will be disconnected (the TCP connections will be closed), but
that shouldn't cause any issues for the clients, because they should reconnect
automatically.

-- 
+---+
| Marcel Telka   e-mail:   mar...@telka.sk  |
|homepage: http://telka.sk/ |
|jabber:   mar...@jabber.sk |
+---+

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS server parameters refresh

2014-02-15 Thread Marcel Telka
On Sat, Feb 15, 2014 at 07:04:55PM +0100, Alberto Picón Couselo wrote:
 However, I have read the following information from an OpenSolaris book:
 
 http://books.google.es/books?id=y8qaxiZNvqAC&pg=PT341&lpg=PT341&ots=7y3KM-T-Hp&focus=viewport&dq=sharectl+servers+nfs&hl=es&output=html_text
 
 where it states that, for NFS version selection:
 
 sharectl set -p version_max=3 nfs
 
 it is required to restart the service using svcadm restart nfs/server.

That information is either obsolete or plain wrong. For example there is no
version_max property for nfs.

 
 Do you think we will achieve the NFS server thread modification (sharectl
 set -p servers=1024 nfs) without restarting the server?

Yes, I said so. If not, then it is a bug.
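
A quick sanity check after the change, as a sketch:

  sharectl get -p servers nfs      # confirm the new value took effect
  svcs -p svc:/network/nfs/server  # the service should remain online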

-- 
+---+
| Marcel Telka   e-mail:   mar...@telka.sk  |
|homepage: http://telka.sk/ |
|jabber:   mar...@jabber.sk |
+---+

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS

2014-01-31 Thread Ryan John
Being reliant on NFS myself, I decided to test this. I just updated one test
machine from OI_a8 to OI_a9, and another from OI_a7 to OI_a9.
Both machines give the same results, i.e. NFSv3 works okay.
My clients are Red Hat EL6.

OpenIndiana (powered by illumos)    SunOS 5.11    oi_151a9    November 2013
root@ openindiana:~# zfs create dataPool/nfstest
root@ openindiana :~# zfs set sharenfs=on dataPool/nfstest
root@ openindiana:~# sharectl set -p server_versmax=3 nfs

On RH client:
# mount -v -t nfs openindiana:/export/nfstest /mnt/tmp1
mount.nfs: timeout set for Fri Jan 31 07:56:15 2014
mount.nfs: trying text-based options
'vers=4,addr=192.168.122.21,clientaddr=192.168.27.27'
mount.nfs: mount(2): Protocol not supported
mount.nfs: trying text-based options 'addr=192.168.122.21'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 192.168.122.21 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: trying 192.168.122.21 prog 100005 vers 3 prot UDP port 49932

]# mount
..
openindiana:/export/nfstest on /mnt/tmp1 type nfs (rw,addr=192.168.122.21)

Regards
John

 -Original Message-
 From: Edward Ned Harvey (openindiana)
 [mailto:openindi...@nedharvey.com]
 Sent: Thursday, 30 January 2014 11:55 PM
 To: Discussion list for OpenIndiana
 Subject: Re: [OpenIndiana-discuss] NFS
 
  From: Edward Ned Harvey (openindiana)
 
 It *appears* that NFSv4 is fine in both 151a7 and 151a9.
  It *appears* that NFSv3 is broken in 151a9, which is unfortunate, since v3
  is necessary to support the ESXi client and the Ubuntu 10.04 client.
 
___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS

2014-01-31 Thread Edward Ned Harvey (openindiana)
 From: Ryan John [mailto:john.r...@bsse.ethz.ch]
 
 Being reliant on NFS myself, I decided to test this. I just updated one test
 machine from OI_a8 to OI_a9, and another from OI_a7 to OI_a9.
 Both machines give the same results, i.e. NFSv3 works okay.
 My clients are Red Hat EL6.

I wonder what version of nfs ships with rhel/centos6?  If you don't post back, 
I'll look it up later today and post.  It seems also, I'll have to start 
questioning if there's simply something wrong with my *individual* system, 
rather than the distro.  In other words, verify checksum of installation media, 
repeat on another system...

Also, I wonder if there's a difference between your system and mine, resulting 
from the fact you did an upgrade, whereas I installed fresh.  

Are you using OI desktop, or OI server?   I'm using OI server 151a9...   And 
I'm using OI desktop 151a7.


 OpenIndiana (powered by illumos)    SunOS 5.11    oi_151a9    November 2013
 root@ openindiana:~# zfs create dataPool/nfstest
 root@ openindiana :~# zfs set sharenfs=on dataPool/nfstest
 root@ openindiana:~# sharectl set -p server_versmax=3 nfs

Did you remember to restart nfs server after making that change?
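
That would be something like the following (commands as used elsewhere in
this thread):

  svcadm restart nfs/server
  svcs nfs/server   # verify it comes back online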

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS

2014-01-31 Thread John Ryan

On 31/01/2014 3:25 PM, Edward Ned Harvey (openindiana) wrote:

From: Ryan John [mailto:john.r...@bsse.ethz.ch]

Being reliant on NFS myself, I decided to test this. I just updated one test
machine from OI_a8 to OI_a9, and another from OI_a7 to OI_a9.
Both machines give the same results, i.e. NFSv3 works okay.
My clients are Red Hat EL6.

I wonder what version of nfs ships with rhel/centos6?  If you don't post back, 
I'll look it up later today and post.  It seems also, I'll have to start 
questioning if there's simply something wrong with my *individual* system, 
rather than the distro.  In other words, verify checksum of installation media, 
repeat on another system...

# yum info nfs-utils
.
Name: nfs-utils
Arch: x86_64
Epoch   : 1
Version : 1.2.3
Release : 39.el6
.


Also, I wonder if there's a difference between your system and mine, resulting 
from the fact you did an upgrade, whereas I installed fresh.

Are you using OI desktop, or OI server?   I'm using OI server 151a9...   And 
I'm using OI desktop 151a7.

I'm also using OI server




OpenIndiana (powered by illumos)    SunOS 5.11    oi_151a9    November 2013
root@ openindiana:~# zfs create dataPool/nfstest
root@ openindiana :~# zfs set sharenfs=on dataPool/nfstest
root@ openindiana:~# sharectl set -p server_versmax=3 nfs

Did you remember to restart nfs server after making that change?

No need to, but yes I did.

However, I also endorse Marcel Telka's comment:
"The simplest way how to start the real root causing of your issue is to
run snoop or tcpdump at either/both NFS client and/or NFS server. All
other approaches might be funny, sometimes successful, but usually not
very productive."
Someone else suggested doing `rpcinfo -p {server}` from your client.
Did you try that?


___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss





Re: [OpenIndiana-discuss] NFS

2014-01-30 Thread Jonathan Adams
If a share was mounted on the client and you change the underlying NFS
version on the server, then you will need to get the client to unmount all
shares from the server before it can see the version 3 shares ... is this
the case in your instance?
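
On a Linux client that unmount is roughly (a sketch):

  umount -a -t nfs,nfs4   # drop every NFS mount
  # then remount, or restart the automounter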

Are your shares auto-mounted? If so, it depends on which system you're using,
but it might be quicker to reboot ... :(


On 30 January 2014 07:15, Marcel Telka mar...@telka.sk wrote:

 [Marcel's reply to Edward's original message, quoted here in full; both
 appear as separate messages below.]

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS

2014-01-30 Thread Edward Ned Harvey (openindiana)
 From: Jonathan Adams [mailto:t12nsloo...@gmail.com]
 Sent: Thursday, January 30, 2014 4:06 AM
 
 If a share was mounted on the client and you change the underlying NFS
 version on the server then you will need to get the client to unmount all
 shares from the server before they can see the version 3 shares ... is this
 the case in your instance?
 
 Are your shares auto-mounted? if so it depends on which system you're
 using
 but it might be quicker to reboot ... :(

For now, I'm just trying to make it work.  Later I can make it automount or 
whatever, so at present, it goes like this:

(On server)
sudo zfs set sharenfs=rw=@10.10.10.14/32,root=@10.10.10.14/32 storage/ad1

(on ubuntu 10.04 client)
root@orion:~# mount -v -t nfs storage1:/storage/ad1 /mnt
mount.nfs: timeout set for Thu Jan 30 09:01:24 2014
mount.nfs: text-based options: 'addr=10.10.10.13'
mount.nfs: mount(2): Input/output error
mount.nfs: mount system call failed

Since that didn't work, I try with the ESXi 5.5 client, and repeat while
eliminating the @ symbols, eliminating the /32, and putting them back in
there...  I set the versions as follows:
sudo sharectl set -p server_versmax=3 nfs
sudo svcadm refresh  svc:/network/nfs/server:default

Retry all the variations of set sharenfs and repeat trying to mount...  Still 
nothing works...

I wondered if maybe I have a firewall enabled on the server.  So I used nc and
telnet from the client to confirm the ports are open (111 and 2049).  No
problem.

The only thing that *does* work:  When I have the 151a7 box mount the 151a9 box 
using nfs v4, then it works.  But if I reduce the server and client both to v3, 
then even THEY fail to mount too.

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS

2014-01-30 Thread Marcel Telka
On Thu, Jan 30, 2014 at 02:57:14PM +, Edward Ned Harvey (openindiana) wrote:
 I wondered if maybe I have firewall enabled on the server.  So I used nc
 and telnet from the client to confirm the port is open.  (111 and 2049).
 No problem.

In addition to ports 111 and 2049 you need a port for mountd too (when we are
talking about NFSv3).
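
For example (server address as used in this thread):

  rpcinfo -p 10.10.10.13 | egrep 'mountd|nfs'

shows which ports mountd and nfsd have registered with rpcbind.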

The simplest way how to start the real root causing of your issue is to run
snoop or tcpdump at either/both NFS client and/or NFS server. All other
approaches might be funny, sometimes successful, but usually not very
productive.


Regards.

-- 
+---+
| Marcel Telka   e-mail:   mar...@telka.sk  |
|homepage: http://telka.sk/ |
|jabber:   mar...@jabber.sk |
+---+

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS

2014-01-30 Thread Jonathan Adams
to test if it is a permissions problem, can you just set sharenfs=on? and
then try to access from the other machines?


On 30 January 2014 14:57, Edward Ned Harvey (openindiana) 
openindi...@nedharvey.com wrote:

  [Edward's message, quoted here in full; it appears as its own message above.]

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS

2014-01-30 Thread Edward Ned Harvey (openindiana)
 From: Jonathan Adams [mailto:t12nsloo...@gmail.com]
 
 to test if it is a permissions problem, can you just set sharenfs=on? and
 then try to access from the other machines?

Thanks for the help everyone.  I decided to take it a step further than that:

On both the 151a7 (homer) and 151a9 (marge) machines:
sudo sharectl set -p client_versmax=4 nfs
sudo sharectl set -p server_versmax=4 nfs
sudo svcadm restart nfs/server

On 151a9 (marge)
sudo zfs create storage/NfsExport
sudo zfs set sharenfs=on storage/NfsExport

On 151a7 (homer)
sudo zfs create storagepool/NfsExport
sudo zfs set sharenfs=on storagepool/NfsExport

Now, from ubuntu 10.04, try mounting them both:
(as root)
mkdir /mnt/151a7
mkdir /mnt/151a9

mount -v -t nfs marge:/storage/NfsExport /mnt/151a9
mount.nfs: timeout set for Thu Jan 30 16:00:50 2014
mount.nfs: text-based options: 'addr=10.10.10.13'
mount.nfs: mount(2): Input/output error
mount.nfs: mount system call failed

mount -v -t nfs homer:/storagepool/NfsExport /mnt/151a7
mount.nfs: timeout set for Thu Jan 30 16:01:29 2014
mount.nfs: text-based options: 'addr=10.10.10.242'
homer:/storagepool/NfsExport on /mnt/151a7 type nfs (rw)

(Notice, it worked for 151a7, and failed for 151a9)

Now, I'll have the two OI machines mount each other - which should pretty well 
answer any questions about firewall and RPC ports, etc.

(151a7 machine mounting 151a9)  (Success)
sudo mount -F nfs marge:/storage/NfsExport /mnt
eharvey@homer:~$ df -h /mnt
FilesystemSize  Used Avail Use% Mounted on
marge:/storage/NfsExport
   11T   31K   11T   1% /mnt

(151a9 machine mounting 151a7)  (Success)
sudo mount -F nfs homer:/storagepool/NfsExport /mnt
eharvey@marge:~$ df -h /mnt
FilesystemSize  Used Avail Use% Mounted on
homer:/storagepool/NfsExport
  4.3T   44K  4.3T   1% /mnt

Now dismount them all, on all machines.

Reduce the versions to 3, (on both 151a7 and 151a9)
sudo sharectl set -p client_versmax=3 nfs
sudo sharectl set -p server_versmax=3 nfs
sudo svcadm restart nfs/server

And try again.

Attempt to mount again from ubuntu client.  Once again, 151a7 works and 151a9 
fails.

Attempt to mutually mount 151a7 to 151a9 and vice-versa...

151a7 cannot mount 151a9
sudo mount -F nfs marge:/storage/NfsExport /mnt
nfs mount: marge: : RPC: Rpcbind failure - RPC: Authentication error
nfs mount: retrying: /mnt
nfs mount: marge: : RPC: Rpcbind failure - RPC: Authentication error

151a9 mounts 151a7 just fine
sudo mount -F nfs homer:/storagepool/NfsExport /mnt
eharvey@marge:~$ df -h /mnt
FilesystemSize  Used Avail Use% Mounted on
homer:/storagepool/NfsExport
  4.3T   44K  4.3T   1% /mnt

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS

2014-01-30 Thread Edward Ned Harvey (openindiana)
 From: Edward Ned Harvey (openindiana)

It *appears* that NFSv4 is fine in both 151a7 and 151a9.
It *appears* that NFSv3 is broken in 151a9, which is unfortunate, since v3 is
necessary to support the ESXi client and the Ubuntu 10.04 client.

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS

2014-01-30 Thread Schweiss, Chip
I ran into a similar issue on OmniOS 151008j recently.

When I ran 'rpcinfo -p {nfs_server}' it returned access denied.

Restarting the rpc service fixed it:

svcadm restart svc:/network/rpc/bind:default

I don't know what put the server in that state, but it's happened only once
on the heavily used NFS server.

-Chip


On Thu, Jan 30, 2014 at 4:54 PM, Edward Ned Harvey (openindiana) 
openindi...@nedharvey.com wrote:

  From: Edward Ned Harvey (openindiana)

 It *appears* that NFSv4 is fine in both 151a7 and 151a9.
 It *appears* that NFSv3 is broken in 151a9, which is unfortunate, since v3
 is necessary to support the ESXi client and the Ubuntu 10.04 client.


___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


[OpenIndiana-discuss] NFS

2014-01-29 Thread Edward Ned Harvey (openindiana)
At home, I have oi_151a7 and ESXi 5.1.
I wrote down precisely how to share NFS, and mount from the ESXi machine.
sudo zfs set sharenfs=rw=@192.168.5.5/32,root=@192.168.5.5/32 
mypool/somefilesystem

I recall it was a pain to get the syntax correct, especially thanks to some
inaccuracy in the man page.  But I got it.

Now at work I have oi_151a9 and ESXi 5.5.  I also have oi_151a7 and some Ubuntu
12.04 servers and CentOS 5 & 6 servers.

On the oi_151a9 machine, I do the precise above sharenfs command.  Then the 
oi_151a7 machine can mount, but the CentOS, Ubuntu, and ESXi clients all fail
to mount.  So I think, ah-hah!  That sounds like an NFS v3 conflict with v4!

So then I do this:
sudo sharectl set -p client_versmax=3 nfs
sudo sharectl set -p server_versmax=3 nfs
sudo svcadm refresh  svc:/network/nfs/server:default

Now, *none* of the clients can mount.  So I put it back to 4, and the 
openindiana client can mount.

Is NFS v3 simply broken in the latest OI?

When I give the -v option to mount, I get nothing useful.
There is also nothing useful in the nfsd logs.

The only thing I have left to test...  I could try sharing NFS from the *old* 
oi server, and see if the new ESXi is able to mount it.  If ESXi is able to 
mount the old one, and not able to mount the new one, that would be a pretty 
solid indicator v3 is broken in 151_a9.
___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS

2014-01-29 Thread Marcel Telka
On Thu, Jan 30, 2014 at 03:01:56AM +, Edward Ned Harvey (openindiana) wrote:
 At home, I have oi_151a7 and ESXi 5.1.
 I wrote down precisely how to share NFS, and mount from the ESXi machine.
 sudo zfs set sharenfs=rw=@192.168.5.5/32,root=@192.168.5.5/32 
 mypool/somefilesystem
 
 I recall it was a pain to get the syntax correct, especially thanks to some
 inaccuracy in the man page.  But I got it.

Please file a bug report. Thanks.

 
 Now at work I have oi_151a9 and ESXi 5.5.  I also have oi_151a7 and some
 Ubuntu 12.04 servers and CentOS 5 & 6 servers.
 
 On the oi_151a9 machine, I do the precise above sharenfs command.  Then the
 oi_151a7 machine can mount, but the CentOS, Ubuntu, and ESXi clients all
 fail to mount.  So I think, ah-hah!  That sounds like an NFS v3 conflict
 with v4!
 
 So then I do this:
 sudo sharectl set -p client_versmax=3 nfs

The above command is useless at the NFS server.

 sudo sharectl set -p server_versmax=3 nfs
 sudo svcadm refresh  svc:/network/nfs/server:default
 
 Now, *none* of the clients can mount.  So I put it back to 4, and the
 openindiana client can mount.
 
 Is NFS v3 simply broken in the latest OI?

I don't think so. There is no reason for that. I usually use latest (or almost
latest) code from illumos-gate (so hipster) and I've no problem with NFS.

 
 When I give the -v option to mount, I get nothing useful.
 There is also nothing useful in the nfsd logs.
 
 The only thing I have left to test...  I could try sharing NFS from the *old*
 oi server, and see if the new ESXi is able to mount it.  If ESXi is able to
 mount the old one, and not able to mount the new one, that would be a pretty
 solid indicator v3 is broken in 151_a9.

You should look at the communication between the NFS client and the NFS server
when the mount fails to see what exactly is the problem.

-- 
+---+
| Marcel Telka   e-mail:   mar...@telka.sk  |
|homepage: http://telka.sk/ |
|jabber:   mar...@jabber.sk |
+---+

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS performance tuning

2013-08-30 Thread James Carlson
On 08/29/13 14:58, Francis Swasey wrote:
 Hi,
 
 I'm trying to set up an OI box (using 151a8) and I'm using nuttcp 6.1.2 to
 make sure I'm getting the most I can out of the Emulex One Connect (10GbE) 
 cards that are in the box they gave me to use.  Unfortunately, I'm seeing 
 asymmetrical numbers.
 
 The clients are all RHEL6 boxes (and that can't be changed), which happily 
 talk 9.8Gb between themselves on the exact same hardware I am having issues 
 with OI on.  However, when I test between the Linux boxen and the OI box - I 
 always get 9.8Gb from the OI box to the RHEL6 and around 5Gb from the RHEL6 
 box to the OI box.
 
 I've been googling for tunables and have found 
 https://blogs.oracle.com/dlutz/entry/maximizing_nfs_client_performance_on .   
 However, a lot of what it says to tune doesn't exist.  Is there a better 
 document for me to read?
 

That sounds like the standard NFS sync behavior.  See this discussion
for more details:

http://comments.gmane.org/gmane.os.solaris.opensolaris.zfs/47934

Basically, it's fast / cheap / robust -- pick any two.
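
For completeness (standard ZFS knobs, not specific to this thread; pool,
dataset and device names are examples): the usual mitigations are a fast
separate log device, or knowingly trading robustness per dataset:

  zpool add tank log c4t1d0       # add a fast SLOG device
  zfs set sync=disabled tank/fs   # faster, but risks losing the last few
                                  # seconds of writes on power loss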

-- 
James Carlson 42.703N 71.076W carls...@workingcode.com

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


[OpenIndiana-discuss] NFS performance tuning

2013-08-29 Thread Francis Swasey
Hi,

I've been trying to set up an OI box (using 151a8) and I'm using nuttcp 6.1.2 to
make sure I'm getting the most I can out of the Emulex One Connect (10GbE) 
cards that are in the box they gave me to use.  Unfortunately, I'm seeing 
asymmetrical numbers.

The clients are all RHEL6 boxes (and that can't be changed), which happily talk 
9.8Gb between themselves on the exact same hardware I am having issues with OI 
on.  However, when I test between the Linux boxen and the OI box - I always get 
9.8Gb from the OI box to the RHEL6 and around 5Gb from the RHEL6 box to the OI 
box.

I've been googling for tunables and have found 
https://blogs.oracle.com/dlutz/entry/maximizing_nfs_client_performance_on .   
However, a lot of what it says to tune doesn't exist.  Is there a better 
document for me to read?

Thanks,
--
Frank Swasey| http://www.uvm.edu/~fcs
Sr Systems Administrator| Always remember: You are UNIQUE,
University of Vermont   |just like everyone else.
 I am not young enough to know everything. - Oscar Wilde (1854-1900)



___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS exported dataset crashes the system

2013-04-25 Thread Peter Wood
 If we create local users in /etc/passwd and /etc/group, can you please
 tell us how to refresh the NFSv4 server to update the user mapping table in
 OpenIndiana? How do you handle this issue? If we restart the NFS service in
 OpenIndiana, using /etc/init.d/nfs restart, will NFSv4 clients reconnect or
 will they enter an unstable state?


If you create the user with the same UID on your Debian boxes and on the OI
server there should be no need to do anything else. The mapping is handled
by idmapd (Linux) and svc:/network/nfs/mapid (OpenIndiana). Just make sure
they are configured to use the same mapid domain.
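A minimal sketch of checking that, with example.com standing in for the real
domain (this assumes a build where sharectl exposes the nfsmapid_domain
property; older releases set NFSMAPID_DOMAIN in /etc/default/nfs instead):

  # on the OI server
  sharectl get -p nfsmapid_domain nfs
  sharectl set -p nfsmapid_domain=example.com nfs

  # on the Debian clients, in /etc/idmapd.conf, then restart rpc.idmapd
  [General]
  Domain = example.com

If the domains differ, NFSv4 identities map to nobody/nogroup even when the
UIDs match.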
___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS exported dataset crashes the system

2013-04-24 Thread Peter Wood
The first thing I'll do is go into the BIOS and disable CPU C states and
all power-saving features. If that doesn't help, then try NFSv4.

The reason I disable CPU C states is previous experience with
OpenSolaris on Dell boxes about 2yr ago: it would crash the system in a
similar fashion. There are multiple reports on the Internet about this, and
that solution definitely worked for us. To be on the safe side I do the same
on the Supermicro boxes.

We switched to NFSv4 about two days ago and so far no crash. I'll be more
confident that this is the fix for us after running for at least 5 days
with no crash.

I wish I had the resources to do more tests. Unfortunately all I can tell
right now is that crashes are happening on SuperMicro hardware but not
Dell, and the trigger is exporting one particular directory via NFSv3. I
don't think it is the high IOPS. More likely it is related to the way the
directory is used. What we do is move files and directories around and
re-point symlinks while everything is being accessed from the clients, and
we do this every 15min.
Something like: mv nfsdir/targetdir nfsdir/targetdir.old; mv
nfsdir/targetdir.new nfsdir/targetdir.

To me it looks more like a locking issue than a high-IOPS issue.


On Tue, Apr 23, 2013 at 11:26 PM, Alberto Picón Couselo
alpic...@gmail.com wrote:

 Hi.

 We have almost the same hardware as yours and we had the same issue. We
 have exported a ZFS pool to three Debian 6.0 Xen VMs, mounted using NFSv3.
 When one of these boxes launches a high-I/O PHP process to create a backup,
 it creates, copies and deletes a large amount of files. The box just
 crashed the same way as yours, during the deletion process: no ping, no
 log, no response at all. We had to do a cold restart, unplugging the system
 power cords...

 We have changed to NFSv4 hoping to fix this issue. Can you please comment
 on your results regarding these issues?

 Any help would be kindly appreciated.

 Best Regards,

   I've asked the ZFS discussion list for help on this but now I have more
  information and it looks like a bug in the drivers or something.

  I have a number of Dell PE R710 and PE 2950 servers running OpenSolaris, OI
  151a and OI 151a.7. All these systems are used as storage servers, clean OS
  install, no extra services running. The systems are NFS exporting a lot of
  ZFS datasets that are mounted on about ten CentOS-5.9 systems.

  The above setup has been working for 2+ years with no problem.

  Recently we bought two Supermicro systems:
   Supermicro X9DRH-iF
   Xeon E5-2620 @ 2.0 GHz 6-Core
   128GB RAM
   LSI SAS9211-8i HBA
   32x 3TB Hitachi HUS723030ALS640, SAS, 7.2K

  I installed OI151.a.7 on them and started migrating data from the old
 Dell
  servers (zfs send/receive).

  Things have been working great for about two months until I migrated one
  particular directory to one of the new Supermicro systems and after about
  two days the system crashed. No network connectivity, black console, no
  response to keyboard keys, no activity lights (no error lights either) on
  the chassis. The only way out is to hit the reset button. Nothing in the
  logs as far as I can tell. Log entries just stop when the system crashes.

  In the following two months I did a lot of testing and a lot of trips to
  the colo in the middle of the night and the observation is that
 regardless
  of the OS everything works on the Dell servers. As soon as I move that
  directory to any of the Supermicro servers with OI151.a.7 it will crash
  them within 2 hours up to 5 days.

  The Supermicro servers can be idle, exporting nothing, or can be
 exporting
  15+ other directories with high IOPS and working for months with no
  problems but as soon as I have them export that directory they'll crash
 in
  5 days the most.

  There is only one difference between that directory and all other exported
  directories. One of the client systems that mounts it and writes to it is
  an old Debian 5.0 system. No idea why that would crash a Supermicro system
  but not a Dell system.

  We worked directly with LSI developers and upgraded the firmware to some
  unpublished, prerelease development version to no avail. We disabled all
  power saving features and CPU C states in the BIOS and nothing changed.

  Any idea?

 I had a similar kind of problem where a VirtualBox FreeBSD 9.1 VM could hang
 the server.
 It had /usr/src and /usr/obj NFS mounted from the OI a7 box it was running
 on.
 They are separate NFS-shared datasets in one of my 3 pools.

 When I ran a make buildworld in that VM it consistently locked up the OI
 host,
 no console access,
 no network access ( not even ping ).
 As a test I switched to NFSv4 instead of NFSv3 and I have not seen a hang
 since.


___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS exported dataset crashes the system

2013-04-24 Thread Alberto Picón Couselo
I can confirm that we have disabled all power-saving features on the 
boxes. However, I can't be sure that CPU C states are totally disabled.


Anyway, we have changed to NFSv4 to test the system's stability. The PHP 
process reads a folder with a huge number of hashed files and folders 
and creates a tarball, deleting the copy afterwards. As you comment, we 
think it could be due to some kind of locking/high-I/O NFSv3-related issue...


If we create local users in /etc/passwd and /etc/group, can you please 
tell us how to refresh the NFSv4 server to update the user mapping table in 
OpenIndiana? How do you handle this issue? If we restart the NFS service 
in OpenIndiana, using /etc/init.d/nfs restart, will NFSv4 clients 
reconnect or will they enter an unstable state?
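Note that OpenIndiana manages these daemons through SMF rather than
/etc/init.d scripts, so the rough equivalent -- a sketch -- would be:

  svcadm restart svc:/network/nfs/mapid:default    # re-reads the mapid domain
  svcadm restart svc:/network/nfs/server:default   # restarts the NFS server

Restarting the v4 server is normally survivable for clients: they reclaim
their state during the server's grace period, though in-flight I/O stalls
until that completes.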


Thank you very much in advance,

On 24/04/2013 20:11, Peter Wood wrote:
The first thing I'll do is go into the BIOS and disable CPU C states and 
all power-saving features. If that doesn't help, then try NFSv4.


The reason I disable CPU C states is previous experience 
with OpenSolaris on Dell boxes about 2yr ago: it would crash the system 
in a similar fashion. There are multiple reports on the Internet about 
this, and that solution definitely worked for us. To be on the safe side 
I do the same on the Supermicro boxes.


We switched to NFSv4 about two days ago and so far no crash. I'll be 
more confident that this is the fix for us after running for at least 
5 days with no crash.


I wish I had the resources to do more tests. Unfortunately all I can 
tell right now is that crashes are happening on SuperMicro hardware 
but not Dell, and the trigger is exporting one particular directory 
via NFSv3. I don't think it is the high IOPS. More likely it is 
related to the way the directory is used. What we do is move files 
and directories around and re-point symlinks while everything is being 
accessed from the clients, and we do this every 15min.
Something like: mv nfsdir/targetdir nfsdir/targetdir.old; mv 
nfsdir/targetdir.new nfsdir/targetdir.


To me it looks more like a locking issue than a high-IOPS issue.


On Tue, Apr 23, 2013 at 11:26 PM, Alberto Picón Couselo 
alpic...@gmail.com wrote:


Hi.

We have almost the same hardware as yours and we had the same
issue. We have exported a ZFS pool to three Debian 6.0 Xen VMs,
mounted using NFSv3. When one of these boxes launches a high-I/O PHP
process to create a backup, it creates, copies and deletes a large
amount of files. The box just crashed the same way as yours,
during the deletion process: no ping, no log, no response at all.
We had to do a cold restart, unplugging the system power cords...

We have changed to NFSv4 hoping to fix this issue. Can you please
comment on your results regarding these issues?

Any help would be kindly appreciated.

Best Regards,

 I've asked the ZFS discussion list for help on this but now I
have more
 information and it looks like a bug in the drivers or something.

 I have a number of Dell PE R710 and PE 2950 servers running
OpenSolaris, OI
 151a and OI 151a.7. All these systems are used as storage
servers, clean OS
 install, no extra services running. The systems are NFS
exporting a lot of
 ZFS datasets that are mounted on about ten CentOS-5.9 systems.

 The above setup has been working for 2+ years with no problem.

 Recently we bought two Supermicro systems:
  Supermicro X9DRH-iF
  Xeon E5-2620 @ 2.0 GHz 6-Core
  128GB RAM
  LSI SAS9211-8i HBA
  32x 3TB Hitachi HUS723030ALS640, SAS, 7.2K

 I installed OI151.a.7 on them and started migrating data from
the old Dell
 servers (zfs send/receive).

 Things have been working great for about two months until I
migrated one
 particular directory to one of the new Supermicro systems and
after about
 two days the system crashed. No network connectivity, black
console, no
 response to keyboard keys, no activity lights (no error
lights either) on
 the chassis. The only way out is to hit the reset button.
Nothing in the
 logs as far as I can tell. Log entries just stop when the
system crashes.

 In the following two months I did a lot of testing and a lot
of trips to
 the colo in the middle of the night and the observation is
that regardless
 of the OS everything works on the Dell servers. As soon as I
move that
 directory to any of the Supermicro servers with OI151.a.7 it
will crash
 them within 2 hours up to 5 days.

 The Supermicro servers can be idle, exporting nothing, or can
be exporting
 15+ other directories with high IOPS and working for months
with no
 problems but as soon as I 

Re: [OpenIndiana-discuss] NFS exported dataset crashes the system

2013-04-11 Thread Paul van der Zwan

On 11 Apr 2013, at 0:29 , Peter Wood peterwood...@gmail.com wrote:

 On Wed, Apr 10, 2013 at 7:35 AM, Paul van der Zwan 
  pa...@vanderzwan.org wrote:
 
 
 On 9 Apr 2013, at 3:13 , Peter Wood peterwood...@gmail.com wrote:
 
 I've asked the ZFS discussion list for help on this but now I have more
 information and it looks like a bug in the drivers or something.
 
  I have a number of Dell PE R710 and PE 2950 servers running OpenSolaris, OI
 151a and OI 151a.7. All these systems are used as storage servers, clean
 OS
 install, no extra services running. The systems are NFS exporting a lot
 of
 ZFS datasets that are mounted on about ten CentOS-5.9 systems.
 
 The above setup has been working for 2+ years with no problem.
 
 Recently we bought two Supermicro systems:
 Supermicro X9DRH-iF
 Xeon E5-2620 @ 2.0 GHz 6-Core
 128GB RAM
 LSI SAS9211-8i HBA
 32x 3TB Hitachi HUS723030ALS640, SAS, 7.2K
 
 I installed OI151.a.7 on them and started migrating data from the old
 Dell
 servers (zfs send/receive).
 
 Things have been working great for about two months until I migrated one
 particular directory to one of the new Supermicro systems and after about
 two days the system crashed. No network connectivity, black console, no
 response to keyboard keys, no activity lights (no error lights either) on
 the chassis. The only way out is to hit the reset button. Nothing in the
 logs as far as I can tell. Log entries just stop when the system crashes.
 
 In the following two months I did a lot of testing and a lot of trips to
 the colo in the middle of the night and the observation is that
 regardless
 of the OS everything works on the Dell servers. As soon as I move that
 directory to any of the Supermicro servers with OI151.a.7 it will crash
 them within 2 hours up to 5 days.
 
 The Supermicro servers can be idle, exporting nothing, or can be
 exporting
 15+ other directories with high IOPS and working for months with no
 problems but as soon as I have them export that directory they'll crash
 in
 5 days the most.
 
  There is only one difference between that directory and all other exported
  directories. One of the client systems that mounts it and writes to it is
 an old Debian 5.0 system. No idea why that would crash a Supermicro
 system
 but not a Dell system.
 
 We worked directly with LSI developers and upgraded the firmware to some
 unpublished, prerelease development version to no avail. We disabled all
 power saving features and CPU C states in the BIOS and nothing changed.
 
 Any idea?
 
 I had a similar kind of problem where a VirtualBox Freebsd 9.1 VM could
 hang the server.
 It had /usr/src and /usr/obj NFS mounted from the OI a7 box it was running
 on.
  They are separate NFS-shared datasets in one of my 3 pools.
 
 When I ran a make buildworld in that VM it consistently locked up the OI
 host, no console access,
 no network access ( not even ping ).
 As a test I switched to NFSv4 instead of NFSv3 and I have not seen a hang
 since.
 So it looked like a heavy NFSv3 load was the issue.
 
Paul
 
 
  Makes sense. I haven't tried that.
  
  If I'm correct, ZFS on OI supports NFSv2, 3 and 4.
 
 By switching to NFSv4 you mean that on your client machine (the FreeBSD VM)
  you set up the NFS client to use the NFSv4 protocol. Do I understand this
 correctly? Or, did you do something on the OI server to accept only NFSv4
 connections?
 
 Could you please give more information.

I haven't changed the server, only the mount options on the client.
Its /etc/fstab now has:
192.168.178.24:/data/ports  /usr/ports  nfs  rw,nfsv4  -  -
192.168.178.24:/data/src    /usr/src    nfs  rw,nfsv4  -  -
192.168.178.24:/data/obj    /usr/obj    nfs  rw,nfsv4  -  -

A make buildworld does seem to take quite a bit longer than when I was using 
nfsv3, so it might just be a case of a lighter load. I have no hard data, but 
it feels like it takes twice as long.

Paul


___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS exported dataset crashes the system

2013-04-11 Thread Paul van der Zwan

On 10 Apr 2013, at 22:03 , Ian Collins i...@ianshome.com wrote:

 Paul van der Zwan wrote:
 
 When it hung the system would not respond to anything at all.
 The only way out I could find was a hard reset or power cycle.
 
 I do have the following in /etc/system:
 set snooping=1
 set pcplusmp:apic_panic_on_nmi=1
 But that did not make a difference.
 
 BTW the hang was/is reproducible; every time I ran a make buildworld inside 
 the VM it would hang.
 I have tried a few make buildworlds now that I use NFSv4 and no hangs so far.
 
 Had you tried decoupling the VM host from the NFS storage?
 

Haven't tried it yet but will try to run a VM on a remote system and see what 
happens 
when I run a make buildworld on both nfs v3 and v4.


Paul


___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS exported dataset crashes the system

2013-04-11 Thread Paul van der Zwan

On 10 Apr 2013, at 22:03 , Ian Collins i...@ianshome.com wrote:

 Paul van der Zwan wrote:
 
 When it hung the system would not respond to anything at all.
 The only way out I could find was a hard reset or power cycle.
 
 I do have the following in /etc/system:
 set snooping=1
 set pcplusmp:apic_panic_on_nmi=1
 But that did not make a difference.
 
 BTW the hang was/is reproducible; every time I ran a make buildworld inside 
 the VM it would hang.
 I have tried a few make buildworlds now that I use NFSv4 and no hangs so far.
 
 Had you tried decoupling the VM host from the NFS storage?
 

I have just tried copying one of the VMs to my iMac and ran a make buildworld 
from that.
I had /usr/src and /usr/obj mounted over nfsv3 and it completely locked up 
the server during 
the make cleandir phase, so when it was deleting a lot of files.

I had to power off and restart the server.
Will try an nfsv4 mounted attempt next.
 
Paul


___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS exported dataset crashes the system

2013-04-10 Thread Paul van der Zwan

On 9 Apr 2013, at 3:13 , Peter Wood peterwood...@gmail.com wrote:

 I've asked the ZFS discussion list for help on this but now I have more
 information and it looks like a bug in the drivers or something.
 
 I have a number of Dell PE R710 and PE 2950 servers running OpenSolaris, OI
 151a and OI 151a.7. All these systems are used as storage servers, clean OS
 install, no extra services running. The systems are NFS exporting a lot of
 ZFS datasets that are mounted on about ten CentOS-5.9 systems.
 
 The above setup has been working for 2+ years with no problem.
 
 Recently we bought two Supermicro systems:
  Supermicro X9DRH-iF
  Xeon E5-2620 @ 2.0 GHz 6-Core
  128GB RAM
  LSI SAS9211-8i HBA
  32x 3TB Hitachi HUS723030ALS640, SAS, 7.2K
 
 I installed OI151.a.7 on them and started migrating data from the old Dell
 servers (zfs send/receive).
 
 Things have been working great for about two months until I migrated one
 particular directory to one of the new Supermicro systems and after about
 two days the system crashed. No network connectivity, black console, no
 response to keyboard keys, no activity lights (no error lights either) on
 the chassis. The only way out is to hit the reset button. Nothing in the
 logs as far as I can tell. Log entries just stop when the system crashes.
 
 In the following two months I did a lot of testing and a lot of trips to
 the colo in the middle of the night and the observation is that regardless
 of the OS everything works on the Dell servers. As soon as I move that
 directory to any of the Supermicro servers with OI151.a.7 it will crash
 them within 2 hours up to 5 days.
 
 The Supermicro servers can be idle, exporting nothing, or can be exporting
 15+ other directories with high IOPS and working for months with no
 problems but as soon as I have them export that directory they'll crash in
 5 days the most.
 
 There is only one difference between that directory and all other exported
 directories. One of the client systems that mounts it and writes to it is
 an old Debian 5.0 system. No idea why that would crash a Supermicro system
 but not a Dell system.
 
 We worked directly with LSI developers and upgraded the firmware to some
 unpublished, prerelease development version to no avail. We disabled all
 power saving features and CPU C states in the BIOS and nothing changed.
 
 Any idea?

I had a similar kind of problem where a VirtualBox FreeBSD 9.1 VM could hang 
the server.
It had /usr/src and /usr/obj NFS mounted from the OI a7 box it was running on.
They are separate NFS-shared datasets in one of my 3 pools.

When I ran a make buildworld in that VM it consistently locked up the OI host, 
no console access,
no network access ( not even ping ).
As a test I switched to NFSv4 instead of NFSv3 and I have not seen a hang since.
So it looked like a heavy NFSv3 load was the issue.

Paul


___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS exported dataset crashes the system

2013-04-10 Thread Marcel Telka
On Wed, Apr 10, 2013 at 04:35:06PM +0200, Paul van der Zwan wrote:
 
 On 9 Apr 2013, at 3:13 , Peter Wood peterwood...@gmail.com wrote:
 
  I've asked the ZFS discussion list for help on this but now I have more
  information and it looks like a bug in the drivers or something.
  
  I have a number of Dell PE R710 and PE 2950 servers running OpenSolaris, OI
  151a and OI 151a.7. All these systems are used as storage servers, clean OS
  install, no extra services running. The systems are NFS exporting a lot of
  ZFS datasets that are mounted on about ten CentOS-5.9 systems.
  
  The above setup has been working for 2+ years with no problem.
  
  Recently we bought two Supermicro systems:
   Supermicro X9DRH-iF
   Xeon E5-2620 @ 2.0 GHz 6-Core
   128GB RAM
   LSI SAS9211-8i HBA
   32x 3TB Hitachi HUS723030ALS640, SAS, 7.2K
  
  I installed OI151.a.7 on them and started migrating data from the old Dell
  servers (zfs send/receive).
  
  Things have been working great for about two months until I migrated one
  particular directory to one of the new Supermicro systems and after about
  two days the system crashed. No network connectivity, black console, no
  response to keyboard keys, no activity lights (no error lights either) on
  the chassis. The only way out is to hit the reset button. Nothing in the
  logs as far as I can tell. Log entries just stop when the system crashes.
  
  In the following two months I did a lot of testing and a lot of trips to
  the colo in the middle of the night and the observation is that regardless
  of the OS everything works on the Dell servers. As soon as I move that
  directory to any of the Supermicro servers with OI151.a.7 it will crash
  them within 2 hours up to 5 days.
  
  The Supermicro servers can be idle, exporting nothing, or can be exporting
  15+ other directories with high IOPS and working for months with no
  problems but as soon as I have them export that directory they'll crash in
  5 days the most.
  
  There is only one difference between that directory and all other exported
  directories. One of the client systems that mounts it and writes to it is
  an old Debian 5.0 system. No idea why that would crash a Supermicro system
  but not a Dell system.
  
  We worked directly with LSI developers and upgraded the firmware to some
  unpublished, prerelease development version to no avail. We disabled all
  power saving features and CPU C states in the BIOS and nothing changed.
  
  Any idea?
 
 I had a similar kind of problem where a VirtualBox Freebsd 9.1 VM could hang 
 the server.
 It had /usr/src and /usr/obj NFS mounted from the OI a7 box it was running on.
  They are separate NFS-shared datasets in one of my 3 pools.
 
 When I ran a make buildworld in that VM it consistently locked up the OI 
 host, no console access,
 no network access ( not even ping ).
 As a test I switched to NFSv4 instead of NFSv3 and I have not seen a hang 
 since.
 So it looked like a heavy NFSv3 load was the issue.

Please try to get a crash dump file when the system is in a hung state.
I'd be interested in analyzing the crash dump file.


Thanks.
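A sketch of one way to capture that on a hard-hung box, assuming a BMC is
reachable (host, user and password below are placeholders):

  dumpadm      # beforehand: confirm a dump device and savecore dir are set
  # with set pcplusmp:apic_panic_on_nmi=1 in /etc/system (as mentioned in
  # this thread), an NMI from the BMC panics the machine instead of leaving
  # it wedged:
  ipmitool -I lanplus -H bmc-host -U admin -P secret chassis power diag
  # after the reboot, savecore writes the dump under /var/crash/<hostname>

That turns a hard hang into a panic plus a crash dump that can be analyzed.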

-- 
+---+
| Marcel Telka   e-mail:   mar...@telka.sk  |
|homepage: http://telka.sk/ |
|jabber:   mar...@jabber.sk |
+---+

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS exported dataset crashes the system

2013-04-10 Thread Paul van der Zwan

On 10 Apr 2013, at 16:46 , Marcel Telka mar...@telka.sk wrote:

 On Wed, Apr 10, 2013 at 04:35:06PM +0200, Paul van der Zwan wrote:
 
 On 9 Apr 2013, at 3:13 , Peter Wood peterwood...@gmail.com wrote:
 
 I've asked the ZFS discussion list for help on this but now I have more
 information and it looks like a bug in the drivers or something.
 
  I have a number of Dell PE R710 and PE 2950 servers running OpenSolaris, OI
 151a and OI 151a.7. All these systems are used as storage servers, clean OS
 install, no extra services running. The systems are NFS exporting a lot of
 ZFS datasets that are mounted on about ten CentOS-5.9 systems.
 
 The above setup has been working for 2+ years with no problem.
 
 Recently we bought two Supermicro systems:
 Supermicro X9DRH-iF
 Xeon E5-2620 @ 2.0 GHz 6-Core
 128GB RAM
 LSI SAS9211-8i HBA
 32x 3TB Hitachi HUS723030ALS640, SAS, 7.2K
 
 I installed OI151.a.7 on them and started migrating data from the old Dell
 servers (zfs send/receive).
 
 Things have been working great for about two months until I migrated one
 particular directory to one of the new Supermicro systems and after about
 two days the system crashed. No network connectivity, black console, no
 response to keyboard keys, no activity lights (no error lights either) on
 the chassis. The only way out is to hit the reset button. Nothing in the
 logs as far as I can tell. Log entries just stop when the system crashes.
 
 In the following two months I did a lot of testing and a lot of trips to
 the colo in the middle of the night and the observation is that regardless
 of the OS everything works on the Dell servers. As soon as I move that
 directory to any of the Supermicro servers with OI151.a.7 it will crash
 them within 2 hours up to 5 days.
 
 The Supermicro servers can be idle, exporting nothing, or can be exporting
 15+ other directories with high IOPS and working for months with no
 problems but as soon as I have them export that directory they'll crash in
 5 days the most.
 
  There is only one difference between that directory and all other exported
 directories. One of the client systems that mounts it and writes to it is
 an old Debian 5.0 system. No idea why that would crash a Supermicro system
 but not a Dell system.
 
 We worked directly with LSI developers and upgraded the firmware to some
 unpublished, prerelease development version to no avail. We disabled all
 power saving features and CPU C states in the BIOS and nothing changed.
 
 Any idea?
 
 I had a similar kind of problem where a VirtualBox Freebsd 9.1 VM could hang 
 the server.
 It had /usr/src and /usr/obj NFS mounted from the OI a7 box it was running 
 on.
  They are separate NFS-shared datasets in one of my 3 pools.
 
 When I ran a make buildworld in that VM it consistently locked up the OI 
 host, no console access,
 no network access ( not even ping ).
 As a test I switched to NFSv4 instead of NFSv3 and I have not seen a hang 
 since.
 So it looked like a heavy NFSv3 load was the issue.
 
 Please try to get a crash dump file when the system is in hung state.
 I'm interested to analyze the crash dump file.
 
 

When it hung the system would not respond to anything at all.
The only way out I could find was a hard reset or power cycle.

I do have the following in /etc/system:
set snooping=1
set pcplusmp:apic_panic_on_nmi=1
But that did not make a difference.

BTW the hang was/is reproducible; every time I ran a make buildworld inside the 
VM it would hang.
I have tried a few make buildworlds now that I use NFSv4 and no hangs so far.

Regards, 

Paul


___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS exported dataset crashes the system

2013-04-10 Thread Ian Collins

Paul van der Zwan wrote:


When it hung the system would not respond to anything at all.
The only way out I could find was a hard reset or power cycle.

I do have the following in /etc/system:
set snooping=1
set pcplusmp:apic_panic_on_nmi=1
But that did not make a difference.

BTW the hang was/is reproducible; every time I ran a make buildworld inside the 
VM it would hang.
I have tried a few make buildworlds now that I use NFSv4 and no hangs so far.


Had you tried decoupling the VM host from the NFS storage?

--
Ian.


___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS exported dataset crashes the system

2013-04-10 Thread Peter Wood
On Wed, Apr 10, 2013 at 7:35 AM, Paul van der Zwan pa...@vanderzwan.org wrote:


 On 9 Apr 2013, at 3:13 , Peter Wood peterwood...@gmail.com wrote:

  I've asked the ZFS discussion list for help on this but now I have more
  information and it looks like a bug in the drivers or something.
 
  I have a number of Dell PE R710 and PE 2950 servers running OpenSolaris, OI
  151a and OI 151a.7. All these systems are used as storage servers, clean
 OS
  install, no extra services running. The systems are NFS exporting a lot
 of
  ZFS datasets that are mounted on about ten CentOS-5.9 systems.
 
  The above setup has been working for 2+ years with no problem.
 
  Recently we bought two Supermicro systems:
   Supermicro X9DRH-iF
   Xeon E5-2620 @ 2.0 GHz 6-Core
   128GB RAM
   LSI SAS9211-8i HBA
   32x 3TB Hitachi HUS723030ALS640, SAS, 7.2K
 
  I installed OI151.a.7 on them and started migrating data from the old
 Dell
  servers (zfs send/receive).
 
  Things have been working great for about two months until I migrated one
  particular directory to one of the new Supermicro systems and after about
  two days the system crashed. No network connectivity, black console, no
  response to keyboard keys, no activity lights (no error lights either) on
  the chassis. The only way out is to hit the reset button. Nothing in the
  logs as far as I can tell. Log entries just stop when the system crashes.
 
  In the following two months I did a lot of testing and a lot of trips to
  the colo in the middle of the night and the observation is that
 regardless
  of the OS everything works on the Dell servers. As soon as I move that
  directory to any of the Supermicro servers with OI151.a.7 it will crash
  them within 2 hours up to 5 days.
 
  The Supermicro servers can be idle, exporting nothing, or can be
 exporting
  15+ other directories with high IOPS and working for months with no
  problems but as soon as I have them export that directory they'll crash
 in
  5 days the most.
 
  There is only one difference between that directory and all other exported
  directories. One of the client systems that mounts it and writes to it is
  an old Debian 5.0 system. No idea why that would crash a Supermicro
 system
  but not a Dell system.
 
  We worked directly with LSI developers and upgraded the firmware to some
  unpublished, prerelease development version to no avail. We disabled all
  power saving features and CPU C states in the BIOS and nothing changed.
 
  Any idea?

 I had a similar kind of problem where a VirtualBox Freebsd 9.1 VM could
 hang the server.
 It had /usr/src and /usr/obj NFS mounted from the OI a7 box it was running
 on.
 They are separate NFS-shared datasets in one of my 3 pools.

 When I ran a make buildworld in that VM it consistently locked up the OI
 host, no console access,
 no network access ( not even ping ).
 As a test I switched to NFSv4 instead of NFSv3 and I have not seen a hang
 since.
 So it looked like a heavy NFSv3 load was the issue.

 Paul


Makes sense. I haven't tried that.

If I'm correct, ZFS on OI supports NFSv2, 3 and 4.

By switching to NFSv4 you mean that on your client machine (the FreeBSD VM)
you set up the NFS client to use the NFSv4 protocol. Do I understand this
correctly? Or, did you do something on the OI server to accept only NFSv4
connections?

Could you please give more information.

Thanks,

-- Peter
___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS exported dataset crashes the system

2013-04-09 Thread Ram Chander
There could be corruption in that dir. Can you run a scrub on the pool?

zpool scrub pool


On Tue, Apr 9, 2013 at 6:43 AM, Peter Wood peterwood...@gmail.com wrote:

 I've asked the ZFS discussion list for help on this but now I have more
 information and it looks like a bug in the drivers or something.

 I have a number of Dell PE R710 and PE 2950 servers running OpenSolaris, OI
 151a and OI 151a.7. All these systems are used as storage servers, clean OS
 install, no extra services running. The systems are NFS exporting a lot of
 ZFS datasets that are mounted on about ten CentOS-5.9 systems.

 The above setup has been working for 2+ years with no problem.

 Recently we bought two Supermicro systems:
   Supermicro X9DRH-iF
   Xeon E5-2620 @ 2.0 GHz 6-Core
   128GB RAM
   LSI SAS9211-8i HBA
   32x 3TB Hitachi HUS723030ALS640, SAS, 7.2K

 I installed OI151.a.7 on them and started migrating data from the old Dell
 servers (zfs send/receive).

 Things have been working great for about two months until I migrated one
 particular directory to one of the new Supermicro systems and after about
 two days the system crashed. No network connectivity, black console, no
 response to keyboard keys, no activity lights (no error lights either) on
 the chassis. The only way out is to hit the reset button. Nothing in the
 logs as far as I can tell. Log entries just stop when the system crashes.

 In the following two months I did a lot of testing and a lot of trips to
 the colo in the middle of the night and the observation is that regardless
 of the OS everything works on the Dell servers. As soon as I move that
 directory to any of the Supermicro servers with OI151.a.7 it will crash
 them within 2 hours up to 5 days.

 The Supermicro servers can be idle, exporting nothing, or can be exporting
 15+ other directories with high IOPS and working for months with no
 problems but as soon as I have them export that directory they'll crash in
 5 days the most.

 There is only one difference between that directory and all other exported
 directories. One of the client systems that mounts it and writes to it is
 an old Debian 5.0 system. No idea why that would crash a Supermicro system
 but not a Dell system.

 We worked directly with LSI developers and upgraded the firmware to some
 unpublished, prerelease development version to no avail. We disabled all
 power saving features and CPU C states in the BIOS and nothing changed.

 Any idea?

 Thanks a lot.

 -- Peter
 ___
 OpenIndiana-discuss mailing list
 OpenIndiana-discuss@openindiana.org
 http://openindiana.org/mailman/listinfo/openindiana-discuss

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


[OpenIndiana-discuss] NFS exported dataset crashes the system

2013-04-08 Thread Peter Wood
I've asked the ZFS discussion list for help on this but now I have more
information and it looks like a bug in the drivers or something.

I have a number of Dell PE R710 and PE 2950 servers running OpenSolaris, OI
151a and OI 151a.7. All these systems are used as storage servers, clean OS
install, no extra services running. The systems are NFS exporting a lot of
ZFS datasets that are mounted on about ten CentOS-5.9 systems.

The above setup has been working for 2+ years with no problem.

Recently we bought two Supermicro systems:
  Supermicro X9DRH-iF
  Xeon E5-2620 @ 2.0 GHz 6-Core
  128GB RAM
  LSI SAS9211-8i HBA
  32x 3TB Hitachi HUS723030ALS640, SAS, 7.2K

I installed OI151.a.7 on them and started migrating data from the old Dell
servers (zfs send/receive).

Things have been working great for about two months until I migrated one
particular directory to one of the new Supermicro systems and after about
two days the system crashed. No network connectivity, black console, no
response to keyboard keys, no activity lights (no error lights either) on
the chassis. The only way out is to hit the reset button. Nothing in the
logs as far as I can tell. Log entries just stop when the system crashes.

In the following two months I did a lot of testing and a lot of trips to
the colo in the middle of the night and the observation is that regardless
of the OS everything works on the Dell servers. As soon as I move that
directory to any of the Supermicro servers with OI151.a.7 it will crash
them within 2 hours up to 5 days.

The Supermicro servers can be idle, exporting nothing, or can be exporting
15+ other directories with high IOPS and working for months with no
problems but as soon as I have them export that directory they'll crash in
5 days the most.

There is only one difference between that directory and all other exported
directories. One of the client systems that mounts it and writes to it is
an old Debian 5.0 system. No idea why that would crash a Supermicro system
but not a Dell system.

We worked directly with LSI developers and upgraded the firmware to some
unpublished, prerelease development version to no avail. We disabled all
power saving features and CPU C states in the BIOS and nothing changed.

Any idea?

Thanks a lot.

-- Peter
___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS fails to automount at boot

2012-08-07 Thread Bob Friesenhahn

On Mon, 6 Aug 2012, Daniel Kjar wrote:


Really?  What do you call that crap in etc under auto_master and auto_home?


Those are template sample files for you to edit.

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


[OpenIndiana-discuss] NFS fails to automount at boot

2012-08-06 Thread Sašo Kiselkov
I've run into a bizarre issue. An NFS export I have in /etc/vfstab fails
to automount on an OI client at boot, even though the filesystem is
clearly marked as mount at boot:

#device to mount              device to fsck  mount point     FS type  fsck pass  mount at boot  mount options
192.168.133.1:/etc/streamers  -               /etc/streamers  nfs      -          yes            nodevices,nosetuid,ro

When I issue /sbin/mountall or mount -a, it mounts just fine. Needless
to say, its failure to mount at boot results in the failure of dependent
services to start, which is quite a bummer if I have to do it manually
each time. Nothing is logged to /var/adm/messages, so I have no idea why
it ignores my NFS mounts at boot. It simply does. Anybody got an idea on
how to track this down?

--
Saso

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS fails to automount at boot

2012-08-06 Thread Udo Grabowski (IMK)

For nfsv3
svc:/network/nfs/client:default
should be enabled.

All these should be enabled for nfsv4
online Jun_29   svc:/network/nfs/cbd:default
online Jun_29   svc:/network/nfs/status:default
online Jun_29   svc:/network/nfs/mapid:default
online Jun_29   svc:/network/nfs/rquota:default
online Jun_29   svc:/network/nfs/nlockmgr:default

And, for automount, enable
svc:/system/filesystem/autofs:default
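A sketch of enabling those in one go; the -r flag also enables anything they
depend on:

  svcadm enable -r svc:/network/nfs/client:default
  svcadm enable -r svc:/system/filesystem/autofs:default
  svcs -x     # afterwards, reports anything that still failed to come up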

On 06/08/2012 13:56, Sašo Kiselkov wrote:

I've run into a bizarre issue. An NFS export I have in /etc/vfstab fails
to automount on an OI client at boot, even though the filesystem is
clearly marked as mount at boot:

#device to mount              device to fsck  mount point     FS type  fsck pass  mount at boot  mount options
192.168.133.1:/etc/streamers  -               /etc/streamers  nfs      -          yes            nodevices,nosetuid,ro

When I issue /sbin/mountall or mount -a, it mounts just fine. Needless
to say, its failure to mount at boot results in the failure of dependent
services to start, which is quite a bummer if I have to do it manually
each time. Nothing is logged to /var/adm/messages, so I have no idea why
it ignores my NFS mounts at boot. It simply does. Anybody got an idea on
how to track this down?


--
Dr.Udo GrabowskiInst.f.Meteorology a.Climate Research IMK-ASF-SAT
www-imk.fzk.de/asf/sat/grabowski/ www.imk-asf.kit.edu/english/sat.php
KIT - Karlsruhe Institute of Technologyhttp://www.kit.edu
Postfach 3640,76021 Karlsruhe,Germany  T:(+49)721 608-26026 F:-926026

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS fails to automount at boot

2012-08-06 Thread James Carlson
Sašo Kiselkov wrote:
 I've run into a bizarre issue. An NFS export I have in /etc/vfstab fails
 to automount on an OI client at boot, even though the filesystem is
 clearly marked as mount at boot:
 
 #device to mount              device to fsck  mount point     FS type  fsck pass  mount at boot  mount options
 192.168.133.1:/etc/streamers  -               /etc/streamers  nfs      -          yes            nodevices,nosetuid,ro
 
 When I issue /sbin/mountall or mount -a, it mounts just fine. Needless
 to say, its failure to mount at boot results in the failure of dependent
 services to start, which is quite a bummer if I have to do it manually
 each time. Nothing is logged to /var/adm/messages, so I have no idea why
 it ignores my NFS mounts at boot. It simply does. Anybody got an idea on
 how to track this down?

It's never been possible to mount NFS at boot.  The normal mount -a
takes place well before any of the networking infrastructure --
interfaces, routes, and the like -- are available.

Making it work at boot poses a chicken-and-egg problem.  You need to
have the local file systems mounted in order to start up the networking
services ... but if the mount process itself requires networking, you're
stuck.

This is exactly what the automounter is for ... though I somewhat doubt
that it makes administrative sense to have anything remote-mounted under
/etc.  The whole point of /etc is that it's local.
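For a vfstab entry like the one above, the automounter equivalent is a
direct map -- a sketch that keeps the original mount point, the /etc caveat
notwithstanding:

  # /etc/auto_master
  /-    auto_direct

  # /etc/auto_direct
  /etc/streamers  -ro,nodevices,nosetuid  192.168.133.1:/etc/streamers

Then enable svc:/system/filesystem/autofs:default and run automount -v to
pick up the map; the share is mounted on first access instead of at boot.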

-- 
James Carlson 42.703N 71.076W carls...@workingcode.com

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS fails to automount at boot

2012-08-06 Thread Sašo Kiselkov
On 08/06/2012 02:03 PM, Udo Grabowski (IMK) wrote:
 For nfsv3
 svc:/network/nfs/client:default
 should be enabled.
 
 All these should be enabled for nfsv4
 online Jun_29   svc:/network/nfs/cbd:default
 online Jun_29   svc:/network/nfs/status:default
 online Jun_29   svc:/network/nfs/mapid:default
 online Jun_29   svc:/network/nfs/rquota:default
 online Jun_29   svc:/network/nfs/nlockmgr:default
 
 And, for automount, enable
 svc:/system/filesystem/autofs:default

A great many thanks, enabling these services solved it.

Cheers,
--
Saso

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS fails to automount at boot

2012-08-06 Thread Sašo Kiselkov
On 08/06/2012 02:15 PM, James Carlson wrote:
 Sašo Kiselkov wrote:
 I've run into a bizarre issue. An NFS export I have in /etc/vfstab fails
 to automount on an OI client at boot, even though the filesystem is
 clearly marked as mount at boot:
 
 #device to mount              device to fsck  mount point     FS type  fsck pass  mount at boot  mount options
 192.168.133.1:/etc/streamers  -               /etc/streamers  nfs      -          yes            nodevices,nosetuid,ro
 
 When I issue /sbin/mountall or mount -a, it mounts just fine. Needless
 to say, its failure to mount at boot results in the failure of dependent
 services to start, which is quite a bummer if I have to do it manually
 each time. Nothing is logged to /var/adm/messages, so I have no idea why
 it ignores my NFS mounts at boot. It simply does. Anybody got an idea on
 how to track this down?
 
 It's never been possible to mount NFS at boot.

Well, apparently it has, since it's working after I enabled some NFS
client services.

Cheers,
--
Saso

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS fails to automount at boot

2012-08-06 Thread Daniel Kjar
Thanks for asking about this. I had just assumed the same.  Machines I 
updated all the way from opensolaris still mounted nfs at boot but my 
clean 151a5s did not.  Very annoying when apache fails because it didn't 
mount htdocs from the file server.


On 08/ 6/12 09:04 AM, Sašo Kiselkov wrote:

On 08/06/2012 02:15 PM, James Carlson wrote:

Sašo Kiselkov wrote:

I've run into a bizarre issue. An NFS export I have in /etc/vfstab fails
to automount on an OI client at boot, even though the filesystem is
clearly marked as mount at boot:

#device to mount              device to fsck  mount point     FS type  fsck pass  mount at boot  mount options
192.168.133.1:/etc/streamers  -               /etc/streamers  nfs      -          yes            nodevices,nosetuid,ro

When I issue /sbin/mountall or mount -a, it mounts just fine. Needless
to say, its failure to mount at boot results in the failure of dependent
services to start, which is quite a bummer if I have to do it manually
each time. Nothing is logged to /var/adm/messages, so I have no idea why
it ignores my NFS mounts at boot. It simply does. Anybody got an idea on
how to track this down?

It's never been possible to mount NFS at boot.

Well, apparently it has, since it's working after I enabled some NFS
client services.

Cheers,
--
Saso

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


--
Dr. Daniel Kjar
Assistant Professor of Biology
Division of Mathematics and Natural Sciences
Elmira College
1 Park Place
Elmira, NY 14901
607-735-1826
http://faculty.elmira.edu/dkjar

...humans send their young men to war; ants send their old ladies
-E. O. Wilson





___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS fails to automount at boot

2012-08-06 Thread Michael Schuster
On Mon, Aug 6, 2012 at 3:04 PM, Sašo Kiselkov skiselkov...@gmail.com wrote:
 On 08/06/2012 02:15 PM, James Carlson wrote:
 Sašo Kiselkov wrote:
 I've run into a bizarre issue. An NFS export I have in /etc/vfstab fails
 to automount on an OI client at boot, even though the filesystem is
 clearly marked as mount at boot:
 
 #device to mount              device to fsck  mount point     FS type  fsck pass  mount at boot  mount options
 192.168.133.1:/etc/streamers  -               /etc/streamers  nfs      -          yes            nodevices,nosetuid,ro
 
 When I issue /sbin/mountall or mount -a, it mounts just fine. Needless
 to say, its failure to mount at boot results in the failure of dependent
 services to start, which is quite a bummer if I have to do it manually
 each time. Nothing is logged to /var/adm/messages, so I have no idea why
 it ignores my NFS mounts at boot. It simply does. Anybody got an idea on
 how to track this down?

 It's never been possible to mount NFS at boot.

 Well, apparently it has, since it's working after I enabled some NFS
 client services.

I'd say you're lucky - I wouldn't rely on this working every time.
Have you investigated using the automounter?

regards
Michael
-- 
Michael Schuster
http://recursiveramblings.wordpress.com/

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS fails to automount at boot

2012-08-06 Thread James Carlson
Daniel Kjar wrote:
 Thanks for asking about this. I had just assumed the same.  Machines I
 updated all the way from opensolaris still mounted nfs at boot but my
 clean 151a5s did not.  Very annoying when apache fails because it didn't
 mount htdocs from the file server.

OK.  It's possible that I'm just wrong about this.  The only case where
I've seen NFS entries in /etc/vfstab was with mount-at-boot set to no.
 (And since the introduction of ZFS, I don't really put much in vfstab
anymore.)

In any event, I think automount provides an easier-to-manage solution.
And unlike some other operating systems, it actually works reliably on
OpenIndiana.

But regardless of mounting mechanism, I don't think I'd consider using
NFS for anything under /etc.  It sort of defeats the point of that
directory to have anything remote there.  Even if I'm sharing
configuration between machines, I set up rsync to keep them up-to-date,
so that a failure (or unreachability) of the NFS server doesn't cause
otherwise-independent machines to fail.
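A minimal sketch of that pattern, with hypothetical paths and hosts -- a
push from the master copy, typically run from cron:

  rsync -az --delete /etc/streamers/ web2:/etc/streamers/

Each machine then serves from a local copy, and an outage of the master
merely delays updates instead of taking the dependent service down.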

-- 
James Carlson 42.703N 71.076W carls...@workingcode.com

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS fails to automount at boot

2012-08-06 Thread Daniel Kjar
I would never use a remote /etc.  Lose the network and the box becomes 
unusable.  Not good.  That automounter drives me batty.  I hate the 
whole export/home thing and remove the auto_home crap and reboot as soon 
as I set up a new server.



On 08/ 6/12 09:21 AM, James Carlson wrote:

Daniel Kjar wrote:

Thanks for asking about this. I had just assumed the same.  Machines I
updated all the way from opensolaris still mounted nfs at boot but my
clean 151a5s did not.  Very annoying when apache fails because it didn't
mount htdocs from the file server.

OK.  It's possible that I'm just wrong about this.  The only case where
I've seen NFS entries in /etc/vfstab was with mount-at-boot set to no.
  (And since the introduction of ZFS, I don't really put much in vfstab
anymore.)

In any event, I think automount provides an easier-to-manage solution.
And unlike some other operating systems, it actually works reliably on
OpenIndiana.

But regardless of mounting mechanism, I don't think I'd consider using
NFS for anything under /etc.  It sort of defeats the point of that
directory to have anything remote there.  Even if I'm sharing
configuration between machines, I set up rsync to keep them up-to-date,
so that a failure (or unreachability) of the NFS server doesn't cause
otherwise-independent machines to fail.



--
Dr. Daniel Kjar
Assistant Professor of Biology
Division of Mathematics and Natural Sciences
Elmira College
1 Park Place
Elmira, NY 14901
607-735-1826
http://faculty.elmira.edu/dkjar

...humans send their young men to war; ants send their old ladies
-E. O. Wilson





___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS fails to automount at boot

2012-08-06 Thread Udo Grabowski (IMK)

On 06/08/2012 15:18, Michael Schuster wrote:

On Mon, Aug 6, 2012 at 3:04 PM, Sašo Kiselkov skiselkov...@gmail.com wrote:

On 08/06/2012 02:15 PM, James Carlson wrote:

Sašo Kiselkov wrote:

I've run into a bizarre issue. An NFS export I have in /etc/vfstab fails
to automount on an OI client at boot, even though the filesystem is
clearly marked as mount at boot:

#device to mount              device to fsck  mount point     FS type  fsck pass  mount at boot  mount options
192.168.133.1:/etc/streamers  -               /etc/streamers  nfs      -          yes            nodevices,nosetuid,ro

When I issue /sbin/mountall or mount -a, it mounts just fine. Needless
to say, its failure to mount at boot results in the failure of dependent
services to start, which is quite a bummer if I have to do it manually
each time. Nothing is logged to /var/adm/messages, so I have no idea why
it ignores my NFS mounts at boot. It simply does. Anybody got an idea on
how to track this down?


It's never been possible to mount NFS at boot.


Well, apparently it has, since it's working after I enabled some NFS
client services.


I'd say you're lucky - I wouldn't rely on this working every time.


If svc:/network/nfs/client is active, it does a 'mountall -F nfs' after
milestone/network has been fired up, so that always works.
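That ordering can be confirmed on a given box -- a sketch:

  svcs -d svc:/network/nfs/client:default   # what nfs/client waits for
  svcs -l svc:/network/nfs/client:default   # full dependency/state detail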
--
Dr.Udo GrabowskiInst.f.Meteorology a.Climate Research IMK-ASF-SAT
www-imk.fzk.de/asf/sat/grabowski/ www.imk-asf.kit.edu/english/sat.php
KIT - Karlsruhe Institute of Technologyhttp://www.kit.edu
Postfach 3640,76021 Karlsruhe,Germany  T:(+49)721 608-26026 F:-926026

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS fails to automount at boot

2012-08-06 Thread Sašo Kiselkov
On 08/06/2012 03:21 PM, James Carlson wrote:
 Daniel Kjar wrote:
 Thanks for asking about this. I had just assumed the same.  Machines I
 updated all the way from opensolaris still mounted nfs at boot but my
 clean 151a5s did not.  Very annoying when apache fails cause it didn't
 mount htdocs from the file server.
 
 OK.  It's possible that I'm just wrong about this.  The only case where
 I've seen NFS entries in /etc/vfstab was with mount-at-boot set to no.
  (And since the introduction of ZFS, I don't really put much in vfstab
 anymore.)
 
 In any event, I think automount provides an easier-to-manage solution.
 And unlike some other operating systems, it actually works reliably on
 OpenIndiana.
 
 But regardless of mounting mechanism, I don't think I'd consider using
 NFS for anything under /etc.  It sort of defeats the point of that
 directory to have anything remote there.  Even if I'm sharing
 configuration between machines, I set up rsync to keep them up-to-date,
 so that a failure (or unreachability) of the NFS server doesn't cause
 otherwise-independent machines to fail.

While I agree with your advice in general, it doesn't apply in my case. Rest
assured, I have considered the possibility of a network failure, and
using NFS is entirely suitable for my needs here. This is a network
streaming service, so if the network is down, the fact that it can't
start up due to an unavailable NFS mount is the least of my problems.

Cheers,
--
Saso

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS fails to automount at boot

2012-08-06 Thread Daniel Kjar

Really?  What do you call that crap in etc under auto_master and auto_home?
On 08/ 6/12 09:31 AM, James Carlson wrote:

Daniel Kjar wrote:

I would never use a remote /etc.  Lose the network and the box becomes
unusable.  Not good.  That automounter drives me batty.  I hate the
whole export/home thing and remove the auto_home crap and reboot as soon
as I set up a new server.

automounter != export/home



--
Dr. Daniel Kjar
Assistant Professor of Biology
Division of Mathematics and Natural Sciences
Elmira College
1 Park Place
Elmira, NY 14901
607-735-1826
http://faculty.elmira.edu/dkjar

...humans send their young men to war; ants send their old ladies
-E. O. Wilson





___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] NFS fails to automount at boot

2012-08-06 Thread Michael Schuster
On Mon, Aug 6, 2012 at 3:33 PM, Daniel Kjar dk...@elmira.edu wrote:
 Really?  What do you call that crap in etc under auto_master and auto_home?

ah ... every elephant is an animal, but not every animal is an elephant ;-)

(in other words: there's other applications of the automounter than
mounting peoples' home directories)

regards
Michael

 On 08/ 6/12 09:31 AM, James Carlson wrote:

 Daniel Kjar wrote:

 I would never use a remote /etc.  Lose the network and the box becomes
 unusable.  Not good.  That automounter drives me batty.  I hate the
 whole export/home thing and remove the auto_home crap and reboot as soon
 as I set up a new server.

 automounter != export/home


 --
 Dr. Daniel Kjar
 Assistant Professor of Biology
 Division of Mathematics and Natural Sciences
 Elmira College
 1 Park Place
 Elmira, NY 14901
 607-735-1826
 http://faculty.elmira.edu/dkjar

 ...humans send their young men to war; ants send their old ladies
 -E. O. Wilson








-- 
Michael Schuster
http://recursiveramblings.wordpress.com/



Re: [OpenIndiana-discuss] NFS fails to automount at boot

2012-08-06 Thread James Carlson
Daniel Kjar wrote:
 Really?  What do you call that crap in etc under auto_master and auto_home?

Read the man pages for the automounter.  Start with automount(1M).

Yes, the system comes by default with that crap, but (a) you certainly
are under no obligation to use /export/home if you don't like it and (b)
the mechanism that underlies it is far more general than just auto_home.
 It allows you to trigger configured mounts based on file system access,
and handles fail-over, platform-related variable expansion, and
directory service integration.
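
(As a sketch of the fail-over and variable expansion, with made-up
names: a map entry can list replica servers and use variables such as
$OSREL that the automounter expands per client:

    # /etc/auto_direct
    /opt/pkgs    -ro    server1,server2:/export/pkgs/$OSREL

and the client picks whichever listed server responds.)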

-- 
James Carlson 42.703N 71.076W carls...@workingcode.com



Re: [OpenIndiana-discuss] NFS fails to automount at boot

2012-08-06 Thread Daniel Kjar
I am sure it has great value, why would it be there otherwise?  I just
remember installing a fresh version of Solaris one day and trying to
figure out why I couldn't just delete export and use /home like always.


Therefore, in my mind I associate it with being frustrated by a "I am
sorry Dave, I can't allow you to do that" message until I figured out
what had changed.


On 08/ 6/12 09:47 AM, James Carlson wrote:

Daniel Kjar wrote:

Really?  What do you call that crap in etc under auto_master and auto_home?

Read the man pages for the automounter.  Start with automount(1M).

Yes, the system comes by default with that crap, but (a) you certainly
are under no obligation to use /export/home if you don't like it and (b)
the mechanism that underlies it is far more general than just auto_home.
  It allows you to trigger configured mounts based on file system access,
and handles fail-over, platform-related variable expansion, and
directory service integration.



--
Dr. Daniel Kjar
Assistant Professor of Biology
Division of Mathematics and Natural Sciences
Elmira College
1 Park Place
Elmira, NY 14901
607-735-1826
http://faculty.elmira.edu/dkjar

...humans send their young men to war; ants send their old ladies
-E. O. Wilson







Re: [OpenIndiana-discuss] NFS fails to automount at boot

2012-08-06 Thread Michael Stapleton
Hi,

I have been using Solaris only since 2.6 and /home has always been an
autofs mount point. What version of Solaris did not have /home as an
autofs mount point?
Or are you confusing OI with some other OS?


Mike

On Mon, 2012-08-06 at 09:51 -0400, Daniel Kjar wrote:

 I am sure it has great value, why would it be there otherwise?  I just
 remember installing a fresh version of Solaris one day and trying to
 figure out why I couldn't just delete export and use /home like always.
 
 Therefore, in my mind I associate it with being frustrated by a "I am
 sorry Dave, I can't allow you to do that" message until I figured out
 what had changed.
 
 On 08/ 6/12 09:47 AM, James Carlson wrote:
  Daniel Kjar wrote:
  Really?  What do you call that crap in etc under auto_master and auto_home?
  Read the man pages for the automounter.  Start with automount(1M).
 
  Yes, the system comes by default with that crap, but (a) you certainly
  are under no obligation to use /export/home if you don't like it and (b)
  the mechanism that underlies it is far more general than just auto_home.
It allows you to trigger configured mounts based on file system access,
  and handles fail-over, platform-related variable expansion, and
  directory service integration.
 
 




Re: [OpenIndiana-discuss] NFS fails to automount at boot

2012-08-06 Thread Daniel Kjar
The first box I had was Sol 8, but it was inherited and apparently
'fixed', if this has been around for longer than that.  I did move to
Solaris from Linux, so I may just be confusing my first automount
experience.



On 08/ 6/12 10:12 AM, Michael Stapleton wrote:

Hi,

I have been using Solaris only since 2.6 and /home has always been an
autofs mount point. What version of Solaris did not have /home as an
autofs mount point?
Or are you confusing OI with some other OS?


Mike

On Mon, 2012-08-06 at 09:51 -0400, Daniel Kjar wrote:


I am sure it has great value, why would it be there otherwise?  I just
remember installing a fresh version of Solaris one day and trying to
figure out why I couldn't just delete export and use /home like always.

Therefore, in my mind I associate it with being frustrated by a "I am
sorry Dave, I can't allow you to do that" message until I figured out
what had changed.

On 08/ 6/12 09:47 AM, James Carlson wrote:

Daniel Kjar wrote:

Really?  What do you call that crap in etc under auto_master and auto_home?

Read the man pages for the automounter.  Start with automount(1M).

Yes, the system comes by default with that crap, but (a) you certainly
are under no obligation to use /export/home if you don't like it and (b)
the mechanism that underlies it is far more general than just auto_home.
   It allows you to trigger configured mounts based on file system access,
and handles fail-over, platform-related variable expansion, and
directory service integration.





--
Dr. Daniel Kjar
Assistant Professor of Biology
Division of Mathematics and Natural Sciences
Elmira College
1 Park Place
Elmira, NY 14901
607-735-1826
http://faculty.elmira.edu/dkjar

...humans send their young men to war; ants send their old ladies
-E. O. Wilson







Re: [OpenIndiana-discuss] NFS fails to automount at boot

2012-08-06 Thread Richard Elling
On Aug 6, 2012, at 5:15 AM, James Carlson wrote:
 
 It's never been possible to mount NFS at boot. 

Well, some of us old farts remember nd, and later, NFS-based diskless 
workstations :-)
The current lack of support for diskless leaves an empty feeling in my heart :-P
 -- richard

--
ZFS Performance and Training
richard.ell...@richardelling.com
+1-760-896-4422





Re: [OpenIndiana-discuss] NFS fails to automount at boot

2012-08-06 Thread Sašo Kiselkov
On 08/06/2012 09:19 PM, Richard Elling wrote:
 On Aug 6, 2012, at 5:15 AM, James Carlson wrote:

 It's never been possible to mount NFS at boot. 
 
 Well, some of us old farts remember nd, and later, NFS-based diskless 
 workstations :-)
 The current lack of support for diskless leaves an empty feeling in my heart 
 :-P
  -- richard

I don't think of myself as an old fart and I remember and love diskless.
As technology trends rotate around in about a 10-15 year cycle, what
goes around, comes around. Diskless will be the new cloud!

Cheers,
--
Saso



Re: [OpenIndiana-discuss] NFS hidden files

2012-06-06 Thread James Carlson
Gabriele Bulfon wrote:
 Nice discussion.
 Even though I remember not being able to remove because of a bash waiting 
 there,
 but probably was a zfs destroy...and IMHO this is a more logic approach

Even there, you can still do it if you want.  The issue isn't the zfs
destroy operation itself, but rather the normal semantics of the
umount(2) system call -- it returns an error if the file system is still
busy.

You can forcibly unmount the file system, even if it's busy, and then go
ahead and destroy the zfs file system from underneath bash.  The next
time bash attempts to access the current directory, it'll get an error.
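
(A sketch of that sequence, with a made-up dataset name:

    umount -f /tank/scratch      # force the unmount despite EBUSY
    zfs destroy tank/scratch     # the dataset can now be destroyed

and any process that was sitting in the old mount gets errors from then
on.)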

The two operations, though, are fundamentally different.  In the case of
removing a file or a whole directory, you're just removing the directory
entries representing those objects.  The directory entries themselves
are never in use in any meaningful way when a file is open.  In the
case of unmounting a file system, though, you're revoking access to a
set of structures that are actively in use, and that requires a choice;
either the one removing loses (EBUSY) or the one using loses (ENOENT or
similar).

-- 
James Carlson 42.703N 71.076W carls...@workingcode.com



Re: [OpenIndiana-discuss] NFS hidden files

2012-06-06 Thread James Carlson
Richard L. Hamilton wrote:
 A remote filesystem protocol by AT&T (and present only in very early Solaris, 
 as I recall), called RFS, went to great lengths to provide all the usual 
 semantics.  You could even access remote device files (although presumably 
 both client and server had to support the same byte order, word size, 
 alignment, etc, AND ioctls, for ioctls to be packaged up properly to work 
 with remote devices - not sure how that worked!).  In fact, processes on the 
 client and server could communicate with each other through a FIFO on the 
 shared filesystem!

Yes, I remember RFS.  Besides using it myself, I worked on a related
protocol in the Annex terminal server called TSTTY that implemented
remote serial ports.

Basically, yes, the ioctls and (perhaps more critically) all of the data
structures passed along with the ioctls had to line up between the two
systems, or it wouldn't work.  But, then, all Real Machines are
big-endian, right?  ;-}

 The flip side was that unlike NFS, if the server crashed, the client didn't 
 just wait for the server to come back up, it got an error, not unlike you 
 would if someone unplugged a local disk.  That is, the server was NOT 
 stateless, but had state that was lost in a crash, which left returning an 
 error to the client as the only option.
 
 Compatible, reliable, high-performance: pick two.  (like a number of other 
 "pick two" cases)

There've been a number of other works in this area.  AFS is interesting
because you can still access your locally-cached files for a while after
the Vice server goes down.  It'll still grind to a halt like NFS when
something bad happens, it just tends to take longer.

In any event, remote file systems are different.  Sometimes subtly, and
sometimes brutally.  (File locking, not the silly .nfs files, is where
NFS users tend to get really tripped up.  NFS has advisory but not
mandatory locks, and crash recovery can be weird.)

For what it's worth, local file systems are sometimes different as well.
 UFS and ZFS aren't precisely the same -- check out the reported sizes
of your directories with ls.   Nor is UFS at all like PCFS (DOS) or
like HSFS (CD/DVD).

I think the .nfs thing is a bit of a molehill.  Working on multiple
platforms, you'll see things that are far more jarring than that.  For
example, on HP/UX, memory-mapped files (e.g., shared libraries) are
locked good and hard.  You can't write to them.  You can't remove them.
 You can't even rename them.  They end up with a big yellow "police line
- do not cross" ribbon around them, which sometimes makes software
upgrade interesting.

It's a weird world out there.

-- 
James Carlson 42.703N 71.076W carls...@workingcode.com



[OpenIndiana-discuss] NFS hidden files

2012-06-06 Thread Irek Szczesniak
On Tue, Jun 5, 2012 at 8:57 AM, Gabriele Bulfon gbul...@sonicle.com wrote:
 Hi,

 On NFS mounted file systems I often happen to find daemons of the client 
 complaining about the hidden .nfsxxx files appearing and disappearing.
 These are often annoying.

 Is there any way to let the server completely hide these files to the client, 
 and just keep them on the server file system for his own duties?

 Thanx for any help
 Gabriele.

Which NFS version do you use? I might be wrong but AFAIK NFSv4 dropped
the dreaded .nfs* files.

Irek



Re: [OpenIndiana-discuss] NFS hidden files

2012-06-05 Thread Jose-Marcio Martins da Cruz


Sorry for the top post...

These files shouldn't be accessed by daemons other than those in the NFS
system. If other daemons are doing so, they're not respecting the NFS
rules of the game.


The only thing to do with these files is to remove them after a system
crash or similar event.
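
(A sketch of such a cleanup, assuming for illustration that week-old
.nfs* files on this server are safe to reap:

    find /export -name '.nfs*' -mtime +7 -exec rm {} \;

run from cron on the server, after checking nothing still holds them
open.)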

James Carlson wrote:

Gabriele Bulfon wrote:

Hi,

On NFS mounted file systems I often happen to find daemons of the client 
complaining about the hidden .nfsxxx files appearing and disappearing.
These are often annoying.

Is there any way to let the server completely hide these files to the client, 
and just keep them on the server file system for his own duties?


No.  They represent files that have been removed and are being held open
on the client.  They're a required part of NFS to provide UNIX file
semantics, and the whole point of them is that they're *NOT* hidden from
the client.  They wouldn't work at all if they were hidden.

Close the file, and the problem should go away, as the server will then
remove the .nfs file.

It's possible that if the system crashes, the files could be abandoned.
  I suspect that's rarely the case.  But you could use a cron job to find
them if you were concerned about it.

Have you tried a google search on .nfs files?






Re: [OpenIndiana-discuss] NFS hidden files

2012-06-05 Thread James Carlson
Jose-Marcio Martins da Cruz wrote:
 
 Sorry for the top post...
 
 These files shouldn't be accessed by daemons other than those in
 the NFS system. If other daemons are doing so, they're not respecting
 the NFS rules of the game.

Well ... sort of.  What do you say when rm -rf somedir fails because
some of the files within somedir, although owned by the invoker,
cannot be removed?  Or when the GUI Trash icon stays messy after
emptying because there are files that won't go away?

I certainly respect the original poster's opinion that these files get
in the way and that they're annoying.  They're nasty, but they're part
of the game.  Stick with local file systems if you really can't stand
them.  :-/

 The only thing to do with these files is to remove them after a
 system crash or similar event.

+1

-- 
James Carlson 42.703N 71.076W carls...@workingcode.com



Re: [OpenIndiana-discuss] NFS hidden files

2012-06-05 Thread Jose-Marcio Martins da Cruz

James Carlson wrote:

Jose-Marcio Martins da Cruz wrote:


...


Well ... sort of.  What do you say when rm -rf somedir fails because
some of the files within somedir, although owned by the invoker,
cannot be removed?  Or when the GUI Trash icon stays messy after
emptying because there are files that won't go away?


That means that these files are still open and in use by some program, or
some active process has its pwd inside somedir, or some process ended
without cleanly closing its open files. Am I wrong?


Either way, IMHO, it's usually not the fault of the NFS system.





Re: [OpenIndiana-discuss] NFS hidden files

2012-06-05 Thread Jim Klimov

2012-06-05 16:31, Jose-Marcio Martins da Cruz wrote:

Well ... sort of.  What do you say when rm -rf somedir fails because
some of the files within somedir, although owned by the invoker,
cannot be removed?

That means that these files are still open and in use by some program,
or some active process has its pwd inside somedir, or some process
ended without cleanly closing its open files. Am I wrong?

Either way, IMHO, it's usually not the fault of the NFS system.


I believe there is also a scenario where a program opens a file,
deletes (unlinks) it from the filesystem, and uses the file handle.
Besides being a byproduct of a user deleting files still opened by
some program, this is often used for secure temporary files which
no one can now break into - because there is no reference to the
file from any directory (but the FS inode exists until the file
handle is closed by the original program).
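
(That pattern is easy to demonstrate from a shell -- a sketch with a
made-up path:

    exec 3>/var/tmp/scratch   # open fd 3 on a new file
    rm /var/tmp/scratch       # unlink it; the inode lives while fd 3 is open
    echo data >&3             # writes through the handle still work
    exec 3>&-                 # last reference closed; space is freed

On a local FS the name simply disappears; over NFS this same pattern is
what produces the .nfs* files on the server.)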

I wonder if in such cases the NFS server should hide the inode
(i.e. in case of ZFS - use some special directory under .zfs in
the dataset which contains the removed file) in order to allow
removal of directories as you outlined above.

QUESTION: I also wonder if there is an NFS-protocol action for
a server to send a hint to the NFS client, for example, if
the storage server is going to gracefully disable itself upon
poweroff, etc. Namely, compliant clients would flush their data
to disk and stop their server processes or failover to another
replica of the storage server, if available. This would allow
proper shutdowns of VMs with NFS-based disk images, databases
over iSCSI, etc. when the server goes down. Is that possible
or implementable as an RFE?

One scenario I do think of in particular is gracefully shutting
down a farm of servers upon a UPS on-battery event, where the
storage servers and the compute servers might rely on different
power sources, and the storage one happens to go down first
(before the VM hosts begin to shut down).

Thanks,
//Jim



Re: [OpenIndiana-discuss] NFS hidden files

2012-06-05 Thread Gabriele Bulfon
I understand your point.
But, my question is...shouldn't a network filesystem try to completely emulate 
a local file system,
trying to hide as much as possible the fact of being a network share?
In this case, how does a local filesystem like ZFS or UFS manage these 
situations?
Local file systems do not create .xxx files, and they never show up even in the 
situations you describe.
So, why should NFS be different? I believe that the NFS server may hold
these files for itself and the processes that already have open sessions
on them, not showing them in folder listings (as ZFS can do with its
.zfs folder).
Also, is it correct for the file system to allow deletion of a file
opened by someone else?
For example, an rm -r of a folder will fail in case I have a bash inside
its tree.
I believe an option to hide them would do no harm.
Thx anyway
Gabriele.
--
From: James Carlson
To: Discussion list for OpenIndiana
Date: 5 June 2012 13.15.25 CEST
Subject: Re: [OpenIndiana-discuss] NFS hidden files
Gabriele Bulfon wrote:
Hi,
On NFS mounted file systems I often happen to find daemons of the client 
complaining about the hidden .nfsxxx files appearing and disappearing.
These are often annoying.
Is there any way to let the server completely hide these files to the client, 
and just keep them on the server file system for his own duties?
No.  They represent files that have been removed and are being held open
on the client.  They're a required part of NFS to provide UNIX file
semantics, and the whole point of them is that they're *NOT* hidden from
the client.  They wouldn't work at all if they were hidden.
Close the file, and the problem should go away, as the server will then
remove the .nfs file.
It's possible that if the system crashes, the files could be abandoned.
I suspect that's rarely the case.  But you could use a cron job to find
them if you were concerned about it.
Have you tried a google search on .nfs files?
--
James Carlson 42.703N 71.076W


Re: [OpenIndiana-discuss] NFS hidden files

2012-06-05 Thread James Carlson
Gabriele Bulfon wrote:
 I understand your point.
 But, my question is...shouldn't a network filesystem try to completely 
 emulate a local file system,
 trying to hide as much as possible the fact of being a network share?

Sure, although "try" is certainly the operative word here.

 In this case, how does a local filesystem like ZFS or UFS manage these 
 situations?

A local file system doesn't have this problem, because the directory
entry is distinct from the handle that the program has on an open
file.  An open local file essentially references the inode for that
file.  If the directory entry (which points to the inode) is removed,
the program can keep on using the now-nameless inode without trouble.
When all references to the inode are dropped, the space is returned to
the free pool.

(Depending on the OS, you may even be able to use link(2) to reattach an
unlinked but still open file by using the procfs nodes.)

With NFS, the server doesn't know whether files are open or closed.  If
the client actually removed the file, that would immediately invalidate
any other client handles, and break the applications still using the
now-unreferenced file.

In short, this is just how NFS works.

 Local file systems do not create .xxx files, and they never show up even in 
 the situations you describe.

True.  Local file systems aren't remote.  :-/

 So, why should NFS be different? I believe that the NFS server may hold
 these files for itself and the processes that already have open sessions
 on them, not showing them in folder listings
 (as ZFS can do with its .zfs folder).

I suppose that NFS clients could be designed to hide these files from
the user optionally.  It may well make some operations a little (or
perhaps a lot) strange, but I think it could be made to work.  It's just
not how it's done today.

 Also, is it correct for the file system to allow deletion of a file opened by
 someone else?

Sure; standard UNIX/POSIX semantics apply.

 For example, an rm -r of a folder will fail in case I have a bash inside its
 tree.

That's not true.  In one window:

% bash
carlsonj@carlson:/build/carlsonj$ mkdir foo
carlsonj@carlson:/build/carlsonj$ cd foo
carlsonj@carlson:/build/carlsonj/foo$

Now, in another window:

carlsonj@carlson:/build/carlsonj$ rm -r foo
carlsonj@carlson:/build/carlsonj$

Now back in the first window:

carlsonj@carlson:/build/carlsonj/foo$ /bin/pwd
pwd: cannot determine current directory!
carlsonj@carlson:/build/carlsonj/foo$

You can always remove anything you want to remove, assuming you have the
right permissions.  Directory entries are just that -- directory
entries.  They have the name of the object and a pointer to where the
object resides.  They're not the object itself.

 I believe an option to hide them would do no harm.

Fortunately, it's an open process.  Read RFCs 1813 and 3010 to start,
and propose your own design.

-- 
James Carlson 42.703N 71.076W carls...@workingcode.com



Re: [OpenIndiana-discuss] NFS mount of oi_148 filesystem by Linux fails

2012-05-09 Thread Jim Klimov

2012-05-09 5:13, Martin Frost wrote:

I'm trying to export a ZFS filesystem on oi_148 via NFS, but the NFS
mount fails.  The same ZFS filesystem is shared via CIFS, and that's
working.  I hope CIFS sharing doesn't interfere with NFS exporting.


Does your nfsserver's dmesg (/var/adm/messages) log have any
reports like this:

May  9 13:35:01 nfsserver mountd[9689]: [ID 770583 daemon.error] 
nfsclient.stanford.edu denied access to /nfsclient/filesys ?


I am thinking towards name resolution errors: in OpenSolaris
(maybe applicable to OI as well) the name service tended to
pick a primary name for the host that is connecting, i.e.
the first name on the line from /etc/hosts, or the PTR one
from DNS, and ignore other seemingly valid names of the client.
If you have NFS permissions for nfsclient.stanford.edu
but the server recognizes its IP address as nfsclient,
for example, the result will be == denied.

You can try to use getent hosts nfsclient.stanford.edu
and getent hosts 123.45.67.89 (with the nfsclient's
IP address) on the server to check how it resolves names.

You can try to work around this by:
1) Listing all possible host names in the sharenfs line,
   (or having good consistency in host naming via all
   possible methods used by the server to resolve names),
2) Adding the numeric IP address to permissions

Either way, on the sharenfs command line you separate hosts
by a colon, i.e.:
sharenfs='sec=sys,rw=nfsclient.stanford.edu:nfsclient:1.2.3.4,root=nfsclient.stanford.edu:nfsclient:1.2.3.4,anon=0'

(the root and anon parts may be needed if you need remote
root access not remapped to nobody)

Good luck,
//Jim




Re: [OpenIndiana-discuss] NFS mount of oi_148 filesystem by Linux fails

2012-05-09 Thread Martin Frost
  Date: Wed, 09 May 2012 18:38:54 +0400
  From: Jim Klimov jimkli...@cos.ru
  
  2012-05-09 5:13, Martin Frost wrote:
   I'm trying to export a ZFS filesystem on oi_148 via NFS, but the NFS
   mount fails.  The same ZFS filesystem is shared via CIFS, and that's
   working.  I hope CIFS sharing doesn't interfere with NFS exporting.
  
  Does your nfsserver's dmesg (/var/adm/messages) log have any
  reports like this:
  
  May  9 13:35:01 nfsserver mountd[9689]: [ID 770583 daemon.error] 
  nfsclient.stanford.edu denied access to /nfsclient/filesys ?

Thanks for your reply.  The annoying thing is that nothing is being
logged at all.

Turned out that the mount was failing because of the permissions,
or more accurately, the ACLs of the directory being exported.
I had to turn on one bit of access for 'everyone':

 owner@:rwxpdDaARWcCos:---:allow
  everyone@:--a---:---:allow
  everyone@:rwxpdDaARWcCos:---:deny

 read_attributes (a) The ability to read basic attributes
 (non-ACLs) of a file.

Then I was able to mount.
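
(For the archives, the NFSv4-ACL chmod syntax to add that bit -- path
as in my setup, adjust to taste:

    /usr/bin/chmod A+everyone@:read_attributes:allow /nfsclient/filesys

see chmod(1) for the A+ ACE form.)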

Martin



[OpenIndiana-discuss] nfs permission denied

2012-05-08 Thread Tim Dunphy
Hello,

 I'm trying to setup an NFS server under oi 151. So far so good, but
there is one hurdle I'd like to overcome regarding security.

 The nfs service is running -

 root@openindiana:~# svcs -a | grep nfs | grep server
online 22:51:58 svc:/network/nfs/server:default


And I have one entry in dfstab to test this out -

root@openindiana:~# tail /etc/dfs/dfstab
# This file is reconstructed and only maintained for backward
# compatibility. Configuration lines could be lost.
#
#   share [-F fstype] [ -o options] [-d "<text>"] <pathname> [resource]
#   .e.g,
#   share  -F nfs  -o rw=engineering  -d "home dirs"  /export/home2
share -F nfs /tank/xen

From what I've read the default for entries in dfstab is that the
shares will be available rw (read/write).

If I go to the client  (FreeBSD 8.2) and test, I can see the mount -

[root@LBSD2:~] #showmount -e nas
Exports list on nas:
/tank/xen  Everyone

And.. I can mount the share -

[root@LBSD2:~] #mount nas:/tank/xen /mnt/xen

[root@LBSD2:~] #df -h /mnt/xen
Filesystem       Size    Used   Avail Capacity  Mounted on
nas:/tank/xen    1.3T     45K    1.3T     0%    /mnt/xen

However if I test my permissions on the mounted share volume (on the
client side as root) -

 [root@LBSD2:~] #touch /mnt/xen/test
touch: /mnt/xen/test: Permission denied

I get permission denied. I notice on the (oi) server, the permissions
look fine -

root@openindiana:~# ls -l /tank | grep xen
drwxr-xr-x   2 root root   2 May  7 22:58 xen

So I tried incrementally loosening up permissions -

server : root@openindiana:~# chmod 775 /tank/xen

once again on the client:

 [root@LBSD2:~] #touch /mnt/xen/test
touch: /mnt/xen/test: Permission denied

And it doesn't work until I open up the directory on the server to world -

server: root@openindiana:~# chmod 777 /tank/xen

[root@LBSD2:~] #touch /mnt/xen/test
[root@LBSD2:~] #echo hi > /mnt/xen/test
[root@LBSD2:~] #cat /mnt/xen/test
hi

Obviously this is a situation I should correct if I can. : )

Thanks in advance and best regards,
Tim

-- 
GPG me!!

gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B



Re: [OpenIndiana-discuss] nfs permission denied

2012-05-08 Thread tomte
On Tue, May 08, 2012 at 11:43:29AM -0400, Tim Dunphy wrote:
 Hello,
 
  I'm trying to setup an NFS server under oi 151. So far so good, but
 there is one hurdle I'd like to overcome regarding security.
 
  The nfs service is running -
 
  root@openindiana:~# svcs -a | grep nfs | grep server
 online 22:51:58 svc:/network/nfs/server:default
 
 
 And I have one entry in dfstab to test this out -
 
 root@openindiana:~# tail /etc/dfs/dfstab
 # This file is reconstructed and only maintained for backward
 # compatibility. Configuration lines could be lost.
 #
 #   share [-F fstype] [ -o options] [-d "<text>"] <pathname> [resource]
 #   .e.g,
 #   share  -F nfs  -o rw=engineering  -d "home dirs"  /export/home2
 share -F nfs /tank/xen

[snip]

 However if I test my permissions on the mounted share volume (on the
 client side as root) -
 
  [root@LBSD2:~] #touch /mnt/xen/test
 touch: /mnt/xen/test: Permission denied

[snip]

From where I am standing you seem to have missed an option for
nfs... check the manpage for share_nfs.
specifically the option below.

root=access_list

Only  root  users  from  the  hosts   specified   in
access_list have root access. See access_list below.
By default, no host has root access, so  root  users
are mapped to an anonymous user ID (see the anon=uid
option described above). Netgroups can  be  used  if
the  file system shared is using UNIX authentication
( AUTH_SYS).

If you haven't got that one, root on your BSD box will be remapped to
anonymous and then it bites you in the rear ;)

// Richard

-- 
It's hard to be religious when certain people are never
incinerated by bolts of lightning.

- Calvin & Hobbes




Re: [OpenIndiana-discuss] nfs permission denied

2012-05-08 Thread Tim Dunphy
Hi Richard,

Thanks for your input. I found that I can share the volume via zfs..
sorry I forgot to mention that this was a zfs pool.

I found that I was able to remove the entry from dfstab and use this
command to share the volume -

 zfs set sharenfs=rw tank/xen

And when I check the result it looks ok -

root@openindiana:~# zfs get sharenfs tank/xen
NAME      PROPERTY  VALUE     SOURCE
tank/xen  sharenfs  rw        local

and now if I look at the nfs server from the client I can see the
share, even tho it's no longer listed in dfstab -


[root@LBSD2:~] #showmount -e nas
Exports list on nas:
/tank/xen  Everyone

And then I try mounting the share from the client -

[root@LBSD2:~] #mount nas:/tank/xen /mnt/xen

[root@LBSD2:~] #df -h /mnt/xen
Filesystem       Size    Used   Avail Capacity  Mounted on
nas:/tank/xen    1.3T     46K    1.3T     0%    /mnt/xen

But I am still getting the same result when I try to create a file -

[root@LBSD2:~] #touch /mnt/xen/test
touch: /mnt/xen/test: Permission denied

Maybe I'm missing a flag on the zfs set command?

Thanks
Tim









On Tue, May 8, 2012 at 12:05 PM,  to...@ulkhyvlers.net wrote:
 On Tue, May 08, 2012 at 11:43:29AM -0400, Tim Dunphy wrote:
 Hello,

  I'm trying to setup an NFS server under oi 151. So far so good, but
 there is one hurdle I'd like to overcome regarding security.

  The nfs service is running -

  root@openindiana:~# svcs -a | grep nfs | grep server
 online         22:51:58 svc:/network/nfs/server:default


 And I have one entry in dfstab to test this out -

 root@openindiana:~# tail /etc/dfs/dfstab
 # This file is reconstructed and only maintained for backward
 # compatibility. Configuration lines could be lost.
 #
 #       share [-F fstype] [ -o options] [-d "<text>"] <pathname> [resource]
 #       .e.g,
 #       share  -F nfs  -o rw=engineering  -d "home dirs"  /export/home2
 share -F nfs /tank/xen

 [snip]

 However if I test my permissions on the mounted share volume (on the
 client side as root) -

  [root@LBSD2:~] #touch /mnt/xen/test
 touch: /mnt/xen/test: Permission denied

 [snip]

 From where I am standing you seem to have missed an option for
 nfs... check the manpage for share_nfs.
 specifically the option below.

        root=access_list

        Only  root  users  from  the  hosts   specified   in
        access_list have root access. See access_list below.
        By default, no host has root access, so  root  users
        are mapped to an anonymous user ID (see the anon=uid
        option described above). Netgroups can  be  used  if
        the  file system shared is using UNIX authentication
        ( AUTH_SYS).

 If you haven't got that one, root on your BSD box will be remapped to
 anonymous and then it bites you in the rear ;)

 // Richard

 --
 It's hard to be religious when certain people are never
 incinerated by bolts of lightning.

 - Calvin & Hobbes





-- 
GPG me!!

gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B



Re: [OpenIndiana-discuss] nfs permission denied

2012-05-08 Thread tomte
On Tue, May 08, 2012 at 01:07:23PM -0400, Tim Dunphy wrote:
 Hi Richard,
 
 Thanks for your input. I found that I can share the volume via zfs..
 sorry I forgot to mention that this was a zfs pool.
 
 I found that I was able to remove the entry from dfstab and use this
 command to share the volume -
 
  zfs set sharenfs=rw tank/xen
 
 And when I check the result it looks ok -
 
 root@openindiana:~# zfs get sharenfs tank/xen
 NAME      PROPERTY  VALUE     SOURCE
 tank/xen  sharenfs  rw        local
 
 and now if I look at the nfs server from the client I can see the
 share, even tho it's no longer listed in dfstab -
 
 
 [root@LBSD2:~] #showmount -e nas
 Exports list on nas:
 /tank/xen  Everyone
 
 And then I try mounting the share from the client -
 
 [root@LBSD2:~] #mount nas:/tank/xen /mnt/xen
 
 [root@LBSD2:~] #df -h /mnt/xen
 Filesystem       Size    Used   Avail Capacity  Mounted on
 nas:/tank/xen    1.3T     46K    1.3T     0%    /mnt/xen
 
 But I am still getting the same result when I try to create a file -
 
 [root@LBSD2:~] #touch /mnt/xen/test
 touch: /mnt/xen/test: Permission denied
 
 Maybe I'm missing a flag on the zfs set command?
 
 Thanks
 Tim
 
Feel free to correct me, but I think you still need the root=thebsdbox
as an option for the zfs command. 

I.e. something like zfs set sharenfs='rw,root=thebsdbox' tank/xen
Otherwise I suspect that the root remapping thing comes into play.

// Richard

-- 
It's hard to be religious when certain people are never
incinerated by bolts of lightning.

- Calvin & Hobbes




Re: [OpenIndiana-discuss] nfs permission denied

2012-05-08 Thread Brian Wilson

On 05/ 8/12 12:19 PM, to...@ulkhyvlers.net wrote:

On Tue, May 08, 2012 at 01:07:23PM -0400, Tim Dunphy wrote:

Hi Richard,

Thanks for your input. I found that I can share the volume via zfs..
sorry I forgot to mention that this was a zfs pool.

I found that I was able to remove the entry from dfstab and use this
command to share the volume -

  zfs set sharenfs=rw tank/xen

And when I check the result it looks ok -

root@openindiana:~# zfs get sharenfs tank/xen
NAME      PROPERTY  VALUE     SOURCE
tank/xen  sharenfs  rw        local

and now if I look at the nfs server from the client I can see the
share, even tho it's no longer listed in dfstab -


[root@LBSD2:~] #showmount -e nas
Exports list on nas:
/tank/xen  Everyone

And then I try mounting the share from the client -

[root@LBSD2:~] #mount nas:/tank/xen /mnt/xen

[root@LBSD2:~] #df -h /mnt/xen
Filesystem       Size    Used   Avail Capacity  Mounted on
nas:/tank/xen    1.3T     46K    1.3T     0%    /mnt/xen

But I am still getting the same result when I try to create a file -

[root@LBSD2:~] #touch /mnt/xen/test
touch: /mnt/xen/test: Permission denied

Maybe I'm missing a flag on the zfs set command?

Thanks
Tim


Feel free to correct me, but I think you still need the root=thebsdbox
as an option for the zfs command.

Ie. something like zfs set sharenfs='rw,root=thebsdbox' tank/xen
Otherwise I suspect that the root remapping thing comes into play.

// Richard



Yes, if trying to touch the file as root - which the command prompt 
indicates is the case - then you need to allow root access to the mount 
point via the option Richard specified.


--
---
Brian Wilson, Solaris SE, UW-Madison DoIT
Room 3114 CSS608-263-8047
brian.wilson(a)doit.wisc.edu
'I try to save a life a day. Usually it's my own.' - John Crichton
---




Re: [OpenIndiana-discuss] nfs permission denied

2012-05-08 Thread Tim Dunphy
ok, thanks for the tips .. I'll do a little more reading on NFS so I
can increase my understanding.

but in the meantime, this seemed to do the trick!

zfs set sharenfs='rw,root=thebsdbox' tank/xen

[root@LBSD2:~] #touch /mnt/xen/test
[root@LBSD2:~] #touch /mnt/xen/test2
[root@LBSD2:~] #touch /mnt/xen/test3
[root@LBSD2:~] #rm /mnt/xen/test
[root@LBSD2:~] #rm /mnt/xen/test2
[root@LBSD2:~] #rm /mnt/xen/test3

best,
tim

On Tue, May 8, 2012 at 1:37 PM, Brian Wilson bfwil...@wisc.edu wrote:
 On 05/ 8/12 12:19 PM, to...@ulkhyvlers.net wrote:

 On Tue, May 08, 2012 at 01:07:23PM -0400, Tim Dunphy wrote:

 Hi Richard,

 Thanks for your input. I found that I can share the volume via zfs..
 sorry I forgot to mention that this was a zfs pool.

 I found that I was able to remove the entry from dfstab and use this
 command to share the volume -

  zfs set sharenfs=rw tank/xen

 And when I check the result it looks ok -

 root@openindiana:~# zfs get sharenfs tank/xen
 NAME      PROPERTY  VALUE     SOURCE
 tank/xen  sharenfs  rw        local

 and now if I look at the nfs server from the client I can see the
 share, even tho it's no longer listed in dfstab -


 [root@LBSD2:~] #showmount -e nas
 Exports list on nas:
 /tank/xen                          Everyone

 And then I try mounting the share from the client -

 [root@LBSD2:~] #mount nas:/tank/xen /mnt/xen

 [root@LBSD2:~] #df -h /mnt/xen
 Filesystem       Size    Used   Avail Capacity  Mounted on
 nas:/tank/xen    1.3T     46K    1.3T     0%    /mnt/xen

 But I am still getting the same result when I try to create a file -

 [root@LBSD2:~] #touch /mnt/xen/test
 touch: /mnt/xen/test: Permission denied

 Maybe I'm missing a flag on the zfs set command?

 Thanks
 Tim


 Feel free to correct me, but I think you still need the root=thebsdbox
 as an option for the zfs command.

 Ie. something like zfs set sharenfs='rw,root=thebsdbox' tank/xen
 Otherwise I suspect that the root remapping thing comes into play.

 // Richard


 Yes, if trying to touch the file as root - which the command prompt
 indicates is the case - then you need to allow root access to the mount
 point via the option Richard specified.

 --
 ---
 Brian Wilson, Solaris SE, UW-Madison DoIT
 Room 3114 CSS            608-263-8047
 brian.wilson(a)doit.wisc.edu
 'I try to save a life a day. Usually it's my own.' - John Crichton
 ---






-- 
GPG me!!

gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B



Re: [OpenIndiana-discuss] nfs permission denied

2012-05-08 Thread James Carlson
Tim Dunphy wrote:
 ok, thanks for the tips .. I'll do a little more reading on NFS so I
 can increase my understanding.
 
 but in the meantime, this seemed to do the trick!
 
 zfs set sharenfs='rw,root=thebsdbox' tank/xen
 
 [root@LBSD2:~] #touch /mnt/xen/test
 [root@LBSD2:~] #touch /mnt/xen/test2
 [root@LBSD2:~] #touch /mnt/xen/test3
 [root@LBSD2:~] #rm /mnt/xen/test
 [root@LBSD2:~] #rm /mnt/xen/test2
 [root@LBSD2:~] #rm /mnt/xen/test3

You'll definitely want to do some more reading about it.  Allowing
remote root access via NFS isn't necessarily a very safe thing to do,
particularly with the default "we trust the peer's notion of UID/GID"
AUTH_SYS mode.

A better idea is to just live with the notion that one doesn't write to
files over NFS when running as root.  Or, if you do it anyway, then make
sure the directory in which those files exist is world-writable so that
user nobody can write to them.  Opening up root access isn't too
different from making everything world-writable.

If "nobody" isn't to your taste, you can set up root_mapping=uid to
change it to some other value.
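
As a sketch (the uid is made up):

    zfs set sharenfs='rw,root_mapping=60001' tank/xen

which maps remote root to uid 60001 instead of nobody.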

See share_nfs(1M) and nfssec(5) for details.

-- 
James Carlson 42.703N 71.076W carls...@workingcode.com



[OpenIndiana-discuss] NFS mount of oi_148 filesystem by Linux fails

2012-05-08 Thread Martin Frost
I'm trying to export a ZFS filesystem on oi_148 via NFS, but the NFS
mount fails.  The same ZFS filesystem is shared via CIFS, and that's
working.  I hope CIFS sharing doesn't interfere with NFS exporting.

Here's the setup, where my nfs server is 'nfsserver', the filesystem
I'm trying to mount is '/nfsclient/filesys', and my nfs client is
'nfsclient' (which is running Linux):

  11;nfsserver/root# svcs -a | grep nfs | grep server
  online Feb_27   svc:/network/nfs/server:default

  12;nfsserver/root# zfs get sharenfs nfsclient/filesys
  NAME               PROPERTY  VALUE                              SOURCE
  nfsclient/filesys  sharenfs  sec=sys,rw=nfsclient.stanford.edu  local

  13;nfsserver/root# zfs get sharesmb nfsclient/filesys
  NAME               PROPERTY  VALUE         SOURCE
  nfsclient/filesys  sharesmb  name=filesys  local


  37;nfsclient/etc# showmount -e nfsserver
  Export list for nfsserver:
  /nfsclient/fs100  (everyone)
  /var/smb/cvol (everyone)
  /nfsclient/filesys  nfsclient.stanford.edu

And here's the mount failure:

  40;nfsclient/etc# mount  nfsserver:/nfsclient/filesys /etc/testmount
  mount.nfs: access denied by server while mounting nfsserver:/nfsclient/filesys

I don't know if NFS in OI references /etc/hosts.allow on the NFS
server, but just in case I gave the nfsclient ALL access in that file
(to no avail).

I can't find any log entries relating to this NFS mount failure,
even after I enabled info and debug logging in syslog.conf.
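
(Concretely, a sketch of what I enabled: a selector line in
/etc/syslog.conf such as

    daemon.debug        /var/adm/messages

with a tab between the fields, followed by

    svcadm refresh svc:/system/system-log:default

and still nothing shows up.)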

The two machines are on the same subnet.  I can ssh between them.

What might I be missing here?


I have another OI machine where NFS exporting to Linux machines is
working fine, and I even see a mountd error message in
/var/adm/messages on that machine.


(Side notes: This isn't a problem, but the filesystem /nfsclient/fs100
isn't exported or mountable, though it shows up in 'showmount -e'.
And I haven't exported /var/smb/cvol, but it is mountable -- maybe
that's a default related to using Samba; there's only one file in it:
/var/smb/cvol/windows/system32/eventlog.dll)


Thanks,
Martin


