again.
Jon
On Tue, 10 Dec 2019 at 22:55, Tim Mooney wrote:
> In regard to: Re: [OpenIndiana-discuss] NFS Automount from FreeNAS
> Server,...:
>
> > same issue, but with the latest versions of both OpenIndiana and FreeNAS
> ...
> > new hardware, on both counts:
> >
On Wed, Dec 11, 2019, 15:54 Tim Mooney wrote:
> In regard to: Re: [OpenIndiana-discuss] NFS Automount from FreeNAS
> Server,...:
>
> > As an OI newcomer, I'm somewhat confused. I thought mounting was done via
> > /etc/vfstab entries? Or is something else/more being attempted here?
In regard to: Re: [OpenIndiana-discuss] NFS Automount from FreeNAS Server,...:
As an OI newcomer, I'm somewhat confused. I thought mounting was done via
/etc/vfstab entries? Or is something else/more being attempted here?
There's a bit more going on here.
vfstab entries for traditional
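For newcomers, the difference looks roughly like this (a minimal sketch; the server name and paths are illustrative, not taken from a real config):

```shell
# Static mount: one line in /etc/vfstab, mounted at boot
#   ascamnfs01:/mnt/tank/IT  -  /it  nfs  -  yes  rw
# On-demand mount: the automounter reads maps listed in /etc/auto_master, e.g.
#   /home  auto_home  -nobrowse
# and mounts /home/<user> from the map only when it is first accessed.
```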
On Tue, Dec 10, 2019 at 4:55 PM Tim Mooney wrote:
> In regard to: Re: [OpenIndiana-discuss] NFS Automount from FreeNAS
> Server,...:
>
> > same issue, but with the latest versions of both OpenIndiana and FreeNAS
> ...
> > new hardware, on both counts:
> >
In regard to: Re: [OpenIndiana-discuss] NFS Automount from FreeNAS Server,...:
same issue, but with the latest versions of both OpenIndiana and FreeNAS ...
new hardware, on both counts:
Dec 10 16:03:19 ascamrouter automountd[978]: [ID 784820 daemon.error]
server ascamnfs01 not responding
I
Hi People,
same issue, but with the latest versions of both OpenIndiana and FreeNAS ...
new hardware, on both counts:
Dec 10 16:03:19 ascamrouter automountd[978]: [ID 784820 daemon.error]
server ascamnfs01 not responding
I can mount the filesystem manually, I can mount on Solaris 10, but I
Hi, thanks for keeping on trying ...
I was optimistic, till I discovered that our Infrastructure guy had put
comments on almost all the shares.
Just for giggles I've added the snoop output from failed (snoopy.out) and
working (snoopy2.out) in case that helps anyone.
Jon
On 20 February 2015 at
thanks for getting back to me :)
...
now it says no such file or directory instead of permission denied ...
setting the verbose logging in sharectl doesn't seem to produce much extra
output, nothing in syslog, and only a little extra in
/var/svc/log/system-filesystem-autofs:default.log (no
no problem :)
...
The "No such file or directory" error happens when automount can't find a
directory on the server and thus does not create and mount the directory.
My question is: where does the truss output of the successful automount
come from?
Is it from the same computer as the failed
On 20 February 2015 at 15:56, Till Wegmüller toaster...@gmail.com wrote:
no problem :)
...
The "No such file or directory" error happens when automount can't find a
directory on the server and thus does not create and mount the directory.
jadams@jadlaptop:~$ dfshares mansalnfs01
RESOURCE
Hmm, OK, I've run out of ideas.
It looks like a bug or a problematic setting in FreeNAS.
My automount works and can mount shares very reliably with /dev and /Hipster
(newest 2015)
Yours seems to work as well, at least with Solaris.
Just for fun I had a little look around Google to see if there are
Looking at that truss output, it seems that automount fails shortly after stat
on the home directory without even contacting NFS. Could it be that automount
has problems connecting to LDAP?
I would like to see what happens if we disable LDAP for automounts.
To do so, comment out all lines
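A sketch of what that might look like (assuming the automount maps come from LDAP via nsswitch; your entry may differ):

```shell
# In /etc/nsswitch.conf, change
#   automount: files ldap
# to
#   automount: files
# then restart the automounter:
svcadm restart svc:/system/filesystem/autofs:default
```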
via automount, it never gets mounted at all.
hard mounted:
root@jadlaptop:~# mount mansalnfs01:/mnt/datapool2/IT /mnt
root@jadlaptop:~# mount | grep mansalnfs01
/mnt on mansalnfs01:/mnt/datapool2/IT
remote/read/write/setuid/devices/xattr/dev=9080001 on Thu Feb 19 09:49:24
2015
not sure if it
In that case OI will use its defaults.
OI mounts by default as NFSv4, so if you didn't specify -o vers=3 the shares got
mounted as NFSv4.
Have a look at the output of the mount command in OI when the home is
automounted and when the home is mounted manually.
mount shows you the options the
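A hedged sketch of that comparison (hostname and share path are the ones quoted earlier in the thread; the option list will differ per system):

```shell
mount | grep mansalnfs01          # look for vers= among the options
umount /mnt                       # then retry the manual mount, forcing NFSv3
mount -F nfs -o vers=3 mansalnfs01:/mnt/datapool2/IT /mnt
```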
This sounds like the NFS settings in the LDAP differ from those you used to
mount the share manually.
NFS version 4's LDAP access and NFS version 3's root-squash option can prevent
root from having access to other users' homes.
FreeNAS has some security settings in place to forbid root to access
I have an OpenIndiana laptop, it's running a copy of OpenLDAP as a replica
of an OpenLDAP on our work server (syncs whenever it connects) ... and it
uses this OpenLDAP database to power its automount. (not sure if this is
of any relevance, but included for completeness)
I've been using this
no real options specified ...
jadams@jadlaptop:~$ grep -v '^#' /etc/auto_master
+auto_master
/net        -hosts          -nosuid,nobrowse
/home       auto_home       -nobrowse
jadams@jadlaptop:~$ more auto_master.ldif
version: 1
DN: nisMapName=auto_master,dc=domain,dc=com
objectClass: top
Hi,
not sure if it is relevant, but in Solaris user home dirs were on /export/home
while /home was reserved for automounter.
Not knowing FreeNAS (but some Linux systems), /home is location of user home
dirs.
Maybe the solution can be found in that direction?
Regards.
On 02/18/15 01:12
I am out of date and that's been fixed by now.
Greg
-- Original message --
From: Hugh McIntyre
Date: Tue, Sep 16, 2014 1:29 AM
To: openindiana-discuss@openindiana.org
Subject: Re: [OpenIndiana-discuss] nfs oi server - linux clients
You don't even need NIS or LDAP. Plain /etc/passwd works
Hi Harry,
It's possible you have somehow mounted the filesystem locally with
noexec (unlikely, but you can check with mount | grep /projects/dv and
make sure noexec is not in the options).
But at a guess, it's more likely you may have the wrong username mapping
since NFSv4 may need
I used NIS when I was doing this, while I was beta testing Solaris 9 and
had a Linux client to work with, and that managed to work pretty well,
given I didn't have any connectivity issues between the hosts.
I know that solution is kinda deprecated, but it's pretty complicated to
set up LDAP
You don't even need NIS or LDAP. Plain /etc/passwd works fine, either
by making sure the necessary user/uid mappings and passwd files are the
same on all systems (if using NFS v2/v3) or not even bothering with the
uid's matching if using NFSv4.
(Non-matching uid's is kind of the point of
worked Linux to
Linux but not Linux client to Solaris server.
Hopefully I am out of date and that's been fixed by now.
Greg
-- Original message --
From: Hugh McIntyre
Date: Tue, Sep 16, 2014 1:29 AM
To: openindiana-discuss@openindiana.org
Subject: Re: [OpenIndiana-discuss] nfs oi server
Hugh McIntyre li...@mcintyreweb.com writes:
Hi Harry,
It's possible you have somehow mounted the filesystem locally with
noexec (unlikely, but you can check with mount | grep /projects/dv
and make sure noexec is not in the options).
Host naming convention for clarity
solsrv is my oi server
Harry Putnam rea...@newsguy.com writes:
Hugh McIntyre li...@mcintyreweb.com writes:
Hi Harry,
It's possible you have somehow mounted the filesystem locally with
noexec (unlikely, but you can check with mount | grep /projects/dv
and make sure noexec is not in the options).
Host naming
I need a current, modern walk-through of what it takes to set up NFS
serving, so that the clients' users are able to access rw and run
scripts or binaries.
First, a quick description of the situation:
First and foremost, I am a terrible greenhorn. I've rarely used nfs.
And never with Solaris as
Hi,
does anyone know if commit aea676fd5203d93bed37515d4f405d6a31c21509
Feb 1, 2013,
https://github.com/illumos/illumos-gate/commit/aea676fd5203d93bed37515d4f405d6a31c21509
is in /dev oi_151a9 ?
How can one find out which fixes have been included in a oi /dev
version ?
--
Dr.Udo Grabowski
On 06/04/2014 16:27, Udo Grabowski (IMK) wrote:
Hi,
does anyone know if commit aea676fd5203d93bed37515d4f405d6a31c21509
Feb 1, 2013,
https://github.com/illumos/illumos-gate/commit/aea676fd5203d93bed37515d4f405d6a31c21509
is in /dev oi_151a9 ?
How can one find out which fixes have been
On 04/06/2014 14:34, Alexander Pyhalov wrote:
On 06/04/2014 16:27, Udo Grabowski (IMK) wrote:
Hi,
does anyone know if commit aea676fd5203d93bed37515d4f405d6a31c21509
Feb 1, 2013,
https://github.com/illumos/illumos-gate/commit/aea676fd5203d93bed37515d4f405d6a31c21509
is in /dev oi_151a9 ?
Hello everybody.
We would like to change several NFS server parameters in a production
server using sharectl tool:
sharectl set -p servers=1024 nfs
Is it necessary to restart the NFS server using the svcadm tool to refresh
these values, or are they loaded in realtime?
Best regards and thank you
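For reference, a sketch of the commands involved (whether a given property takes effect immediately or needs a restart should be checked against sharectl(1M) for your release):

```shell
sharectl get nfs                      # show the current NFS properties
sharectl set -p servers=1024 nfs      # raise the server thread limit
svcadm restart svc:/network/nfs/server:default   # only if a restart is required
```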
Thank you very much for your help, Marcel.
Do you think that sharectl set -p servers=1024 nfs execution will
disconnect already connected NFS clients or will produce any NFS service
disruption?
On 15/02/2014 17:53, Marcel Telka wrote:
On Sat, Feb 15, 2014 at 03:48:53PM +0100, Alberto Picón
However, I have read the following information from OpenSolaris
http://books.google.es/books?id=y8qaxiZNvqAC&pg=PT341&lpg=PT341&ots=7y3KM-T-Hp&focus=viewport&dq=sharectl+servers+nfs&hl=es&output=html_text
Where it states that for NFS version selection:
sharectl set -p version_max=3 nfs
it is required
On Sat, Feb 15, 2014 at 06:55:21PM +0100, Alberto Picón Couselo wrote:
Do you think that sharectl set -p servers=1024 nfs execution will
disconnect already connected NFS clients or will produce any NFS service
disruption?
Yes, the clients will be disconnected (the TCP connections will be
On Sat, Feb 15, 2014 at 07:04:55PM +0100, Alberto Picón Couselo wrote:
However, I have read the following information from OpenSolaris
http://books.google.es/books?id=y8qaxiZNvqAC&pg=PT341&lpg=PT341&ots=7y3KM-T-Hp&focus=viewport&dq=sharectl+servers+nfs&hl=es&output=html_text
Where it states that
From: Edward Ned Harvey (openindiana)
[mailto:openindi...@nedharvey.com]
Sent: Thursday, 30 January 2014 11:55 PM
To: Discussion list for OpenIndiana
Subject: Re: [OpenIndiana-discuss] NFS
From: Edward Ned Harvey (openindiana)
It *appears* that NFSv4 is fine in both 151a7 and 151a9.
It *appears* that NFSv3 is broken in 151a9
From: Ryan John [mailto:john.r...@bsse.ethz.ch]
Being reliant on NFS myself, I decided to test this. I just updated one test
machine from OI_a8 to OI_a9, and another from OI_a7 to OI_a9
Both machines give the same results, i.e. NFSv3 works okay.
My clients are RedHat EL6.
I wonder what
On 31/01/2014 3:25 PM, Edward Ned Harvey (openindiana) wrote:
From: Ryan John [mailto:john.r...@bsse.ethz.ch]
Being reliant on NFS myself, I decided to test this. I just updated one test
machine from OI_a8 to OI_a9, and another from OI_a7 to OI_a9
Both machines give the same results, IE: NFSv3
If a share was mounted on the client and you change the underlying NFS
version on the server then you will need to get the client to unmount all
shares from the server before they can see the version 3 shares ... is this
the case in your instance?
Are your shares auto-mounted? if so it depends on
From: Jonathan Adams [mailto:t12nsloo...@gmail.com]
Sent: Thursday, January 30, 2014 4:06 AM
If a share was mounted on the client and you change the underlying NFS
version on the server then you will need to get the client to unmount all
shares from the server before they can see the
On Thu, Jan 30, 2014 at 02:57:14PM +, Edward Ned Harvey (openindiana) wrote:
I wondered if maybe I have firewall enabled on the server. So I used nc
and telnet from the client to confirm the port is open. (111 and 2049).
No problem.
In addition to ports 111 and 2049 you need a port for
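Those extra ports are registered with rpcbind, so one way to find and check them from the client (server name hypothetical):

```shell
rpcinfo -p nfsserver | egrep 'mountd|nlockmgr|status'   # NFSv3 side ports
nc -vz nfsserver 111      # rpcbind
nc -vz nfsserver 2049     # nfsd
```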
to test if it is a permissions problem, can you just set sharenfs=on and
then try to access from the other machines?
On 30 January 2014 14:57, Edward Ned Harvey (openindiana)
openindi...@nedharvey.com wrote:
From: Jonathan Adams [mailto:t12nsloo...@gmail.com]
Sent: Thursday, January 30,
From: Jonathan Adams [mailto:t12nsloo...@gmail.com]
to test if it is a permissions problem, can you just set sharenfs=on and
then try to access from the other machines?
Thanks for the help everyone. I decided to take it a step further than that:
On both the 151a7 (homer) and 151a9 (marge)
From: Edward Ned Harvey (openindiana)
It *appears* that NFSv4 is fine in both 151a7 and 151a9.
It *appears* that NFSv3 is broken in 151a9, which was, unfortunately, necessary
to support the ESXi client and Ubuntu 10.04 client.
I ran into a similar issue on OmniOS 151008j recently.
When I ran 'rpcinfo -p {nfs_server}' it returned access denied.
Restarting the rpc service fixed it:
svcadm restart svc:/network/rpc/bind:default
I don't know what put the server in that state, but it's happened only once
on the heavily
At home, I have oi_151a7 and ESXi 5.1.
I wrote down precisely how to share NFS, and mount from the ESXi machine.
sudo zfs set sharenfs=rw=@192.168.5.5/32,root=@192.168.5.5/32
mypool/somefilesystem
I recall it was a pain to get the syntax correct, especially thanks to some
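A sketch of verifying the result afterwards (pool/filesystem name as in the message):

```shell
zfs get sharenfs mypool/somefilesystem   # confirm the property stuck
share                                    # list what the server actually exports
```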
On Thu, Jan 30, 2014 at 03:01:56AM +, Edward Ned Harvey (openindiana) wrote:
At home, I have oi_151a7 and ESXi 5.1.
I wrote down precisely how to share NFS, and mount from the ESXi machine.
sudo zfs set sharenfs=rw=@192.168.5.5/32,root=@192.168.5.5/32
mypool/somefilesystem
On 08/29/13 14:58, Francis Swasey wrote:
Hi,
I've been trying to set up an OI box (using 151a8) and I'm using nuttcp 6.1.2 to
make sure I'm getting the most I can out of the Emulex One Connect (10GbE)
cards that are in the box they gave me to use. Unfortunately, I'm seeing
asymmetrical
Hi,
I've been trying to set up an OI box (using 151a8) and I'm using nuttcp 6.1.2 to
make sure I'm getting the most I can out of the Emulex One Connect (10GbE)
cards that are in the box they gave me to use. Unfortunately, I'm seeing
asymmetrical numbers.
The clients are all RHEL6 boxes (and that
If we create local users in /etc/passwd and /etc/groups, can you please
tell us how to refresh the NFSv4 server to update the user mapping table in
OpenIndiana? How do you face this issue? If we restart the NFS service in
OpenIndiana, using /etc/init.d/nfs restart, will NFSv4 clients reconnect
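One hedged guess at the relevant step: on OpenIndiana the NFS services live under SMF rather than /etc/init.d, and NFSv4 user/group mapping is handled by the mapid service, so restarting just that service may be enough after editing /etc/passwd:

```shell
svcadm restart svc:/network/nfs/mapid:default
```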
The first thing I'd do is go into the BIOS, disable CPU C states, and
disable all power saving features. If that doesn't help, then try NFSv4.
The reason I disable CPU C states is because of previous experience with
OpenSolaris on Dell boxes about 2yr ago. It will crash the system in
similar
I can confirm that we have disabled all power saving features of the
boxes. However, I can't assure that CPU C states are totally disabled.
Anyway, we have changed to NFSv4 to test the system stability. The PHP
process reads a folder with a huge number of hashed files and folders
and
On 11 Apr 2013, at 0:29 , Peter Wood peterwood...@gmail.com wrote:
On Wed, Apr 10, 2013 at 7:35 AM, Paul van der Zwan
pa...@vanderzwan.orgwrote:
On 9 Apr 2013, at 3:13 , Peter Wood peterwood...@gmail.com wrote:
I've asked the ZFS discussion list for help on this but now I have more
On 10 Apr 2013, at 22:03 , Ian Collins i...@ianshome.com wrote:
Paul van der Zwan wrote:
When it hung the system would not respond to anything at all.
The only way out I could find was a hard reset or power cycle.
I do have the following in /etc/system:
set snooping=1
set
On 10 Apr 2013, at 22:03 , Ian Collins i...@ianshome.com wrote:
Paul van der Zwan wrote:
When it hung the system would not respond to anything at all.
The only way out I could find was a hard reset or power cycle.
I do have the following in /etc/system:
set snooping=1
set
On 9 Apr 2013, at 3:13 , Peter Wood peterwood...@gmail.com wrote:
I've asked the ZFS discussion list for help on this but now I have more
information and it looks like a bug in the drivers or something.
I have number of Dell PE R710 and PE 2950 servers running OpenSolaris, OI
151a and OI
On Wed, Apr 10, 2013 at 04:35:06PM +0200, Paul van der Zwan wrote:
On 9 Apr 2013, at 3:13 , Peter Wood peterwood...@gmail.com wrote:
I've asked the ZFS discussion list for help on this but now I have more
information and it looks like a bug in the drivers or something.
I have number
On 10 Apr 2013, at 16:46 , Marcel Telka mar...@telka.sk wrote:
On Wed, Apr 10, 2013 at 04:35:06PM +0200, Paul van der Zwan wrote:
On 9 Apr 2013, at 3:13 , Peter Wood peterwood...@gmail.com wrote:
I've asked the ZFS discussion list for help on this but now I have more
information and it
Paul van der Zwan wrote:
When it hung the system would not respond to anything at all.
The only way out I could find was a hard reset or power cycle.
I do have the following in /etc/system:
set snooping=1
set pcplusmp:apic_panic_on_nmi=1
But that did not make a difference.
BTW the hang was/is
On Wed, Apr 10, 2013 at 7:35 AM, Paul van der Zwan pa...@vanderzwan.orgwrote:
On 9 Apr 2013, at 3:13 , Peter Wood peterwood...@gmail.com wrote:
I've asked the ZFS discussion list for help on this but now I have more
information and it looks like a bug in the drivers or something.
I
There could be corruption in that dir. Can you run a scrub on the pool
zpool scrub pool
On Tue, Apr 9, 2013 at 6:43 AM, Peter Wood peterwood...@gmail.com wrote:
I've asked the ZFS discussion list for help on this but now I have more
information and it looks like a bug in the drivers or
I've asked the ZFS discussion list for help on this but now I have more
information and it looks like a bug in the drivers or something.
I have number of Dell PE R710 and PE 2950 servers running OpenSolaris, OI
151a and OI 151a.7. All these systems are used as storage servers, clean OS
install,
On Mon, 6 Aug 2012, Daniel Kjar wrote:
Really? What do you call that crap in etc under auto_master and auto_home?
Those are template sample files for you to edit.
Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,
I've run into a bizarre issue. An NFS export I have in /etc/vfstab fails
to automount on an OI client at boot, even though the filesystem is
clearly marked as mount at boot:
#device         device          mount           FS      fsck    mount   mount
#to mount       to fsck         point           type    pass    at boot options
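For comparison, a complete NFS entry in that vfstab format might look like this (server and paths hypothetical):

```shell
# device to mount        device to fsck  mount point  FS type  fsck pass  mount at boot  options
nfsserver:/export/data   -               /data        nfs      -          yes            rw
```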
For nfsv3
svc:/network/nfs/client:default
should be enabled.
All these should be enabled for nfsv4
online Jun_29 svc:/network/nfs/cbd:default
online Jun_29 svc:/network/nfs/status:default
online Jun_29 svc:/network/nfs/mapid:default
online Jun_29
Sašo Kiselkov wrote:
I've run into a bizarre issue. An NFS export I have in /etc/vfstab fails
to automount on an OI client at boot, even though the filesystem is
clearly marked as mount at boot:
#device         device          mount           FS      fsck    mount   mount
#to mount       to fsck         point           type    pass    at boot options
On 08/06/2012 02:03 PM, Udo Grabowski (IMK) wrote:
For nfsv3
svc:/network/nfs/client:default
should be enabled.
All these should be enabled for nfsv4
online Jun_29 svc:/network/nfs/cbd:default
online Jun_29 svc:/network/nfs/status:default
online Jun_29
On 08/06/2012 02:15 PM, James Carlson wrote:
Sašo Kiselkov wrote:
I've run into a bizarre issue. An NFS export I have in /etc/vfstab fails
to automount on an OI client at boot, even though the filesystem is
clearly marked as mount at boot:
#device device mount FS
Thanks for asking about this. I had just assumed the same. Machines I
updated all the way from opensolaris still mounted NFS at boot but my
clean 151a5s did not. Very annoying when Apache fails because it didn't
mount htdocs from the file server.
On 08/ 6/12 09:04 AM, Sašo Kiselkov wrote:
On
On Mon, Aug 6, 2012 at 3:04 PM, Sašo Kiselkov skiselkov...@gmail.com wrote:
On 08/06/2012 02:15 PM, James Carlson wrote:
Sašo Kiselkov wrote:
I've run into a bizarre issue. An NFS export I have in /etc/vfstab fails
to automount on an OI client at boot, even though the filesystem is
clearly
Daniel Kjar wrote:
Thanks for asking about this. I had just assumed the same. Machines I
updated all the way from opensolaris still mounted nfs at boot but my
clean 151a5s did not. Very annoying when apache fails cause it didn't
mount htdocs from the file server.
OK. It's possible that I'm
I would never use a remote etc. Lose the network and the box becomes
unusable. Not good. That automounter drives me batty. I hate the
whole export/home thing and remove the auto_home crap and reboot as soon
as I set up a new server.
On 08/ 6/12 09:21 AM, James Carlson wrote:
Daniel Kjar
On 06/08/2012 15:18, Michael Schuster wrote:
On Mon, Aug 6, 2012 at 3:04 PM, Sašo Kiselkovskiselkov...@gmail.com wrote:
On 08/06/2012 02:15 PM, James Carlson wrote:
Sašo Kiselkov wrote:
I've run into a bizarre issue. An NFS export I have in /etc/vfstab fails
to automount on an OI client at
On 08/06/2012 03:21 PM, James Carlson wrote:
Daniel Kjar wrote:
Thanks for asking about this. I had just assumed the same. Machines I
updated all the way from opensolaris still mounted nfs at boot but my
clean 151a5s did not. Very annoying when apache fails cause it didn't
mount htdocs from
Really? What do you call that crap in etc under auto_master and auto_home?
On 08/ 6/12 09:31 AM, James Carlson wrote:
Daniel Kjar wrote:
I would never use a remote etc. Lose the network and the box becomes
unusable. Not good. That automounter drives me batty. I hate the
whole export/home
On Mon, Aug 6, 2012 at 3:33 PM, Daniel Kjar dk...@elmira.edu wrote:
Really? What do you call that crap in etc under auto_master and auto_home?
ah ... every elephant is an animal, but not every animal is an elephant ;-)
(in other words: there's other applications of the automounter than
Daniel Kjar wrote:
Really? What do you call that crap in etc under auto_master and auto_home?
Read the man pages for the automounter. Start with automount(1M).
Yes, the system comes by default with that crap, but (a) you certainly
are under no obligation to use /export/home if you don't like
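The usual opt-out (a sketch): comment the /home entry out of /etc/auto_master and restart autofs, after which /home behaves as a plain directory again:

```shell
# In /etc/auto_master, comment out:
#   /home  auto_home  -nobrowse
svcadm restart svc:/system/filesystem/autofs:default
```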
I am sure it has great value, why would it be there otherwise? I just
remember installing a fresh version of Solaris one day and trying to
figure out why I couldn't just delete export and use /home like always.
Therefore, in my mind I associate it with being frustrated by an "I'm
sorry, Dave"
Hi,
I have been using Solaris only since 2.6 and /home has always been an
autofs mount point. What version of Solaris did not have /home as an
autofs mount point?
Or are you confusing OI with some other OS?
Mike
On Mon, 2012-08-06 at 09:51 -0400, Daniel Kjar wrote:
I am sure it has great
the first box I had was sol8 but it was inherited and apparently 'fixed'
if this has been around for longer than that. I did move to sol from
linux so I may just be confusing my first automount experience.
On 08/ 6/12 10:12 AM, Michael Stapleton wrote:
Hi,
I have been using Solaris only
On Aug 6, 2012, at 5:15 AM, James Carlson wrote:
It's never been possible to mount NFS at boot.
Well, some of us old farts remember nd, and later, NFS-based diskless
workstations :-)
The current lack of support for diskless leaves an empty feeling in my heart :-P
-- richard
On 08/06/2012 09:19 PM, Richard Elling wrote:
On Aug 6, 2012, at 5:15 AM, James Carlson wrote:
It's never been possible to mount NFS at boot.
Well, some of us old farts remember nd, and later, NFS-based diskless
workstations :-)
The current lack of support for diskless leaves an empty
Gabriele Bulfon wrote:
Nice discussion.
Even though I remember not being able to remove because of a bash waiting
there,
but it probably was a zfs destroy... and IMHO this is a more logical approach
Even there, you can still do it if you want. The issue isn't the zfs
destroy operation itself,
Richard L. Hamilton wrote:
A remote filesystem protocol by AT&T (and present only in very early Solaris,
as I recall), called RFS, went to great lengths to provide all the usual
semantics. You could even access remote device files (although presumably
both client and server had to support
On Tue, Jun 5, 2012 at 8:57 AM, Gabriele Bulfon gbul...@sonicle.com wrote:
Hi,
On NFS mounted file systems I often happen to find daemons of the client
complaining about the hidden .nfsxxx files appearing and disappearing.
These are often annoying.
Is there any way to let the server
Sorry for the top post...
These files shouldn't be accessed by daemons other than those daemons in the NFS system. If other
daemons are doing so, they're not respecting the NFS rules of the game.
The only thing to do with these files is to remove them after a system
crash or similar
Jose-Marcio Martins da Cruz wrote:
Sorry for the top post...
These files shouldn't be accessed by daemons other than those daemons in
the NFS system. If other daemons are doing so, they're not respecting
the NFS rules of the game.
Well ... sort of. What do you say when rm -rf somedir
James Carlson wrote:
Jose-Marcio Martins da Cruz wrote:
...
Well ... sort of. What do you say when rm -rf somedir fails because
some of the files within somedir, although owned by the invoker,
cannot be removed? Or when the GUI Trash icon stays messy after
emptying because there are files
2012-06-05 16:31, Jose-Marcio Martins da Cruz wrote:
Well ... sort of. What do you say when rm -rf somedir fails because
some of the files within somedir, although owned by the invoker,
cannot be removed?
That means that these files are still open and in use by some program,
or some active
--
From: James Carlson
To: Discussion list for OpenIndiana
Date: 5 June 2012 13:15:25 CEST
Subject: Re: [OpenIndiana-discuss] NFS hidden files
Gabriele Bulfon wrote:
Hi,
On NFS mounted file systems I often happen to find daemons of the client
complaining
Gabriele Bulfon wrote:
I understand your point.
But, my question is...shouldn't a network filesystem try to completely
emulate a local file system,
trying to hide as much as possible the fact of being a network share?
Sure, although try is certainly the operative word here.
In this case,
2012-05-09 5:13, Martin Frost wrote:
I'm trying to export a ZFS filesystem on oi_148 via NFS, but the NFS
mount fails. The same ZFS filesystem is shared via CIFS, and that's
working. I hope CIFS sharing doesn't interfere with NFS exporting.
Does your nfsserver's dmesg (/var/adm/messages) log
Date: Wed, 09 May 2012 18:38:54 +0400
From: Jim Klimov jimkli...@cos.ru
2012-05-09 5:13, Martin Frost wrote:
I'm trying to export a ZFS filesystem on oi_148 via NFS, but the NFS
mount fails. The same ZFS filesystem is shared via CIFS, and that's
working. I hope CIFS sharing
Hello,
I'm trying to setup an NFS server under oi 151. So far so good, but
there is one hurdle I'd like to overcome regarding security.
The nfs service is running -
root@openindiana:~# svcs -a | grep nfs | grep server
online 22:51:58 svc:/network/nfs/server:default
And I have one
On Tue, May 08, 2012 at 11:43:29AM -0400, Tim Dunphy wrote:
Hello,
I'm trying to setup an NFS server under oi 151. So far so good, but
there is one hurdle I'd like to overcome regarding security.
The nfs service is running -
root@openindiana:~# svcs -a | grep nfs | grep server
Hi Richard,
Thanks for your input. I found that I can share the volume via zfs..
sorry I forgot to mention that this was a zfs pool.
I found that I was able to remove the entry from dfstab and use this
command to share the volume -
zfs set sharenfs=rw tank/xen
And when I check the result it
On Tue, May 08, 2012 at 01:07:23PM -0400, Tim Dunphy wrote:
Hi Richard,
Thanks for your input. I found that I can share the volume via zfs..
sorry I forgot to mention that this was a zfs pool.
I found that I was able to remove the entry from dfstab and use this
command to share the volume
On 05/ 8/12 12:19 PM, to...@ulkhyvlers.net wrote:
On Tue, May 08, 2012 at 01:07:23PM -0400, Tim Dunphy wrote:
Hi Richard,
Thanks for your input. I found that I can share the volume via zfs..
sorry I forgot to mention that this was a zfs pool.
I found that I was able to remove the entry from
ok, thanks for the tips .. I'll do a little more reading on NFS so I
can increase my understanding.
but in the meantime, this seemed to do the trick!
zfs set sharenfs='rw,root=thebsdbox' tank/xen
[root@LBSD2:~] #touch /mnt/xen/test
[root@LBSD2:~] #touch /mnt/xen/test2
[root@LBSD2:~] #touch
Tim Dunphy wrote:
ok, thanks for the tips .. I'll do a little more reading on NFS so I
can increase my understanding.
but in the meantime, this seemed to do the trick!
zfs set sharenfs='rw,root=thebsdbox' tank/xen
[root@LBSD2:~] #touch /mnt/xen/test
[root@LBSD2:~] #touch /mnt/xen/test2
I'm trying to export a ZFS filesystem on oi_148 via NFS, but the NFS
mount fails. The same ZFS filesystem is shared via CIFS, and that's
working. I hope CIFS sharing doesn't interfere with NFS exporting.
Here's the setup, where my nfs server is 'nfsserver', the filesystem
I'm trying to mount is