Re: recent Trixie upgrade removed nfs client

2024-05-01 Thread Brad Rogers
On Tue, 30 Apr 2024 15:48:09 -0400
Gary Dale  wrote:

Hello Gary,

>Yes but: both gdb and nfs-client installed fine. Moreover, the 
>nfs-client doesn't appear to be a dependency of any of the massive load 
>of files updated lately.  The gdb package however is but for some
>reason apt didn't want to install it.

This transition is ongoing; just because a package is uninstallable
today doesn't mean the same will be true tomorrow.  Sometimes,
dependencies transition in the wrong order.

Minor point: nfs-client doesn't appear to exist in Debian.  At least,
not according to my search of https://packages.debian.org.  The closest
packages I could find are nfs-common and nbd-client.

>Shouldn't autoremove only offer to remove packages that used to be a 
>dependency but aren't currently (i.e. their status has changed)?

There are lots of interdependent relationships (that I don't even
pretend to understand).  It's not as simple as 'X doesn't depend on Y,
so Y should not be removed'.  There are (nearly) always other things
going on at times like this.  For example, it wasn't until today that I
could get libllvmt64 to install and replace libllvm.  For several days,
attempting the replacement would have ended up with broken packages, so
the upgrade was not allowed.

Sometimes, only upgrading a subset of the packages offered can help;
apt isn't perfect at resolving all the issues.  Assuming the issues are
solvable in the first place.  This is not a criticism of apt, because
aptitude and synaptic can have difficulties, too.  Each tool has its
foibles.
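
For instance, something along these lines shows what a full upgrade
wants to do and then pulls in only a chosen subset (the package names
here are purely illustrative):

# dry run: list what would be installed, upgraded and removed
apt -s full-upgrade
# then upgrade only the packages you are happy with
apt install --only-upgrade libfoo1t64 libbar2t64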

On top of all that, I've found quite a few library packages don't
automatically migrate to their t64 counterparts.  Whether that's by
accident or design, IDK.  What I do know is that manually installing
the t64 version will force the removal of the old version.  There's
usually a 'complaint' about such an action (a warning about removing an
in-use library), but it proceeds without problems.

-- 
 Regards  _   "Valid sig separator is {dash}{dash}{space}"
 / )  "The blindingly obvious is never immediately apparent"
/ _)rad   "Is it only me that has a working delete key?"
Two sides to every story
Public Image - Public Image Ltd




Re: recent Trixie upgrade removed nfs client

2024-04-30 Thread Gary Dale

On 2024-04-30 10:58, Brad Rogers wrote:
> On Tue, 30 Apr 2024 09:51:01 -0400
> Gary Dale  wrote:
>
> Hello Gary,
>
>> Not looking for a solution. Just reporting a spate of oddities I've
>> encountered lately.
>
> As Erwan says, this is 'normal'.  Especially ATM due to the t64
> transition.
>
> As you've found out, paying attention to removals is a Good Idea(tm).
> Sometimes those removals cannot be avoided.  Of course, removal of
> 'library' to be replaced with 'libraryt64' is absolutely fine.
>
> If the upgrade wants to remove (say) half of the base packages of KDE,
> waiting a few days would be prudent.   :-D
>
> You may also notice quite a few packages being reported as "local or
> obsolete".  This is expected as certain packages have had to be removed
> from testing to enable a smoother flow of the transition.  Many will
> return in due course.  I do know of one exception, however: deborphan
> has been removed from testing and, as things stand, it looks like the
> removal might be permanent.  I fully understand why, but I shall mourn
> its passing, as I find it quite handy for weeding out cruft.

Yes but: both gdb and nfs-client installed fine. Moreover, nfs-client
doesn't appear to be a dependency of any of the massive load of files
updated lately.  The gdb package, however, is a dependency, but for some
reason apt didn't want to install it.


The point is that apt didn't handle the situation reasonably. If it
wanted a package that was installable, should it not have installed it?
And while nfs-client isn't a dependency of other installed packages, why
should autoremove remove it? Its status of not being a dependency
didn't change. There are lots of packages that aren't depended on by
other packages that I have installed (e.g. every end-user application).
Shouldn't autoremove only offer to remove packages that used to be a
dependency but aren't currently (i.e. their status has changed)?
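
Presumably the relevant knob here is apt's auto/manual flag: autoremove
only considers packages flagged as automatically installed, and that
flag can be inspected and changed, e.g.:

# list packages currently flagged as automatically installed
apt-mark showauto
# flag a package as manually installed so autoremove leaves it alone
apt-mark manual nfs-common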




Re: recent Trixie upgrade removed nfs client

2024-04-30 Thread songbird
Gary Dale wrote:

> I'm running Trixie on an AMD64 system.
>
> Yesterday after doing my usual morning full-upgrade, I rebooted because 
> there were a lot of Plasma-related updates. When I logged in, I found I 
> wasn't connected to my file server shares. I eventually traced this down 
> to a lack of nfs software on my workstation. Reinstalling nfs-client 
> fixed this.
>
> I guess I need to pay closer attention to what autoremove tells me it's 
> going to remove, but I'm confused as to why it would remove nfs-client & 
> related packages.
>
> This follows a couple of previous full-upgrades that were having 
> problems. The first, a few days ago, was stopped by gdb not being 
> available. However, it installed fine manually (apt install gdb). I 
> don't see why apt full-upgrade didn't do this automatically as a 
> dependency for whatever package needed it.
>
> The second was blocked by the lack of a lcl-qt5 or lcl-gtk5 library. I 
> can see this as legitimate because it looks like you don't need both so 
> the package manager lets you decide which you want.
>
> Not looking for a solution. Just reporting a spate of oddities I've 
> encountered lately.

  the on-going time_t transitions may be causing some packages
to be removed for a while as dependencies get adjusted.

  i've currently not been doing full upgrades because there are
many Mate packages that would be removed.


  songbird



Re: recent Trixie upgrade removed nfs client

2024-04-30 Thread Brad Rogers
On Tue, 30 Apr 2024 09:51:01 -0400
Gary Dale  wrote:

Hello Gary,

>Not looking for a solution. Just reporting a spate of oddities I've 
>encountered lately.

As Erwan says, this is 'normal'.  Especially ATM due to the t64
transition.

As you've found out, paying attention to removals is a Good Idea(tm).
Sometimes those removals cannot be avoided.  Of course, removal of 
'library' to be replaced with 'libraryt64' is absolutely fine. 

If the upgrade wants to remove (say) half of the base packages of KDE,
waiting a few days would be prudent.   :-D

You may also notice quite a few packages being reported as "local or
obsolete".  This is expected as certain packages have had to be removed
from testing to enable a smoother flow of the transition.  Many will
return in due course.  I do know of one exception, however: deborphan
has been removed from testing and, as things stand, it looks like the
removal might be permanent.  I fully understand why, but I shall mourn
its passing, as I find it quite handy for weeding out cruft.

-- 
 Regards  _   "Valid sig separator is {dash}{dash}{space}"
 / )  "The blindingly obvious is never immediately apparent"
/ _)rad   "Is it only me that has a working delete key?"
He looked the wrong way at a policeman
I Predict A Riot - Kaiser Chiefs




Re: recent Trixie upgrade removed nfs client

2024-04-30 Thread Erwan David
On Tue, Apr 30, 2024 at 03:51:01PM CEST, Gary Dale  
said:
> I'm running Trixie on an AMD64 system.
> 
> Yesterday after doing my usual morning full-upgrade, I rebooted because
> there were a lot of Plasma-related updates. When I logged in, I found I
> wasn't connected to my file server shares. I eventually traced this down to
> a lack of nfs software on my workstation. Reinstalling nfs-client fixed
> this.
> 
> I guess I need to pay closer attention to what autoremove tells me it's
> going to remove, but I'm confused as to why it would remove nfs-client &
> related packages.
> 
> This follows a couple of previous full-upgrades that were having problems.
> The first, a few days ago, was stopped by gdb not being available. However,
> it installed fine manually (apt install gdb). I don't see why apt
> full-upgrade didn't do this automatically as a dependency for whatever
> package needed it.
> 
> The second was blocked by the lack of a lcl-qt5 or lcl-gtk5 library. I can
> see this as legitimate because it looks like you don't need both so the
> package manager lets you decide which you want.
> 
> Not looking for a solution. Just reporting a spate of oddities I've
> encountered lately.
> 

Trixie is undergoing major transitions. You must be careful and check what each
upgrade wants to uninstall, but this is normal for a "testing" distribution.

In those cases I use the curses interface of aptitude to check which upgrades
would remove other packages that I want, and limit my upgrades to the ones that
do not break my system. Usually it is OK again some days later
(sometimes weeks for major transitions).
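
For example (illustrative commands only):

# review interactively what an upgrade would remove
aptitude
# or simulate the upgrade first from the command line
apt -s full-upgrade | grep -E '^(Inst|Remv)'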

-- 
Erwan



recent Trixie upgrade removed nfs client

2024-04-30 Thread Gary Dale

I'm running Trixie on an AMD64 system.

Yesterday after doing my usual morning full-upgrade, I rebooted because 
there were a lot of Plasma-related updates. When I logged in, I found I 
wasn't connected to my file server shares. I eventually traced this down 
to a lack of nfs software on my workstation. Reinstalling nfs-client 
fixed this.


I guess I need to pay closer attention to what autoremove tells me it's 
going to remove, but I'm confused as to why it would remove nfs-client & 
related packages.


This follows a couple of previous full-upgrades that were having 
problems. The first, a few days ago, was stopped by gdb not being 
available. However, it installed fine manually (apt install gdb). I 
don't see why apt full-upgrade didn't do this automatically as a 
dependency for whatever package needed it.


The second was blocked by the lack of a lcl-qt5 or lcl-gtk5 library. I 
can see this as legitimate because it looks like you don't need both so 
the package manager lets you decide which you want.


Not looking for a solution. Just reporting a spate of oddities I've 
encountered lately.




Uninterruptible sleep apache process while accessing nfs on debian 12 bookworm

2024-04-29 Thread El Mahdi Mouslih
Hi

We recently migrated to a new NFS server running on Debian 12 bookworm.

On the client, Apache processes started randomly switching to D state.

In the Apache fullstatus output, process 93661 had been running for 10786 sec:
=

4-1 93661 1598/ W 15.92 10786 0 2367404 0.0 71.45 142.44 172.20.1.47 http/1.1 
sisca.groupe-mfc.fr:80 POST 
/deverrouille-fiche-ajax.php?sTable=prospects=243239



ps aux ==> Process 93661 in uninterruptible sleep

root@hexaom-v2-vm-prod-front2:~# while true; do date; ps auxf | awk 
'{if($8=="D") print $0;}'; sleep 1; done
Fri 26 Apr 2024 12:37:59 PM CEST
www-data   93661  0.1  1.4 315100 120468 ?   D08:45   0:14  \_ 
/usr/sbin/apache2 -k start
www-data  119374  0.2  0.0  0 0 ?D11:33   0:10  \_ [apache2]
www-data  127425  0.1  0.8 214520 68308 ?D12:27   0:00  \_ 
/usr/sbin/apache2 -k start



process stack:  (can't attach using gdb, gcore, etc.)
===
root@hexaom-v2-vm-prod-front2:~# cat /proc/93661/stack
[<0>] wait_on_commit+0x71/0xb0 [nfs]
[<0>] __nfs_commit_inode+0x131/0x180 [nfs]
[<0>] nfs_wb_all+0xb4/0x100 [nfs]
[<0>] nfs4_file_flush+0x6f/0xa0 [nfsv4]
[<0>] filp_close+0x2f/0x70
[<0>] __x64_sys_close+0x1e/0x60
[<0>] do_syscall_64+0x30/0x80
[<0>] entry_SYSCALL_64_after_hwframe+0x62/0xc7
=



On the client (Debian 11)
=
 rpcdebug -m nfs -s all
Apr 26 11:30:15 hexaom-v2-vm-prod-front2 kernel: [51318.693854] 
decode_attr_fs_locations: fs_locations done, error = 0
Apr 26 11:30:15 hexaom-v2-vm-prod-front2 kernel: [51318.693871] 
nfs41_sequence_process: Error 0 free the slot
Apr 26 11:30:15 hexaom-v2-vm-prod-front2 kernel: [51318.694161] 
nfs41_sequence_process: Error 0 free the slot
Apr 26 11:30:15 hexaom-v2-vm-prod-front2 kernel: [51318.694301] 
nfs41_sequence_process: Error 0 free the slot
=

No errors on the NFS server even with all debugging enabled: rpcdebug -m nfsd -s all

Information :  on client and server
***



client :


root@hexaom-v2-vm-prod-front2:~# cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
NAME="Debian GNU/Linux"
VERSION_ID="11"
VERSION="11 (bullseye)"
VERSION_CODENAME=bullseye
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"


root@hexaom-v2-vm-prod-front2:~# uname -a
Linux hexaom-v2-vm-prod-front2 5.10.0-28-amd64 #1 SMP Debian 5.10.209-2 
(2024-01-31) x86_64 GNU/Linux


root@hexaom-v2-vm-prod-front2:~# dpkg -l | grep -i nfs
ii  liblockfile1:amd64    1.17-1+b1    amd64    NFS-safe locking library
ii  libnfsidmap2:amd64    0.25-6       amd64    NFS idmapping library
ii  nfs-common            1:1.3.4-6    amd64    NFS support files common to client and server


fstab:
192.20.2.30:/NFS/sessions_v2 /srv/sessions nfs 
defaults,rw,relatime,vers=4.1,hard,timeo=100,retrans=4,_netdev 0 0


=




Server:
=
root@SERVSESSION01:~# cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
NAME="Debian GNU/Linux"
VERSION_ID="12"
VERSION="12 (bookworm)"
VERSION_CODENAME=bookworm
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"


Linux SERVSESSION01 6.1.0-18-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.76-1 
(2024-02-01) x86_64 GNU/Linux

*
root@SERVSESSION01:~# dpkg -l | grep nfs
ii  libnfsidmap1:amd64    1:2.6.2-4    amd64    NFS idmapping library
ii  nfs-common            1:2.6.2-4    amd64    NFS support files common to client and server
ii  nfs-kernel-server     1:2.6.2-4    amd64    support for NFS kernel server
root@SERVSESSION01:~# dpkg -l | grep rpc
ii  libtirpc-common       1.3.3+ds-1   all      transport-independent RPC library - common files
ii  libtirpc3:amd64       1.3.3+ds-1   amd64    transport-independent RPC library
ii  rpcbind               1.2.6-6+b1   amd64    converts RPC program numbers into universal addresses
root@SERVSESSION01:~#
**


*
root@SERVSESSION01:~# cat /etc/default/nfs-common
# If you do not set values for the NEED_ options, they will be attempted
# autodetected; this should be sufficient for most people. Valid alternatives
# for the NEED_ options are "yes" and "no".

# Do you want to start the statd daemon? It is not needed for NFSv4.
NEED_STATD=

# Options for rpc.sta

Re: very poor nfs performance

2024-03-09 Thread Ralph Aichinger
On Sat, 2024-03-09 at 13:54 +0100, hw wrote:
> 
> NFS can be hard on network card drivers
> IPv6 may be faster than IPv4
> the network cable might suck
> the switch might suck or block stuff

As iperf and other network protocols were confirmed to be fast by the
OP, it is very unlikely that this is a straight network problem. Yes,
these effects do exist occasionally (weird interactions between the
higher-level protocols and the low-level stuff), but they are very rare.
A cable so specifically broken that it slows down NFS but not scp might
exist, but it is very unlikely.

/ralph




Re: very poor nfs performance

2024-03-09 Thread hw
On Thu, 2024-03-07 at 10:13 +0100, Stefan K wrote:
> Hello guys,
> 
> I hope someone can help me with my problem.
> Our NFS performance ist very bad, like ~20MB/s, mountoption looks like that:

Reading or writing, or both?

Try testing with files on a different volume.

> rw,relatime,sync,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,local_lock=none

try IPv6

> [...]
> Only one nfs client(debian 12) is connected via 10G,

try a good 1GB network card

> since we host also database files on the nfs share,

bad idea

> 'sync'-mountoption is important (more or less), but it should still
> be much faster than 20MB/s

I wouldn't dare to go without sync other than for making backups
maybe.  Without sync, you probably need to test with a larger file.

> so can somebody tell me whats wrong or what should I change to speed
> that up?

Guesswork:

NFS can be hard on network card drivers
IPv6 may be faster than IPv4
the network cable might suck
the switch might suck or block stuff
ZFS might suck in combination with NFS
network cards might happen to be in disadvantageous slots
network cards can get too hot
try Fedora instead of Debian (boot a live system on the server,
configure NFS and see what happens when serving files from BTRFS)
do you see any unusual system load while transferring files?
do you need to run more NFS processes (increase their limit)
are you running irqbalance?
are you running numad if you're on a numa machine?
what CPU governors are you using?
do the i/o schedulers interfere?





Re: very poor nfs performance

2024-03-08 Thread Dan Ritter
Mike Kupfer wrote: 
> Stefan K wrote:
> 
> > > Can you partition the files into 2 different shares?  Put the database
> > > files in one share and access them using "sync", and put the rest of the
> > > files in a different share, with no "sync"?
> > this could be a solution, but I want to understand why is it so slow and 
> > fix that
> 
> It's inherent in how sync works.  Over-the-wire calls are expensive.
> NFS implementations try to get acceptable performance by extensive
> caching, using asynchronous operations when possible, and by issuing a
> smaller number of large RPCs (rather than a larger number of small
> RPCs).  The sync option defeats all of those mechanisms.

It is also the case that databases absolutely need sync to work
properly, so running them over NFS is a bad idea. At most, a
sqlite DB can be OK -- because sqlite is single user.

-dsr-



Re: Aw: Re: Re: very poor nfs performance

2024-03-08 Thread Mike Kupfer
Stefan K wrote:

> > Can you partition the files into 2 different shares?  Put the database
> > files in one share and access them using "sync", and put the rest of the
> > files in a different share, with no "sync"?
> this could be a solution, but I want to understand why is it so slow and fix 
> that

It's inherent in how sync works.  Over-the-wire calls are expensive.
NFS implementations try to get acceptable performance by extensive
caching, using asynchronous operations when possible, and by issuing a
smaller number of large RPCs (rather than a larger number of small
RPCs).  The sync option defeats all of those mechanisms.
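
As an illustration only (the server name, export paths and mount points
below are made up), the client side of such a split could look like:

# /etc/fstab on the client: database share mounted sync, bulk data async
server:/export/db    /mnt/db    nfs  rw,vers=4.2,hard,proto=tcp,sync   0 0
server:/export/data  /mnt/data  nfs  rw,vers=4.2,hard,proto=tcp,async  0 0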

mike



Re: very poor nfs performance

2024-03-08 Thread debian-user
Stefan K  wrote:
> > Run the database on the machine that stores the files and perform
> > database access remotely over the net instead. ?  
> 
> yes, but this doesn't resolve the performance issue with nfs

But it removes your issue that forces you to use the sync option.



Aw: Re: very poor nfs performance

2024-03-08 Thread Stefan K
> Run the database on the machine that stores the files and perform
> database access remotely over the net instead. ?

yes, but this doesn't resolve the performance issue with nfs



Aw: Re: Re: very poor nfs performance

2024-03-08 Thread Stefan K
> Can you partition the files into 2 different shares?  Put the database
> files in one share and access them using "sync", and put the rest of the
> files in a different share, with no "sync"?
this could be a solution, but I want to understand why it is so slow and
fix that



Re: very poor nfs performance

2024-03-08 Thread debian-user
Stefan K  wrote:
> > You could try removing the "sync" option, just as an experiment, to
> > see how much it is contributing to the slowdown.  
> 
> If I don't use sync I got around 300MB/s  (tested with 600MB-file) ..
> that's ok (far from great), but since there are database files on the
> nfs it can cause file/database corruption, so we would like to use
> sync option

Run the database on the machine that stores the files and perform
database access remotely over the net instead. ?

> best regards
> Stefan



Aw: Re: very poor nfs performance

2024-03-08 Thread Stefan K
> You could try removing the "sync" option, just as an experiment, to see
> how much it is contributing to the slowdown.

If I don't use sync I get around 300 MB/s (tested with a 600 MB file)...
that's OK (far from great), but since there are database files on the NFS
share it can cause file/database corruption, so we would like to use the
sync option

best regards
Stefan



Aw: Re: very poor nfs performance

2024-03-08 Thread Stefan K
> You could test with noatime if you don't need access times.
> And perhaps with lazytime instead of relatime.
The mount options are:
type zfs (rw,xattr,noacl)
I get your point, but when you look at my fio output the performance is
quite good

> Could you provide us
> nfsstat -v
server:
nfsstat -v
Server packet stats:
packets     udp        tcp        tcpconn
509979521   0  510004972   2

Server rpc stats:
calls       badcalls   badfmt     badauth    badclnt
509971853   0  0  0  0

Server reply cache:
hits   misses nocache
0  0  509980028

Server io stats:
read   write
1587531840   3079615002

Server read ahead cache:
size   0-10%  10-20% 20-30% 30-40% 40-50% 50-60% 60-70% 70-80% 80-90% 90-100% notfound
0      0      0      0      0      0      0      0      0      0      0       0

Server file handle cache:
lookup anon   ncachedir  ncachenondir  stale
0  0  0  0  0

Server nfs v4:
null compound
2 0% 509976662 99%

Server nfs v4 operations:
op0-unused   op1-unused   op2-future   access       close
0 0%         0 0%         0 0%         5015903 0%   3091693 0%
commit       create       delegpurge   delegreturn  getattr
314634 0%    149836 0%    0 0%         1615740 0%   390748077 20%
getfh        link         lock         lockt        locku
2573550 0%   0 0%         17 0%        0 0%         15 0%
lookup       lookup_root  nverify      open         openattr
3931149 0%   0 0%         0 0%         3131045 0%   0 0%
open_conf    open_dgrd    putfh        putpubfh     putrootfh
0 0%         3 0%         510522216 26%  0 0%       4 0%
read         readdir      readlink     remove       rename
59976532 3%  421791 0%    0 0%         429965 0%    244564 0%
renew        restorefh    savefh       secinfo      setattr
0 0%         0 0%         542231 0%    0 0%         845324 0%
setcltid     setcltidconf verify       write        rellockowner
0 0%         0 0%         0 0%         404569758 21%  0 0%
bc_ctl       bind_conn    exchange_id  create_ses   destroy_ses
0 0%         0 0%         4 0%         2 0%         1 0%
free_stateid getdirdeleg  getdevinfo   getdevlist   layoutcommit
15 0%        0 0%         0 0%         0 0%         0 0%
layoutget    layoutreturn secinfononam sequence     set_ssv
0 0%         0 0%         2 0%         509980018 26%  0 0%
test_stateid want_deleg   destroy_clid reclaim_comp allocate
10 0%        0 0%         1 0%         2 0%         164 0%
copy         copy_notify  deallocate   ioadvise     layouterror
297667 0%    0 0%         0 0%         0 0%         0 0%
layoutstats  offloadcancel offloadstatus readplus   seek
0 0%         0 0%         0 0%         0 0%         0 0%
write_same
0 0%


client:
nfsstat -v
Client packet stats:
packets     udp        tcp        tcpconn
0           0          0          0

Client rpc stats:
calls       retrans    authrefrsh
37415730    0          37425651

Client nfs v4:
null         read         write        commit       open
1 0%         4107833 10%  30388717 81% 2516 0%      55493 0%
open_conf    open_noat    open_dgrd    close        setattr
0 0%         194252 0%    0 0%         247380 0%    75890 0%
fsinfo       renew        setclntid    confirm      lock
459 0%       0 0%         0 0%         0 0%         4 0%
lockt        locku        access       getattr      lookup
0 0%         2 0%         131533 0%    1497029 4%   318056 0%
lookup_root  remove       rename       link         symlink
1 0%         31656 0%     15877 0%     0 0%         0 0%
create       pathconf     statfs       readlink     readdir
7019 0%      458 0%       170522 0%    0 0%         30007 0%
server_caps  delegreturn  getacl       setacl       fs_locations
917 0%       118109 0%    0 0%         0 0%         0 0%
rel_lkowner  secinfo      fsid_present exchange_id  create_session
0 0%         0 0%         0 0%         2 0%         1 0%
destroy_session  sequence  get_lease_time  reclaim_comp  layoutget
0 0% 0 

Re: very poor nfs performance

2024-03-08 Thread Michel Verdier
On 2024-03-07, Stefan K wrote:

> I hope someone can help me with my problem.
> Our NFS performance ist very bad, like ~20MB/s, mountoption looks like that:
> rw,relatime,sync,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,local_lock=none

What are the mount options for your zfs filesystem?

You could test with noatime if you don't need access times,
and perhaps with lazytime instead of relatime.
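
For instance (a sketch only; the dataset name is guessed from the fio
path earlier in the thread and may not match your layout):

# show the current settings on the exported dataset
zfs get atime,relatime,sync zfs_storage/test
# switch off access-time updates for a test
zfs set atime=off zfs_storage/test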

Could you give us the output of
nfsstat -v



Re: very poor nfs performance

2024-03-07 Thread Mike Kupfer
Stefan K wrote:

> 'sync'-mountoption is important (more or less), but it should still be
> much faster than 20MB/s

I don't know if "sync" could be entirely responsible for such a
slowdown, but it's likely at least contributing, particularly if the
application is doing small I/Os at the system call level.

You could try removing the "sync" option, just as an experiment, to see
how much it is contributing to the slowdown.
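
Something like this would do as a quick experiment (the mount point is
only illustrative):

# drop the sync flag temporarily, re-run the test, then restore it
mount -o remount,async /mnt/nfsshare
# ... run the write test ...
mount -o remount,sync /mnt/nfsshare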

mike



Aw: Re: very poor nfs performance

2024-03-07 Thread Stefan K
Hi Ralph,

I just tested it with scp and I got 262 MB/s.
So it's not a network issue, just an NFS issue, somehow.

best regards
Stefan

> Sent: Thursday, 7 March 2024 at 11:22
> From: "Ralph Aichinger" 
> To: debian-user@lists.debian.org
> Subject: Re: very poor nfs performance
>
> On Thu, 2024-03-07 at 10:13 +0100, Stefan K wrote:
> > Hello guys,
> > 
> > I hope someone can help me with my problem.
> > Our NFS performance ist very bad, like ~20MB/s, mountoption looks
> > like that:
> 
> Are both sides agreeing on MTU (using Jumbo frames or not)?
> 
> Have you tested the network with iperf (or simiar), does this happen
> only with NFS or also with other network traffic?
> 
> /ralph
> 
>



Re: very poor nfs performance

2024-03-07 Thread Ralph Aichinger
On Thu, 2024-03-07 at 10:13 +0100, Stefan K wrote:
> Hello guys,
> 
> I hope someone can help me with my problem.
> Our NFS performance ist very bad, like ~20MB/s, mountoption looks
> like that:

Are both sides agreeing on MTU (using Jumbo frames or not)?

Have you tested the network with iperf (or similar)? Does this happen
only with NFS or also with other network traffic?
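
For example (interface name and addresses are placeholders):

# compare the MTU on both ends
ip link show dev eth0 | grep -o 'mtu [0-9]*'
# raw TCP throughput, independent of NFS
iperf3 -s                    # on the server
iperf3 -c 192.0.2.10 -t 30   # on the client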

/ralph



very poor nfs performance

2024-03-07 Thread Stefan K
Hello guys,

I hope someone can help me with my problem.
Our NFS performance is very bad, like ~20 MB/s; the mount options look like this:
rw,relatime,sync,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,local_lock=none
The NFS server (Debian 12) is a ZFS file server with a 40G network connection,
and the local read/write performance is good:
fio --rw=readwrite  --rwmixread=70 --name=test --size=50G --direct=1 
--bsrange=4k-1M --numjobs=30 --group_reporting 
--filename=/zfs_storage/test/asdfg --runtime=600 --time_based
   READ: bw=11.1GiB/s (11.9GB/s), 11.1GiB/s-11.1GiB/s (11.9GB/s-11.9GB/s), 
io=6665GiB (7156GB), run=64-64msec
  WRITE: bw=4875MiB/s (5112MB/s), 4875MiB/s-4875MiB/s (5112MB/s-5112MB/s), 
io=2856GiB (3067GB), run=64-64msec

Only one NFS client (Debian 12) is connected, via 10G. Since we also host
database files on the NFS share, the 'sync' mount option is important (more
or less), but it should still be much faster than 20 MB/s.

So can somebody tell me what's wrong, or what I should change to speed that up?

thanks in advance.
best regards
Stefan



Re: using NIS and NFS for a network of 32 workstations

2024-02-25 Thread BERTRAND Joël
I don't know why I can't get a copy of the mail into the reply, so I'll
copy it by hand.

> For the overall architecture, if I understand correctly it is:
> a file server running some *nix, holding both the PCs' boot images and
> their data;
> diskless workstations (why?);
> a network ;-)

Yes. It gives me a centralised backup and archiving solution. It also
lets me treat a workstation as disposable, holding no user data at all.
Replacing a workstation that has died is a matter of editing two files
on the server side for the network boot.

> I don't really understand the idea of diskless PCs; ever since Sun's
> tests with diskless stations bringing whole networks to their knees, I
> don't see the point

The network never collapses. Right behind the main server there is a
Cisco switch, with each machine on its own port. The bottleneck is not
there.

> I would tend to suggest what is widely used in compute clusters and in
> network deployments, namely:
>
> a PXE boot to load the initramfs
> distribution of the system tree via netboot
> the OS running in RAM with a RAM disk

No point. Some of the target machines are boxes for customers with
embedded electronics that barely reach 512 MB of memory. We are not
going to put an OS in RAM on those.

> As for the OS architecture on the PC, I would use a local disk and
> install all the system data on it. Once again, it is far safer and more
> efficient (see network latency). The user's home directory should then
> stay local, with an NFS mount of their network data in a subdirectory.
> That way the OS's use of the user directory is not slowed down by the
> NFS mount and the data stays accessible.

I DO NOT WANT LOCAL DISKS, FOR A WHOLE PILE OF REASONS. This way I have
a single place to watch for backups and archiving. And it avoids whining
of the sort "my disk crashed and I have no backup" or "I got smartd
alerts but forgot to tell you". It also avoids "I have no backup because
it runs at 23:05 and my machine was switched off".

> If I understand correctly, you are mixing on the same network two
> technologies: iSCSI, which uses block mode, and NFS over Ethernet,
> which uses the network in character mode.
> That is not very good.
> iSCSI would work better with jumbo frames (9000 bytes) to minimise the
> splitting of blocks, whereas the other traffic works better with
> 1500-byte frames, and it is not reasonable to mix the two on the same
> switch.
> iSCSI works as a SAN, i.e. directly on a filesystem across the network;
> NFS requires a high-level service provided by a server (NAS).

Both run here with 1500-byte frames. There are 9000-byte frames on
another subnet that accesses the NAS boxes. And those 1500 bytes are
enough to swap at full speed. In other words, it is not the limiting
factor, and there is even plenty of headroom. The limiting factor is the
server (to swap at 1 Gbps, istgt's CPU usage climbs to 40% of one core
in D state).

> The explanation is just above.

Well, no.

> True, it was not really necessary to move to a (pricey) Cisco switch,
> but it is true. One discriminating factor is to choose a manageable
> switch.

Not that either. The TPlink was perfectly manageable.

> Which brings us back to block-level mounting of a filesystem via a SAN.

Nothing to do with it. I cannot afford to create and remove root volumes
on the fly for the various clients. All the more so because some of the
OSes in use cannot use an iSCSI root.

JB





Re: using NIS and NFS for a network of 32 workstations

2024-02-25 Thread BERTRAND Joël
zithro wrote:
> On 24 Feb 2024 23:23, BERTRAND Joël wrote:
>> One big server running NetBSD, and all the workstations are diskless
>> and boot over the network. The disks are on NFS and the swaps on iSCSI.
> 
> Can you explain that choice (NFS vs iSCSI) please?

Yes, I can ;-)

> If I'm not talking nonsense, you could boot root (/) over iSCSI.

I could. But I have one big volume that holds the root filesystems of
all the machines. If I wanted to handle that over iSCSI, I would need a
distributed filesystem supported by all the clients. I would also need
OSes able to boot from an iSCSI volume.

To use iSCSI comfortably I would also need one volume per client,
exported as such. The /home directories are on another volume. The root
mount points (/srv/machine), on the other hand, are exportable only to
the right machine (locked down in /etc/exports, the IPs being handed out
by DHCP according to the client's MAC address).

> Note that I am as interested in your reasoning (your practical choice)
> as in the NFS vs iSCSI debate (the theory)!
> (There are no ready-made solutions; the FreeNAS forum is a good
> example.)
> 
>> The quality of the switch is crucial. Moving from a TPLink to a Cisco
>> changed my life.
> 
> Entirely agree with you.
> I'll take the opportunity for a rant: this is the problem with
> "consumer" gear.
> A "consumer" 1 Gb/s switch means you will get that throughput between
> TWO stations! (Meaning between 2 physical ports.)
> A "real" 10-port 1 Gb/s switch should sustain at least 5 Gb/s (without
> uplink): two stations at 1 Gbps, times 5.
> I discovered this problem the hard way; search for "switch backplane"
> on the net. Even some supposedly business (SOHO) switches are incapable
> of such throughput.
> (But YMMV, as the Americans say.)

I was nevertheless surprised to find that an old 24-port 3Com switch
wiped the floor with a TPlink that was nonetheless fairly expensive.
Just as I was surprised to find it was clever enough, although not
officially manageable, to handle layer-2 link aggregation.

>> The bottleneck is not the network but the filesystem on the exported
>> disks. I made the mistake of using FFSv2, on which it is not possible
>> to put a cache. When I have time, I will replace it with ZFS plus a
>> cache.
> 
> AFAIK, the problem with any network is latency, not throughput.
> (All things being relative.)
> So improving disk access does not necessarily improve "responsiveness".
> Can you enlighten me please?

NFSv3 has no cache and works over TCP (I tried UDP, it is not really any
better). It is possible to mount the disks async, but I advise against
it (on the NetBSD side it is better not to use async and journalling at
the same time). With the async option it is noticeably faster, but you
risk surprises in the event of a power cut.

When there are lots of small writes, the bottleneck is disk access,
above all for writes (there, CMR disks are better than SMR), because the
system cannot cache those small accesses efficiently. You can see that
the ACK packet takes a little longer to get back to the client. That is
enough to drag performance down.

On reads I easily reach 800 to 900 Mbps. On writes of a few large files
it goes up to 500 Mbps. With small files being written, performance is
laughable; an apt install texlive-full takes forever. Note that I only
get these rates with Intel network cards. Realtek ones are far worse
(still usable, though).

By comparison, iSCSI reaches 1 Gbps on the swap.

> PS: I used to work in VoIP, where I -finally- understood that latency
> and throughput have nothing to do with each other. Not to mention
> jitter (the variation in latency, in plain French).

Indeed, they have nothing to do with each other. But the big problem is
getting the TCP ACK packet back as fast as possible. And that requires a
server-side cache able to accept transactions as quickly as possible
while surviving power cuts. That is why I ought to test ZFS with a cache
on a sacrificial SSD.

Best regards,

JB





Re: using NIS and NFS for a network of 32 workstations

2024-02-25 Thread Pierre Malard
Hello,

My comments are inline.

For the overall architecture, if I understand correctly it is:
a file server running some *nix, holding both the PCs' boot images and
their data;
diskless workstations (why?);
a network ;-)

I don't really understand the idea of diskless PCs; ever since Sun's
tests with diskless stations bringing whole networks to their knees, I
don't see the point.
I would tend to suggest what is widely used in compute clusters and in
network deployments, namely:
a PXE boot to load the initramfs;
distribution of the system tree via netboot;
the OS running in RAM with a RAM disk.

https://www.it-connect.fr/installation-et-configuration-dun-serveur-pxe/ :
the Debian netboot flavour is used as the example there, but you can
load a full system. It does require enough RAM…

As for the OS architecture on the PC, I would use a local disk and
install all the system data on it. Once again, it is far safer and more
efficient (see network latency). The user's home directory should then
stay local, with an NFS mount of their network data in a subdirectory.
That way the OS's use of the user directory is not slowed down by the
NFS mount and the data stays accessible.

Even if you stick with the scheme of booting the workstations from an
iSCSI disk (boot on SAN) containing the OS, what is written above still
holds with regard to the NFS accesses.


> On 24 Feb 2024 at 23:23, BERTRAND Joël wrote:
> 
> Basile Starynkevitch wrote:
>> 
>> On 2/23/24 12:02, Erwann Le Bras wrote:
>>>
>>> Hello
>>>
>>> Maybe try some experiments with SSHFS? The users' $HOME would be
>>> mounted on each client at boot.
>>>
>>> But I don't know whether it is more efficient than NFS.
>>>
>> 
>> I would tend to imagine it is less efficient than NFS, which is slow
>> anyway (because Ethernet is much slower than, for example, a SATA link
>> to a local disk, even a spinning one).
>> 
>> NFS (back in the distant days when I used it) does not encrypt the
>> data. SSHFS seems to encrypt it.
>> 
>> In the old days (before 2000) I even used Sun4/110s whose swap was a
>> remote NFS partition.
>> 
>> Freely
>> 
> 
>   Good evening,
> 
>   I have a complete, heterogeneous network with NIS+NFS.

I don't quite understand this mix: NIS is a directory service, like
LDAP, while NFS is a file-sharing system.

> 
>   One big server running NetBSD, and all the workstations are diskless
> and boot over the network. The disks are on NFS and the swaps on iSCSI.

If I understand correctly, you are mixing on the same network two
technologies: iSCSI, which uses block mode, and NFS over Ethernet, which
uses the network in character mode.
That is not very good.
iSCSI would work better with jumbo frames (9000 bytes) to minimise the
splitting of blocks, whereas the other traffic works better with
1500-byte frames, and it is not reasonable to mix the two on the same
switch.
iSCSI works as a SAN, i.e. directly on a filesystem across the network;
NFS requires a high-level service provided by a server (NAS).

> It works perfectly well (it crawls when there are bursts of very small
> writes, because of the TCP network protocol, but the vast majority of
> the time it works fine).

The explanation is just above.

> 
>   The server is connected to a Cisco switch through two aggregated
> Ethernet links; the rest is classic 1 Gbps. The quality of the switch
> is crucial. Moving from a TPLink to a Cisco changed my life.

True, it was not really necessary to move to a (pricey) Cisco switch,
but it is true. One discriminating factor is to choose a manageable
switch.

> 
>   The bottleneck is not the network but the filesystem on the exported
> disks. I made the mistake of using FFSv2, on which it is not possible
> to put a cache. When I have time, I will replace it with ZFS plus a
> cache.

Which brings us back to block-level mounting of a filesystem via a SAN.

> 
>   NFS encrypts the data from version 4 onwards (but that is not yet
> interoperable with NetBSD, so I have not tested it).

Indeed.

> 
>   Best regards,
> 
>   JB

--
Pierre Malard
Responsable architectures système CDS DINAMIS/THEIA Montpellier
IRD - UMR Espace-Dev - UAR CPST - IR Data-Terra
Maison de la Télédétection
500 rue Jean-François Breton
34093 Montpellier Cx 5
France

Tél : +33 626 89 22 68

   « Je n'ai jamais séparé la République des idées de justice sociale,
 sans laqu

Re: using NIS and NFS for a network of 32 workstations

2024-02-24 Thread zithro

On 24 Feb 2024 23:23, BERTRAND Joël wrote:

> One big server running NetBSD, and all the workstations are diskless
> and boot over the network. The disks are on NFS and the swaps on iSCSI.


Can you explain that choice (NFS vs iSCSI) please?
If I'm not talking nonsense, you could boot root (/) over iSCSI.
Note that I am as interested in your reasoning (your practical choice)
as in the NFS vs iSCSI debate (the theory)!
(There are no ready-made solutions; the FreeNAS forum is a good
example.)



> The quality of the switch is crucial. Moving from a TPLink to a Cisco
> changed my life.


Entirely agree with you.
I'll take the opportunity for a rant: this is the problem with
"consumer" gear.
A "consumer" 1 Gb/s switch means you will get that throughput between
TWO stations! (Meaning between 2 physical ports.)
A "real" 10-port 1 Gb/s switch should sustain at least 5 Gb/s (without
uplink): two stations at 1 Gbps, times 5.
I discovered this problem the hard way; search for "switch backplane"
on the net. Even some supposedly business (SOHO) switches are incapable
of such throughput.

(But YMMV, as the Americans say.)


> The bottleneck is not the network but the filesystem on the exported
> disks. I made the mistake of using FFSv2, on which it is not possible
> to put a cache. When I have time, I will replace it with ZFS plus a
> cache.


AFAIK, the problem with any network is latency, not throughput.
(All things being relative.)
So improving disk access does not necessarily improve "responsiveness".

Can you enlighten me please?

PS: I used to work in VoIP, where I -finally- understood that latency
and throughput have nothing to do with each other. Not to mention
jitter (the variation in latency, in plain French).


--
++
zithro / Cyril



Re: using NIS and NFS for a network of 32 workstations

2024-02-24 Thread BERTRAND Joël
Basile Starynkevitch wrote:
> 
> On 2/23/24 12:02, Erwann Le Bras wrote:
>>
>> Hello
>>
>> Maybe try some experiments with SSHFS? The users' $HOME would be
>> mounted on each client at boot.
>>
>> But I don't know whether it is more efficient than NFS.
>>
> 
> I would tend to imagine it is less efficient than NFS, which is slow
> anyway (because Ethernet is much slower than, for example, a SATA link
> to a local disk, even a spinning one).
> 
> NFS (back in the distant days when I used it) does not encrypt the
> data. SSHFS seems to encrypt it.
> 
> In the old days (before 2000) I even used Sun4/110s whose swap was a
> remote NFS partition.
> 
> Freely
> 

Good evening,

I have a complete, heterogeneous network with NIS+NFS.

One big server running NetBSD, and all the workstations are diskless and
boot over the network. The disks are on NFS and the swaps on iSCSI.
It works perfectly well (it crawls when there are bursts of very small
writes, because of the TCP network protocol, but the vast majority of
the time it works fine).

The server is connected to a Cisco switch through two aggregated
Ethernet links; the rest is classic 1 Gbps. The quality of the switch is
crucial. Moving from a TPLink to a Cisco changed my life.

The bottleneck is not the network but the filesystem on the exported
disks. I made the mistake of using FFSv2, on which it is not possible to
put a cache. When I have time, I will replace it with ZFS plus a cache.

NFS encrypts the data from version 4 onwards (but that is not yet
interoperable with NetBSD, so I have not tested it).

Best regards,

JB





Re: using NIS and NFS for a network of 32 workstations

2024-02-24 Thread Michel Verdier
On 23 February 2024 Erwann Le Bras wrote:

> Maybe try some experiments with SSHFS? The users' $HOME would be
> mounted on each client at boot.
>
> But I don't know whether it is more efficient than NFS.

I have used sshfs quite a lot and it stays fairly fast, even over the
internet for roaming machines. The encryption and compression do of
course need a bit more CPU, but the compression speeds the whole thing
up nicely. Depending on the hardware and the local bandwidth you can
disable the compression.
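
For example (host, user and paths below are only placeholders; sshfs
passes unrecognised -o options straight through to ssh, so this maps to
the ssh Compression setting):

# mount a remote home over sshfs, with ssh-level compression enabled
sshfs -o Compression=yes,reconnect user@server:/home/user /mnt/remote-home
# on a fast local link, compression can be turned off instead
sshfs -o Compression=no user@server:/home/user /mnt/remote-home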



Re: using NIS and NFS for a network of 32 workstations

2024-02-23 Thread Basile Starynkevitch



On 2/23/24 12:02, Erwann Le Bras wrote:

> Hello
>
> Maybe try some experiments with SSHFS? The users' $HOME would be
> mounted on each client at boot.
>
> But I don't know whether it is more efficient than NFS.


I would tend to imagine it is less efficient than NFS, which is slow
anyway (because Ethernet is much slower than, for example, a SATA link
to a local disk, even a spinning one).


NFS (back in the distant days when I used it) does not encrypt the
data. SSHFS seems to encrypt it.


In the old days (before 2000) I even used Sun4/110s whose swap was a
remote NFS partition.


Freely

--
Basile Starynkevitch 
(only mine opinions / les opinions sont miennes uniquement)
92340 Bourg-la-Reine, France
web page: starynkevitch.net/Basile/
See/voir:   https://github.com/RefPerSys/RefPerSys



Re: using NIS and NFS for a network of 32 workstations

2024-02-23 Thread Erwann Le Bras

Hello

Maybe try some experiments with SSHFS? The users' $HOME would be
mounted on each client at boot.


But I don't know whether it is more efficient than NFS.

On 20/02/2024 at 12:26, Pierre Malard wrote:

> Whoa! NIS, that doesn't make me feel any younger ;-)
> And why not LDAP for authentication?
>
> As for mounting the shared directory directly as the home dir, that is
> not wise, because it will crawl badly as soon as someone does anything
> I/O-hungry.
> If I remember what we did here, it was to use PAM and autofs to manage
> access on the workstations, with:
>
>   * Creation of the home dir on the fly, if needed, from a skeleton
>     that contains a mount point for NFS
>   * Mounting of the user's shared directory in $HOME/NFS
>
> That way the user is not slowed down in their compilations, for
> example, and can still reach their data in the ~/NFS directory.
>
>
>> On 20 Feb 2024 at 08:34, olivier wrote:
>>
>> Hello,
>>
>> I have a network entirely on Debian 11 (which I plan to upgrade to
>> version 12), made up of a server with two network cards, one connected
>> to the outside over fibre (DHCP) and the other card (fixed IP address
>> 192.168.200.0) connected to a switch. This switch is connected to 32
>> workstations (with fixed IPs from 192.168.200.10 to 192.168.200.50,
>> gateway address 192.168.200.0, netmask 255.255.255.0).
>>
>> The 32 workstations are used by a class of pupils. I have 200 pupils
>> to manage, hence 200 different profiles.
>>
>> So that each workstation can reach the internet, I did
>>
>> iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
>>
>> Is this sensible?
>>
>> I tried NIS with Debian 11, and authentication seems to work fine. For
>> authentication, is NIS well suited to this kind of setup?
>>
>> On the other hand the NFS export is a bit sluggish (I realise I am
>> exporting the whole of the server's home to all the clients, and not
>> just the user's own). How can I export to the client machine only the
>> user's profile and not all the users?
>>
>> On the server I put this at the end of /etc/exports:
>>
>> /home/NFS_Partage 192.168.200.1/24(rw,sync,no_subtree_check)
>> but I am hesitating between that and
>>
>> /home/NFS_Partage
>> 192.168.0.0/24(rw,all_squash,anonuid=1000,anongid=1000,sync,no_subtree_check)
>>
>> On the client I put this at the end of fstab:
>>
>> DomaineNFS:/home/NFS_Partage /home nfs defaults    0 0
>>
>> I have been told about LDAP, but I am not quite sure how to go about
>> it. Is it preferable to use LDAP or NIS for authentication? Is there a
>> simple little manual for creating 200 users?
>>
>> My network works, but crawls badly beyond 4 users, and I cannot find a
>> solution.
>>
>> Many thanks for your help.
>>
>> Olivier





--
Pierre Malard

  « /La façon de donner vaut mieux que ce que l'on donne /»
                   Pierre Corneille (1606-1684) - Le menteur
|\      _,,,---,,_
/,`.-'`'    -.  ;-;;,_
|,4-  ) )-,_. ,\ (  `'-'
 '---''(_/--' `-'\_)   πr

perl -e '$_=q#: 3|\ 5_,3-3,2_: 3/,`.'"'"'`'"'"' 5-.  ;-;;,_:  |,A-  ) 
)-,_. ,\ (  `'"'"'-'"'"': '"'"'-3'"'"'2(_/--'"'"'  `-'"'"'\_): 
24πr::#;y#:#\n#;s#(\D)(\d+)#$1x$2#ge;print'

- --> Ce message n’engage que son auteur <--


Re: using NIS and NFS for a network of 32 workstations

2024-02-20 Thread Pierre Malard
Whoa! NIS, that doesn't make me feel any younger ;-)
And why not LDAP for authentication?

As for mounting the shared directory directly as the home dir, that is
not wise, because it will crawl badly as soon as someone does anything
I/O-hungry.
If I remember what we did here, it was to use PAM and autofs to manage
access on the workstations (see the sketch below), with:
creation of the home dir on the fly, if needed, from a skeleton that
contains a mount point for NFS;
mounting of the user's shared directory in $HOME/NFS.

That way the user is not slowed down in their compilations, for example,
and can still reach their data in the ~/NFS directory.
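
A rough sketch of what that can look like (paths, server name and
options are only examples; pam_mkhomedir creates the local home from the
skeleton):

# /etc/pam.d/common-session: create the local home dir at first login
session optional pam_mkhomedir.so skel=/etc/skel umask=0077

# /etc/auto.master: hand the per-user NFS mount points to autofs
/home/NFS  /etc/auto.nfs  --timeout=60

# /etc/auto.nfs: one on-demand NFS mount per user ('&' is the username key)
*  -rw,hard,vers=4  server:/export/home/&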


> On 20 Feb 2024 at 08:34, olivier wrote:
> 
> Hello,
> 
> I have a network entirely on Debian 11 (which I plan to upgrade to
> version 12), made up of a server with two network cards, one connected
> to the outside over fibre (DHCP) and the other card (fixed IP address
> 192.168.200.0) connected to a switch. This switch is connected to 32
> workstations (with fixed IPs from 192.168.200.10 to 192.168.200.50,
> gateway address 192.168.200.0, netmask 255.255.255.0).
> 
> The 32 workstations are used by a class of pupils. I have 200 pupils to
> manage, hence 200 different profiles.
> 
> So that each workstation can reach the internet, I did
> 
> iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
> 
> Is this sensible?
> 
> I tried NIS with Debian 11, and authentication seems to work fine. For
> authentication, is NIS well suited to this kind of setup?
> 
> On the other hand the NFS export is a bit sluggish (I realise I am
> exporting the whole of the server's home to all the clients, and not
> just the user's own). How can I export to the client machine only the
> user's profile and not all the users?
> 
> On the server I put this at the end of /etc/exports:
> 
> /home/NFS_Partage 192.168.200.1/24(rw,sync,no_subtree_check)
> but I am hesitating between that and
> 
> /home/NFS_Partage 
> 192.168.0.0/24(rw,all_squash,anonuid=1000,anongid=1000,sync,no_subtree_check)
> 
> On the client I put this at the end of fstab:
> 
> DomaineNFS:/home/NFS_Partage /home nfs defaults    0 0
> 
> I have been told about LDAP, but I am not quite sure how to go about
> it. Is it preferable to use LDAP or NIS for authentication? Is there a
> simple little manual for creating 200 users?
> 
> My network works, but crawls badly beyond 4 users, and I cannot find a
> solution.
> 
> Many thanks for your help.
> 
> Olivier

-- 
Pierre Malard

  « La façon de donner vaut mieux que ce que l'on donne »
   Pierre Corneille (1606-1684) - Le menteur
   |\  _,,,---,,_
   /,`.-'`'-.  ;-;;,_
  |,4-  ) )-,_. ,\ (  `'-'
 '---''(_/--'  `-'\_)   πr

perl -e '$_=q#: 3|\ 5_,3-3,2_: 3/,`.'"'"'`'"'"' 5-.  ;-;;,_:  |,A-  ) )-,_. ,\ 
(  `'"'"'-'"'"': '"'"'-3'"'"'2(_/--'"'"'  `-'"'"'\_): 
24πr::#;y#:#\n#;s#(\D)(\d+)#$1x$2#ge;print'
- --> Ce message n’engage que son auteur <--





Re: using NIS and NFS for a network of 32 workstations

2024-02-20 Thread Basile Starynkevitch


On 2/20/24 08:34, olivier wrote:

> Hello,
>
> I have a network entirely on Debian 11 (which I plan to upgrade to
> version 12), made up of a server with two network cards, one connected
> to the outside over fibre (DHCP) and the other card (fixed IP address
> 192.168.200.0) connected to a switch. This switch is connected to 32
> workstations (with fixed IPs from 192.168.200.10 to 192.168.200.50,
> gateway address 192.168.200.0, netmask 255.255.255.0).
>
> The 32 workstations are used by a class of pupils. I have 200 pupils to
> manage, hence 200 different profiles.
>
> So that each workstation can reach the internet, I did
>
> iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
>
> Is this sensible?
>
> I tried NIS with Debian 11, and authentication seems to work fine. For
> authentication, is NIS well suited to this kind of setup?


Back in the already distant days when I was an occasional sysadmin (at
the CEA), NIS worked well. That was before 2000, on Sun workstations.


On the other hand (and having taught Linux more recently at the IUT in
Orsay), I wonder about the relevance of putting /home on NFS in 2024.
It has a colossal drawback in 2024: NFS is slower than access to each
workstation's local disk. If you compile sources that live on an NFS
server, with the object files on an NFS server (the same one), or if you
run an ELF executable from an NFS server, it is noticeably slower.


It depends on what kind of teaching is involved. If it is about teaching
complete beginners to program in a compiled language (C, C++, Ada,
Fortran, Ocaml), I would tend to:


 * decide with the other teachers whether the students may (or may not)
   have access to their classmates' files. In my view this is very
   useful (for mutual help, plagiarism detection, ...).
 * explain to the students what NFS and a file server are.
 * not NFS-mount /home but some other directory, for example
   /UnivMontp3, and announce to the students that /home is not backed up!
 * explain to the students what a version-control system is; recommend
   (or even require) the use of git: https://git-scm.com/
 * install a git service on the server.
 * explain to all the students their rights and duties (as students).
   For example, those (and they are many) who have a personal laptop
   (their own, not paid for by the university): may they use it in the
   lab? connect it to the network? install Linux on it? reach the
   university server from the classroom or from their student room?
   print their source files (or others) on the classroom printer?


> On the other hand the NFS export is a bit sluggish (I realise I am
> exporting the whole of the server's home to all the clients, and not
> just the user's own). How can I export to the client machine only the
> user's profile and not all the users?


That should be possible with a PAM configuration.
https://fr.wikipedia.org/wiki/Pluggable_Authentication_Modules but I
don't know the details! There are probably universities that use rsync
at student login to copy files from the server onto the local disk.


> On the server I put this at the end of /etc/exports:
>
> /home/NFS_Partage 192.168.200.1/24(rw,sync,no_subtree_check)
> but I am hesitating between that and
>
> /home/NFS_Partage
> 192.168.0.0/24(rw,all_squash,anonuid=1000,anongid=1000,sync,no_subtree_check)
>
> On the client I put this at the end of fstab:
>
> DomaineNFS:/home/NFS_Partage /home nfs defaults 0 0
>
> I have been told about LDAP, but I am not quite sure how to go about
> it. Is it preferable to use LDAP or NIS for authentication? Is there a
> simple little manual for creating 200 users?


Creating 200 users is probably doable in a few dozen lines of GNU bash,
Python, Ocaml or C++.


> My network works, but crawls badly beyond 4 users, and I cannot find a
> solution.


In my opinion, going through NIS for every access to a student's files
will crawl enormously! Why not a /home local to the machine for the
student, copied for example from the NFS server at login, and copied
back (with rsync) to the NFS server and then erased (locally) at logout?


Variant: the student lists in $HOME/.mes-depots-git (or in a home-made
NIS map?) all the git repositories to clone at login.


At the IUT in Orsay there was even an application on the teacher's PC
that could switch off simultaneously all the (university-owned) PCs in
the lab room used by the students.


> Many thanks for your help.


Hoping to have helped a little!


Freely





--
Basile Starynkevitch
(only mine opinions / les opinions sont miennes uniquement)
92340 Bourg-la-Reine, France
web page: starynkevitch.net/Basile

utilisation de nis et nfs pour un réseau de 32 postes

2024-02-19 Thread olivier

Hello,

I have a network running entirely on Debian 11 (which I plan to upgrade 
to version 12), consisting of a server with two network cards, one 
connected to the outside world via fibre (DHCP) and the other card (fixed 
IP address 192.168.200.0) connected to a switch. This switch is connected 
to 32 workstations (with fixed IPs from 192.168.200.10 to 192.168.200.50, 
gateway address 192.168.200.0, subnet mask 255.255.255.0).


The 32 workstations are used by a class of students. I have 200 students 
to manage, hence 200 different profiles.


So that each workstation can reach the Internet, I did

iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

Is this a sensible approach?
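
(For that rule to actually route the clients' traffic, IPv4 forwarding
also has to be enabled on the server; a sketch, assuming the stock
sysctl mechanism and that it should survive reboots:

# enable forwarding now
sysctl -w net.ipv4.ip_forward=1
# make it persistent
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-forwarding.conf

The MASQUERADE rule itself also needs to be restored at boot, e.g. with
the iptables-persistent package.)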

I tried NIS with Debian 11, and authentication seems to work fine. For 
authentication, is NIS well suited to this kind of setup?


However, as far as the NFS export is concerned, it is a bit slow (I realize 
that I am exporting the whole of the server's home to all the clients and 
not just the user's own). How can I export to the client machine only the 
user's profile and not all the users?


On the server, I added at the end of the /etc/exports file

/home/NFS_Partage 192.168.200.1/24(rw,sync,no_subtree_check)
but I am hesitating between that and

/home/NFS_Partage 
192.168.0.0/24(rw,all_squash,anonuid=1000,anongid=1000,sync,no_subtree_check)


On the client, I added at the end of the fstab file

DomaineNFS:/home/NFS_Partage /home nfs defaults    0 0

I have been told about LDAP, but I am not really sure how to go about it. 
Is it preferable to use LDAP or NIS for authentication? Is there a small, 
simple manual for creating 200 users?


My network works, but becomes very slow beyond 4 users, and I cannot find 
a solution.


Many thanks for your help.

Olivier





Re: usrmerge on root NFS will not be run automatically

2024-02-13 Thread fabjunkm...@gmail.com
Very unimpressed with the so-called "fix" for #842145 of just blocking
running the script on an NFS mount rather than fixing the script to work
properly with NFS.

The problem with the script is that it does not ignore the .nfs*
files. An explanation of these files is available here:

https://unix.stackexchange.com/questions/348315/nfs-mount-device-or-resource-busy

I am not a programmer so will not share my crappy attempt at getting
the script to work. I will describe a workaround I did and maybe that
may help someone come up with their own workaround for systems with NFS
root (where the NFS server is not running Linux). Or even better, maybe
the package maintainer might adjust the script within usrmerge to work
properly with NFS using these ideas.

- Starting from bullseye (I rolled back to a snapshot pre-install of
usrmerge), downloaded src of usrmerge to get access to the
convert-usrmerge script.

- modify script to try and get it to ignore/avoid changing any objects
with ".nfs000" in its name

- run the script so it hopefully completes successfully (In my case it
did not fully complete. It made most of the changes but I still had
some directories in root such as /lib - probably from mistakes in my
code changes)

- rebooted nfs client to clear open file handles on nfs which would
remove .nfs000* files

- ran the original unmodified script which this time completed
successfully including converting eg /lib to a symlink. I think this
worked as most of the files had moved and symlinks created so there
was not much left for it to do except sort out the remaining few
moves/symlinks.

- installed usrmerge package which this time completed (it detected
the merge completed and did not complain about nfs)

From there I expect it should be safe to upgrade to bookworm without
usrmerge breaking the upgrade (not tested yet).

good luck



/usr on NFS (was: Re: disable auto-linking of /bin -> /usr/bin/)

2024-01-10 Thread Andy Smith
Hello,

On Wed, Jan 10, 2024 at 10:41:05AM -0800, Mike Castle wrote:
> I did that for years.
> 
> Then again, when I started doing that, I was using PLIP over a
> null-printer cable.  But even after I could afford larger harddrives
> (so I had room to install /usr), and Ethernet cards (and later a hub),
> I still ran /usr over NFS.

You can still do it if you want, as long as your initramfs mounts
/usr from nfs, which I'm pretty sure it will without any difficulty
if you have the correct entry in /etc/fstab. I don't think anything
has gone out of its way to break that use; it's just that it's been
given up on, and I don't blame Debian for that since it would mean
lots of work to bend upstream authors to a use case that they have
no interest in.
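
(For reference, a hedged sketch of what such an fstab entry might look
like; the server name and export path are made up, and I haven't
verified this on a current release:

nfsserver:/export/usr  /usr  nfs  defaults  0  0

The initramfs also has to be able to mount NFS for this to work, since
it is the initramfs that mounts /usr early in boot.)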

Time moved on and the way to do "immutable OS" evolved.

Just a couple of days ago I retired a Debian machine that had been
running constantly for 18½ years from the mostly-read-only 512M
CompactFlash boot device it came with. 

https://social.bitfolk.com/@grifferz/111704438510674644

Thanks,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: NFS: IPV6

2024-01-06 Thread Leandro Noferini
Pocket  writes:

[...]

> I am in the process of re-configuring NFS for V4 only.

Could it be that there is some misunderstanding?

IPv4 and IPv6 are quite different concepts from NFSv4: I think NFSv4
works on either IPv4 or IPv6.

--
Ciao
leandro



Re: NFS: IPV6

2024-01-05 Thread Andy Smith
Hello,

On Fri, Jan 05, 2024 at 07:04:21AM -0500, Pocket wrote:
> I have this in the exports, ipv4 works
> 
> /srv/Multimedia 192.168.1.0/24(rw,no_root_squash,subtree_check)
> /srv/Other 192.168.1.0/24(rw,no_root_squash,subtree_check)
> #/home 2002:474f:e945:0:0:0:0:0/64(rw,no_root_squash,subtree_check)

This syntax in /etc/exports works for me:

/srv 2001:0db8:a:b::/64(rw,fsid=0)

And then on a client you have to surround the server IP with []
e.g.:

# mount -t nfs4 '[2001:0db8:a:b::1]':/srv /mnt

I have not tested what degree of quoting is required in /etc/fstab,
i.e. it may or may not work without the single quotes.
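
For what it's worth, an untested sketch of the corresponding fstab
line, using the same example prefix:

[2001:0db8:a:b::1]:/srv  /mnt  nfs4  defaults  0  0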

How's your migration away from Debian and all its evils going, by
the way?

Thanks,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: NFS: IPV6

2024-01-05 Thread Greg Wooledge
On Fri, Jan 05, 2024 at 09:54:54AM +, debian-u...@howorth.org.uk wrote:
> plus FWIW...
> 
> https://docs.oracle.com/cd/E23824_01/html/821-1453/ipv6-ref-71.html
> 
> "NFS software and Remote Procedure Call (RPC) software support IPv6 in a
> seamless manner. Existing commands that are related to NFS services
> have not changed. Most RPC applications also run on IPv6 without any
> change. Some advanced RPC applications with transport knowledge might
> require updates."

I wouldn't assume Oracle (Solaris?) documentation is authoritative for
Debian systems.  The NFS implementations are probably quite different.



Re: NFS: IPV6

2024-01-05 Thread Pocket

On 1/5/24 04:54, debian-u...@howorth.org.uk wrote:

Marco Moock  wrote:

Am 04.01.2024 um 18:19:57 Uhr schrieb Pocket:


Where can I find information on how to configure NFS to use ipv6
addresses both server and client.


Does IPv6 work basically on your machine, including name resolution?

Does it work if you enter the address directly?

https://ipv6.net/blog/mounting-an-nfs-share-over-ipv6/

How does your fstab look like?


plus FWIW...

https://docs.oracle.com/cd/E23824_01/html/821-1453/ipv6-ref-71.html

"NFS software and Remote Procedure Call (RPC) software support IPv6 in a
seamless manner. Existing commands that are related to NFS services
have not changed. Most RPC applications also run on IPv6 without any
change. Some advanced RPC applications with transport knowledge might
require updates."




According to debian docs NFSServerSetup from the debian wiki

Additionally, rpcbind is not strictly needed by NFSv4 but will be
started as a prerequisite by nfs-server.service. This can be
prevented by masking rpcbind.service and rpcbind.socket.
sudo systemctl mask rpcbind.service
sudo systemctl mask rpcbind.socket

I am going to do that to use NFSv4 only.

I believe that my issue is in the /etc/exports file but I don't know 
for sure.

I have this in the exports, ipv4 works

/srv/Multimedia 192.168.1.0/24(rw,no_root_squash,subtree_check)
/srv/Other 192.168.1.0/24(rw,no_root_squash,subtree_check)
#/home 2002:474f:e945:0:0:0:0:0/64(rw,no_root_squash,subtree_check)

I am looking for an example

I have commented out the ipv6 for now because I want to use NFSv4 only 
and after I get that working I want to get ipv6 mounts working and 
change the ipv4 mounts to use ipv6.
/srv/Multimedia and /srv/Other are root mounts and there isn't any bind 
mounts



--
Hindi madali ang maging ako




Re: NFS: IPV6

2024-01-05 Thread Pocket

On 1/5/24 03:35, Marco Moock wrote:

Am 04.01.2024 um 18:19:57 Uhr schrieb Pocket:


Where can I find information on how to configure NFS to use ipv6
addresses both server and client.


Does IPv6 work basically on your machine, including name resolution?


Yes I have bind running and ssh to the host is working



Does it work if you enter the address directly?

https://ipv6.net/blog/mounting-an-nfs-share-over-ipv6/

How does your fstab look like?




I followed some info that I found on the internet and it didn't work.

I am in the process of re-configuring NFS for V4 only.

I should have that done here shortly and I will try again to mount NFS 
mounts shortly




--
Hindi madali ang maging ako




Re: NFS: IPV6

2024-01-05 Thread debian-user
Marco Moock  wrote:
> Am 04.01.2024 um 18:19:57 Uhr schrieb Pocket:
> 
> > Where can I find information on how to configure NFS to use ipv6 
> > addresses both server and client.  
> 
> Does IPv6 work basically on your machine, including name resolution?
> 
> Does it work if you enter the address directly?
> 
> https://ipv6.net/blog/mounting-an-nfs-share-over-ipv6/
> 
> How does your fstab look like?

plus FWIW...

https://docs.oracle.com/cd/E23824_01/html/821-1453/ipv6-ref-71.html

"NFS software and Remote Procedure Call (RPC) software support IPv6 in a
seamless manner. Existing commands that are related to NFS services
have not changed. Most RPC applications also run on IPv6 without any
change. Some advanced RPC applications with transport knowledge might
require updates."



Re: NFS: IPV6

2024-01-05 Thread Marco Moock
Am 04.01.2024 um 18:19:57 Uhr schrieb Pocket:

> Where can I find information on how to configure NFS to use ipv6 
> addresses both server and client.

Does IPv6 work basically on your machine, including name resolution?

Does it work if you enter the address directly?

https://ipv6.net/blog/mounting-an-nfs-share-over-ipv6/

How does your fstab look like?



NFS: IPV6

2024-01-04 Thread Pocket



Where can I find information on how to configure NFS to use ipv6 
addresses both server and client.


I haven't found any good information on how to do that and what I did 
find was extremely sparse.


I have NFS mounts working using ipv4 and want to change that to ipv6


--
Hindi madali ang maging ako



Re: Update on problem mounting NFS share

2023-10-05 Thread David Christensen

On 10/5/23 05:01, Steve Matzura wrote:

On 10/4/2023 2:32 PM, David Christensen wrote:

On 10/4/23 05:03, Steve Matzura wrote:

On 10/3/2023 6:06 PM, David Christensen wrote:

On 10/3/23 12:03, Steve Matzura wrote:

I gave up on the NFS business and went back to good old buggy
but reliable SAMBA (LOL), ...


I have attempted to document the current state of Samba on my 
SOHO, below.  ...


Wow but that's an awful lot of work for something that seems to 
be a timing problem. But at least I learned something.


What is posted is the result of a lot of work -- trying countless
combinations of settings on the server and on the client via edit,
reboot, and test cycles, until I found a combination that
seems to work.

Your OP of /etc/fstab:

//192.168.1.156/BigVol1 /mnt/bigvol1 civs vers=2.0,credentials=/root/smbcreds,ro

* The third field is "civs".  Should that be "cifs"?

* The fourth field contains "ro".  Is that read-only?  If so, how 
do you create, update, and delete files and directories on 
/mnt/bigvol1?


The 'civs' is indeed cifs. That was hand-transcribed, not 
copied/pasted. My bad.



Please "reply to List".


Please use inline posting style.


Copy-and-paste can require more effort, but precludes transcription 
errors.  On the Internet, errors are not merely embarrassing: they can 
cause confusion forever.



I mount the drive read-only because it 
contains my entire audio and video library, so anyone to whom I give 
access on my network must not ever have the slightest possibility of 
being able to modify it.



So, you create, update, and delete content via some means other than 
/mnt/bigvol1 (?).



TIMTOWTDI for multiple-user Samba shares.


ACL's would seem to be the canonical solution, but they introduce 
complexities -- Unskilled users?  Usage consistency from Windows 
Explorer, Command Prompt, Finder, Terminal, Thunar, Terminal Emulator, 
etc.?  Unix or Windows ACL's?  Backup and restore?  Integrity auditing 
and validation?



I chose to implement a "groupshare" share with read-write access for all 
Samba users and a social contract for usage (I also implemented a 
"groupshare" group/user and configured Samba to force ownership of content):


2023-10-05 11:51:52 toor@samba ~
# cat /usr/local/etc/smb4.conf
[global]
local master = Yes
netbios name = SAMBA
ntlm auth = ntlmv1-permitted
passdb backend = tdbsam
preferred master = Yes
security = USER
server string = Samba Server Version %v
wins support = Yes
workgroup = WORKGROUP

[dpchrist]
force user = dpchrist
path = /var/local/samba/dpchrist
read only = No
valid users = dpchrist

[groupshare]
create mask = 0777
directory mask = 0777
force create mode = 0666
force directory mode = 0777
force unknown acl user = Yes
force user = groupshare
path = /var/local/samba/groupshare
read only = No

2023-10-05 11:54:48 toor@samba ~
# grep groupshare /etc/group /etc/passwd
/etc/group:groupshare:*:999:
/etc/passwd:groupshare:*:999:999:Groupshare:/home/groupshare:/usr/sbin/nologin

2023-10-05 12:00:55 toor@samba ~
# find /var/local/samba/ -type d -depth 1 | egrep 'dpchrist|groupshare' 
| xargs ls -ld
drwxrwxr-x  98 dpchristdpchrist102 Oct  3 14:13 
/var/local/samba/dpchrist
drwxrwxr-x   8 groupshare  groupshare   13 Oct  5 11:32 
/var/local/samba/groupshare



Debian client:

2023-10-05 11:45:42 root@taz ~
# egrep 'dpchrist|groupshare' /etc/fstab | perl -pe 's/\s+/ /g'
//samba/dpchrist /samba/dpchrist cifs 
noauto,vers=3.0,user,username=dpchrist 0 0

//samba/groupshare /samba/groupshare cifs noauto,vers=3.0,user 0 0



By the way, another friend to whom I showed my problem came up with a
similar solution surrounding my original hypothesis that there is a
delay between the time fstab is processed and networking is 
available. He said he tested it probably a dozen times and it worked 
every time. His suggested fstab line is this:


//192.168.1.156/BigVol1 /mnt/bigvol1 cifs 
vers=3.1.1,credentials=,rw,GID=1000,uid=1000,noauto,x-systemd.automount



It's a matter of edit-reboot-test cycles with a consistent and complete 
test protocol.



"GID=1000,uid=1000" -- looks similar to my "groupshare" idea, but from 
the client side.  That should produce the same results for most use-cases.



"credentials=,noauto,x-systemd.automount" -- I assume 
this is a work-around for unreliable mounting at system start-up (?).  I 
tried similar tricks, and ended up going back to KISS -- typing mount(8) 
commands by hand once per boot.



David



Re: Update on problem mounting NFS share

2023-10-04 Thread David Christensen

On 10/4/23 05:03, Steve Matzura wrote:

On 10/3/2023 6:06 PM, David Christensen wrote:

On 10/3/23 12:03, Steve Matzura wrote:

I gave up on the NFS business and went back to good old buggy
but reliable SAMBA (LOL), ...




I have attempted to document the current state of Samba on my
SOHO, below.  ...



Wow but that's an awful lot of work for something that seems to be a
timing problem. But at least I learned something.


What is posted is the result of a lot of work -- trying countless
combinations of settings on the server and on the client via edit,
reboot, and test cycles, until I found a combination that seems to work.


Your OP of /etc/fstab:

//192.168.1.156/BigVol1 /mnt/bigvol1 civs
vers=2.0,credentials=/root/smbcreds,ro

* The third field is "civs".  Should that be "cifs"?

* The fourth field contains "ro".  Is that read-only?  If so, how do you
create, update, and delete files and directories on /mnt/bigvol1?


David



Re: Update on problem mounting NFS share

2023-10-03 Thread David Christensen

On 10/3/23 12:03, Steve Matzura wrote:
I gave up on the NFS business and went back to good old buggy but 
reliable SAMBA (LOL), which is what I was using when I was on Debian 8, 
and which worked fine. Except for one thing, everything's great.



In /etc/fstab, I have:


//192.168.1.156/BigVol1 /mnt/bigvol1 civs 
vers=2.0,credentials=/root/smbcreds,ro



That should work, right? Well, it does, but only sometimes. If I boot 
the system, the remote share isn't there. If I unmount everything with 
'umount -a', wait a few seconds, then remount everything with 'mount 
-a', I sometimes have to do it twice. Sometimes, the first time I get a 
message from mount about error -95, but if I wait the space of a couple 
heartbeats and try 'mount -a' again, the share mounts. If I look through 
/var/kern.log for errors, I don't find anything that stands out as 
erroneous, but would be glad to supply extracts here that might help me 
to trace this down and fix it.



Using Samba to share files over the network requires various steps and 
settings on both the server and on the clients.  I put a lot of effort 
into Samba back in the day, and only went far enough to get basic file 
sharing working.  Since then, I have copied-and-pasted.  But Microsoft 
has not stood still, nor has Samba.



I have attempted to document the current state of Samba on my SOHO, 
below.  But beware -- my Samba setup is insecure and has issues.



My username is "dpchrist" on all computers and on Samba.


My primary group is "dpchrist" on all Unix computers.


My UID and GID are both "12345" (redaction) on all Unix computers.


The server is FreeBSD (I previously used Debian, but switched to get 
native ZFS):


2023-10-03 12:20:58 toor@f3 ~
# freebsd-version -kru
12.4-RELEASE-p5
12.4-RELEASE-p5
12.4-RELEASE-p5


The latest version of Samba seemed to want Kerberos, so I chose an older 
version that does not:


2023-10-03 12:25:25 toor@samba ~
# pkg version | grep samba
samba413-4.13.17_5 =


I configured Samba to share files:

2023-10-03 14:49:00 toor@samba ~
# cat /usr/local/etc/smb4.conf
[global]
local master = Yes
netbios name = SAMBA
ntlm auth = ntlmv1-permitted
passdb backend = tdbsam
preferred master = Yes
security = USER
server string = Samba Server Version %v
wins support = Yes
workgroup = WORKGROUP

[dpchrist]
force user = dpchrist
path = /var/local/samba/dpchrist
read only = No
valid users = dpchrist



I validate the configuration file with testparm(1):

2023-10-03 13:37:31 toor@samba ~
# testparm
Load smb config files from /usr/local/etc/smb4.conf
Loaded services file OK.
Weak crypto is allowed
Server role: ROLE_STANDALONE

Press enter to see a dump of your service definitions

# Global parameters
[global]
ntlm auth = ntlmv1-permitted
preferred master = Yes
security = USER
server string = Samba Server Version %v
wins support = Yes
idmap config * : backend = tdb

[dpchrist]
force user = dpchrist
path = /var/local/samba/dpchrist
read only = No
valid users = dpchrist



I created a Samba user account:

root@samba:~ # pdbedit -a dpchrist
new password:
retype new password:


Whenever I change anything related to Samba on the server, I reboot and 
verify before I attempt to connect from a client.



On Debian clients:

2023-10-03 12:44:39 root@taz ~
# cat /etc/debian_version ; uname -a
11.7
Linux taz 5.10.0-25-amd64 #1 SMP Debian 5.10.191-1 (2023-08-16) x86_64 
GNU/Linux



I installed the Samba client file sharing package:

2023-10-03 12:55:06 root@taz ~
# dpkg-query -W cifs-utils
cifs-utils  2:6.11-3.1+deb11u1


I created a mount point for the incoming share:

2023-10-03 12:58:13 root@taz ~
# ls -ld /samba/dpchrist
drwxr-xr-x 2 dpchrist dpchrist 0 Jun 18 14:31 /samba/dpchrist


I created an /etc/fstab entry for the incoming share:

2023-10-03 12:59:41 root@taz ~
# grep samba\/dpchrist /etc/fstab
//samba/dpchrist	/samba/dpchrist		cifs	 
noauto,vers=3.0,user,username=dpchrist		0	0



I mount the incoming share manually:

2023-10-03 13:01:07 dpchrist@taz ~
$ mount /samba/dpchrist
Password for dpchrist@//samba/dpchrist:

2023-10-03 13:01:46 dpchrist@taz ~
$ mount | grep samba\/dpchrist
//samba/dpchrist on /samba/dpchrist type cifs 
(rw,nosuid,nodev,relatime,vers=3.0,cache=strict,username=dpchrist,uid=12345,forceuid,gid=12345,forcegid,addr=192.168.5.24,file_mode=0755,dir_mode=0755,soft,nounix,serverino,mapposix,rsize=4194304,wsize=4194304,bsize=1048576,echo_interval=60,actimeo=1,user=dpchrist)



Note that there is a maddening issue with Samba on Unix clients -- the 
Unix execute bits vs. MS-DOS System, Hidden, and Archive bits:


https://unix.stackexchange.com/questions/103415/why-are-files-in-a-smbfs-mounted-share-created-with-executable-bit-set
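
One commonly suggested mitigation (a sketch, not part of my setup
above) is to stop Samba mapping the DOS attribute bits onto the Unix
execute bits on the share, e.g.:

[dpchrist]
map archive = no
map hidden = no
map system = no

(or store the DOS attributes in extended attributes instead, with
"store dos attributes = yes").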


On Windows 7 clients, I needed to change a Registry 

Re: Update on problem mounting NFS share

2023-10-03 Thread piorunz

On 03/10/2023 20:03, Steve Matzura wrote:

I gave up on the NFS business


Why?


and went back to good old buggy but reliable SAMBA (LOL)


:o

Sorry, but I think you created a bigger problem than you already had. NFS
works great; I've been using it for years and it has never failed me. I
cannot imagine what was not working for you. Anyway, good luck.

--
With kindest regards, Piotr.

⢀⣴⠾⠻⢶⣦⠀
⣾⠁⢠⠒⠀⣿⡁ Debian - The universal operating system
⢿⡄⠘⠷⠚⠋⠀ https://www.debian.org/
⠈⠳⣄



Update on problem mounting NFS share

2023-10-03 Thread Steve Matzura
I gave up on the NFS business and went back to good old buggy but 
reliable SAMBA (LOL), which is what I was using when I was on Debian 8, 
and which worked fine. Except for one thing, everything's great.



In /etc/fstab, I have:


//192.168.1.156/BigVol1 /mnt/bigvol1 civs 
vers=2.0,credentials=/root/smbcreds,ro



That should work, right? Well, it does, but only sometimes. If I boot 
the system, the remote share isn't there. If I unmount everything with 
'umount -a', wait a few seconds, then remount everything with 'mount 
-a', I sometimes have to do it twice. Sometimes, the first time I get a 
message from mount about error -95, but if I wait the space of a couple 
heartbeats and try 'mount -a' again, the share mounts. If I look through 
/var/kern.log for errors, I don't find anything that stands out as 
erroneous, but would be glad to supply extracts here that might help me 
to trace this down and fix it.



TIA


Re: usrmerge on root NFS will not be run automatically

2023-10-03 Thread Marco
On Thu, 14 Sep 2023 22:17:35 +0200
Marco  wrote:

> If I screw with this I'd prefer to do it at night or on a weekend to
> keep the systems running during business hours.

Followup:

I went through the list and resolved each conflict manually. I
launched usrmerge after every change and deleted/merged the
offending files.

Note that I ran usrmerge on the individual hosts themselves, on NFS
root. Although usrmerge complained that this won't work, it somehow
did. Systems rebooted, all came up fine, no broken packages and the
programs are working.

Thanks for all the support. Case solved.

Marco



Re: Can't mount NFS NAS after major upgrade

2023-09-18 Thread debian-user
Steve Matzura  wrote:
 
> mount /mnt/bigvol1/dir-1 /home/steve/dir-1 -o bind,ro

In addition to what others have observed it might be worth mentioning
that the -v option to mount (i.e. verbose) often gives more information
about what's going on.



Re: Can't mount NFS NAS after major upgrade

2023-09-17 Thread tomas
On Sun, Sep 17, 2023 at 02:43:16PM -0400, Steve Matzura wrote:

As Charles points out, this looks rather like CIFS, not NFS:

> # NAS box:
> //192.168.1.156/BigVol1 /mnt/bigvol1 cifs
> _netdev,username=,password=,ro 0 0

If Charles's (and my) hunch is correct, perhaps this wiki page
contains leada for you to follow:

  https://wiki.debian.org/Samba/ClientSetup

Cheers
-- 
tomás


signature.asc
Description: PGP signature


Re: Can't mount NFS NAS after major upgrade

2023-09-17 Thread Tom Dial




On 9/17/23 12:43, Steve Matzura wrote:

I upgraded a version 8 system to version 11 from scratch--e.g., I totally 
reinitialized the internal drive and laid down an entirely fresh install of 11. 
Then 12 came out about a week later, but I haven't yet upgraded to 12 because I 
have a show-stopper on 11 which I absolutely must solve before moving ahead, 
and it's the following:


For years I have had a Synology NAS that was automatically mounted and 
directories thereon bound during the boot process via the following lines at 
the end of /etc/fstab:


# NAS box:
//192.168.1.156/BigVol1 /mnt/bigvol1 cifs 
_netdev,username=,password=,ro 0 0

Then I had the following line, replicated for several directories on bigvol1, 
to bind them to directories on the home filesystem, all in a script called 
/root/remount that I executed manually after each reboot:


mount /mnt/bigvol1/dir-1 /home/steve/dir-1 -o bind,ro

I had directories set up on the home filesystem to accept these binds, like 
this:


mount /mnt/bigvol1/dir-1 /home/steve/dir-1 -o bind,ro


None of this works any more on Debian 11. After boot, /mnt/bigvol1 is empty, so 
there's no need to even try the remount script because there's nothing to which 
those directories can bind, so even if those mount commands are correct, I 
would never know until bigvol1 mounts correctly and content appears in at least 
'ls -ld /mnt/bigvol1'.



Are there relevant messages in the output of dmesg or in the systemd journal? 
If so, they might give useful information.

This is out of range of my usage and experience, but from others I have found 
that some consumer NAS units still use, and are effectively stuck at, SMB1. SMB 
version 1 has a fairly serious uncorrectable vulnerability and Microsoft 
deprecated it (but continued to support it through, I think, Windows 11). I 
believe Samba no longer supports it by default, but it can still be configured to 
use it, with some effort, if you wish. Another, and preferable, fix would be to 
configure the Synology to use SMB version 3, if that appears to be the cause of 
the problem.

If the Synology NAS supports NFS, that might be a better approach in the long 
run, though.

Regards,
Tom Dial


Research into this problem made me try similar techniques after having 
installed nfs-utils. I got bogged down by a required procedure entailing 
exportation of NFS volume information in order to let nfs-utils know about the 
NFS drive, but before I commit to that, I thought I'd ask in here to make sure 
I'm not about to do anything horribly wrong.


So, summarily put, what's different about mounting a networked NFS drive from 8 
to 11 and 12?


Thanks in advance.






Re: Can't mount NFS NAS after major upgrade

2023-09-17 Thread Charles Curley
On Sun, 17 Sep 2023 14:43:16 -0400
Steve Matzura  wrote:

> # NAS box:
> //192.168.1.156/BigVol1 /mnt/bigvol1 cifs 
> _netdev,username=,password=,ro 0 0

Possibly part of the problem is that this is a CIFS (Samba) mount, not
an NFS mount.

Is samba installed?

If you try to mount that mount manually, what error message(s) and
return value do you get? e.g. a successful mount:

root@jhegaala:~# mount /home/charles/samba 
root@jhegaala:~# echo $?
0
root@jhegaala:~# 

You may also want to rethink that line in fstab. I have, e.g.:

//samba.localdomain/samba /home/charles/samba cifs 
_netdev,rw,credentials=/etc/samba/charles.credentials,uid=charles,gid=charles,file_mode=0644,noauto
  0   0

The noauto is in there because this is a laptop, and I have scripts to
mount it only if the machine is on a home network. For a desktop, I
remove the noauto and have x-systemd.automount instead.


-- 
Does anybody read signatures any more?

https://charlescurley.com
https://charlescurley.com/blog/



Can't mount NFS NAS after major upgrade

2023-09-17 Thread Steve Matzura
I upgraded a version 8 system to version 11 from scratch--e.g., I 
totally reinitialized the internal drive and laid down an entirely fresh 
install of 11. Then 12 came out about a week later, but I haven't yet 
upgraded to 12 because I have a show-stopper on 11 which I absolutely 
must solve before moving ahead, and it's the following:



For years I have had a Synology NAS that was automatically mounted and 
directories thereon bound during the boot process via the following 
lines at the end of /etc/fstab:



# NAS box:
//192.168.1.156/BigVol1 /mnt/bigvol1 cifs 
_netdev,username=,password=,ro 0 0


Then I had the following line, replicated for several directories on 
bigvol1, to bind them to directories on the home filesystem, all in a 
script called /root/remount that I executed manually after each reboot:



mount /mnt/bigvol1/dir-1 /home/steve/dir-1 -o bind,ro

I had directories set up on the home filesystem to accept these binds, 
like this:



mount /mnt/bigvol1/dir-1 /home/steve/dir-1 -o bind,ro


None of this works any more on Debian 11. After boot, /mnt/bigvol1 is 
empty, so there's no need to even try the remount script because there's 
nothing to which those directories can bind, so even if those mount 
commands are correct, I would never know until bigvol1 mounts correctly 
and content appears in at least 'ls -ld /mnt/bigvol1'.



Research into this problem made me try similar techniques after having 
installed nfs-utils. I got bogged down by a required procedure entailing 
exportation of NFS volume information in order to let nfs-utils know 
about the NFS drive, but before I commit to that, I thought I'd ask in 
here to make sure I'm not about to do anything horribly wrong.



So, summarily put, what's different about mounting a networked NFS drive 
from 8 to 11 and 12?



Thanks in advance.



Re: usrmerge on root NFS will not be run automatically

2023-09-15 Thread Marco
On Fri, 15 Sep 2023 17:55:06 +
Andy Smith  wrote:

> I haven't followed this thread closely, but is my understanding
> correct:
> 
> - You have a FreeBSD NFS server with an export that is a root
>   filesystem of a Debian 11 install shared by multiple clients

Almost. It's not *one* Debian installation, it's many (diskless
workstations that PXE boot). Each host has its own root on the NFS.
Some stuff is shared, but that's not relevant here.

> - You're trying to do an upgrade to Debian 12 running on one of the
>   clients.

Not on one, on *all* clients.

> - It tries to do a usrmerge but aborts because NFS is not supported
>   by that script?

Correct. Strangely the usrmerge script succeeded on one host. But on
all others it throws errors. Either relating to NFS being not
supported or because of duplicate files.

> If so, have you tried reporting a bug on this yet?

No I haven't. As far as I understand it's a known issue and the
developer has decided to just have the script fail on NFS.

> If you don't get anywhere with that, I don't think you have much
> choice except to take away the root directory tree to a Linux host,
> chroot into it and complete the merge there, then pack it up again
> and bring it back to your NFS server. Which is very far from ideal.

I'll try to solve the conflicts manually. If that fails, that's what
I have to do, I guess. I didn't expect that level of fiddling with
system files for a simple upgrade. But hey, here we are now.

> The suggestions about running a VM on the NFS server probably aren't
> going to work as you won't be able to take the directory tree out of
> use and export it as a block device to the VM.

Indeed.

> The option of making the usrmerge script work from FreeBSD might not
> be too technically challenging but I wouldn't want to do it without
> assistance from the Debian developers responsible for the script.

I won't do that. I don't speak Perl and will not rewrite the
usrmerge script.

Marco



Re: usrmerge on root NFS will not be run automatically

2023-09-15 Thread Andy Smith
Hello,

On Fri, Sep 15, 2023 at 01:52:27PM +0200, Marco wrote:
> On Thu, 14 Sep 2023 16:43:09 -0400
> Dan Ritter  wrote:
> > Each of these things could be rewritten to be compatible with
> > FreeBSD; I suspect it would take about twenty minutes to an hour,
> > most of it testing, for someone who was familiar with FreeBSD's
> > userland
> 
> I'm not going down that route.

I haven't followed this thread closely, but is my understanding
correct:

- You have a FreeBSD NFS server with an export that is a root
  filesystem of a Debian 11 install shared by multiple clients

- You're trying to do an upgrade to Debian 12 running on one of the
  clients.

- It tries to do a usrmerge but aborts because NFS is not supported
  by that script?

If so, have you tried reporting a bug on this yet? It seems like an
interesting problem which although being quite a corner case, might
spark the interest of the relevant Debian developers.

If you don't get anywhere with that, I don't think you have much
choice except to take away the root directory tree to a Linux host,
chroot into it and complete the merge there, then pack it up again
and bring it back to your NFS server. Which is very far from ideal.

The suggestions about running a VM on the NFS server probably aren't
going to work as you won't be able to take the directory tree out of
use and export it as a block device to the VM. Or rather, you could
do that, but it's probably not quicker/easier than the method of
taking a copy of it elsewhere then bringing it back.

The option of making the usrmerge script work from FreeBSD might not
be too technically challenging but I wouldn't want to do it without
assistance from the Debian developers responsible for the script.

Cheers,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: usrmerge on root NFS will not be run automatically

2023-09-15 Thread Stefan Monnier
> So the file in /lib appears to be newer. So what to do? Can I delete
> the one in /usr/lib ?

Yes.


Stefan



Re: usrmerge on root NFS will not be run automatically

2023-09-15 Thread Marco
On Thu, 14 Sep 2023 16:54:27 -0400
Stefan Monnier  wrote:

> Still going on with this?

I am.

> Have you actually looked at those two files:
> 
> /lib/udev/rules.d/60-libsane1.rules and
> /usr/lib/udev/rules.d/60-libsane1.rules
> 
> to see if they're identical or not and to see if you might have an
> idea how to merge them?

Yes, I did. On some hosts they are identical, on others they're
different. That's why I asked how to handle that.

> `usrmerge` did give you a pretty clear explanation of the problem it's
> facing (AFAIC)

It does indeed.

> and I believe it should be very easy to address it

Everything is easy if you only know how to do it.

As I said, on some hosts they are identical. So what to do? Can I
delete one of them? If yes, which one?

On other hosts they differ, here the first lines:

/lib/

# This file was generated from description files (*.desc)
# by sane-desc 3.6 from sane-backends 1.1.1-debian
#
# udev rules file for supported USB and SCSI devices
#
# For the list of supported USB devices see /lib/udev/hwdb.d/20-sane.hwdb
#
# The SCSI device support is very basic and includes only
# scanners that mark themselves as type "scanner" or
# SCSI-scanners from HP and other vendors that are entitled "processor"
# but are treated accordingly.
#
# If your SCSI scanner isn't listed below, you can add it to a new rules
# file under /etc/udev/rules.d/.
#
# If your scanner is supported by some external backend (brother, epkowa,
# hpaio, etc) please ask the author of the backend to provide proper
# device detection support for your OS
#
# If the scanner is supported by sane-backends, please mail the entry to
# the sane-devel mailing list (sane-de...@alioth-lists.debian.net).
#
ACTION=="remove", GOTO="libsane_rules_end"

…

/usr/lib/

# This file was generated from description files (*.desc)
# by sane-desc 3.6 from sane-backends 1.0.31-debian
#
# udev rules file for supported USB and SCSI devices
#
# For the list of supported USB devices see /lib/udev/hwdb.d/20-sane.hwdb
#
# The SCSI device support is very basic and includes only
# scanners that mark themselves as type "scanner" or
# SCSI-scanners from HP and other vendors that are entitled "processor"
# but are treated accordingly.
#
# If your SCSI scanner isn't listed below, you can add it to a new rules
# file under /etc/udev/rules.d/.
#
# If your scanner is supported by some external backend (brother, epkowa,
# hpaio, etc) please ask the author of the backend to provide proper
# device detection support for your OS
#
# If the scanner is supported by sane-backends, please mail the entry to
# the sane-devel mailing list (sane-de...@alioth-lists.debian.net).
#
ACTION!="add", GOTO="libsane_rules_end"

…

So the file in /lib appears to be newer. So what to do? Can I delete
the one in /usr/lib ?

> (no need to play with anything funny like setting up a VM or
> mounting the disk from some other system).

Which is good because that's not that easy, apparently.

Thank you for your replies and support regarding this matter.

Marco



Re: usrmerge on root NFS will not be run automatically

2023-09-15 Thread Marco
On Thu, 14 Sep 2023 16:43:09 -0400
Dan Ritter  wrote:

> The heart of the convert-usrmerge perl script is pretty
> reasonable. However:
> 
> […]
> 
> Similarly, there are calls to stat and du which probably have
> some incompatibilities.
> 
> The effect of running this would be fairly safe, but also not do
> anything: you would get some errors and then it would die.

Ok, then I'll not try that. Would be a waste of time.

> Each of these things could be rewritten to be compatible with
> FreeBSD; I suspect it would take about twenty minutes to an hour,
> most of it testing, for someone who was familiar with FreeBSD's
> userland

I'm not going down that route.

Marco



Re: usrmerge on root NFS will not be run automatically

2023-09-14 Thread Stefan Monnier
Still going on with this?

Have you actually looked at those two files:

/lib/udev/rules.d/60-libsane1.rules and 
/usr/lib/udev/rules.d/60-libsane1.rules

to see if they're identical or not and to see if you might have an idea
how to merge them?
[ as I suggested a week ago.  ]

`usrmerge` did give you a pretty clear explanation of the problem it's
facing (AFAIC) and I believe it should be very easy to address it (no
need to play with anything funny like setting up a VM or mounting
the disk from some other system).

If you're not sure what to do with those two files, show them to us.


Stefan


Marco [2023-09-14 20:28:59] wrote:

> On Thu, 14 Sep 2023 13:20:09 -0400
> Dan Ritter  wrote:
>
>> > FreeBSD (actually a TrueNAS appliance)  
>> 
>> If it supports the 9P share system, Debian can mount that with
>> -t 9p.
>> 
>> I don't know whether TrueNAS enabled that.
>
> No it does not. I just confirmed, the only choices are raw disk
> access (ZVOL), NFS and Samba.
>
> However, usrmerge is a perl script. Can I run it on the server
> (after chroot'ing) in a jail (under FreeBSD)? Or does this mess
> things up? Just a thought.
>
> Marco



Re: usrmerge on root NFS will not be run automatically

2023-09-14 Thread Marco
On Thu, 14 Sep 2023 15:01:50 -0400
Dan Ritter  wrote:

> Is this a mission-critical server?

I'd say so, yes. It's not one single server. It's *all*
workstations.

> i.e. will screwing it up for a day cause other people to be upset

Yes, because no one can use their computer.

> or you to lose money?

Yes.

If I screw with this I'd prefer to do it at night or on a weekend to
keep the systems running during business hours.

> Do you have a good, current backup?

Yes.

> Since it's TrueNAS, I assume you are using ZFS, so: have you sent
> snapshots to some other device recently?

Yes, every three hours. And one before every system upgrade.



Re: usrmerge on root NFS will not be run automatically

2023-09-14 Thread Marco
On Thu, 14 Sep 2023 13:20:09 -0400
Dan Ritter  wrote:

> > FreeBSD (actually a TrueNAS appliance)  
> 
> If it supports the 9P share system, Debian can mount that with
> -t 9p.
> 
> I don't know whether TrueNAS enabled that.

No it does not. I just confirmed, the only choices are raw disk
access (ZVOL), NFS and Samba.

However, usrmerge is a perl script. Can I run it on the server
(after chroot'ing) in a jail (under FreeBSD)? Or does this mess
things up? Just a thought.

Marco



Re: usrmerge on root NFS will not be run automatically

2023-09-14 Thread Marco
On Thu, 14 Sep 2023 11:00:17 -0400
Dan Ritter  wrote:

> What VM software are you using

bhyve

…which I know very little about. It's supported on the server, I've
tried it, set up a VM, it works. But the server is mainly serving
NFS shares to various clients.

> and what's the OS on which that runs?

FreeBSD (actually a TrueNAS appliance)

Marco



Re: usrmerge on root NFS will not be run automatically

2023-09-14 Thread Dan Ritter
Marco wrote: 
> On Fri, 8 Sep 2023 12:26:38 -0400
> Dan Ritter  wrote:
> > * have the VM mount the filesystem directly
> 
> How? I can only attach devices (=whole disks) to the VM or mount the
> FS via NFS. I can't attach it as a device because it's not a device,
> but rather a directory with the root file systems of several hosts
> directly on the server. So that doesn't work.


What VM software are you using, and what's the OS on which that
runs?

-dsr-



Re: usrmerge on root NFS will not be run automatically

2023-09-14 Thread Marco
On Fri, 8 Sep 2023 12:26:38 -0400
Dan Ritter  wrote:

> Can you start a temporary VM directly on the server?

I just checked. I can, yes.

> If so, you can
> * stop your remote Debian machine

Ok, no problem.

> * run a Debian rescue image in the VM on the NFS server

No problem.

> * have the VM mount the filesystem directly

How? I can only attach devices (=whole disks) to the VM or mount the
FS via NFS. I can't attach it as a device because it's not a device,
but rather a directory with the root file systems of several hosts
directly on the server. So that doesn't work.

This leaves me with an NFS mount in the VM. But NFS mounts are not
supported by usrmerge, that's the whole issue I'm facing here.

So this VM-on-the-server idea doesn't work in my case or am I
missing something here?

Another question: if usrmerge complains that the file is present in
/lib as well as /usr/lib, what's the correct thing to do if

i)  the files are identical
ii) the files are different ?

Regards
Marco



Re: usrmerge on root NFS will not be run automatically

2023-09-11 Thread Javier Barroso
Hello,

El dom., 10 sept. 2023 21:55, Marco  escribió:

> On Fri, 8 Sep 2023 12:26:38 -0400
> Dan Ritter  wrote:
>
> > > That is quite an involved task. I didn't expect such fiddling for a
> > > simple OS update. I'm a bit worried that the permissions and owners
> > > go haywire when I copy stuff directly off the server onto a VM and
> > > back onto the server. Is there a recommended procedure or
> > > documentation available?
> >
> > Can you start a temporary VM directly on the server?
>
> I might actually. I'll have to check the following days.
>
> > If so, you can
> > * stop your remote Debian machine
> > * run a Debian rescue image in the VM on the NFS server
> > * have the VM mount the filesystem directly
> > * chroot, run usrmerge
> > * unmount
>
> Ok, that's also quite a task, but it seems less error-prone than
> copying a bunch of system files across the network and hope for the
> best. I'll try.
>
> Marco
>

Maybe you can open a new bug asking for a better documentation or what
should be done in this case.

Maybe dpkg -L with both files can help to clarify what should be done

Regards

>


Re: usrmerge on root NFS will not be run automatically

2023-09-10 Thread Marco
On Fri, 8 Sep 2023 12:26:38 -0400
Dan Ritter  wrote:

> > That is quite an involved task. I didn't expect such fiddling for a
> > simple OS update. I'm a bit worried that the permissions and owners
> > go haywire when I copy stuff directly off the server onto a VM and
> > back onto the server. Is there a recommended procedure or
> > documentation available?  
> 
> Can you start a temporary VM directly on the server?

I might actually. I'll have to check the following days.

> If so, you can
> * stop your remote Debian machine
> * run a Debian rescue image in the VM on the NFS server
> * have the VM mount the filesystem directly
> * chroot, run usrmerge
> * unmount

Ok, that's also quite a task, but it seems less error-prone than
copying a bunch of system files across the network and hope for the
best. I'll try.

Marco



Re: usrmerge on root NFS will not be run automatically

2023-09-08 Thread Stefan Monnier
>   root@foobar:~# /usr/lib/usrmerge/convert-usrmerge 
>
>   FATAL ERROR:
>   Both /lib/udev/rules.d/60-libsane1.rules and 
> /usr/lib/udev/rules.d/60-libsane1.rules exist.

The problem is that "usrmerge" needs to unify those two and doesn't
know how.  So you need to do it by hand.
E.g. get rid of one of those two (or maybe if you can make
them 100% identical `usrmerge` will be happy as well).
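
For instance, a minimal sketch of that manual step (the file names are
the ones reported above; which copy to keep is still a human decision):

# compare the two copies
diff -u /lib/udev/rules.d/60-libsane1.rules /usr/lib/udev/rules.d/60-libsane1.rules

# once you have decided which copy to keep, remove the other and re-run
rm /usr/lib/udev/rules.d/60-libsane1.rules
/usr/lib/usrmerge/convert-usrmerge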


Stefan



Re: usrmerge on root NFS will not be run automatically

2023-09-08 Thread Marco
On Fri, 8 Sep 2023 16:55:23 +0200
zithro  wrote:

> On 08 Sep 2023 12:54, Marco wrote:
> >Warning: NFS detected, /usr/lib/usrmerge/convert-usrmerge will
> > not be run automatically. See #842145 for details.  
> 
> Read :
> - https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=842145

  “I repeated it a few times. I had to restart various services in between
  retries (I think I restarted everything by the end). Eventually it
  succeeded.”

I tried it 30 times to no avail. The report doesn't offer another
solution.

> - https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1039522

  “instead of converting the client, convert the server first.”

I don't want to convert the server. The server is running fine and
has no issues. I don't have a clue what this has to do with the
server.

  “So there is a workaround when your NFS server is a Linux machine
  and you may use chroot on it, at least.”

The server doesn't run Linux. So also no solution there.

> carefully copy the files over a Linux machine, chroot+convert
> there, then move back to the NFS server.

That is quite an involved task. I didn't expect such fiddling for a
simple OS update. I'm a bit worried that the permissions and owners
go haywire when I copy stuff directly off the server onto a VM and
back onto the server. Is there a recommended procedure or
documentation available?

> Can help: 
> https://unix.stackexchange.com/questions/312218/chroot-from-freebsd-to-linux

I cannot install stuff on the server unfortunately.

Thanks for your quick reply.

Marco



Re: usrmerge on root NFS will not be run automatically

2023-09-08 Thread zithro

On 08 Sep 2023 12:54, Marco wrote:

   Warning: NFS detected, /usr/lib/usrmerge/convert-usrmerge will not be run
   automatically. See #842145 for details.


Read :
- https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=842145
- https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1039522

The solution would be :
"convert-usrmerge can be run in a chroot on the NFS server"

If your NFS server is not Linux-based, carefully copy the files over to a 
Linux machine, chroot+convert there, then move them back to the NFS server.
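
A rough sketch of the chroot approach when the NFS server does run
Linux; the export path /srv/nfs/client1 is an assumption, adjust it to
wherever the client's root tree lives on the server:

mount --bind /proc /srv/nfs/client1/proc
mount --bind /sys  /srv/nfs/client1/sys
mount --bind /dev  /srv/nfs/client1/dev
chroot /srv/nfs/client1 /usr/lib/usrmerge/convert-usrmerge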

Make backups ;)
Can help: 
https://unix.stackexchange.com/questions/312218/chroot-from-freebsd-to-linux



As per the why, I don't know.
Maybe the symlink handling by NFS ?

Reading the script "/usr/lib/usrmerge/convert-usrmerge", you get the 
reasons why it fails :


142 # The other cases are more complex and there are some corner 
cases that

143 # we do not try to resolve automatically.
144
145 # both source and dest are links
[...]
168 # the source is a link
[...]
175 # the destination is a link
[...]
191 # both source and dest are directories
192 # this is the second most common case

You may change the script to detect where/why it fails, so edit
fatal("Both $n and /usr$n exist");
to
fatal("Both $n and /usr$n exist - err line 145");
fatal("Both $n and /usr$n exist - err line 168");

Also found this on
"https://en.opensuse.org/openSUSE:Usr_merge#Known_Problems" :

- "File systems that do not support RENAME_EXCHANGE such as ZFS or NFS 
cannot perform live conversion (Bug 1186637)"
- "Conversion fails if there's a mount point below 
(/usr)/{bin,sbin,lib,lib64}"


Good luck

--
++
zithro / Cyril



usrmerge on root NFS will not be run automatically

2023-09-08 Thread Marco
Hi,

I'm in the process of upgrading my Debian stable hosts and run into
a problem with usrmerge:

  Setting up usrmerge (35) ...

  Warning: NFS detected, /usr/lib/usrmerge/convert-usrmerge will not be run
  automatically. See #842145 for details.

  E: usrmerge failed.
  dpkg: error processing package usrmerge (--configure):
   installed usrmerge package post-installation script subprocess returned 
error exit status 1
  Errors were encountered while processing:
   usrmerge
  E: Sub-process /usr/bin/dpkg returned an error code (1)

True, root is mounted via NFS. So I ran usrmerge manually:

  root@foobar:~# /usr/lib/usrmerge/convert-usrmerge 

  FATAL ERROR:
  Both /lib/udev/rules.d/60-libsane1.rules and 
/usr/lib/udev/rules.d/60-libsane1.rules exist.

  You can try correcting the errors reported and running again
  /usr/lib/usrmerge/convert-usrmerge until it will complete without errors.
  Do not install or update other Debian packages until the program
  has been run successfully.

It instructs me to:

  You can try correcting the errors reported and running again

But it's not mentioned anywhere *how* to correct those errors. It's
true that both files exist. I've read

  https://wiki.debian.org/UsrMerge

But the page doesn't cover the error I face.

How to fix the error? Is there a command I can run (e.g. rsync?) to
fix whatever usrmerge complains about? Like keeping only the newest
file or deleting the old one? I feel there's very little info out
there how to recover from this situation. Any tips are much
appreciated.

Marco

Debian stable 6.1.0-11-amd64



Re: NFS-web-database server + personal PC = Virtualbox ?

2023-07-29 Thread jordi Perera

On 28-07-2023 14:47, Narcis Garcia wrote:
For now I'll just say that I have been running virtual machines with 
Qemu-KVM for years, and it works well for me. I have always found one 
disadvantage or another with the other full-machine virtualizers; I 
don't remember the details.


I'll also mention that, to make better use of the resources of the real 
(or even virtual) machine, I have used LXC to run remote desktops 
inside one or several containers. The theory works, but in practice 
some applications crash when Linux doesn't give them the demanding 
resources they ask for.

https://es.wikipedia.org/wiki/Multitarea_apropiativa
Containers have the advantage that there is no balloon effect: 
when an application closes and frees memory, the host machine also 
recovers the freed memory.


Nowadays I put the remote desktops in a virtual machine (KVM). I still 
have to experiment with a multi-user environment for desktops, as I 
intended to do with the containers.


I always do graphical remote control with the VNC protocol in its 
various flavours, because of its maturity, freedom and platform 
versatility. I have not found a flexible protocol with as much 
dedicated free software as VNC.
Even so, I think the people who develop or maintain VNC have left it 
rather neglected.


Instead of NFS I use the capabilities of SSH, which seemed easier to 
implement to me, even though it uses quite a lot of CPU.


On the subject of intrusions: don't put all your eggs in one basket, and 
the door you come in through should not be a door that exposes all the 
resources. The door you come in through can be a «controlled» environment 
from which you can open the second doors to the resources.





Thanks, Narcis, for the comments.

--
Jordi Perera



Re: Debian 12 kernel NFS server doesn't listen on port 2049 UDP

2023-07-29 Thread Matthias Scheler
On Sat, Jul 29, 2023 at 05:44:59PM +0100, piorunz wrote:
> Edit /etc/nfs.conf file:
> [nfsd]
> udp=y
> 
> then:
> sudo systemctl restart nfs-server

Yes, that fixed my NFS problem.

Thank you very much

-- 
Matthias Scheler  http://zhadum.org.uk/



Re: Debian 12 kernel NFS server doesn't listen on port 2049 UDP

2023-07-29 Thread piorunz

On 29/07/2023 16:00, Matthias Scheler wrote:


Hello,

after upgrading one of my systems from Debian 11 to 12 the kernel NFS server
doesn't seem to accept NFS requests over UDP on port 2049 anymore:

>rpcinfo -p | grep nfs
133   tcp   2049  nfs
134   tcp   2049  nfs
1002273   tcp   2049  nfs_acl

This causes problems for a non-Linux NFS client whose automounter
tries to perform the mount over UDP. Is there a way to re-enable
the UDP port?

Kind regards



Edit /etc/nfs.conf file:
[nfsd]
udp=y

then:
sudo systemctl restart nfs-server

Result:
$ rpcinfo -p | grep nfs
133   tcp   2049  nfs
134   tcp   2049  nfs
1002273   tcp   2049  nfs_acl
133   udp   2049  nfs
1002273   udp   2049  nfs_acl

--
With kindest regards, Piotr.

⢀⣴⠾⠻⢶⣦⠀
⣾⠁⢠⠒⠀⣿⡁ Debian - The universal operating system
⢿⡄⠘⠷⠚⠋⠀ https://www.debian.org/
⠈⠳⣄



Debian 12 kernel NFS server doesn't listen on port 2049 UDP

2023-07-29 Thread Matthias Scheler


Hello,

after upgrading one of my systems from Debian 11 to 12 the kernel NFS server
doesn't seem to accept NFS requests over UDP on port 2049 anymore:

>rpcinfo -p | grep nfs
133   tcp   2049  nfs
134   tcp   2049  nfs
1002273   tcp   2049  nfs_acl

This causes problems for a non-Linux NFS client whose automounter
tries to perform the mount over UDP. Is there a way to re-enable
the UDP port?

Kind regards

-- 
Matthias Scheler  http://zhadum.org.uk/



Re: NFS-web-database server + personal PC = Virtualbox ?

2023-07-28 Thread Narcis Garcia
For now I'll just say that I have been running virtual machines with 
Qemu-KVM for years, and it works well for me. I have always found one 
disadvantage or another with the other full-machine virtualizers; I 
don't remember the details.


I'll also mention that, to make better use of the resources of the real 
(or even virtual) machine, I have used LXC to run remote desktops 
inside one or several containers. The theory works, but in practice 
some applications crash when Linux doesn't give them the demanding 
resources they ask for.

https://es.wikipedia.org/wiki/Multitarea_apropiativa
Containers have the advantage that there is no balloon effect: 
when an application closes and frees memory, the host machine also 
recovers the freed memory.


Nowadays I put the remote desktops in a virtual machine (KVM). I still 
have to experiment with a multi-user environment for desktops, as I 
intended to do with the containers.


I always do graphical remote control with the VNC protocol in its 
various flavours, because of its maturity, freedom and platform 
versatility. I have not found a flexible protocol with as much 
dedicated free software as VNC.
Even so, I think the people who develop or maintain VNC have left it 
rather neglected.


Instead of NFS I use the capabilities of SSH, which seemed easier to 
implement to me, even though it uses quite a lot of CPU.


On the subject of intrusions: don't put all your eggs in one basket, and 
the door you come in through should not be a door that exposes all the 
resources. The door you come in through can be a «controlled» environment 
from which you can open the second doors to the resources.



On 28/7/23 at 13:25, jordi Perera wrote:

Good morning everyone

The title is as complicated as what I want to ask you.

Current scenario of what I have:

A quad-core with 4 GB that is exposed to the Internet 24x7, on which I run 
an Apache + Postfix + Postgresql + NFS server (with the household data)



And three more machines, all with different hardware and with different 
Debian versions: one in the office (the big one), another in the living 
room with the TV, one in the workshop, and from time to time I power up 
a laptop.


They all access the data over NFS, but not all of them can run the same 
programs, and I don't keep them updated.


On top of that I run programs that are not in Debian or are not up to 
date enough there, Freecad, Cura, which come as AppImages, and of course 
the application menus are different, and so are the desktops and the scripts.

Depending on what I want to do, I have to go to one PC or another.

Now I have got hold of an i5 computer with 32 GB of RAM and I have thought 
of setting up Debian with a graphical environment, with all the 
applications, and sharing the desktop with all the other machines.


And, as a second step, setting up a virtual machine with the whole server on it.

Up to now I have never had a single intrusion, as far as I have seen, and 
it's not for lack of attempts. And of course what I want to do seems more 
dangerous to me than what I have now.


I could also do it the other way round: set up the server with everything 
I have and run the virtual machine with the graphical environment on top 
of it, and have that virtual machine be the one sharing the desktop.


As for virtual machines, right now I have a VirtualBox with a Win7 just to 
be able to manage a GPS, I mean I have tried it.


Many years ago, when VirtualBox was in its infancy, I did something with 
another system, but I don't even remember its name.


And as for desktop sharing, 15 years ago at work we had something set up 
with VNC, but now I wouldn't even know where to start.


Does anyone feel like expanding on an opinion or a proposal?

Thanks for reading this far ;-D



--

Narcis Garcia

__
I'm using this dedicated address because personal addresses aren't 
masked enough at this mail public archive. Public archive administrator 
should fix this against automated addresses collectors.




NFS-web-database server + personal PC = Virtualbox?

2023-07-28 Thread jordi Perera

Good morning everyone

The subject line is as complicated as what I want to ask you.

My current setup:

A quad-core with 4 GB that is exposed to the internet 24x7, where I run 
an Apache + Postfix + Postgresql + NFS server (with the household data)



And three more machines, all with different hardware and different 
versions of Debian: one in the office (the big one), another in the 
living room with the TV, one in the workshop, and now and then I start 
up a laptop.


They all access the data over NFS, but they can't all run the same 
programs and I don't keep them updated.


On top of that I run programs that either aren't in Debian or aren't 
recent enough there, such as Freecad and Cura, which come as AppImages, 
and of course the application menus end up different, as do the 
desktops and the scripts.

Depending on what I want to do, I have to go to one PC or another.

Now I've got hold of an i5 computer with 32 GB of RAM and I've thought 
of setting up Debian with a graphical environment and all the 
applications, and sharing that desktop with all the other machines.


And, as a second step, running a virtual machine on it with the whole 
server.

So far I've never had any intrusion, as far as I've seen, and they 
certainly keep trying. And of course what I want to do strikes me as 
more dangerous than what I have now.


I could also do it the other way round: set up the server with 
everything I have, run the virtual machine with the graphical 
environment on top of it, and have that virtual machine be the one 
sharing the desktop.


As for virtual machines, right now I have a VirtualBox with a Win7 just 
to be able to manage a GPS; I mean, I have tried it.


Many years ago, when VirtualBox was in its infancy, I did something 
with another system, but I don't even remember its name.


And as for desktop sharing, 15 years ago at work we had something set 
up with VNC, but now I wouldn't even know where to start.


Would anyone care to weigh in with an opinion and/or a proposal?

Thanks for reading this far ;-D

--
Jordi Perera



Booting Debian from NFS (using EFI PXE GRUB)

2023-03-03 Thread tuxifan
Hey!

As kind of a network-wide fallback system (and for some diskless computers in 
our network) I'd like to set up EFI PXE boot for a simple Debian system. 
However, I have only been able to find very sparse and possibly outdated 
information on how to actually tell the kernel/initramfs to mount an NFS 
filesystem as the filesystem root. I even asked ChatGPT, and it replied with its 
usual hallucinations, unable to provide real links to sources of information.

This is what my TFTP root currently looks like:

├── grub
│   └── grub.cfg
├── grubnetx64.efi
├── initrd.img (Generic Debian Testing initrd.img)
└── vmlinuz (Generic Debian Testing vmlinuz)
(1 directory, 4 files)

And my NFS root currently isn't much more than the result of a Debian Stable 
debootstrap.

Do you have any tips and ideas on how to get Linux to mount that NFS root as 
the filesystem root?

Thanks in advance
Tuxifan
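
One plausible way to wire this up with a stock Debian kernel plus
initramfs-tools, sketched with placeholder addresses and paths (check the
kernel's nfsroot documentation and initramfs.conf(5) before relying on it):

# inside the NFS root: make the initramfs NFS-capable and rebuild it
sed -i 's/^BOOT=.*/BOOT=nfs/' /etc/initramfs-tools/initramfs.conf
update-initramfs -u

# grub/grub.cfg on the TFTP server (fragment)
menuentry "Debian (NFS root)" {
    linux  /vmlinuz root=/dev/nfs nfsroot=192.0.2.10:/srv/nfsroot,vers=3 ip=dhcp rw
    initrd /initrd.img
}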




Re: NAS + SMB NFS SFTP file server with Debian 11

2022-12-28 Thread Dethegeek
Bonjour

OpenMediaVault is also based on Debian. Having put my hands into it a
little, I can say that the product's administration layer does nothing
more than generate the configuration files for the various services.
It's very clean, and it will be much simpler than configuring a Debian
by hand.

On Wed 28 Dec 2022 at 09:29, Erwann Le Bras wrote:
écrit :

> hello
>
> yes, absolutely, all the building blocks you need are in Debian; you
> just have to get your hands into the config files.
>
> think carefully up front about how the files will be organised and how
> they will be accessed
>
> Install a Debian without a graphical interface, reachable only over
> SSH, then add the desired features one by one: NFS, SMB, then PXE/SFTP.
>
> don't forget security and backups!
>
> good luck and a very happy end of year.
> On 26/12/2022 at 16:05, Olivier Back my spare wrote:
>
> Hello
>
> Is it possible to build a NAS file server doing SMB, NFS, SFTP + LDAP
> with a Debian?
> I bought a new computer for my mother and recovered her old i3 with
> 8 GB of RAM and a 1 TB HDD.
> I'd like to turn it into a NAS file server doing SMB, NFS, SFTP + LDAP
> without using Openvault. Is it possible to get the same result with
> Debian 11?
> I plan to use a 128 GB SSD for the OS and swap, and two 4 TB HDDs on an
> "SSU-sata3-t2.v1" SATA RAID 1 card for the data.
>
> Regards
>
>


Re: NAS + SMB NFS SFTP file server with Debian 11

2022-12-28 Thread Erwann Le Bras

hello

yes, absolutely, all the building blocks you need are in Debian; you 
just have to get your hands into the config files.


think carefully up front about how the files will be organised and how 
they will be accessed

Install a Debian without a graphical interface, reachable only over 
SSH, then add the desired features one by one: NFS, SMB, then PXE/SFTP.


don't forget security and backups!

good luck and a very happy end of year.
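
(As a sketch of the building blocks mentioned above: the usual Debian
packages would be something along these lines; the exact selection is a
suggestion, not part of the original message.)

apt install --no-install-recommends openssh-server nfs-kernel-server samba tftpd-hpa
# SFTP is already provided by openssh-server (the internal-sftp subsystem);
# tftpd-hpa is one common choice for the PXE/TFTP piece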

On 26/12/2022 at 16:05, Olivier Back my spare wrote:

Hello

Is it possible to build a NAS file server doing SMB, NFS, SFTP + LDAP 
with a Debian?
I bought a new computer for my mother and recovered her old i3 with 
8 GB of RAM and a 1 TB HDD.
I'd like to turn it into a NAS file server doing SMB, NFS, SFTP + LDAP 
without using Openvault. Is it possible to get the same result with 
Debian 11?
I plan to use a 128 GB SSD for the OS and swap, and two 4 TB HDDs on an 
"SSU-sata3-t2.v1" SATA RAID 1 card for the data.


Regards


Re: NAS + SMB NFS SFTP file server with Debian 11

2022-12-26 Thread Gilles Mocellin
On Monday 26 December 2022 at 16:51:56 CET, Jean-François Bachelet wrote:
> Hello ^^)
> 
> On 26/12/2022 at 16:05, Olivier Back my spare wrote:
> > Hello
> > 
> > Is it possible to build a NAS file server doing SMB, NFS, SFTP + LDAP
> > with a Debian?
> > I bought a new computer for my mother and recovered her old
> > i3 with 8 GB of RAM and a 1 TB HDD.
> > I'd like to turn it into a NAS file server doing SMB, NFS, SFTP + LDAP
> > without using Openvault. Is it possible to get the same result
> > with Debian 11?
> > I plan to use a 128 GB SSD for the OS and swap, and two 4 TB HDDs on
> > an "SSU-sata3-t2.v1" SATA RAID 1 card for the data.
> 
> if you're attached to Debian you can use Freenas Scale, it's based on it ^^)
> 
> Jeff

Hello,

Among the NAS-oriented distributions based on Debian, there is also 
OpenMediaVault: https://www.openmediavault.org/ [1]

Perhaps easier to install after the fact on top of a Debian than 
FreeNAS Scale?

--- Oops, I've just seen that you mentioned Openvault; surely a typo.

But to answer your question: yes, all the software is there in Debian 
to do everything a NAS needs to do.
It's only a question of the time you spend selecting the software 
(there are often several options) and configuring it.
But if it's a hobby and you enjoy it, that's the way to do it; it's 
very instructive.

On the other hand, if I were you, I wouldn't bother with a hardware 
RAID card; I'd do software RAID, either with MD or with ZFS.

Hardware RAID cards have only one use as far as I'm concerned: letting 
people who know nothing about it swap the disks.
With MD or ZFS, you generally have to go and type a few commands to 
replace a disk.
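
For instance, with MD a disk replacement typically looks like this 
sketch (device names are placeholders; check mdadm(8) before running 
anything):

# mark the failed member and pull it out of the array
mdadm --manage /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
# partition the new disk like the old one, then add it back
mdadm --manage /dev/md0 --add /dev/sdb1
# watch the rebuild
cat /proc/mdstat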


[1] https://www.openmediavault.org/


Re: NAS + SMB NFS SFTP file server with Debian 11

2022-12-26 Thread Jean-François Bachelet

Hello ^^)

On 26/12/2022 at 16:05, Olivier Back my spare wrote:

Hello

Is it possible to build a NAS file server doing SMB, NFS, SFTP + LDAP 
with a Debian?
I bought a new computer for my mother and recovered her old i3 with 
8 GB of RAM and a 1 TB HDD.
I'd like to turn it into a NAS file server doing SMB, NFS, SFTP + LDAP 
without using Openvault. Is it possible to get the same result with 
Debian 11?
I plan to use a 128 GB SSD for the OS and swap, and two 4 TB HDDs on an 
"SSU-sata3-t2.v1" SATA RAID 1 card for the data.


if you're attached to Debian you can use Freenas Scale, it's based on it ^^)

Jeff



NAS + SMB NFS SFTP file server with Debian 11

2022-12-26 Thread Olivier Back my spare

Hello

Is it possible to build a NAS file server doing SMB, NFS, SFTP + LDAP 
with a Debian?
I bought a new computer for my mother and recovered her old i3 with 
8 GB of RAM and a 1 TB HDD.
I'd like to turn it into a NAS file server doing SMB, NFS, SFTP + LDAP 
without using Openvault. Is it possible to get the same result with 
Debian 11?
I plan to use a 128 GB SSD for the OS and swap, and two 4 TB HDDs on an 
"SSU-sata3-t2.v1" SATA RAID 1 card for the data.


Regards

--
Infrastructure manager / IT asset manager
"It is possible to commit no errors and still lose. That is not a 
weakness. That is life."

– Captain Jean-Luc Picard to Data



Re: Re: Re: Re: Mounting NFS in one click

2022-12-02 Thread benoit
On Friday 2 December 2022 at 17:25, didier gaumet wrote:

> 
> - it must also depend on the desktop used: I don't know what
> KDE/Plasma uses, for example, but it might not use gvfs
> (originally it comes from the Gnome ecosystem).
> - it could (depending on the settings, options and extra modules of
> the DE or of the file manager) appear as an icon on the desktop:
> usually a left double-click or a right-click then lets you mount
> the volume
> - it could also show up in the file manager (again, explore the
> settings) under a "network" or "network drives/volumes" section or
> something like that (for instance in Nautilus (I'm on Gnome) my
> Freebox and my Google Drive space show up under "Other Locations /
> Networks")
> 
> But a search on the usually quite relevant Archlinux wiki doesn't
> seem to mention gvfs for automatic NFS mounting, so maybe I'm the one
> leading you astray.
> Basically, for automatic NFS mounting of volumes that aren't
> necessarily available at boot time (that's what /etc/fstab is for),
> the Archlinux wiki points to two solutions:
> - the Systemd unit:
> https://wiki.archlinux.org/title/NFS#As_systemd_unit
> - Autofs, which Pierre-Elliott has already recommended:
> https://wiki.archlinux.org/title/Autofs#NFS_network_mounts

Thanks for your reply

In my opinion that's why gvfs doesn't work for me: I don't use a 
desktop environment, just openbox.

I'll test on a machine that has Gnome.
But autofs, as recommended by Pierre-Elliott, works quite well.



--
Benoit



Re: Re: Re: Mounting NFS in one click

2022-12-02 Thread didier gaumet

On 02/12/2022 at 15:54, benoit wrote:

On Friday 2 December 2022 at 14:08, didier gaumet wrote:



maybe even simpler (I can't test, I have no NFS): simply install the
gvfs-backends package (which includes an NFS part and handles virtual
file systems): the desktop in use should then probably show the NFS
share as a volume ready to mount?


It was already installed, but I don't know how to tell it to mount the 
nfs share...


- it must also depend on the desktop used: I don't know what KDE/Plasma 
uses, for example, but it might not use gvfs (originally it comes from 
the Gnome ecosystem).
- it could (depending on the settings, options and extra modules of the 
DE or of the file manager) appear as an icon on the desktop: usually a 
left double-click or a right-click then lets you mount the volume
- it could also show up in the file manager (again, explore the 
settings) under a "network" or "network drives/volumes" section or 
something like that (for instance in Nautilus (I'm on Gnome) my Freebox 
and my Google Drive space show up under "Other Locations / Networks")


But a search on the usually quite relevant Archlinux wiki doesn't seem 
to mention gvfs for automatic NFS mounting, so maybe I'm the one 
leading you astray.
Basically, for automatic NFS mounting of volumes that aren't 
necessarily available at boot time (that's what /etc/fstab is for), the 
Archlinux wiki points to two solutions:

- the Systemd unit:
https://wiki.archlinux.org/title/NFS#As_systemd_unit
- Autofs, which Pierre-Elliott has already recommended:
https://wiki.archlinux.org/title/Autofs#NFS_network_mounts
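
As an illustration of the systemd route, a pair of units along these 
lines should do it, re-using the share names from the original post 
(untested here; see systemd.mount(5) and systemd.automount(5)):

# /etc/systemd/system/mnt-partagenfs.mount
[Unit]
Description=NFS share

[Mount]
What=servnfs:/export
Where=/mnt/partagenfs
Type=nfs4
Options=_netdev

# /etc/systemd/system/mnt-partagenfs.automount
[Unit]
Description=Automount for the NFS share

[Automount]
Where=/mnt/partagenfs
TimeoutIdleSec=600

[Install]
WantedBy=multi-user.target

# enable the automount (not the mount itself)
systemctl daemon-reload
systemctl enable --now mnt-partagenfs.automount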



Re: Re: Mounting NFS in one click

2022-12-02 Thread benoit
On Friday 2 December 2022 at 12:41, Pierre-Elliott Bécue wrote:
Hi
 
> autofs and a "link" in nautilus pointing to the mount point.
> --
> PEB

Works like a charm 

Benoit



Re: Re: Mounting NFS in one click

2022-12-02 Thread benoit
On Friday 2 December 2022 at 14:08, didier gaumet wrote:


> maybe even simpler (I can't test, I have no NFS): simply install the
> gvfs-backends package (which includes an NFS part and handles virtual
> file systems): the desktop in use should then probably show the NFS
> share as a volume ready to mount?

It was already installed, but I don't know how to tell it to mount the 
nfs share...





Re: Mounting NFS in one click

2022-12-02 Thread didier gaumet



*maybe* even simpler (I can't test, I have no NFS): simply install the 
gvfs-backends package (which includes an NFS part and handles virtual 
file systems): the desktop in use should then probably show the NFS 
share as a volume ready to mount?




Re: Mounting NFS in one click

2022-12-02 Thread Pierre-Elliott Bécue
Hi,

benoit  wrote on 02/12/2022 at 12:10:48+0100:

> Hello,
>
> What would be the most user-friendly way to mount an nfs mount point 
> with a single click?
> I've written a small function in the meantime, but it's not very 
> user-friendly and it forced me to add a line to the fstab.
>
> partagenfs(){
> DISK="$HOME/partagenfs/"
> if [ -z "$(grep 'partagenfs' /proc/mounts)" ]
> then
> mount /mnt/partagenfs
> fi
> cd $DISK
> }
>
> # in /etc/fstab
> servnfs:/export /mnt/partagenfs nfs4 rw,hard,intr,_netdev,user,noauto 0 0
>
> I'm looking for a click-based, optional solution for, say, a laptop 
> that joins the network and connects to the nfs server on demand.

autofs and a "link" in nautilus pointing to the mount point.
-- 
PEB
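
A minimal autofs sketch along those lines, re-using the share and mount 
point from the original message (timeout and file names are examples; 
see auto.master(5)):

# /etc/auto.master.d/nfs.autofs
# note: autofs takes over the /mnt directory entirely in this layout
/mnt  /etc/auto.nfs  --timeout=300

# /etc/auto.nfs
partagenfs  -fstype=nfs4,rw,hard  servnfs:/export

# reload autofs, then create the "link" for nautilus
systemctl reload autofs
ln -s /mnt/partagenfs ~/partagenfs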



Mounting NFS in one click

2022-12-02 Thread benoit
Hello,

What would be the most user-friendly way to mount an nfs mount point 
with a single click?
I've written a small function in the meantime, but it's not very 
user-friendly and it forced me to add a line to the fstab.

partagenfs(){
DISK="$HOME/partagenfs/"
if [ -z "$(grep 'partagenfs' /proc/mounts)" ]
then
mount /mnt/partagenfs
fi
cd $DISK
}

# in /etc/fstab
servnfs:/export /mnt/partagenfs nfs4 rw,hard,intr,_netdev,user,noauto 0 0

I'm looking for a click-based, optional solution for, say, a laptop 
that joins the network and connects to the nfs server on demand.

Thanks in advance

--
Benoît

Sent with the secure email service [Proton Mail](https://proton.me/).

Re: Mount NFS hangs

2022-10-04 Thread Greg Wooledge
On Tue, Oct 04, 2022 at 12:04:56PM +0100, tony wrote:
> I can successfully do (pls ignore spurious line break):
> 
> mount -t nfs -o _netdev tony-fr:/mnt/sharedfolder
> /mnt/sharedfolder_client
> 
> but the user id is incorrect.

What do you mean, "the user id"?  As if there's only one?

This isn't a VFAT file system.  It's NFS.  The file system on the server
has multiple UIDs and GIDs and permissions.  These are reflected on
the client.

If a file (say, /mnt/sharedfolder_client/foo.txt) is owned by UID 1001
on the server, then it is also owned by UID 1001 on the client.

Unless of course you're doing ID mapping stuff, which I have never done,
and really don't recommend.

Just make sure UID 1001 on the server and UID 1001 on the client map to
the same person.  And similarly for all the other UIDs and GIDs.
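
A quick sanity check along those lines (the user name and UID are only 
examples):

# run on both the server and the client; uid and gid should match
id tony
# uid=1002(tony) gid=1002(tony) groups=1002(tony),...

# if they differ, align one side, e.g. on the client:
usermod -u 1002 tony
groupmod -g 1002 tony
# then chown any files still owned by the old uid/gid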



Mount NFS hangs

2022-10-04 Thread tony

Hi,

I need to mount a directory from a debian 11 server to a debian 10 client.

I can successfully do (pls ignore spurious line break):

mount -t nfs -o _netdev tony-fr:/mnt/sharedfolder
/mnt/sharedfolder_client

but the user id is incorrect. If I now try:

mount -t nfs -o _netdev,uid=1002 tony-fr:/mnt/sharedfolder 
/mnt/sharedfolder_client


the command hangs in the terminal. UID 1002 is valid in the /etc/passwd file 
on both machines.


Any suggestion on how to fix this please?

cheers, Tony



Re: "Failed to start Create System Users" when booting Debian 10 rootfs from NFS mount.

2022-09-01 Thread mj

Hi,

A suggestion: we've had issues in the past where, on an NFS root, the 
problem was that setting "Linux Capabilities" (setcap) fails because NFS 
does not support the extended attributes needed to store them.


Perhaps that is your issue as well?
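
A quick way to test that hunch (the path is a placeholder):

# on any client that has the NFS root mounted, e.g. under /mnt/nfsroot
setcap cap_net_raw+ep /mnt/nfsroot/bin/ping
getcap /mnt/nfsroot/bin/ping
# an "Operation not supported" error would confirm that extended attributes
# (and therefore file capabilities) cannot be stored over this NFS mount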

MJ

Op 16-08-2022 om 21:58 schreef Lie Rock:

Hi,

I'm trying to bring up the Debian 10 root file system on an ARM SoC 
board. When the rootfs was in an SD card the board worked well. When I 
put the rootfs on an NFS server and tried to boot the board through NFS 
mount, it reported error through serial port:


[FAILED] Failed to start Create System Users. See 'systemctl status 
systemd-sysusers.service' for details.


And this is the only error message printed out. The board went all the 
way to the login prompt, but I could not log in with any of 
the preset accounts, including root (because no users had been created, 
as the message suggests?), and I didn't see any way to run commands to 
check the system status for details.


So how is the process "create system users" performed when Linux/Debian 
starts? What can be contributing to this error?


Any suggestions would be greatly appreciated.

Rock





Re: nfs-kernel-server

2022-08-20 Thread Greg Wooledge
On Sat, Aug 20, 2022 at 06:21:21PM -0700, Wylie wrote:
> 
> I am getting this error ... on a fresh install of nfs-kernel-server
> 
>   mount.nfs: access denied by server while mounting
> 192.168.42.194:/ShareName
> 
> I'm not having this issue on other machines installed previously.
> I've tried re-installing Debian and NFS several times.

What's in your /etc/exports file on the server?  What's the client's
IP address and hostname?  If you attempt to resolve the client's IP
address from the server, what do you get?

If the client changed IP address or name, or if you changed its entry
in /etc/exports on the server, did you restart the NFS server service?
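
For comparison, a working pair usually looks something like this (the 
network and options are placeholders):

# /etc/exports on the server
/ShareName  192.168.42.0/24(rw,sync,no_subtree_check)

# after editing, re-export and check what is actually being offered
exportfs -ra
exportfs -v
# restart if the daemon itself was reconfigured
systemctl restart nfs-kernel-server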



nfs-kernel-server

2022-08-20 Thread Wylie


I am getting this error ... on a fresh install of nfs-kernel-server

  mount.nfs: access denied by server while mounting 
192.168.42.194:/ShareName


I'm not having this issue on other machines installed previously.
I've tried re-installing Debian and NFS several times.


Wylie!


Re: "Failed to start Create System Users" when booting Debian 10 rootfs from NFS mount.

2022-08-16 Thread tomas
On Tue, Aug 16, 2022 at 04:20:36PM -0400, Greg Wooledge wrote:
> On Tue, Aug 16, 2022 at 03:58:30PM -0400, Lie Rock wrote:
> > So how is the process "create system users" performed when Linux/Debian
> > starts? What can be contributing to this error?
> 
> unicorn:~$ grep -ri 'create system users' /lib/systemd
> /lib/systemd/system/systemd-sysusers.service:Description=Create System Users

[...]

Good research, and "thank you" from a systemd-abstainer, that's
my way to learn, after all :)

I'd contribute my hunch: perhaps systemd is trying to get sysusers
up "too early", before the root file system is pivoted-in?

Feeding my search engine "NFS root" and +systemd turns up a
bunch of interesting suggestions (e.g. the network has to be up before
NFS can be mounted, etc.).

Good luck... and tell us what it was ;-)

Cheers
-- 
t




Re: "Failed to start Create System Users" when booting Debian 10 rootfs from NFS mount.

2022-08-16 Thread Greg Wooledge
On Tue, Aug 16, 2022 at 03:58:30PM -0400, Lie Rock wrote:
> So how is the process "create system users" performed when Linux/Debian
> starts? What can be contributing to this error?

unicorn:~$ grep -ri 'create system users' /lib/systemd
/lib/systemd/system/systemd-sysusers.service:Description=Create System Users

unicorn:~$ systemctl cat systemd-sysusers.service
[...]
Documentation=man:sysusers.d(5) man:systemd-sysusers.service(8)
[...]
ExecStart=systemd-sysusers

unicorn:~$ man systemd-sysusers
[...]
   systemd-sysusers creates system users and groups, based on the file
   format and location specified in sysusers.d(5).

That's enough to get you started down the rabbit hole(s).  You should
also definitely check the logs on your system (e.g.
 journalctl -u systemd-sysusers) to see what *exactly* went wrong.



"Failed to start Create System Users" when booting Debian 10 rootfs from NFS mount.

2022-08-16 Thread Lie Rock
Hi,

I'm trying to bring up the Debian 10 root file system on an ARM SoC board.
When the rootfs was in an SD card the board worked well. When I put the
rootfs on an NFS server and tried to boot the board through NFS mount, it
reported error through serial port:

[FAILED] Failed to start Create System Users.
See 'systemctl status systemd-sysusers.service' for details.

And this is the only error message printed out. The board went all the way
to the login prompt, but I could not log in with any of the preset accounts,
including root (because no users had been created, as the message suggests?), and I
didn't see any way to run commands to check the system status for details.

So how is the process "create system users" performed when Linux/Debian
starts? What can be contributing to this error?

Any suggestions would be greatly appreciated.

Rock


Re: Mounting NFS share from Synology NAS

2022-02-10 Thread Anssi Saari
Andrei POPESCU  writes:

> Are you sure you're actually using NFSv4? (check 'mount | grep nfs').

Yes, I'm sure. Every entry shows up as "host:path on mountpoint type 
nfs4", and the options include vers=4.2.

Also, the bog-standard auto.net these days has code to mount using NFSv4.

> In my experience in order to make NFSv4 work it's necessary to configure 
> a "root" share with fsid=0 or something like that and mount
> the actual shares using a path relative to it (my NFS "server" is 
> currently down, so I can't check exactly what I did).

That's the weirdness I meant. But it's not true these days, and hasn't
been for years. Or maybe it's hidden? But I can do, for example:

# mount zippy:/tmp /mnt/foo  
# mount|grep zip
zippy:/tmp on /mnt/foo type nfs4 
(rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.0.2.119,local_lock=none,addr=10.0.2.126)

I don't have anything about that in fstab. This is actually a tmpfs
mount where I have fsid=something in /etc/exports but I don't know if
that's required today. zfs mounts the same way from zippy and I don't
have any fsid stuff there. Of course it could be handled automatically.

Autofs mounts a little differently, this is like the old way:

zippy:/ on /net/zippy type nfs4 
(rw,nosuid,nodev,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.0.2.119,local_lock=none,addr=10.0.2.126)
zippy:/tmp on /net/zippy/tmp type nfs4 
(rw,nosuid,nodev,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.0.2.119,local_lock=none,addr=10.0.2.126)

> As far as I know ZFS is using the kernel NFS server, it's just providing 
> a convenient method to share / unshare so it's not necessary to mess 
> with /etc/exports if all your shares are ZFS data sets.

Good to know.



Re: Mounting NFS share from Synology NAS

2022-02-09 Thread Andrei POPESCU
On Mi, 02 feb 22, 13:49:38, Anssi Saari wrote:
> Greg Wooledge  writes:
> 
> > I'm unclear on how NFS v4 works.  Everything I've read about it in the
> > past says that you have to set up a user mapping, which is shared by
> > the client and the server.  And that this is *not* optional, and *is*
> > exactly as much of a pain as it sounds.
> 
> I've never done that, as far as I remember. NFS (NFSv4, these days)
> mounts in my home network use autofs but I haven't done anything there
> either specifically for NFS of any verstion. I remember there was some
> weirdness at some point with NFSv4 and I didn't bother with it much. I
> had maybe two computers back then so not much of network. But over the
> years my NFS mounts just became NFSv4.

Are you sure you're actually using NFSv4? (check 'mount | grep nfs').

In my experience in order to make NFSv4 work it's necessary to configure 
a "root" share with fsid=0 or something like that and mount
the actual shares using a path relative to it (my NFS "server" is 
currently down, so I can't check exactly what I did).
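
The classic pseudo-root layout I mean looks roughly like this (paths and 
network are illustrative, reconstructed from memory):

# /etc/exports on the server
/srv/nfs4        192.168.1.0/24(rw,fsid=0,no_subtree_check)
/srv/nfs4/data   192.168.1.0/24(rw,no_subtree_check)

# the client then mounts relative to the fsid=0 root
mount -t nfs4 server:/data /mnt/data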

> Access for me is by UID. Service is by the kernel driver or in the case
> of zfs, the NFS service it provides. I've thought about setting up
> Kerberos but haven't gotten around to it. One thing is, I don't know if
> Kerberos would work with the NFS service zfs provides? No big deal
> either way though.

As far as I know ZFS is using the kernel NFS server, it's just providing 
a convenient method to share / unshare so it's not necessary to mess 
with /etc/exports if all your shares are ZFS data sets.

(zfs-utils Suggests: nfs-kernel-server and 
https://wiki.debian.org/ZFS#NFS_shares implies the same)

Kind regards,
Andrei
-- 
http://wiki.debian.org/FAQsFromDebianUser



