Re: [PacketFence-users] New PF 12.1 Installation on Debian 11.6 Bullseye

2023-01-16 Thread Ian MacDonald via PacketFence-users
We restaged our environment.

https://github.com/inverse-inc/packetfence/issues/7403 describes similar
symptoms, so I have included some additional debug output below from:

cat /etc/network/interfaces
/usr/local/pf/sbin/pfperl-api get /api/v1/config/interfaces | jq
ip -br a
docker container ls

Following some of the triage steps in 7403, I additionally enabled debug on
pfperl-api, restarted the service, and hit Wizard Step 1 again. While I was
capturing output for this email (below), the interfaces suddenly appeared.
The log output from the process restart is also below, and I note there are
some WARN-level messages about an ip command exiting with a non-zero value
for interfaces that do not appear in my interface list.

packetfence.log:Jan 17 00:33:19 pf5 pfperl-api-docker-wrapper[69063]:
pfperl-api(15) WARN: [mac:[undef]] Problem trying to run command: LANG=C
sudo ip -4 -o addr show veth17036df called from (eval). Child exited with
non-zero value 1 (pf::util::pf_run)
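The veth names in the WARN look like ephemeral Docker endpoints, so my working theory (an assumption on my part, not confirmed from the PF code) is that they vanish between enumeration and the per-device query. The failing call is easy to reproduce by hand:

```shell
# Re-run the exact command from the WARN against a veth that no longer
# exists; `ip` prints 'Device "..." does not exist.' and exits non-zero,
# which pf::util::pf_run then surfaces as the WARN above.
LANG=C ip -4 -o addr show veth17036df
echo "exit code: $?"
```

The non-zero exit on a since-removed veth would explain the WARN without indicating anything wrong with the real eth0/eth1/eth2 interfaces.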

It seemed odd, so I reverted the log level from DEBUG to INFO and rebooted
the system to see if I could reproduce the behavior. Sure enough, sitting
on Wizard Step 1, as I believed I had done previously, yielded no
interfaces in the Configurator and no WARN messages in the log.

It seemed that the service restart had somehow caused the GUI to update.
So we restarted the service, and sure enough the interfaces populated in
the Configurator. There was no sign of the WARN messages either; they
appear to be suppressed at the INFO level.

I re-ran the dump of the interfaces via the API, and it gave the following
strange result:

pf5:~# /usr/local/pf/sbin/pfperl-api get /api/v1/config/interfaces | jq
Device "veth7629818" does not exist.
parse error: Invalid numeric literal at line 1, column 19
Device "veth7629818" does not exist.
Device "veth6d6535d" does not exist.
Device "veth6d6535d" does not exist.
Unable to flush stdout: Broken pipe
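The jq parse error looks like a side effect of the `ip` error lines ('Device "..." does not exist.') being interleaved with the JSON on the same stream, so jq chokes on the first non-JSON token. A minimal sketch of the failure and a diagnostic workaround, assuming the error lines can simply be filtered out before parsing:

```shell
# Simulate the API output with ip's error text mixed into the JSON,
# then strip the non-JSON lines before handing the result to jq.
printf 'Device "veth7629818" does not exist.\n{"items":[]}\n' \
  | grep -v 'does not exist' | jq .
```

Against the real endpoint the equivalent would be `/usr/local/pf/sbin/pfperl-api get /api/v1/config/interfaces | grep -v 'does not exist' | jq .` — a workaround for inspecting the payload only, not a fix for the underlying interleaving.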

cheers,
Ian

packetfence.log:Jan 17 00:31:15 pf5 pfperl-api-docker-wrapper[69046]:
Running with args --sig-proxy=true --rm --name=pfperl-api
--add-host=containers-gateway.internal:host-gateway -h pf5  -v
/var/lib/mysql:/var/lib/mysql -v /etc/sudoers:/etc/sudoers -v
/etc/sudoers.d/:/etc/sudoers.d/ -v
/usr/local/fingerbank/conf:/usr/local/fingerbank/conf -v
/usr/local/fingerbank/db:/usr/local/fingerbank/db -v
/usr/local/pf/var/run:/usr/local/pf/var/run -ePF_UID=996 -e PF_GID=996
-eFINGERBANK_UID=997 -e FINGERBANK_GID=997 -eIS_A_CLASSIC_PF_CONTAINER=yes
-v /etc/localtime:/etc/localtime:ro -v
/usr/local/pf/conf:/usr/local/pf/conf -v
/usr/local/pf/raddb/certs:/usr/local/pf/raddb/certs --privileged -v
/run/systemd/system:/run/systemd/system -v
/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket -v
/usr/local/fingerbank/conf:/usr/local/fingerbank/conf -v
/etc/sysconfig/:/etc/sysconfig -v /etc/network:/etc/network -v
/etc/resolv.conf:/etc/resolv.conf --network=host -e HOST_OS
-v/usr/local/pf/var/conf/:/usr/local/pf/var/conf/
-v/usr/local/pf/html/captive-portal/profile-templates:/usr/local/pf/html/captive-portal/profile-templates
packetfence.log:Jan 17 00:31:15 pf5 pfperl-api-docker-wrapper[69053]:
Error: No such container: pfperl-api
packetfence.log:Jan 17 00:31:17 pf5 pfperl-api-docker-wrapper[69063]:
pfperl-api(8) DEBUG: [mac:[undef]] invalid IP:  from cluster::__ANON__
(pf::util::valid_ip)
packetfence.log:Jan 17 00:31:19 pf5 pfperl-api-docker-wrapper[69063]:
pfperl-api(8) DEBUG: [mac:[undef]] cache get for namespace='configfiles',
key='/usr/local/pf/conf/roles.conf', cache='Redis:l1_cache', time='0ms':
MISS (not in cache) (CHI::Driver::_log_get_result)
packetfence.log:Jan 17 00:31:19 pf5 pfperl-api-docker-wrapper[69063]:
pfperl-api(8) DEBUG: [mac:[undef]] cache get for namespace='Default',
key='HASH(0x55f9c40983b0)', cache='RawMemory', time='0ms': MISS (not in
cache) (CHI::Driver::_log_get_result)
packetfence.log:Jan 17 00:31:19 pf5 pfperl-api-docker-wrapper[69063]:
pfperl-api(8) DEBUG: [mac:[undef]] cache set for namespace='Default',
key='{"encoding":null,"reconnect":"60","server":"containers-gateway.internal:6379"}',
size=1, expires='never', cache='RawMemory', time='0ms'
(CHI::Driver::_log_set_result)
packetfence.log:Jan 17 00:31:19 pf5 pfperl-api-docker-wrapper[69063]:
pfperl-api(8) DEBUG: [mac:[undef]] cache get for namespace='configfiles',
key='/usr/local/pf/conf/roles.conf', cache='Redis', time='1ms': HIT
(CHI::Driver::_log_get_result)
packetfence.log:Jan 17 00:31:19 pf5 pfperl-api-docker-wrapper[69063]:
pfperl-api(8) DEBUG: [mac:[undef]] cache set for namespace='configfiles',
key='/usr/local/pf/conf/roles.conf', size=1, expires='never',
cache='Redis:l1_cache', time='0ms' (CHI::Driver::_log_set_result)
packetfence.log:Jan 17 00:31:20 pf5 pfperl-api-docker-wrapper[69063]:
pfperl-api(8) DEBUG: [mac:[undef]] cache get for namespace='configfiles',
key='/usr/local/pf/conf/switches.conf', cache='Redis:l1_cache', time='0ms':
MISS (not in cache) (CHI::Driver::_log_get_result)

[PacketFence-users] New PF 12.1 Installation on Debian 11.6 Bullseye

2023-01-16 Thread Ian MacDonald via PacketFence-users
Hello PacketFence users,

We tested a fresh install of v12.1 on a freshly spun up Debian 11.6 today.

packetfence_12.1.0+20230116163629+748667390+0011+maintenance~12~1+bullseye1_all.deb

Prior to installing PacketFence, we deployed some basic packages, listed
here as part of our default staging script. I do not see any reason why any
of these would cause the interface-detection scripts to fail, and both
ifconfig and ip produce valid output on the CLI.

apt install gnupg arptables dnsutils unzip pigz mtr-tiny less vim screen
curl iperf3 wget tcpdump dialog subnetcalc vlan bridge-utils ethtool iftop
iotop deborphan apt-show-versions ethtool pv systemd-timesyncd

The Configurator, however, did not detect any interfaces. The simple
interface configuration is shown below for Management, Registration, and
Isolation (eth0, eth1, and eth2 respectively).

The ip addr output is below. The Configurator web page is stuck at step 1,
with no interfaces shown to select for the Management network. Our next
step will be to add our interfaces into the configuration manually via the
CLI and see if the Configurator picks them up, or possibly revert to v11.1
and see whether it happens there too.
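For the manual route, a minimal /etc/network/interfaces stanza matching the addressing in the ip addr output below might look like this (the interface names and addresses come from that output; the gateway is a hypothetical value I added for illustration, as it is not shown in the post):

```
# Management (eth0) - static addressing per the ip addr dump
auto eth0
iface eth0 inet static
    address 10.2.1.2/24
    gateway 10.2.1.1   # hypothetical gateway, not shown in the post

# Registration (eth1)
auto eth1
iface eth1 inet static
    address 10.2.2.2/24

# Isolation (eth2)
auto eth2
iface eth2 inet static
    address 10.2.3.2/24
```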

Any insights here on why this might be happening appreciated.

pf5:~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:16:3e:dc:7d:fd brd ff:ff:ff:ff:ff:ff
    inet 10.2.1.2/24 brd 10.2.1.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fedc:7dfd/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:16:3e:dc:7d:fe brd ff:ff:ff:ff:ff:ff
    inet 10.2.2.2/24 brd 10.2.2.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fedc:7dfe/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:16:3e:dc:7d:ff brd ff:ff:ff:ff:ff:ff
    inet 10.2.3.2/24 brd 10.2.3.255 scope global eth2
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fedc:7dff/64 scope link
       valid_lft forever preferred_lft forever
153: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:e7:4c:45:ca brd ff:ff:ff:ff:ff:ff
    inet 100.64.0.1/24 brd 100.64.0.255 scope global docker0
       valid_lft forever preferred_lft forever
___
PacketFence-users mailing list
PacketFence-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/packetfence-users


[PacketFence-users] Pre-Registration of Users without captive portal

2023-01-16 Thread Durga Prasad Malyala via PacketFence-users
Hello Friends,
I am looking for a deployment mechanism wherein I restrict my users to
2 devices, and ask users to submit their MAC addresses (with the
dynamic/private MAC option disabled). Someone from IT will enter them into
PF after some checks, and handle change requests as they come in.

1) First option - I want to authenticate using MAC addresses only, and
also have WPA/WPA2 with a shared secret to prevent a bunch of devices from
associating with my APs and inflating the concurrent-user count. (For
instance, I don't want a pizza delivery guy to get associated with the AP
because it is in OPEN mode for registration purposes.)

2) Second option - Again, users get their 2 devices pre-registered
(through the IT dept), we use EAP-MSCHAPv2, and they log in with AD
credentials. (Most important: only that user's 2 registered devices should
be allowed to connect.)
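Not an authoritative answer, but one relevant PacketFence knob for the two-device limit: roles have a "Max nodes per user" setting that caps how many devices a single user (pid) can register in that role. A sketch of what that might look like in roles.conf (the section name and notes are illustrative, and the parameter name is my assumption from the admin GUI label, so verify against your own roles.conf):

```
[staff]
notes=Pre-registered staff devices
max_nodes_per_pid=2
```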

Currently there is no plan for hotspot or registration screen.

Can you share some ideas and links?

Thanks/DP

