Re: [gentoo-user] mediatek mt7921: no AP?

2022-01-02 Thread Jorge Almeida
On Mon, Jan 3, 2022 at 6:53 AM Andrew Udvare  wrote:
>
> I'm pretty sure none of these built-in wi-fi modules will support AP mode.
> They would much rather charge more for that feature.
> https://www.524wifi.com/index.php/network-modules-adapters/wifi-6-11-ax-modules/wle3000h2-11ax-4x4-mu-mimo-2-4ghz-qcn-9024-wifi-6-802-11ax-2-4-ghz-single-band-mini-pcie-module-compex.html
>
> https://forum.openwrt.org/t/802-11ax-wifi-ap-mpci-e-cards/63577/2
>
>
Well, the joke is on me: I don't need anything very powerful, I use AP
mode just to connect my smartphone to the computer. The connection to the
outside world is via ethernet, as I prefer to keep the wifi disabled in
the ISP-owned router.
Thanks for the links.



Re: [gentoo-user] mediatek mt7921: no AP?

2022-01-02 Thread Andrew Udvare
I'm pretty sure none of these built-in wi-fi modules will support AP
mode. They would much rather charge more for that feature.
https://www.524wifi.com/index.php/network-modules-adapters/wifi-6-11-ax-modules/wle3000h2-11ax-4x4-mu-mimo-2-4ghz-qcn-9024-wifi-6-802-11ax-2-4-ghz-single-band-mini-pcie-module-compex.html

https://forum.openwrt.org/t/802-11ax-wifi-ap-mpci-e-cards/63577/2

I own a similar motherboard but I don't use the wi-fi.

On Sun, Jan 2, 2022, 16:25 Jorge Almeida  wrote:

> I just mounted a box with a ROG STRIX X570-E GAMING WIFI II
> motherboard. I'm not a gamer; one of the reasons I chose this was the
> built-in WiFi. Bad idea! The thing does not support AP mode, which I
> need. Nothing in the motherboard manual suggests such a limitation.
> Here are the details, in case someone with the same hardware has a better
> experience and would share it. Otherwise, it may be useful as a
> warning. I know I will not purchase ASUS and/or Mediatek again.
>
> Jorge Almeida
>
> $ lshw
> *-network
>   description: Wireless interface
>   product: MEDIATEK Corp.
>   vendor: MEDIATEK Corp.
>   physical id: 0
>   bus info: pci@:03:00.0
>   logical name: wnet2
>   version: 00
>   serial: b4:b5:b6:93:8d:7f
>   width: 64 bits
>   clock: 33MHz
>   capabilities: pciexpress msi pm bus_master
> cap_list ethernet physical wireless
>   configuration: broadcast=yes driver=mt7921e
> driverversion=5.15.12 firmware=01-20211014150922
> ip=192.168.2.131 latency=0 link=no multicast=yes wireless=IEEE 802.11
>   resources: irq:99 memory:e030-e03f
> memory:e040-e0403fff memory:e0404000-e0404fff
>
> $ lsmod
> xt_MASQUERADE  12288  1
> iptable_nat12288  1
> nf_nat 36864  2 iptable_nat,xt_MASQUERADE
> xt_connmark12288  4
> xt_helper  12288  110
> xt_limit   12288  26
> mt7921e77824  0
> mt76_connac_lib28672  1 mt7921e
> mt76   53248  2 mt7921e,mt76_connac_lib
> mac80211  503808  3 mt76,mt7921e,mt76_connac_lib
> cfg80211  372736  4 mt76,mt7921e,mac80211,mt76_connac_lib
> igb   196608  0
> rfkill 24576  1 cfg80211
> i2c_piix4  20480  0
>
> $ iw dev
> phy#0
> Interface wnet2
> ifindex 6
> wdev 0x1
> addr b4:b5:b6:93:8d:7f
> type managed
> txpower 3.00 dBm
> multicast TXQ:
> qsz-byt qsz-pkt flows   drops   marks
> overlmt hashcol tx-bytes tx-packets
> 0   0   0   0   0   0
>  0   0   0
>
> $ iw list | grep "Supported interface modes" -A 8
> Supported interface modes:
>  * managed
>  * monitor
> Band 1:
> Capabilities: 0x9ff
> RX LDPC
> HT20/HT40
> SM Power Save disabled
> RX Greenfield
>
>


[gentoo-user] mediatek mt7921: no AP?

2022-01-02 Thread Jorge Almeida
I just mounted a box with a ROG STRIX X570-E GAMING WIFI II
motherboard. I'm not a gamer; one of the reasons I chose this was the
built-in WiFi. Bad idea! The thing does not support AP mode, which I
need. Nothing in the motherboard manual suggests such a limitation.
Here are the details, in case someone with the same hardware has a better
experience and would share it. Otherwise, it may be useful as a
warning. I know I will not purchase ASUS and/or Mediatek again.

Jorge Almeida

$ lshw
*-network
  description: Wireless interface
  product: MEDIATEK Corp.
  vendor: MEDIATEK Corp.
  physical id: 0
  bus info: pci@:03:00.0
  logical name: wnet2
  version: 00
  serial: b4:b5:b6:93:8d:7f
  width: 64 bits
  clock: 33MHz
  capabilities: pciexpress msi pm bus_master
cap_list ethernet physical wireless
  configuration: broadcast=yes driver=mt7921e
driverversion=5.15.12 firmware=01-20211014150922
ip=192.168.2.131 latency=0 link=no multicast=yes wireless=IEEE 802.11
  resources: irq:99 memory:e030-e03f
memory:e040-e0403fff memory:e0404000-e0404fff

$ lsmod
xt_MASQUERADE  12288  1
iptable_nat12288  1
nf_nat 36864  2 iptable_nat,xt_MASQUERADE
xt_connmark12288  4
xt_helper  12288  110
xt_limit   12288  26
mt7921e77824  0
mt76_connac_lib28672  1 mt7921e
mt76   53248  2 mt7921e,mt76_connac_lib
mac80211  503808  3 mt76,mt7921e,mt76_connac_lib
cfg80211  372736  4 mt76,mt7921e,mac80211,mt76_connac_lib
igb   196608  0
rfkill 24576  1 cfg80211
i2c_piix4  20480  0

$ iw dev
phy#0
Interface wnet2
ifindex 6
wdev 0x1
addr b4:b5:b6:93:8d:7f
type managed
txpower 3.00 dBm
multicast TXQ:
qsz-byt qsz-pkt flows   drops   marks
overlmt hashcol tx-bytes tx-packets
0   0   0   0   0   0
 0   0   0

$ iw list | grep "Supported interface modes" -A 8
Supported interface modes:
 * managed
 * monitor
Band 1:
Capabilities: 0x9ff
RX LDPC
HT20/HT40
SM Power Save disabled
RX Greenfield
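For anyone checking a card before buying (or before blaming the driver), the
decisive bit is whether "* AP" appears under "Supported interface modes". A
minimal sketch of the check, run here against the snippet captured above rather
than a live `iw` call, so it works anywhere:

```shell
# On a live system you would run:  iw phy phy0 info | grep -A 10 'Supported interface modes'
# Here we grep the output captured above to illustrate the check.
modes='Supported interface modes:
 * managed
 * monitor'

if printf '%s\n' "$modes" | grep -q '\* AP$'; then
    echo "AP mode supported"
else
    echo "AP mode NOT supported"   # what the mt7921 above reports
fi
```

If "* AP" is missing there, no amount of hostapd configuration will help; the
driver/firmware simply does not offer the mode.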



Re: [gentoo-user] Long boot time after kernel update

2022-01-02 Thread Dale
Frank Steinmetzger wrote:
> Am Sun, Jan 02, 2022 at 01:38:01PM -0600 schrieb Dale:
>
> Same for my 6 TB Reds in the NAS. But 1200 is a rather big increase. Did you
> ever try this? Almost double for only one third more capacity.
>
> I suspect that internally the drive can do the long selftest in parallel --
> all platters at the same time. But when going from CMR to SMR, platter count
> does not grow linearly with capacity. So the drive may have ⅓ more capacity,
> but number of platters stayed the same.

This is what it reports for the 8 TB drive.


root@fireball / # smartctl -a /dev/sdc | grep poll
recommended polling time:    (   2) minutes.
recommended polling time:    (1175) minutes.
root@fireball / #


root@fireball / # smartctl -i /dev/sdc
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.14.15-gentoo] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family: Western Digital Red
Device Model: WDC WD80EFZX-68UW8N0
Serial Number:    X
LU WWN Device Id: XX
Firmware Version: 83.H0A83
User Capacity:    8,001,563,222,016 bytes [8.00 TB]


I recall running it when I first bought the drive, to be sure it was not
reporting any problems after shipping etc, and that sounds about right. 
I know it took a good long while.
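For reference, a hedged sketch of kicking off that test and sanity-checking the
estimate (the smartctl invocations are commented out since they touch a real
drive; /dev/sdc is the drive discussed above):

```shell
# smartctl -t long /dev/sdc      # start the extended self-test (runs inside the drive)
# smartctl -l selftest /dev/sdc  # poll later for progress and the final result

# Convert the reported polling time (1175 minutes above) into something readable:
poll_min=1175
printf '%d h %d min\n' "$((poll_min / 60))" "$((poll_min % 60))"
```

So the 8 TB drive's estimate is just under 20 hours, not days.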



>
>> Yea, over two days.  O_O
> Uhm, even without a calculator, I challange that. 1 hour is 60 minutes, so
> 10 hours = 600 minutes, making 1200 minutes a mere 20 hours. ;-)
>
>> Dale
>>
>> :-)  :-) 
>>


Oops.  I think I forgot to divide by 60 first.  Should be 1200/60/24.
I think I skipped the 60 part.  Math isn't always my best thing.  lol

Dale

:-)  :-) 



Re: [gentoo-user] Long boot time after kernel update

2022-01-02 Thread Frank Steinmetzger
Am Sun, Jan 02, 2022 at 01:38:01PM -0600 schrieb Dale:
> Frank Steinmetzger wrote:
> > Am Mon, Dec 27, 2021 at 08:15:51AM -0600 schrieb Dale:
> >
> >> I don't recall seeing this mentioned but this may be part of the issue
> >> unless I'm missing something that rules this out.  Could it be a drive
> >> is a SMR drive?
> >
> > SMR may slow down drive response time and throughput, but it should never
> > generate I/O errors in the syslog. If resetting or swapping the SATA cables
> > does not help, then I’d suspect the drive going bad.

The original problem had already been solved (I was reading up on old mail
from the Christmas week, go figure).

> > A long selftest might be in order (smartctl -t long). smartctl -a shows
> > how long this will take approximately (it’s rather accurate).
> >
> > For my PC’s rust drive (1 TB WD Blue) it says:
> > Extended self-test routine
> > recommended polling time:( 113) minutes.
> >

> If it helps any, my 6 TB drive takes around 700 minutes.  My 8 TB drive
> takes around 1200 minutes.

Same for my 6 TB Reds in the NAS. But 1200 is a rather big increase. Did you
ever try this? Almost double for only one third more capacity.

I suspect that internally the drive can do the long selftest in parallel --
all platters at the same time. But when going from CMR to SMR, platter count
does not grow linearly with capacity. So the drive may have ⅓ more capacity,
but number of platters stayed the same.

> Yea, over two days.  O_O

Uhm, even without a calculator, I challenge that. 1 hour is 60 minutes, so
10 hours = 600 minutes, making 1200 minutes a mere 20 hours. ;-)

> Dale
> 
> :-)  :-) 
> 

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

Keyboard not connected, press F1 to continue.




Re: [gentoo-user] Long boot time after kernel update

2022-01-02 Thread Dale
Frank Steinmetzger wrote:
> Am Mon, Dec 27, 2021 at 08:15:51AM -0600 schrieb Dale:
>
>> I don't recall seeing this mentioned but this may be part of the issue
>> unless I'm missing something that rules this out.  Could it be a drive
>> is a SMR drive?
>
> SMR may slow down drive response time and throughput, but it should never
> generate I/O errors in the syslog. If resetting or swapping the SATA cables
> does not help, then I’d suspect the drive going bad. A long selftest might
> be in order (smartctl -t long). smartctl -a shows how long this will take
> approximately (it’s rather accurate).
>
> For my PC’s rust drive (1 TB WD Blue) it says:
> Extended self-test routine
> recommended polling time:( 113) minutes.
>
>


That's true but weird things happen. 

If it helps any, my 6 TB drive takes around 700 minutes.  My 8 TB drive
takes around 1200 minutes.  Yea, over two days.  O_O 

Dale

:-)  :-) 



Re: [gentoo-user] Long boot time after kernel update

2022-01-02 Thread Frank Steinmetzger
Am Mon, Dec 27, 2021 at 08:15:51AM -0600 schrieb Dale:

> I don't recall seeing this mentioned but this may be part of the issue
> unless I'm missing something that rules this out.  Could it be a drive
> is a SMR drive?


SMR may slow down drive response time and throughput, but it should never
generate I/O errors in the syslog. If resetting or swapping the SATA cables
does not help, then I’d suspect the drive going bad. A long selftest might
be in order (smartctl -t long). smartctl -a shows how long this will take
approximately (it’s rather accurate).

For my PC’s rust drive (1 TB WD Blue) it says:
Extended self-test routine
recommended polling time:( 113) minutes.


-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

“If it’s true that our species is alone in the universe, then I’d have to say
the universe aimed rather low and settled for very little.” – George Carlin




Re: [gentoo-user] installing virtual machine under gentoo

2022-01-02 Thread Rich Freeman
On Sun, Jan 2, 2022 at 1:01 PM Grant Taylor
 wrote:
>
> What I remember doing was re-configuring the (primary) network interface
> so that it came up without an IP address and was added as a member to a
> newly created bridge.  As part of that I moved the system's IP
> address(es) from the underlying Ethernet interface to the newly created
> Bridge interface.
>

That is basically what I had to do.

With systemd-networkd the relevant stuff in /etc/systemd/network was:
eth-bridge.network:
[Match]
Name=e*

[Network]
Bridge=brkvm

(you'll likely need a more specific name since you have more than one
interface - the name to match should be the physical interface's name)

brkvm.netdev:
[NetDev]
Name=brkvm
Kind=bridge

(this creates the bridge interface, which the physical interface will
attach to due to the previous file)

brkvm.network:
[Match]
Name=brkvm

[Network]
DNS=...
Address=...
Gateway=...

(this will give the bridge interface an IP/etc - most likely you'll
just set this file up the way you'd otherwise be setting up your
physical network.  This can use dhcp.)

Then you'll tell virt-manager to use the brkvm bridge for all your VMs
that you want bridged.  Note that you can also use this bridge for
things like containers if they use virtual interfaces.  They'll just
see the host network directly, with their own virtual interfaces, so
they can use DHCP to obtain IPs on the host network.

In your case with dual interfaces you'll want to put the bridge on
whichever interface you want shared.  The other interface can remain
untouched.

The brkvm interface will otherwise behave the way eth0 or whatever
used to behave before you attached it to the bridge.  Note that the
physical interface attached to the bridge will not have an IP.  It
just relays traffic from the bridge to the physical network.

You could also just drop the first file if you wanted to have a bridge
not attached to any physical networks.  VMs could still attach to it
and talk to each other.  Though, I think virt-manager might provide
other ways to do this.

All of the above are systemd-specific.  Most network managers provide
some proper way to set up a bridge and you should probably do that.
You can do it all with just a shell script instead I suppose.
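To make the layout concrete, here is a sketch that writes the three files
described above into a scratch directory (on a real host they go in
/etc/systemd/network, followed by restarting systemd-networkd or running
`networkctl reload`; `enp3s0` is a placeholder for your actual NIC name):

```shell
dir=$(mktemp -d)   # stand-in for /etc/systemd/network

# The bridge device itself
cat > "$dir/brkvm.netdev" <<'EOF'
[NetDev]
Name=brkvm
Kind=bridge
EOF

# Enslave the physical NIC to the bridge (match your real NIC, not a
# wildcard, if the machine has several interfaces)
cat > "$dir/eth-bridge.network" <<'EOF'
[Match]
Name=enp3s0

[Network]
Bridge=brkvm
EOF

# Give the bridge itself the address (DHCP here; static works the same way)
cat > "$dir/brkvm.network" <<'EOF'
[Match]
Name=brkvm

[Network]
DHCP=yes
EOF

ls "$dir"
```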

-- 
Rich



Re: [gentoo-user] installing virtual machine under gentoo

2022-01-02 Thread Grant Taylor

On 1/2/22 12:14 AM, John Covici wrote:
OK, I fixed it, the group name was wrong when I tried the last time. I 
had libvirtd and it's only libvirt, and that seems to have fixed things.


Thank you for the clarifying follow-up.  Here's hoping you save someone 
else time in the future.  :-)
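For the record, a quick sketch of the check (the group the local libvirt
socket uses is `libvirt`; `libvirtd` is the daemon's name, not the group's):

```shell
# See whether the current user is already in the 'libvirt' group:
if id -nG | tr ' ' '\n' | grep -qx libvirt; then
    echo "already in the libvirt group"
else
    echo "run as root: usermod -aG libvirt \$USER   # then log out and back in"
fi
```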


On 1/2/22 9:58 AM, John Covici wrote:

OK, more progress and a few more questions.


Yay progress!

In the virt-manager, I could not figure out how to add disk storage 
to the vm.  I have a partition I can use for the disk storage -- 
is this different from the  virtual machine image?


It depends.™

KVM / libvirt / Qemu can use raw partitions, files on a mounted file 
system, logical volumes, ZFS vDevs, iSCSI, and other things for storage. 
 Each one is configured slightly differently.  So, which method do you 
want to use?


I'd suggest that you /start/ with files on a mounted file system and 
then adjust as you need / want to.  At least as long as you're getting 
your feet wet.


From memory, you need to define a directory as a storage location to 
KVM / libvirt.  --  I'm not currently using KVM so I'm working from a 
mixture of memory and what I can poke without spinning things up.


1)  Open VMM (virt-manager).
2)  Select the KVM host in the window.
3)  Edit -> Connection Details
4)  Go to the Storage tab.
5)  Click the plus below the left hand pane.
6)  Choose and enter a name for the storage pool.
7)  Choose "dir: Filesystem Directory" as the type.
8)  Choose a target path by typing or browsing to it.
9)  Click Finish.

Now the storage pool you created should appear as an option when 
creating a VM.
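The same steps can be scripted with virsh once the GUI walk-through above is
familiar. A sketch (the pool name and target path are examples, and the
commands are guarded because they need a running libvirtd):

```shell
pool=vmstorage                       # example pool name
target=/var/lib/libvirt/images       # example directory

if command -v virsh >/dev/null 2>&1; then
    virsh pool-define-as "$pool" dir --target "$target"
    virsh pool-build "$pool"         # creates the directory if needed
    virsh pool-start "$pool"
    virsh pool-autostart "$pool"     # start it on boot
else
    echo "virsh not installed; commands shown for reference"
fi
```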


Of even more importance, how do I bridge the vm onto my existing 
network?


This is also done through host properties on the Virtual Networks tab.

I don't remember the specifics (and can't walk through it the same way 
for reasons).  I usually did most of the management via the 
/etc/conf.d/net file as I do a lot of things with networking that few 
things can properly administer (802.3ad LACP, 802.1q VLAN, bridging, l2 
filtering, l3 filtering, etc).


What I remember doing was re-configuring the (primary) network interface 
so that it came up without an IP address and was added as a member to a 
newly created bridge.  As part of that I moved the system's IP 
address(es) from the underlying Ethernet interface to the newly created 
Bridge interface.


With the bridge created and managed outside of VMM (virt-manager) I was 
able to add new VMs / containers to the existing Bridge interface.  Thus 
establishing a layer 2 connection from the VM(s) / LXC(s) to the main 
network.
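For Gentoo's netifrc, the /etc/conf.d/net arrangement described above looks
roughly like this (a sketch; `eno1` and `br0` are example names, and you also
need the usual `ln -s net.lo /etc/init.d/net.br0` symlink):

```shell
# /etc/conf.d/net (sketch)
config_eno1="null"            # physical NIC carries no IP of its own
bridge_br0="eno1"             # it is enslaved to the bridge instead
config_br0="dhcp"             # the bridge gets the address (static works too)
rc_net_br0_need="net.eno1"    # bring the NIC up before the bridge
```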


Note:  This is somewhat of a simplification as there are VLANs and 
multiple physical interfaces with many logical interfaces on the machine 
that I'm replying to you from.  However, I believe, the concepts hold as 
I've written them.


I have a nic for internal items named eno1 and another nic which 
connects to the outside world, I would like to bridge to the internal 
network, that would give the vm a dhcp address, etc.


If you have a separate physical NIC, as I had suggested starting with, 
then you can avoid much of the bridge & IP re-configuration in the 
/etc/conf.d/net file and /mostly/ manage an independent bridge on the 
additional NIC from within VMM (virt-manager).


The 2nd NIC means that you don't end up with a chicken & egg problem 
trying to administer a network interface across the network, which is 
how I do much of my work.  Re-configuring things through the console 
also simplifies things in this regard.




--
Grant. . . .
unix || die



Re: [gentoo-user] installing virtual machine under gentoo

2022-01-02 Thread Mark Knecht
On Sun, Jan 2, 2022 at 9:59 AM John Covici  wrote:

> OK, more progress and a few more questions.
>
> In the virt-manager, I could not figure out how to add disk storage to
> the vm.  I have a partition I can use for the disk storage -- is this
> different from the  virtual machine image?
>
> Of even more importance, how do I bridge the vm onto my existing
> network?  I have a nic for internal items named eno1 and another nic
> which connects to the outside world, I would like to bridge to the
> internal network, that would give the vm a dhcp address, etc.
>
>
> --
> Your life is like a penny.  You're going to lose it.  The question is:
> How do
> you spend it?
>
>  John Covici wb2una
>  cov...@ccs.covici.com
>

There are some good YouTube videos on exactly these subjects.

I haven't been forced to do them myself yet so I don't want to
hazard a guess but

virt-manager bridged network

and...

virt-manager disk passthrough

will likely get you pretty close.

Good luck,
Mark



Re: [gentoo-user] installing virtual machine under gentoo

2022-01-02 Thread John Covici
On Sun, 02 Jan 2022 01:31:12 -0500,
Grant Taylor wrote:
> 
> On 1/1/22 11:05 PM, John Covici wrote:
> > Well, I found out something.  If I go to the file menu, I can
> > add the connection manually and it works,
> 
> That sounds familiar.
> 
> > but I wonder why I have to do that?
> 
> Because the KVM Virtual Manager is designed such that it can
> administer KVM / libvirt / qemu on multiple systems.  It's really
> client-server infrastructure.  You're just needing to point the
> client at your local server one time.
> 
> > Also, before I do anything, it asks me for the root password
> > and says system policy prevents local management of virtual
> > machines. Do you know why this is so?
> 
> This also seems familiar.
> 
> Try re-starting the libvirt / kvm daemons.  They may not be aware
> that your user is now a member of the proper group.  --  Aside:
> This is why a reboot is ... convenient, but not required.
> 
> This /should/ be taken care of proper group administration for
> your normal user.
> 
> I ran into this a long time ago when I set up KVM on my last
> Gentoo system.  I don't remember exactly what I had to do to
> resolve it.  I do know that it was less than five minutes of
> searching the web to find the answer, cussing at what needed to
> be done, and doing it.  That system has been running perfectly
> fine for many years.
> 

OK, more progress and a few more questions.

In the virt-manager, I could not figure out how to add disk storage to
the vm.  I have a partition I can use for the disk storage -- is this
different from the  virtual machine image?

Of even more importance, how do I bridge the vm onto my existing
network?  I have a nic for internal items named eno1 and another nic
which connects to the outside world, I would like to bridge to the
internal network, that would give the vm a dhcp address, etc.


-- 
Your life is like a penny.  You're going to lose it.  The question is:
How do
you spend it?

 John Covici wb2una
 cov...@ccs.covici.com



Re: [gentoo-user] genkernel's new configs not used?

2022-01-02 Thread Neil Bothwick
On Sat, 1 Jan 2022 23:44:55 -0600, Dale wrote:

> > Can you explain this part a bit?  How it knows what version for
> > example to build against?  Does it follow the link in /usr/src/linux,
> > eselect info or something else?
> >
> >  
> >> dracut --kver=$(cat include/config/kernel.release) --xz  

> I got it figured out.  That's a little like cheating.  LOL 

One man's cheating is another man's optimisation ;-)
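For the record, the trick works because the kernel build writes its exact
version string to include/config/kernel.release, so dracut targets the
just-built kernel no matter where /usr/src/linux points. Simulated here with a
scratch tree (the version string is only an example):

```shell
tree=$(mktemp -d)                       # stand-in for the kernel build tree
mkdir -p "$tree/include/config"
echo "5.15.12-gentoo" > "$tree/include/config/kernel.release"

kver=$(cat "$tree/include/config/kernel.release")
echo "dracut --kver=$kver --xz"         # the command quoted above, resolved
```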


-- 
Neil Bothwick

There are only two hard problems in distributed systems:
 2. Exactly-once delivery
 1. Guaranteed order of messages
 2. Exactly-once delivery

